
Dramatically Improve Compute-Intense Applications in the Supercomputing Cloud

A Practical Guide to High-Performance Computing with COMSOL Multiphysics and Microsoft HPC Server

June 2011 White Paper
J2Methods
By Konrad Juethner


The Supercomputing Buzz

By now, we have all heard the buzz about parallel computing, clusters, and the cloud. But what does it all mean? In this white paper, I will share with you my experience with setting up and running cluster simulations with Windows HPC.

While the benefits of large and collaborative computer networks have been obvious for many years, clusters have been available only to a select few in well-funded environments such as government research and big industry.

The technology has become so mature and flexible that it is possible to configure very small experimental clusters ad hoc and to excel at much grander scales, reaching previously unseen performance levels. COMSOL's collaboration with Microsoft has led to a tight integration of COMSOL Multiphysics with Windows HPC Server. This software combination reduces the upfront investment in equipment and technical expertise so dramatically that high-performance computing is now ready for the mainstream. To enable this exploration and the production of real data, COMSOL and Microsoft provided direct support, the necessary trial software, and a brief intro course to Windows HPC. Within a few days, they said, I would be up and running. This I had to see, so I took the bait.

Microsoft Clusters Are Here

During the last year, Microsoft has increased its high-performance computing staff from 50 to 500 and is bringing software and compute clusters to customers in many markets.

My first big surprise was Microsoft's claim to be offering the most economical HPC solution. How could it be cheaper than a free and open-source operating system such as Linux?

The answer, of course, is labor! Although the flexibility of Linux is attractive, its administration can be a handful. This is reflected in commercial production environments, where Windows administrative support is much more readily available and arguably cheaper.

Contents

The Supercomputing Buzz
Microsoft Clusters Are Here
The Status Quo
Three Big Questions
Workstation to Cluster Transition
Home Computer Network
COWs as a Windows HPC Feature
Easy COMSOL Installation
Taking a Test Drive
Budget-Cluster Performance
Microsoft Field Support Cluster
Embarrassingly Parallel Computations
Field Cluster Performance
Taming Any Problem Size
Going Large Scale with Azure
The Verdict

© 2011, all rights reserved. This White Paper is published by COMSOL, Inc. and its associated companies. COMSOL and COMSOL Multiphysics are registered trademarks of COMSOL AB. Other products or brand names are trademarks or registered trademarks of their respective holders.



My perception of the single-workstation world is similar. Where die-hard numerical experts typically gravitate to Linux, the broad mass of engineering practitioners favors the Windows platform due to their generally greater familiarity with its GUI-driven operation.

With COMSOL Multiphysics running on all relevant operating systems available today, the choice appears to be simply one of user preference. Personally, I tend to think Linux in single-workstation settings. However, when reviewing Windows HPC, my mind experienced a paradigm shift in realizing that it is exactly this GUI familiarity with the Windows OS that makes this product highly relevant in the cluster world and establishes a distinct competitive advantage. I followed this notion and decided to explore whether or not the combination of COMSOL Multiphysics and Windows HPC Server would make practical sense.

The Status Quo

To increase computational performance, we have typically bought larger and larger workstations, increased utilization by centralizing servers, justified internally managed clusters in industry and government, and negotiated or purchased access to them. The last resort has been to simply declare a computational problem too large or too time-consuming to solve. So, where would this product combination take us?

Three Big Questions

To convince myself of the value of the COMSOL Multiphysics / Windows HPC combination, three questions had to be answered:

1. Can an engineer with moderate IT skills configure a low-budget cluster easily, thus supporting the ease of use and low-cost claims?

2. Do parametric sweeps scale favorably, providing a distinct advantage over single workstations or servers?

3. Can contiguous memory blocks, originally required to solve very large problems, be segmented across cluster nodes?

Workstation to Cluster Transition

To address Big Question 1, how can we define "easy" and "low cost"? I thought a good way was to enter an engineer's home and see if it was possible to build a Microsoft HPC cluster using only what we could find. So, my home was about to be declared the test lab for this experiment.

Right off the bat, I ran into a relatively tall order for home networks: Microsoft Active Directory. It is a hard requirement for Windows HPC Server, and you really need to set aside one computer to become the domain controller. For those of you who are unfamiliar with network domains and Active Directory, let me just say that it elevates what you are used to seeing in Control Panel -> User Accounts on a single computer to a domain-level user/computer administration tool that spans a computer network.

A second, though less significant, hurdle surfaced in the installation requirements of Windows Server 2008 R2 HPC Edition [3]. Head nodes, broker nodes, and compute nodes only run on 64-bit architectures. While this could be a show stopper for folks who are still holding on to their 32-bit tractors, I convinced myself by visiting electronics stores in the Boston area that new systems for residential computing are practically all 64-bit. What eases potential migration concerns even further is the recently introduced workstation node feature [3], which empowers us to integrate x86-based processors (i.e. 32-bit) via Windows 7 Professional or Enterprise to form clusters of workstations (COWs).


Home Computer Network

An inventory of my home surfaced two relatively late-model 64-bit machines. To keep things separate from my existing network, I reactivated an old router which had been collecting dust and established a separate cluster computing subnet. As I later learned, this turned out to be a good move since the domain controller (DC) was much happier with its own DNS server. The choice for the DC became clear during a coffee break: HEADNODE (AMD Athlon™ II X2 250u processor, 1.60 GHz, 4.00 GB installed memory), our all-in-one kitchen computer, was only serving up electronic recipes and the latest photographs and could be reconfigured temporarily.

I obtained domestic approval for such a drastic change by providing the ironclad guarantee that HEADNODE could be returned to its current operating state at a moment's notice. This meant swapping out its hard drive for this cluster project. The next steps were cookie-cutter: download and install Microsoft's standard 180-day trial of Windows Server 2008 R2 HPC Edition and assign it the DC and DNS server roles. In a production environment, you would typically separate the DC role from the DNS server and computational head node. However, I was all about getting by with the least possible amount of hardware resources and therefore ended up with Windows HPC Server detecting Network Topology 5, as shown in Figure 1. This is a Microsoft definition describing a setup in which all nodes reside on the enterprise network. Since this topology routes all cluster traffic through the general enterprise network, it is not recommended for performance. However, it is the lowest-cost option and therefore useful for our low-budget testing purposes.

Finally, the installation of Microsoft HPC Pack 2008 R2 (which is part of Microsoft's 180-day trial offer) completes the configuration of HEADNODE and provides access to the cluster manager, from where you administer the cluster and configure a variety of cluster node types such as compute, broker, and workstation.

For convenient administration, Windows HPC Server enables you to deploy cluster nodes directly from the head node across the network from "bare metal" via the specification of *.iso operating system images.

Figure 1: Microsoft Network Topology 5 can be deployed immediately in any simple network and is great for testing, which is why I used it for this low-budget cluster. In a production environment, you would want to isolate cluster traffic from your main (i.e. enterprise) network and optimize it via Network Topology 3.

Figure 2: The low-budget cluster is shown in the home network context. Although a router is used to isolate all nodes from the home network, they still share the same subnet and therefore represent Network Topology 5.


However, I did not have any computers to spare that could assume the roles of dedicated compute nodes. I was looking for another way and found it in one of the latest features of Windows Server 2008 R2, HPC/Enterprise Edition.

COWs as a Windows HPC Feature

According to the Windows system requirements [3], 64-bit and 32-bit workstations running Windows 7 Professional can assume workstation node status and join and collaborate with the pool of compute nodes. This was great news and the perfect option for my main Windows 7 Professional workhorse named WORKERNODE (Intel® Core™ i7 CPU Q820 @ 1.73 GHz, 8.00 GB installed memory). Following this concept, I moved WORKERNODE from the home network to the cluster subnet (the cluster domain) shown in Figure 2 and carried out the brief client installation of Microsoft HPC Pack 2008 R2. The cluster manager could now be accessed from anywhere on the network, and WORKERNODE could be utilized as a computational resource via the workstation node deployment method shown in Figure 3.

As a result, WORKERNODE was now listed in the cluster manager in Figure 4 and ready for work.

The integration was seamless and, to my amazement, even supported cluster computations while I was logged into WORKERNODE as a user. It quickly got even better.

Easy COMSOL Installation

Out of the box, COMSOL Multiphysics can connect to a Windows HPC cluster via the addition of a COMSOL Cluster Computing node in the model builder tree of any computational model, as illustrated in Figure 5. Since COMSOL recommends a single physical installation on the head node, shared via UNC path (i.e. \\HEADNODE\comsol42), the installation was as straightforward as on a standalone workstation.

Figure 3: I was wowed by the variety and ease of Windows HPC node deployment methods. For instance, you can make any networked workstation deployable within a minute by installing the tiny Windows HPC Pack. When adding a node to your cluster, you are asked to make a simple choice as shown.

Figure 4: The HPC Cluster Manager provides you with true mission control. In this status view, both HEADNODE and WORKERNODE are reported as online and ready for computational tasks.



While I chose a minimalistic setup and neglected all performance-enhancing recommendations, such as role separation and parallel subnets to handle communication and application data, I was struck by the flexibility of this software system and its ability to be reconfigured on the fly. In this context, it should be noted that the HPC Pack comes free with HPC Server and enables cluster access from any domain workstation. This option would be typical in fast LAN environments and preferable if end users require interaction with COMSOL Multiphysics' high-end graphics capabilities for modeling or report generation purposes.

Taking a Test Drive

To test this cluster, I loaded the Vented Loudspeaker Enclosure shown in Figure 5. Similar to most COMSOL Multiphysics models, it is based on a simple user choice of the appropriate physics in an intuitive graphical user interface. In this case, the built-in acoustic-structure interaction formulation describes how an acoustic wave interacts physically with a structure, which is exactly what a loudspeaker membrane and its surrounding air pressure field do. COMSOL Multiphysics evaluates this formulation on each of the tetrahedral subdivisions shown in the computational mesh of Figure 6 and finds a piecewise continuous solution that spans the entire domain via the finite element method.

Within each element, the solution is continuous and characterized by polynomial coefficients, which represent the unknown variables or degrees of freedom (DOF). The number of DOF grows with increasing mesh density, a fact we will later use to increase the problem size.
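To make this relationship concrete, here is a minimal sketch (hypothetical helper code, not part of COMSOL) that counts DOF for a simple one-dimensional Lagrange discretization; in 3D the same halving of the element size multiplies the element count, and hence the DOF, far more quickly.

```python
def dof_count_1d(n_elements: int, poly_order: int = 2) -> int:
    """DOF for one scalar field on a 1D mesh of Lagrange elements.

    Each element adds `poly_order` nodes beyond the one it shares with
    its left neighbor, plus a single closing node at the far end.
    """
    return poly_order * n_elements + 1

# Halving the element size doubles the element count in 1D,
# so the DOF roughly double with every refinement step.
for n in (10, 20, 40, 80):
    print(f"{n:3d} elements -> {dof_count_1d(n):4d} DOF")
```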

Among the infinitely many ways to illustrate the results of this computation, one could present a slice plot of the sound pressure field illustrated in Figure 7 and the mechanical displacement field of the speaker membrane in Figure 8.

Figure 5: In this COMSOL Multiphysics GUI view, the Vented Loudspeaker Enclosure model from the built-in Model Library is shown after the Cluster Computing node was added. This node establishes the connection to the desired compute cluster and can be added to any model. It is a nifty design that makes switching gears between workstation and cluster computing very convenient.


Budget-Cluster Performance

When used one at a time, HEADNODE and WORKERNODE carried out baseline sweeps of 32 frequencies at 135,540 DOF in 2,711 and 1,420 seconds, respectively. When HEADNODE was instructed to utilize WORKERNODE as a workstation node, as in Figure 9, the same computation took 1,729 seconds.
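As a quick sanity check on these timings, the relative speedups work out as follows (simple arithmetic on the numbers quoted above; nothing here is measured beyond what the text already reports):

```python
t_headnode, t_workernode, t_cluster = 2711.0, 1420.0, 1729.0  # seconds for the 32-frequency sweep

# Speedup of the two-node, Topology 5 cluster relative to each machine running alone.
print(f"vs. HEADNODE alone:   {t_headnode / t_cluster:.2f}x")   # about 1.57x faster
print(f"vs. WORKERNODE alone: {t_workernode / t_cluster:.2f}x") # about 0.82x, i.e. slower
```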

While this was faster than what HEADNODE could accomplish by itself, the low-budget cluster was slower than WORKERNODE alone. This is probably due to cluster network traffic that WORKERNODE does not encounter on its own, my disregard of the performance recommendations, and my choice of the least desirable Topology 5. After all, this low-budget cluster was not intended to perform but to verify ease of configuration and use in the context of humble hardware resources.

Looking back, the configuration of this low-budget cluster took less than a day. And now that I know what to do, I could probably do it again in one morning while comfortably sipping a cup of coffee.

To reach greater performance would mean investment in additional computing and networking hardware. And this is what we did in the old days.

Figure 6: Computational mesh of the Vented Loudspeaker Enclosure.

Figure 7: Illustration of the qualitative sound pressure level field sliced along the geometry's plane of symmetry.

Figure 8: Illustration of the qualitative displacement field of the moving speaker components.


Microsoft Field Support Cluster

Today, we are seeing an onslaught of hosted cluster solutions which are deployed to support high-performance computing applications such as data warehousing, transaction processing, and engineering simulation. Many big Internet and software companies have begun to offer such services. To take this investigation to the next level and answer Big Question 2 about the favorable scaling of embarrassingly parallel computations, none other than Microsoft came to the rescue by configuring the bigger and better field support cluster shown in Figure 10.

Figure 10: Network flow chart of Microsoft EEC Field Support Cluster #2; note how compute nodes are isolated on separate private and application networks to represent Network Topology 3 (see also Figure 11)

Figure 9: View of the HPC Cluster Manager during Low-Budget Cluster testing; note that the request for one node fires up one node in the Heat Map.


Embarrassingly Parallel Computations

The most trivial need for parallel computing arises when the goal is to carry out many similar computations. Think of the thousands of customers of an investment bank whose portfolio performance needs to be predicted regularly based on ever-changing investment tactics. Since investment decisions are time-sensitive, it is easy to see that the edge goes to those brokers who can evaluate client portfolios the fastest. Instead of figuring out one client at a time, the idea is to compute all portfolio predictions at once, i.e. in parallel. You can take this further and even fan out the computations for each individual stock. The happy medium will be anywhere between a feasible hardware price tag and ROI. With Microsoft Excel at the forefront of computations in many industries, it comes as no surprise that Windows HPC Server support for parallel Microsoft Excel was largely driven by the financial industry.

The analogous engineering problem is called parametric and is exemplified by the vented loudspeaker enclosure of the previous section. The parameter investigated in this case is the excitation frequency of the speaker, which affects both the membrane deformation and the surrounding air pressure. Unlike computing one frequency at a time as done earlier, we will utilize Windows Server 2008 R2 HPC Edition to solve as many parameters as possible at any given moment in time.
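Conceptually, such a parametric sweep is just one independent solve per parameter value, which is why it distributes so well. The sketch below illustrates the pattern with a generic Python process pool; the frequency list and the solver stub are hypothetical placeholders, not COMSOL's API, which performs this distribution internally through the Windows HPC job scheduler once the Cluster Computing node is configured.

```python
from concurrent.futures import ProcessPoolExecutor

# Placeholder sweep: 32 excitation frequencies, mirroring the loudspeaker example.
FREQUENCIES_HZ = [20.0 * 1.25 ** k for k in range(32)]

def solve_one_frequency(freq_hz: float) -> float:
    """Stand-in for one complete FEM solve at a single excitation frequency.

    In the real setup, COMSOL Multiphysics runs this step on whichever
    compute node the Windows HPC scheduler assigns the parameter to.
    """
    # ... assemble and solve the acoustic-structure problem at freq_hz ...
    return 0.0  # e.g. sound pressure level at a probe point

if __name__ == "__main__":
    # Every frequency is an independent job, so all of them can run
    # concurrently, limited only by the number of available workers (nodes).
    with ProcessPoolExecutor(max_workers=16) as pool:
        results = list(pool.map(solve_one_frequency, FREQUENCIES_HZ))
```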

It should be noted that the following computations were carried out ad hoc and in the absence of a highly controlled benchmarking environment. This choice was quite deliberate, to reflect the realistic working conditions of the typical engineer who has to produce results regardless of circumstance. But then, I was being spoiled with the Microsoft cluster, which was configured using the optimal cluster network Topology 3 as detected by the cluster manager. As Figure 11 shows, traffic from the enterprise network is routed through the head node, and cluster traffic is confined to its dedicated private and application networks.

I accessed this Microsoft cluster by sequentially VPN'ing into a Microsoft gateway machine and the head node \\node000. While we expected Microsoft's VPN service to be fast and reliable, I will admit that I have never seen anything faster. Such VPN connections are favorably light on WAN traffic and remarkable in their efficiency. However, there is a trade-off in graphics performance, which is inferior to running the cluster from a local workstation as discussed earlier.

Field Cluster Performance

The configuration of the Cluster Computing node to communicate with the Field Cluster is analogous to that of the Low-Budget Cluster in Figure 9. However, now we have the ability to request 16 nodes.

Shortly after invoking the Solve command in COMSOL, the heat map in the cluster manager in Figure 12 lights up and shows all 16 compute nodes at nearly full CPU capacity.

Figure 11: Microsoft Network Topology 3

Page 10: Dramatically Improve Compute-Intense Applications in the

Dramatically Improve Compute-Intense Applications in the Supercomputing Cloud June 2011 WhIte pAper

10

With this configuration, I was now able to measure computation time with respect to the number of compute nodes assigned.

At zero compute nodes, the head node node000 did all the work and finished in about 1,000 seconds, or roughly 18 minutes, as shown in Figure 13. Already faster than WORKERNODE, the same figure indicates that this Microsoft test cluster achieved a speedup approaching a factor of 6, down to about 200 seconds, when using all 16 nodes.

Of course, there are many dependencies, such as the number of parameters and the problem size. To get a feeling for their significance, I divided the minimum and maximum element size requirements by 2, which increased the DOF from 135,540 to 727,788 and unleashed the economies of scale of our cluster solution. With all 16 nodes engaged, the maximum speedup jumped from less than 6x for 135,540 DOF to more than 11x for 727,788 DOF, as presented in Figures 13 and 14, respectively.
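One way to read these two results is in terms of parallel efficiency, i.e. the fraction of ideal linear scaling actually achieved. Using only the speedup factors reported above (a rough estimate, since the runs were ad hoc), the larger model clearly makes better use of the 16 nodes, presumably because the fixed per-sweep overhead of distributing the model and collecting results is amortized over more work per parameter:

```python
def parallel_efficiency(speedup: float, n_nodes: int) -> float:
    """Fraction of ideal linear scaling actually achieved on n_nodes."""
    return speedup / n_nodes

# Reported speedups on 16 compute nodes (Figures 13 and 14):
print(f"{parallel_efficiency(6.0, 16):.0%}")   # 135,540 DOF: roughly 38% of ideal
print(f"{parallel_efficiency(11.0, 16):.0%}")  # 727,788 DOF: roughly 69% of ideal
```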

Given that engineering computations are routinely measured in days or weeks, an improved turnaround by a factor of 11 is commercially viable.

When running this last set, I noticed that the consumed amount of memory ranged around 15 GB, which made me curious whether or not this larger problem would still run on the low-budget cluster. It did not, which I interpreted as an out-of-memory issue and a perfect entry point for Big Question 3 about taming problem size via memory segregation or decomposition.

Figure 12: View of the HPC Cluster Manager during Field Cluster testing; note that the request for 16 nodes fires up 16 nodes in the Heat Map.

Figure 13: Field Cluster performance for embarrassingly parallel computations using a finite element model with 135,540 DOF. At 16 compute nodes, the speedup nearly reaches 6x.

Figure 14: Field Cluster performance for embarrassingly parallel computations using a finite element model with 727,788 DOF. At 16 compute nodes, the speedup exceeds 11x.


Taming Any Problem Size

As the size of the numerical problem increases, we can instruct COMSOL Multiphysics to distribute the assembled matrices across all available nodes via Microsoft HPC Server. With this technique, any out-of-reach problem can be brought back into scope by simply adding compute nodes to the cluster.

Since the low-budget cluster did not have sufficient memory, I decided to upgrade my notebook from Windows 7 Home to Windows 7 Professional, install HPC Pack 2008 R2, and add NOTEBOOKNODE (Intel® Atom™ CPU @ 1.60 GHz, 3.00 GB installed memory) to increase the computational capacity of the low-budget cluster, as depicted in Figures 15 and 16. And this did the trick.
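A rough tally of installed memory shows why NOTEBOOKNODE made the difference: the roughly 15 GB observed for the larger sweep does not fit into the two original nodes, but in aggregate it does once the third node joins and COMSOL distributes the assembled matrices across all three. This is a back-of-the-envelope estimate only; operating system overhead and uneven partitioning reduce what is actually usable per node.

```python
# Installed memory per node, in GB, as listed earlier in the text.
nodes = {"HEADNODE": 4.0, "WORKERNODE": 8.0}
required_gb = 15.0  # approximate memory consumption observed for the 727,788 DOF sweep

print(f"before: {sum(nodes.values()):.0f} GB in total")  # 12 GB, short of ~15 GB

nodes["NOTEBOOKNODE"] = 3.0
print(f"after:  {sum(nodes.values()):.0f} GB in total")  # 15 GB, just enough in aggregate
```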

Without a doubt, a large amount of memory is consumed in this particular model by storing all solutions in the model tree. Although it would be much more efficient to store only selected points or derived values such as averages, maxima, or integrals, it was my deliberate intent to produce an out-of-memory situation and solve it by adding cluster hardware.

Resorting to brute force in this fashion has never been easier and can make a lot of business sense. It becomes particularly attractive when tapping your organization's creative genius is expensive or model simplification is just not possible. What a trick to have up your sleeve: simply add a few nodes to increase the collective cluster computing power.

I stood no chance of beating the Microsoft field cluster in performance, but I was able to solve a problem that would have been out of scope without the integrating Windows HPC platform.

Note how different this is from out-of-core strategies that utilize the hard drive. Instead of offloading excess demand to slow hard drives, all computations are kept firmly in the much faster memory context, making the scaling of hardware a real rather than merely theoretical solution.

Figure 15: Adding further memory capacity to the Low-Budget Cluster, which allows you to tackle larger problems via COMSOL Multiphysics memory segregation algorithms.

Figure 16: After adding NOTEBOOKNODE to the cluster, its contribution can be monitored via the Heat Map. It is intriguing to think of linking up the combined resources of departments or even companies to address ad-hoc and temporary supercomputing needs in this way.


Going Large Scale with Azure

Since there are only so many computers to salvage in any environment, we would eventually incur the capital expense of additional hardware. This sounds expensive and like a major commitment. However, with the increasing number of hosted cluster solutions coming online daily, the idea of adding temporary resources becomes increasingly tangible.

One of the most recent and exciting propositions in this context comes again from Microsoft. Its Windows Azure platform interfaces seamlessly with Windows HPC, as pointed out by David Chappell in September 2010 [1]. COMSOL Multiphysics support for Azure is expected to be available in the near future. What this means is that, similarly to adding workstation nodes as we did in the low-budget cluster section, you can augment your existing on-premises compute cluster with any number of Azure compute nodes, as shown in Figure 17. Importantly, you could do so temporarily, at the flip of a switch, in situations where on-premises capacity is exceeded.

Furthermore, there is nothing to stop you from offloading all compute nodes to the Windows Azure Data Center, as shown in Figure 18, and retaining only the head node on-premises.

The idea is to rent clusters of any size, at any time, and for any time frame. Revolutionary from a business perspective is, according to Chappell [1], that "this allows tilting HPC costs away from capital expense and toward operating expense, something that's attractive to many organizations."

The Verdict

Playing guinea pig as an engineer with elementary IT skills, I was able to understand the available network topologies and configure the low-budget cluster. In fact, the experience was quite enjoyable.

A welcome surprise was Windows HPC Server's configuration flexibility and viability in very small networks like my own. Workstation node integration on the fly enables standard business computers to act as compute nodes and allows the temporary metamorphosis of entire business networks into COWs that play supercomputer on nights and weekends. While the concept is neither very complicated nor new, Windows HPC Server is the first software system that has pulled this vision together feasibly for the mainstream. Out of this world is the ability to manage these changes centrally via one configuration manager, without any additional hardware or physical configuration requirements.

Exploratory speedup factors of 6x and 11x in the context of embarrassingly parallel COMSOL Multiphysics computations provide a powerful business justification for Windows HPC. The ability to divide and conquer by distributing the memory requirements of any problem size allows us to draw conclusions about problems we cannot even fathom today.

Figure 17: Augmentation of the On-Premises Compute Cluster with the Windows Azure Data Center [1]

Figure 18: Replacement of the On-Premises Compute Cluster with the Windows Azure Data Center [1]



In addition to integrating business networks with traditional HPC clusters, Windows Azure expands the flexible configuration concept to the domain of incredibly fast-growing cloud computing services. The blend of all three provides a powerful tactical toolset for conquering today's largest and toughest technical challenges.

If you have been thinking about a COMSOL cluster solution, there is no time to waste. COMSOL, Inc. has introduced an extremely generous cluster licensing scheme that consumes only one floating network license (FNL) key per cluster. In other words, if you intend to run ten thousand nodes in parallel, you will only need one FNL key.

High-performance computing has entered "a new era. The enormous scale and low cost of cloud computing resources is sure to change how and where HPC applications are run. Ignoring this change isn't an option." [1]

References

[1] Windows HPC Server and Windows Azure: High-Performance Computing in the Cloud, David Chappell, September 2010, sponsored by Microsoft Corporation. http://www.microsoft.com/windowsazure/Whitepapers/HPCServerAndAzure/default.aspx

[2] Microsoft HPC Server 2008 R2 Suite: Technical Resources. http://www.microsoft.com/hpc/en/us/technical-resources/overview.aspx

[3] Windows HPC Server 2008 R2 Suite: System Requirements. http://www.microsoft.com/hpc/en/us/product/system-requirements.aspx

[4] COMSOL Multiphysics 4.2 Product Documentation

Konrad Juethner
J2Methods
www.j2methods.com

Konrad Juethner is a Consultant and Owner of J2Methods. As a physicist and mechanical engineer by training, he has accumulated an extensive background in simulation-driven engineering. J2Methods employs software integration solutions that deliver significant efficiency and quality gains for large engineering organizations.

J2Methods
15 Knowlton Drive
Acton, MA 01720
781-354-2764
www.j2methods.com


COMSOL, Inc.
1 New England Executive Park
Suite 350
Burlington, MA 01803
U.S.A.

www.comsol.com