

Welcome!

The UberCloud Experiment ANSYS in the Cloud 2012 - 2018

ANSYS / UberCloud Compendium of Case Studies

https://www.TheUberCloud.com


ANSYS Cloud Case Studies

- Six Years of UberCloud Experiments -

200 HPC cloud experiments, 80 case studies, and a ton of hands-on experience gained: that’s the harvest of almost six years of UberCloud HPC Experiments. We are now able to measure cloud computing progress objectively. Looking back at our first 50 cloud experiments in 2012, 26 of them failed or didn’t finish, and the average duration of the successful ones was about three months. Five years later, in early 2018, looking at our last 50 cloud experiments, none failed, and the average duration of these experiments is now just about three days. That includes defining the application case, preparing and accessing the engineering application software in the cloud, running the simulation jobs, evaluating the data via remote visualization, transferring the final results back on premise, and writing a case study.

The goal of the UberCloud Experiment is to perform engineering simulation experiments in the HPC cloud with real engineering applications in order to understand the roadblocks to success and how to overcome them. Our Compendiums of Case Studies are a way of sharing these results with our broader community of engineers and scientists and their service providers.

UberCloud is the online community and marketplace where engineers and scientists discover, try, and buy Computing Power as a Service, on demand. Engineers and scientists can explore and discuss how to use this computing power to solve their demanding problems, and identify the roadblocks and solutions, with a crowd-sourcing approach, jointly with our engineering and scientific community. Learn more about the UberCloud at: http://www.TheUberCloud.com.

We are extremely grateful for the support of our UberCloud Experiments over the last five years by ANSYS, Hewlett Packard Enterprise and Intel, and by our media sponsors Digital Engineering and HPCwire, and for the invaluable case studies they generated. This Compendium is dedicated to one of our major sponsors of the UberCloud Experiments, ANSYS, which alone has supported 18 cloud experiments and case studies over the past five years. Special thanks go to Wim Slagter, ANSYS’ Director of HPC and Cloud, who supported all ANSYS experiments with great advice and many trial licenses. Thank you.

Wolfgang Gentzsch and Burak Yenier
The UberCloud, Los Altos, CA, January 2018

Please contact UberCloud at [email protected] before distributing this material in part or in full. © Copyright 2018 TheUberCloud™. UberCloud is a trademark of TheUberCloud, Inc.


The UberCloud Experiment Sponsors

We are very grateful to our sponsors ANSYS, Hewlett Packard Enterprise and Intel, and to our media sponsors Digital Engineering and HPCwire. Their sponsorship made it possible to run 200 UberCloud Experiments over the last five years and to build a sustainable and reliable UberCloud community around CAE in the Cloud.


Table of Contents

Foreword: ANSYS Cloud Case Studies – Six Years of UberCloud Experiments
Team 8: Flash Dryer Simulation with Hot Gas Used to Evaporate Water from a Solid
Team 9: Simulation of Flow in Irrigation Systems to Improve Product Reliability
Team 34: Analysis of Vertical and Horizontal Wind Turbines
Team 36: Advanced Combustion Modeling for Diesel Engines
Team 54: Analysis of a Pool in a Desalinization Plant
Team 56: Simulating Radial and Axial Fan Performance
Team 94: Gas-liquid Two-phase Flow Application
Team 118: Coupling In-house FE Code with ANSYS Fluent CFD
Team 154: CFD Analysis of Geo-Thermal Perforation in the Cloud
Team 160: Aerodynamics & Fluttering on an Aircraft Wing Using Fluid Structure Interaction
Team 163: Finite Element Analysis for 3D Microelectronic Packaging in the Cloud
Team 165: Wind Turbine Aerodynamics with UberCloud ANSYS Container in the Cloud
Team 171: Dynamic Study of Frontal Car Crash with UberCloud ANSYS Container in the Cloud
Team 177: Combustion Training in the Cloud
Team 184: Spray Modeling of PMDI Dispensing Device on the Microsoft Azure Cloud
Team 185: Air Flow Through an Engine Intake Manifold
Team 186: Airbag Simulation with ANSYS LS-DYNA
Team 193: Implantable Planar Antenna Simulation with ANSYS HFSS in the Cloud
Join the UberCloud Experiment


Team 8

Flash Dryer Simulation with Hot Gas Used to Evaporate Water from a Solid

MEET THE TEAM

End User – Sam Zakrzewski
Zakrzewski is with FLSmidth, the leading supplier of complete plants, equipment and services to the global minerals and cement industries.

Software Provider – Wim Slagter
Slagter is with ANSYS, which develops, markets and supports engineering simulation software.

Resource Provider – Marc Levrier
Levrier is with Bull, a manufacturer of HPC computers that offers HPC on demand through its extreme factory (XF) service.

HPC Expert – Ingo Seipp
Seipp is with Science + Computing, which provides IT services and solutions in HPC and technical computing environments.

USE CASE

CFD multiphase flow models are used to simulate a flash dryer. Increasing plant sizes in the cement and mineral industries mean that current designs need to be expanded to fulfill customers’ requests. The process is described by the Process Department and the structural geometry by the Mechanical Department – both departments come together using CFD tools that are part of the end-user’s extensive CAE portfolio.

“The company was interested in reducing the solution time and, if possible, increasing mesh size to improve the accuracy of their simulation results without investing in a computing cluster that would be utilized only occasionally.”

Currently, the multiphase flow model takes about five days for a realistic particle loading scenario on our local infrastructure (Intel Xeon X5667, 12M Cache, 3.06 GHz, 6.40 GT/s, 24 GB RAM). The differential equation solver of the Lagrangian particle tracking model requires several GBs of memory. ANSYS CFX 14 is used as the solver.

Simulations for this problem are made using 1.4 million cells, five species, and a time step of one millisecond for a total time of two seconds. A cloud solution should allow the end-user to run the models faster, increasing the turnover of sensitivity analyses and reducing the time to customer implementation. It would also allow the end-user to focus on engineering aspects instead of spending valuable time on IT and infrastructure problems.
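As a rough, illustrative check of the workload described above (this sketch is not part of the original study, and the variable names are placeholders), the 1 ms time step over 2 s of physical time and the quoted five-day runtime imply the following step count and average wall-clock cost per step:

```python
# Rough estimate of the transient workload described in the text (values from the case study).
total_time_s = 2.0        # simulated physical time
time_step_s = 1.0e-3      # time step of one millisecond
local_runtime_days = 5.0  # reported runtime on the local infrastructure

n_steps = int(round(total_time_s / time_step_s))
wallclock_s = local_runtime_days * 24 * 3600
seconds_per_step = wallclock_s / n_steps

print(f"time steps: {n_steps}")                                   # 2000
print(f"average wall-clock per step: {seconds_per_step:.0f} s")   # ~216 s
```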

Fig. 1 - Flash dryer model viewed with ANSYS CFD-Post

The Project

The most recent addition to the company's offerings is a flash dryer designed for a phosphate processing plant in Morocco. The dryer takes a wet filter cake and produces a dry product suitable for transport to markets around the world.

The company was interested in reducing the solution time and, if possible, increasing mesh size to improve the accuracy of their simulation results without investing in a computing cluster that would be utilized only occasionally.

The project goal was defined based on current experiences with the in-house compute power. For the chosen model, a challenge for reaching this goal was the scalability of the problem with the number of cores.

Next, the end-user needed to register for XF. After organizational steps were completed, the XF team integrated ANSYS CFX for the end-user into their web user interface. This made it easy for the end-user to transfer data and run the application in the pre-configured batch system on the dedicated XF resources.


The model was then run on up to 128 Intel E5-2680 cores. The work was accomplished in three phases:

• Setup phase – During the project period XF was very busy with production customers and was also migrating their Bull B500 blades (Intel Xeon X5670 sockets, 2.93 GHz, 6 cores, 6.40 GT/s, 12 MB) to B510 blades (Intel E5-2680 sockets, 2.70 GHz, 8 cores, 8.0 GT/s, 20 MB). The nodes are equipped with 64 GB RAM and 500 GB hard disks, and are connected with InfiniBand QDR.

• Execution phase – After an initial hardware problem with the new blades, a solver run crashed after 35 hours due to a CFX stack memory overflow. This was handled by adding a new parameter to the job submission web form. A run using 64 cores still crashed after 12 hours despite 20% additional stack memory. This issue is not related to overall memory usage, as the model never used more than 10% of the available memory, as observed for one of the 64-core runs. Finally, a run on 128 cores with 30% additional stack memory successfully ran up to the 2 s point. An integer stack memory error occurred at a later point – this still needs to be looked into.

• Post-processing phase – The XF team installed ANSYS CFD-Post, the visualization software for ANSYS CFX, and made it available from the portal in a 3D remote visualization session. It was also possible to monitor the runs from the Solver Manager GUI and hence avoid downloading large output log files.

Because the ANSYS CFX solver was designed from the ground up for parallel efficiency, all numerically intensive tasks are performed in parallel and all physical models work in parallel. Only administrative tasks, such as simulation control and user interaction, as well as the input/output phases of a parallel run, were performed in sequential mode by the master process.
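To illustrate why that residual sequential work on the master process matters as core counts grow, here is a small, generic Amdahl's-law sketch in Python (not from the original study; the 5% serial fraction is an arbitrary illustrative value, not a measured one):

```python
# Generic Amdahl's-law illustration: a small serial fraction (e.g. sequential I/O and
# simulation control on the master process) caps the achievable parallel speedup.
def amdahl_speedup(n_cores: int, serial_fraction: float) -> float:
    """Ideal speedup on n_cores when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

serial_fraction = 0.05  # illustrative value only, not measured in this experiment
for n in (16, 32, 64, 128):
    print(f"{n:4d} cores -> speedup {amdahl_speedup(n, serial_fraction):5.1f}x")
```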

BENEFITS

The extreme factory team was quickly able to provide ANSYS CFX as SaaS and to configure any kind of HPC workflow in extreme factory Studio (XF’s web front end). The XF team spent around three man-days to set up, configure, execute and help debug the ANSYS CFX experiment. FLSmidth spent around two man-days to understand, set up and utilize the XF Portal methodology.

XF also provides 3D remote visualization with good performance, which helps solve the problem of downloading large result files for local post-processing and makes it easy to check the progress of the simulation.

Enabling HPC applications in a cloud requires a lot of experience and R&D, plus a team to deploy and tune applications and support software users. For the end-user, the primary goal of running the job in one to two days was met; the runtime of the successful job was about 46.5 hours. There was not enough time in the end to perform scalability tests – these would have been helpful to balance the size of the resources required against the runtime of the job.


Fig. 2 - ANSYS CFX job submission web form

The ANSYS CFX technology incorporates optimization for the latest multi-core processors and benefits greatly from recent improvements in processor architecture, algorithms for model partitioning combined with optimized communications, and dynamic load balancing between processors.

CONCLUSIONS AND RECOMMENDATIONS

No special problems occurred during the project, only hardware provisioning delays. Pressure from production made it difficult to find free resources and tuning phases to get good results.

Providing the HPC application in the form of SaaS made it easy for the end-user to get started with the cloud and concentrate on his core business.

It would be helpful to have more information about cluster metrics beyond what is currently readily available – e.g. memory and I/O usage.

The time needed for downloading the result files, as well as minimizing risks to proprietary data, needs to be considered for each use case.

Due to the size of the output data and transfer speed limitations, we determined that a remote visualization solution is required.

2012 – Case Study Authors – Ingo Seipp, Marc Levrier, Sam Zakrzewski, and Wim Slagter

Note: Some parts of this report are excerpted from a story on the project featured in the Digital Manufacturing Report. You can read the full story at http://www.digitalmanufacturingreport.com/dmr/2013-04-22/on_cloud_nine.html


Team 9

Simulation of Flow in Irrigation Systems to Improve Product Reliability

MEET THE TEAM

End User – Manufacturing company
The end-user is an American manufacturer of residential and commercial irrigation products with approximately 1,200 total employees. Within the end-user corporation, the engineers are members of the product sustaining team, which is responsible for redesigns and refreshes of existing products. The simulation was applied to products that have been in the field for a number of years and are currently undergoing a redesign to better tolerate dirt and debris in the entering fluid.

Software Provider – Wim Slagter
Slagter is with ANSYS, an ISV that develops, markets and supports engineering simulation software.

Resource Provider – Ron Hawkins
Hawkins is the Director of Industry Relations for the San Diego Supercomputer Center (SDSC).

HPC Expert – Rick James
James is VP of Consulting Services for the Simutech Group.

USE CASE

In the residential and commercial irrigation products industry, product reliability is paramount – customers want their equipment to work every time, with low maintenance, over a long product lifetime. For engineers, this means designing affordable products that are rigorously tested before the device goes into production. Irrigation equipment companies employ a large force of designers, researchers and engineers who use CAD packages to develop and manufacture the products, and CAE analysis programs to determine the products’ reliability, specifications and features.

“HPC and cloud computing will certainly be a valuable tool as our company seeks to increase its reliance on CFD simulation to reduce costs and time associated with the build-and-test iteration model of prototyping and design.”


CHALLENGES

As the industry continues to demand more efficiency along with greater environmental stewardship, the usage rate of recycled and untreated water for irrigation grows. Fine silt and other debris often exist in untreated water sources (e.g. lakes, rivers and wells) and cause malfunction of internal components over the life of the product. To prevent product failure, engineers are turning to increasingly fine meshes for CFD analysis, outpacing the resources of in-house workstations. To continue expanding the fidelity of these analyses within reasonable product design cycles, manufacturers are looking to cloud-based and remote computing for the heavy computation loads.

The single largest challenge we faced as end-users was the coordination with and application of the various resources presented to us.

For example, one roadblock was that when presented with a high-powered cluster, we discovered that the interface was Linux, which is prevalent throughout HPC. As industry engineers with a focus on manufacturing, we have little or no experience with Linux and its navigation. In the end, we were assigned another cluster with a Windows virtualization to allow for quicker adoption. We consistently found that while the resources had great potential, we didn’t have the knowledge to take full advantage of all of the possibilities because of the Linux interface and the complications of HPC cluster configurations.

Additionally, we found that HPC required becoming familiar with software programs that we were not accustomed to. Engineers typically use multiple software packages on a daily basis, and the addition of a new operating environment, GUI, and user controls added another roadblock to the process. The increased use of scripting and software automation lengthened the learning curve.

Knowledge of HPC-oriented simulation was also lacking on the end-user side. As the end-user engineer’s knowledge was limited to in-house and small-scale simulation, optimizing the model and mesh(es) for more powerful clusters proved to be cumbersome and time-intensive.

As we began to experiment with extremely fine mesh conditions, we ran into a major issue. While the CFD solver itself scaled well across the computing cluster, every increase in mesh size took significantly more time for mesh generation, in addition to dramatically slowing the set-up times. Therefore, with larger/finer meshes, the bottleneck moved from the solve time to the preparation time.

BENEFITS

At the conclusion of the experiment, the end-user was able to assess the potential of HPC for the future of simulation within the company.

Another crucial benefit was the comparison of mesh refinements to find an accurate compromise between fidelity and practicality. The results suggested a “sweet spot” – one that would balance user set-up time with computing costs and would deliver timely, consistent, precise results. As suggested by the experiment, running a fine mesh on 32 compute cores proved to be a good balance of affordable hardware and timely, accurate results.

CONCLUSIONS AND RECOMMENDATIONS

The original cluster configuration offered by SDSC was Linux-based, but the standard Linux interface provided was not user-friendly for the end user’s purposes. To accommodate the end user’s needs, the SDSC team decided to try running Windows in a large, virtual shared-memory machine using the vSMP software on SDSC’s ‘Gordon’ supercomputer. Using vSMP with Windows on Gordon offers the opportunity to provision a one-terabyte Windows virtual machine, which can provide a significant capability for large modeling and simulation problems that do not scale well on a conventional cluster. Although the team was successful in getting ANSYS CFX to run in this configuration on up to 16 cores (we discovered the 16-core limitation was due to the version of Windows installed on the virtual machine), various technical topics around remote access and licensing could not be completely addressed within the timeframe of this project and precluded running actual simulations for this phase. Following the Windows test, the SDSC team recommended moving back to the proven Linux environment, which as noted previously was not ideal for this particular end user.

Due to time constraints and the aforementioned Linux vs. Windows issues, end user simulations were not run on the SDSC resources for this phase of the project. However, SDSC has made the resource available for an additional time period should the end user desire to try simulations on the SDSC system. The end user states that they learned a lot and still intend to benchmark the results for the team members’ data, but they do not have any performance or scalability data to show at this time. The results given above in terms of HPC performance were gathered using the Simutech Group’s Simucloud cloud computing HPC offerings.

From the SDSC perspective, this was a valuable exercise in interacting with and discovering the use cases and requirements of a typical SME end user. The experiment in running CFX for Windows on a large shared-memory (1 TB) cluster was valuable and provided SDSC with an opportunity to explore how this significant capability might be configured for scientific and industrial users computing on “Big Data.” Another finding is that offering workshops for SMEs on running simulation software at HPC centers may be a service that SDSC can offer in the future, in conjunction with its Industrial Affiliates (IA) program.

The end user noted, “Having short-term licenses which scale with the need of a simulation greatly reduces our costs by preventing the purchase of under-utilized HPC packs for our company’s in-house simulation.”

Summarizing his overall reaction to the project, the end user had this to say: “HPC and cloud computing will certainly be a valuable tool as our company seeks to increase its reliance on CFD simulation to reduce costs and time associated with the build-and-test iteration model of prototyping and design.”

2012 – Case Study Authors – Rick James, Wim Slagter, and Ron Hawkins


Team 34

Analysis of Vertical and Horizontal Wind Turbines

MEET THE TEAM

End User – Henrik Nordborg
Nordborg is a professor at the HPC center of the University in Switzerland.

Software Provider – ANSYS Fluent and NICE visualization software
The team used ANSYS Fluent and three of its ANSYS HPC Packs because of Fluent's strengths in analyzing complex fluid dynamic systems.

Resource Provider – Penguin Computing
Penguin provides Linux-based servers, workstations, HPC systems and clusters, and Scyld ClusterWare.

HPC/CAE Expert – Juan Enriquez Paraled
Paraled is the manager of ANALISIS-DSC, a mechanical engineering service and consultancy company specialized in fluid, structural and thermal solutions.

USE CASE

The goal was to optimize the design of wind turbines using numerical simulations. The case of vertical-axis turbines is particularly interesting, since the upwind turbine blades create vortices that interact with the blades downstream. The full influence of this can only be understood using transient flow simulations, requiring large models to run for a long time.

CHALLENGES

In order to test the performance of a particular wind turbine design, a transient simulation had to be performed for each wind speed and each rotational velocity. This led to a large number of very long simulations, even though each model might not be very large. Since the different wind speeds and rotational velocities were independent, the computations could be trivially distributed on a cluster or in the cloud.
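As a simple illustration of how such an embarrassingly parallel parameter sweep can be organized (this sketch is not from the original study; the wind speeds, rotational velocities, and case names are made up for illustration), each (wind speed, rotational velocity) pair becomes one independent transient run that can be submitted as its own cluster or cloud job:

```python
# Illustrative sketch: enumerate the independent (wind speed, rotational velocity) cases
# of a parametric wind-turbine study. All values and names are placeholders.
from itertools import product

wind_speeds_ms = [4, 6, 8, 10, 12]        # hypothetical wind speeds [m/s]
rotation_speeds_rpm = [20, 30, 40, 50]    # hypothetical rotational velocities [rpm]

jobs = []
for u, omega in product(wind_speeds_ms, rotation_speeds_rpm):
    # Each case is fully independent, so it can be submitted as its own cluster or
    # cloud batch job that launches one transient CFD run.
    jobs.append(f"turbine_u{u}_rpm{omega}")

print(f"{len(jobs)} independent transient simulations to distribute")
```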

“Cloud computing would be an excellent option for these kinds of simulations if the HPC provider offered remote visualization and access to the required software licenses.”


Figure 1: 2D simulation of a rotating vertical wind turbine.

Another important use of HPC and cloud computing for wind power is parametric optimization. Again, if the efficiency of the turbine is used as the target function, very long transient simulations have to be performed to evaluate every configuration.

BENEFITS

The massive computing power required to optimize a wind turbine is typically not available locally. Since only some steps of the design require HPC and an on-site cluster would never be fully utilized, cloud computing offers an obvious solution.

CONCLUSIONS AND RECOMMENDATIONS

The problem with cloud computing for simulations using commercial tools is that the number of licenses is typically the bottleneck. Obviously, having a large number of cores does not help if there are not enough parallel licenses. In our case, a number of test licenses were provided by ANSYS, which was very helpful.

It is not practical to transfer data back and forth between the cluster and a local workstation. Therefore, any HPC facility needs to provide remote access for interactive use. Unfortunately, this was not available in our case.

A test performed on the Penguin cluster showed an 8% increase in speed (per core) compared with our local Windows cluster. This speedup was surprisingly small, given that Penguin uses a newer generation of CPUs with much better theoretical floating-point performance. This again demonstrates that simulations on an unstructured grid are bandwidth limited.

To conclude, cloud computing would be an excellent option for these kinds of simulations if the HPC provider offered remote visualization and access to the required software licenses.


Figure 2: CFD Simulation of a vertical wind turbine with 3 helical rotors.

2012 – Case Study Author – Juan Enriquez Paraled


Team 36

Advanced Combustion Modeling for Diesel Engines

MEET THE TEAM

End User and HPC Expert – Dacolt
Dacolt, headquartered in the Netherlands, offers software and services for CFD (computational fluid dynamics) modeling of industrial combustion applications, providing innovative tools and expertise to support its customers in realizing their fuel efficiency and pollutant emissions design goals.

Resource Provider – Penguin On Demand (POD)
POD is Penguin Computing’s on-demand HPC cloud service.

Software Provider – ANSYS, Inc.
ANSYS develops and globally markets engineering simulation software and technologies widely used by engineers and designers.

USE CASE

Modeling combustion in Diesel engines with CFD is a challenging task. The physical phenomena occurring in the short combustion cycle are not fully understood. This especially applies to the liquid spray injection, the auto-ignition and flame development, and the formation of undesired emissions like NOx, CO and soot.

Dacolt has developed an advanced combustion model named Dacolt PSR+PDF, specifically meant to address these types of challenging cases where combustion-initiating chemistry plays a large role. The Dacolt PSR+PDF model has been implemented in ANSYS Fluent and was validated on an academic test case (SAE paper 2012-01-0152). An IC engine validation case is the next step, tackled in the context of the HPC Experiment in the Penguin Computing HPC cloud.

“…remote clusters allow small companies to conduct simulations that previously were only possible by large companies and government labs.”


Simulation result showing the flame (red) located on top of the evaporating fuel spray (light blue in the center)

CHALLENGES

The current challenge for the end-user operating with just in-house resources is that the computational resources needed for these simulations are significant (i.e. more than 16 CPUs and one to three days of continuous running).

BENEFITS

The benefit for the end-user of using remote resources was that remote clusters allow small companies to conduct simulations that previously were only possible for large companies and government labs.

End-user findings on the provided cloud access include:

• Startup:
  o POD environment setup went smoothly
  o ANSYS software installation and licensing went smoothly as well
• System:
  o POD system OS comparable to the OS used at Dacolt
  o ANSYS Fluent version same as used at Dacolt
• Running:
  o Getting used to POD job scheduling
  o No portability issues of the CFD model in general
  o Some MPI issues related to Dacolt’s User Defined Functions (UDFs)
  o Solver crash during the injection + combustion phase, to be investigated

Overall, we experienced easy-to-use SSH access to the POD cluster. The environment and software set-up went smoothly with collaboration between POD and ANSYS. The remote environment, which nearly equaled the Dacolt environment, provided a head start. The main issue encountered: the uploaded Dacolt UDF library for Fluent did not work in parallel out of the box. It is likely that the Dacolt User Defined Functions would have to be recompiled on the remote system.

Project results

An IC engine case was successfully run until solver divergence, to be reviewed by Dacolt with ANSYS support. Dacolt model validation seems promising.

Anticipated challenges included:

• Account set-up and end-user access
• Configuring the end-user’s CFD environment with ANSYS Fluent v14.5
• Educating the end-user in using the batch queuing system
• Getting data in and out of the POD cloud

Actual barriers encountered:

• Running end-user UDFs with Fluent in parallel gave MPI problems

CONCLUSIONS AND RECOMMENDATIONS

• Use of POD remote HPC resources worked well with ANSYS Fluent
• Although the local and remote systems were quite comparable in terms of OS, etc., systems like MPI may not work out of the box
• Local and remote network bandwidth was good enough for data transfer, but not for tunneling CAE graphics using X
• Future use of remote HPC resources depends on the availability of pay-as-you-go commercial CFD licensing schemes

2013 – Case Study Author – Ferry Tap


Team 54

Analysis of a Pool in a Desalinization Plant

MEET THE TEAM

End User – Juan Enriquez Paraled, Manager of ANALISIS-DSC

Software Provider – ANSYS CFX and CEI EnSight Gold
We used ANSYS CFX and three of its ANSYS HPC Packs because of CFX's strengths in analyzing complex fluid dynamic systems. Additionally, we used CEI EnSight Gold to visualize and analyze the CFD results without the need to download big files over the Internet.

Resource Provider – Gompute, a cloud system with 48 dedicated cores

HPC/CAE Expert – Henrik Nordborg, professor at the HPC center of the University in Switzerland

USE CASE

Many areas in the world have no available fresh water even though they are located in coastal areas. As a result, in recent years a completely new industry has been created to treat seawater and transform it into tap water. This transformation requires that the water be pumped into special equipment, which is very sensitive to cavitation. Therefore, a correct and precise water flow intake must be forecast before building the installation.

The CFD analysis of air-water applications using free-surface modeling is highly complex. The computational mesh must correctly capture the fluid interface, and the number of iterations required to obtain a physically and numerically converged solution is very high. If these two requirements are not met, the forecast solution will not even be close to the real-world solution.

CHALLENGES

The end-user needed to obtain a physical solution in a short period of time, as the time to analyze the current design stage was limited. The time limitation mandated the use of remote HPC resources to meet the customer’s time requirements. As usual, the main problem was the transfer of result data between the end-user and the HPC resources. To overcome this problem, the end-user used the visualization software EnSight to look at the solution and obtain images and animations completely over the Internet.

The following table provides an evaluation of the Gompute on demand solution:

| Criteria                     | In-house cluster | Ideal cloud HPC | Gompute on demand |
|------------------------------|------------------|-----------------|-------------------|
| Uploading speed              | 11.5 MB/s        | 2 MB/s          | 2-3 MB/s          |
| Downloading speed            | 11.5 MB/s        | 2 MB/s          | 4-5 MB/s          |
| Ease of use                  | reasonable       | excellent       | excellent         |
| Refresh rate                 | excellent        | excellent       | good              |
| Latency                      | excellent        | excellent       | excellent         |
| Command line access          | possible         | possible        | possible          |
| Output file access           | possible         | possible        | possible          |
| Run on the reserved cluster  | easy             | easy            | easy              |
| Run on the on-demand cluster | N/A              | easy            | easy              |
| Graphical node               | excellent        | excellent       | excellent         |
| Using UDFs on the cluster    | possible         | possible        | possible          |
| State-of-the-art hardware    | good             | good            | good              |
| Scalability                  | poor             | excellent       | excellent         |
| Security                     | excellent        | excellent       | good              |

Remote Visualization

It is possible to request a graphically accelerated node when starting programs with a GUI. This functionality substantially cuts virtual prototyping lead time, since all the data generated from a CAE simulation can be visualized directly in Gompute. This also avoids time-consuming data transfers and increases data security by removing the need to have multiple copies of the same data at different locations – sometimes on insecure workstations. The end user categorized the Gompute VNC-based solution as excellent.


Gompute accelerators allow the use of the desktop over links with latency over 300 ms. This allows Gompute resources to be used from locations separated by as much as 160 degrees of longitude – i.e., the user may be in India and the cluster in Detroit. Collaborative workflows are enabled by the Gompute remote desktop sharing option, so two users at different geographical locations can work together on the same simulation.

Ease of Use

Gompute on demand provides a ready-to-use environment with an integrated repository of the requested applications, license connection, and a queuing system based on SGE. To establish the connection to the cluster, you just open ports 22 and 443 on the company’s firewall. Downloading the Gompute Xplorer and opening a remote desktop gives you the same user experience as working with your own in-house machine. Compared to other tested HPC connection modes, Gompute connections were easy to set up and use. The connection allowed connecting and disconnecting to the HPC account to check how the calculations were progressing. As to costs, the Gompute quotation clearly described the services provided. The technical support from Gompute personnel was also good.

BENEFITS

• Compute remotely
• Pre/post-process remotely
• Gompute can be used as an extension of in-house resources
• Able to burst into Gompute On-Demand from an in-house cluster
• Accelerated file transfers
• Possible to have exclusive desktops
• Support for multiple users on each graphics node
• Applications integrated and ready to use
• GPFS storage available
• Handles high-latency links between the user and the Gompute cluster
• Facilitates collaboration with clients and support

CONCLUSIONS AND RECOMMENDATIONS

The bottleneck in all CAE simulations using commercial software is the cost of the commercial CFD licenses. The lessons learned were:

• ANSYS has no on-demand CFD license that allows using the maximum number of available cores in a system, while competitor software, such as STAR-CCM+, already has such a license.
• Supercomputing centers must provide analysis/post-processing tools for customers to check results without the need to download result files – otherwise, many of the advantages of using cloud computing are lost because of long data file transfer times.
• The future for the wider use of supercomputing centers is to find a way to offer commercial CAE (CFD and FEA) licenses on demand in order to pay for the actual software usage. Commercial software must take full advantage of current and future hardware developments for the wider spread of virtual engineering tools.

2013 – Case Study Authors – Juan Enriquez Paraled, Manager of ANALISIS-DSC; Ramon Diaz, Gompute


Team 56

Simulating Radial and Axial Fan Performance

MEET THE TEAM

End User – A company specializing in the design, development, manufacturing, sales, distribution and service of air and gas compressors
Software Provider – Wim Slagter, ANSYS Inc., Netherlands
Resource Provider – Ramon Diaz, Gompute (Gridcore AB), Sweden
HPC Team Expert – Oleh Khoma, Eleks
Team Mentor – Dennis Nagy, BeyondCAE and UberCloud HPC Experiment

USE CASE

For the end user, the aim of the exercise was to evaluate the HPC cloud service without the need to obtain new engineering insights. That’s why a relatively basic test case was chosen – a case for which they already had results from the end user’s own cluster, and which had a minimum of confidential content. The test case was the simulation of the performance of an axial fan in a duct similar to those found in the AMCA standard. A single ANSYS Fluent run simulated the performance of a fan under 10 different conditions to reconstruct the fan curve. The mesh consisted of 12 million tetrahedral cells and was suited to test parallel scalability.

CHALLENGES

The main reason to look to HPC in the cloud is cost. The end user has a highly fluctuating simulation load, which means that their current on-site cluster rarely has the correct capacity: when it is too large, they are paying too much for hardware and licenses; when it is too small, they are losing money because the design teams are waiting for results. With a flexible HPC solution in the cloud, the end user can theoretically avoid both costs.

“HPC in the cloud is technically feasible. Most remaining issues are implementation related, and the resource provider should be able to solve them.”


Evaluation

HPC as a service will only be an alternative to the current on-site solution if it manages to meet a series of well-defined criteria as set by the end user.

| Criteria                     | Local HPC  | Ideal cloud HPC | Actual cloud HPC | Pass/Fail |
|------------------------------|------------|-----------------|------------------|-----------|
| Upload speed                 | 11.5 MB/s  | 2 MB/s          | 0.2 MB/s         | Fail      |
| Download speed               | 11.5 MB/s  | 2 MB/s          | 4-5 MB/s         | Pass      |
| Graphical output             | possible   | possible        | inconvenient     | Fail      |
| Quality of the image         | excellent  | excellent       | good             | Pass      |
| Refresh rate                 | excellent  | excellent       | good             | Pass      |
| Latency                      | excellent  | excellent       | good             | Pass      |
| Command line access          | possible   | possible        | possible         | Pass      |
| Output file access           | possible   | possible        | possible         | Pass      |
| Run on the reserved cluster  | easy       | easy            | easy             | Pass      |
| Run on the on-demand cluster | N/A        | easy            | easy             | Pass      |
| Graphical node               | excellent  | excellent       | good             | Pass      |
| Using UDFs on the cluster    | possible   | possible        | possible         | Pass      |
| State-of-the-art hardware    | reasonable | good            | good             | Pass      |
| Scalability                  | poor       | excellent       | excellent        | Pass      |
| Security                     | excellent  | excellent       | good             | Pass      |
| Hardware cost                | good       | excellent       | N/A              | N/A       |
| License cost                 | good       | excellent       | N/A              | N/A       |

Table 1 - Evaluation results

Cluster Access

Gridcore allows you to connect to its clusters through the Gompute Xplorer, a Java-based program that lets you monitor your jobs and launch virtual desktops. Establishing the connection was actually not that easy. If the standard SSH and SSL ports (22 and 443) are open in your company’s firewall, then connecting is straightforward. This is, however, rarely the case. Alternatively, you can make the connection through a VPN. Both options require that the end user make changes to the firewall. Because the end user had to wait a long time for these changes to be implemented, valuable time was lost. Only the port changes were implemented, so the VPN option was never tested.

Transfer Speed

Input files, and certainly result files, for typical calculations range from a couple of hundred megabytes to a couple of gigabytes in size. Therefore, a good transfer speed is of vital importance. The target is a minimum of 2 MB/s for both upload and download, which means that it is theoretically possible to transfer 1 GB of data in about 8.5 minutes.


When transferring files with the Gompute Xplorer, upload speeds of 0.2 MB/s and download speeds of about 4-5 MB/s were measured. When transferring the same files with a regular SSH client, the upload speed was 1.7 MB/s and the download speed 0.9 MB/s. These speeds were measured while transferring the same files several times, and the tests were performed one after the other to ensure a fair comparison. The measurements show that theoretically reasonable to good transfer speeds are possible, but so far no solution was found to get the Gompute Xplorer’s upload speed up to par.

As noted by the resource provider, most clients get a speed that depends on their bandwidth, and the low numbers measured here are quite abnormal. Several tests were performed in the system seeking the root cause of the issue, but none was found. The investigation would have continued until a solution was found, but not within the time frame of the experiment. It might be more practical to wait for a new file transfer tool that Gompute plans to roll out shortly and that might resolve this issue.
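To put these rates in perspective, a short Python sketch (not part of the original report) computes the time needed to move 1 GB at the target rate and at the measured rates quoted above:

```python
# Transfer time for a 1 GB file at the target rate and at the rates reported above.
FILE_SIZE_MB = 1024.0  # 1 GB expressed in MB

rates_mb_per_s = {
    "target (up and down)":     2.0,
    "Gompute Xplorer upload":   0.2,
    "Gompute Xplorer download": 4.5,  # midpoint of the reported 4-5 MB/s
    "SSH client upload":        1.7,
    "SSH client download":      0.9,
}

for label, rate in rates_mb_per_s.items():
    minutes = FILE_SIZE_MB / rate / 60.0
    print(f"{label:26s}: {minutes:6.1f} min per GB")
```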

Graphical output in batch

To see how the flow develops over time, it is common practice to output some images from the flow field. Fluent cannot do this with just a command line but requires an X window to render to. The end user was not able to make this option work on the Gompute cluster within the allocated timeframe. Several suggestions (mainly different command line arguments) have been put forward to resolve this issue.

Remote visualization

The end user used the HP Remote Graphics Software package, which gave an experience close to working locally. If we categorize HP RGS as excellent, the VNC-based solution of Gompute can surely be categorized as good. There was a noticeable difference between the dedicated cluster and the on-demand one with regard to the quality of the remote visualization (these are both remote Gompute clusters – the dedicated one was specifically reserved for the end user). The dedicated cluster’s render quality and latency were much better. It is entirely possible to do pre- and post-processing on the cluster. It is also possible to request a graphically accelerated node when starting programs with a GUI.

Ease of use

The Gompute remote cluster uses the same queuing system (SGE) as the end user’s cluster, so the commands are familiar. The fact that you can request a full virtual desktop makes using the system a breeze. This virtual desktop allows for easy compilation of the UDFs (C code to extend the capabilities of Fluent) on the architecture of the remote cluster. Submitting and monitoring jobs is just as easy as on the local cluster. The process is also identical on the dedicated and the on-demand cluster. Apart from the billing method, there is no additional overhead when you temporarily want to expand your simulation capacity by using the on-demand cluster.

Hardware

The hardware that was made available to the end user was less than two years old (Westmere Xeons). This was considered to be good; Sandy Bridge-based Xeons would have been considered excellent. The test case was used to benchmark the Gompute cluster against the end user’s own aging cluster.


Fig. 1 - Comparison of run times of the test case.

The time it took to run the simulation on 16 cores of the local cluster is the reference; the speedup is defined relative to this time. The blue curve represents the old, local cluster and the red curve the on-demand cluster from Gompute. The green point is from a run on a workstation that has a similar hardware configuration to the cluster from Gompute but runs Windows instead of Linux.

The following points can be concluded from this graph:

• The old cluster isn’t performing all that badly considering its age. Either that, or a larger speedup was expected from the new hardware.
• The simulation scales nicely on the Gompute cluster, but not as well on the local cluster.
• The performance of the workstation is similar to that of the Gompute cluster.
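For readers who want to reproduce this kind of speedup plot from their own timings, the following Python sketch (illustrative only; the runtimes below are made-up placeholders, not the measured values from this experiment) shows how speedup normalized to the 16-core local reference run is computed:

```python
# Speedup normalized to a 16-core reference run, as plotted in Fig. 1 (placeholder runtimes).
reference_runtime_h = 10.0  # hypothetical wall-clock time of the 16-core local reference run

cloud_runtimes_h = {        # hypothetical wall-clock times on the cloud cluster
    16: 9.0,
    32: 4.8,
    64: 2.6,
    128: 1.5,
}

for cores, runtime in cloud_runtimes_h.items():
    speedup = reference_runtime_h / runtime  # >1 means faster than the 16-core reference
    print(f"{cores:4d} cores: speedup {speedup:4.1f}x relative to the 16-core local run")
```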

Cost

The resource provider only provides hardware; the customer is still responsible for acquiring the necessary software licenses. The cost benefit is therefore limited to hardware and support.

The most likely customer base for the on-demand cluster service are companies that either rarely run a simulation or occasionally need extra capacity. In both cases they would have to pay for a set of licenses that are rarely used. This does not seem to be a very good solution and may become a showstopper for adopting HPC in the cloud. Hopefully, ANSYS will come up with a license model that enables a service more in line with HPC in the cloud.

BENEFITS

End User
• Ease of use.
• Post- and pre-processing can be done remotely.
• Excellent opportunity to test the state of the art in cloud-based HPC.


CONCLUSIONS AND RECOMMENDATIONS

• HPC in the cloud is technically feasible. Most remaining issues are implementation related, and the resource provider should be able to solve them.
• The remote visualization solution was good and allowed the user to actually perform some real work. Of course, it remains to be seen whether a stress test with multiple users from the same company yields the same results.
• The value of the HPC-in-the-cloud solution is limited by the absence of appropriate license models from the software vendors that would allow Gompute to actually sell simulation time and not just hardware and support.
• Further rounds of this experiment can be used to analyse the abnormal upload speed. File transfer might be tested using the VPN connection to rule out restrictions from the company’s firewall. Also of interest is testing the new release of the Gompute file transfer tool, which implements a transfer accelerator.
• Different graphical node configurations can be tested to enhance the user experience.

2013 – Case Study Authors – Wim Slagter, Ramon Diaz, Oleh Khoma, and Dennis Nagy.

Note: The illustration at the beginning of this report shows pressure contours in front/behind a 6-bladed axial fan.


Team 94

Gas-liquid Two-phase Flow Application

MEET THE TEAM

End User – Kyoji Ishikawa, Chiyoda Corp.
Chiyoda is an integrated contractor serving mainly the hydrocarbon and chemical industries. It provides project and program management, engineering, procurement, construction and commissioning, O&M, and asset management.

Software Provider – Wim Slagter, ANSYS, Inc.
ANSYS develops, markets and supports engineering simulation software such as computational fluid dynamics (CFD) tools.

Resource Provider and Team Expert – Hiroyuki Kanazawa, Fujitsu Ltd.
Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions and services. Fujitsu provides HPC services, including a virtual desktop function, under the name Technical Computing (TC) Cloud.

USE CASE

The use case is an evaluation of the rate of gas entrainment due to liquid flow in the liquid storage facility of an energy plant (see Figure 1 below). The gas-liquid two-phase flow was simulated using computational fluid dynamics (CFD) software. The CFD simulation was carried out under the following conditions:

• Use of a volume of fluid (VOF) model.

• Two simulation cases were carried out. Total mesh numbers are 300,000 for case 1 and 2 million for case 2.

• Computation time for case 1 was 20 hours; case 2 required 5 days.

• Application software – ANSYS Fluent

• Computing Resource: 32 parallel cores (16 cores × 2 nodes) and several gigabytes HDD space

• Network – Gigabit Ethernet (GbE) or InfiniBand (IB)

• Pre- and post-processing environment (remote visualization)

“Calculation speeds were as fast as expected and post processing with the desktop display acceleration technology RVEC was almost the same as working in a local environment.”


Figure 1: Simulation image of liquid flow in a storage plant (liquid and gas regions).

Because the end-user sometimes experiences a computational resource shortage for urgent, unexpected or unplanned projects, he decided to explore the use of remote cloud computing. In this project, a multiphase flow simulation was carried out using ANSYS Fluent running in the Fujitsu Technical Computing (TC) Cloud. This was the end-user’s first attempt at CFD simulation using cloud computing. Our objective was to experience the end-to-end process of using CFD software on a cloud computing service. Project objectives were to:

• Confirm the running of ANSYS Fluent on Fujitsu TC Cloud computing.

• Master how to use CFD software when running in the cloud.

• Confirm the calculation speed on TC Cloud computing.

• Clarify the problem and discover the benefits of using TC Cloud – this assumes that the end user uses the cloud service when computer resources are limited because of unexpected or unplanned projects.

Project execution: End-to-end process

The team had a kick-off meeting at which the team partners agreed upon the required hardware and software resources. The software provider issued the following software licenses:

• ANSYS Fluent Solver: 2 licenses

• ANSYS HPC Pack: 4 licenses

• ANSYS CFD-Post: 2 licenses

As part of the preparation for accessing the TC Cloud, the resource provider installed ANSYS Fluent in the cloud. The resource provider also constructed a user interface dedicated to ANSYS Fluent, including an HPC portal so that the end user could easily run calculations. The HPC Portal is a web-based interface for using the TC Cloud. To upload the ANSYS Fluent input file, the end user accessed the TC Cloud via a VPN connection. Using the HPC Portal interface, the end user confirmed that ANSYS Fluent was indeed running and then measured the calculation speed.

CHALLENGES

The resource provider took a week to prepare the cloud computing system following the end-user’s request. However, the end-user needs quicker preparation for urgent jobs and has often used a rental server for such cases, which also took more than a week. It is therefore important that the preparation period for cloud computing be shorter than that required for the rental server.

Figure 2: Fujitsu HPC Portal for ANSYS Fluent.

BENEFITS

Calculation speeds were as expected. The figures below compare the theoretical and actual calculation speeds. As we increased the number of computational cores, the calculation speeds increased. When using 32 cores on the HPC system with an InfiniBand network, case 1 and case 2 achieved about 70% of the theoretical speed.
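As a small illustration of how that 70% figure can be computed (this sketch is not from the original report, and the runtimes are placeholders), parallel efficiency is the measured speedup divided by the ideal, linear speedup:

```python
# Parallel efficiency = measured speedup / ideal (linear) speedup. Runtimes are placeholders.
def parallel_efficiency(t_reference_h: float, t_parallel_h: float, n_cores: int) -> float:
    """Efficiency on n_cores, with t_reference_h the single-core reference runtime."""
    measured_speedup = t_reference_h / t_parallel_h
    return measured_speedup / n_cores

# Hypothetical example: a job needing 64 hours on one core finishes in 2.86 hours on
# 32 cores -> speedup ~22.4x, i.e. about 70% of the ideal 32x.
print(f"efficiency: {parallel_efficiency(64.0, 2.86, 32):.0%}")
```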

Figure 3: Case 1 calculation speed ratio (speed ratio versus number of parallel cores; theoretical, Gigabit Ethernet, and InfiniBand curves).

Post processing (remote visualization)

Calculation results were visualized with the HPC Portal’s virtual desktop function. The HPC Portal uses a virtual desktop display acceleration technology named RVEC, which reduces the amount of transferred data and improves operational response. Figure 5 shows a visualization image of the CFD simulation result. Despite the large size of the CFD output file, imaging speed and operability were almost the same as in the local server environment. It is possible to perform the post processing without transferring the result data to the local server.


Figure 4: Case 2 calculation speed ratio (speed ratio versus number of parallel cores; theoretical, Gigabit Ethernet, and InfiniBand curves).

Figure 5: Visualization image showing the flow path and volume fraction of liquid and gas (colored contours show the volume fraction; arrows show streamlines of the gas phase).

CONCLUSIONS AND RECOMMENDATIONS After finishing the calculations, the end user needed to transfer all the result data from the cloud server to a local server. Because the result files were so large (7 GB for case 1; 20 GB for case 2), the transfer took several days. The end user clearly needs a faster data transfer system. Calculation speeds were as fast as expected, and post processing with the virtual desktop function felt almost the same as working in a local environment; however, after finishing all the calculations, the data transfer took a long time. We expect improvements as high-speed data transfer technologies mature. There was no need for physical delivery of storage media in this experiment, but it might be useful when the data size is larger.
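For context on the multi-day transfers, the wall-clock time is dominated by the effective WAN bandwidth. A back-of-the-envelope sketch; the link speeds are assumptions for illustration, not measurements from this experiment:

```python
def transfer_hours(size_gb, link_mbps, usable_fraction=0.7):
    """Rough wall-clock hours to move size_gb over a link of link_mbps,
    assuming only a fraction of the nominal bandwidth is usable."""
    bits = size_gb * 8e9
    return bits / (link_mbps * 1e6 * usable_fraction) / 3600

for size in (7, 20):                 # case 1 and case 2 result sizes in GB
    for mbps in (2, 10, 100):        # hypothetical effective link speeds
        print(f"{size} GB over {mbps} Mbit/s: ~{transfer_hours(size, mbps):.1f} h")
```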

2013 – Case Study Authors – Kyoji Ishikawa, Wim Slagter, Hiroyuki Kanazawa


Team 118

Coupling In-house FE Code with ANSYS Fluent CFD

MEET THE TEAM End user - Hubert Dengg, Rolls-Royce Deutschland, works as a thermal analyst, specializing in temperature predictions for jet engine components based on CFD and Finite Element methods. Software Provider - Wim Slagter and René Kapa, ANSYS, Inc. Resource Provider - Thomas Gropp and Alexander Heine, CPU 24/7 GmbH & Co. KG. CPU 24/7 is an innovative company specializing in remote HPC systems and computing power "on demand," either as permanently available tailored configurations or as flexibly usable computing capacity via the CPU 24/7 Resource Area, both provided as a ready-to-work workplace environment. HPC/CAE Expert - Prof. Dr. Marius Swoboda, Rolls-Royce Deutschland. Team Expert - Alexander Heine, CPU 24/7.

USE CASE In the present test case, a jet engine high pressure compressor assembly was the subject of a transient aerothermal analysis using the FEA/CFD coupling technique. Coupling is achieved through an iterative loop with smooth exchange of information between the FEA and CFD simulations at each time step, ensuring consistency of temperature and heat flux on the coupled interfaces between the metal and the fluid domains. The aim of the HPC experiment was to link the commercial CFD code ANSYS Fluent with an in-house finite element (FE) code. This was done by extracting heat flux profiles from the Fluent CFD model and applying them to the FE model. The FE model provides metal temperatures in the solid domain. This conjugate heat transfer process is very demanding in terms of computing power, especially when 3D CFD models with more than 10 million cells are required. As a consequence, we expected that using cloud resources would have a beneficial effect on computing time.

“Outsourcing of computational workload to an external cluster allowed us to distribute computing power in an efficient way – especially when the in-house computing resources were already at their limit.”


Figure 1: Contours of total temperature for a jet engine component.

First, we set up a validation case to check the whole simulation procedure. The second case added much more complexity and served as a benchmark for performance testing. The purpose of the CFD/FE coupling experiment was to test the process in a high performance computing environment; the focus was on the speedup that can be achieved when running the process on a multicore computer cluster instead of a normal Windows workstation.

Figure 2: Contours of heat flux.

CHALLENGES There were two main challenges that had to be addressed:

• The first challenge was to bring together software from different providers. Both the finite element code and the ANSYS Fluent CFD code have different license models that had to be implemented on the cluster environment.

• The second challenge was getting the Fluent process to run on several machines when called from the FE software. While an autonomous Fluent calculation spawned flawlessly on all cores, only one machine was used when Fluent was called from the FE code. After some adjustments the coupling procedure worked as expected.

Calculation Details: The computation was performed on the 32 cores of two nodes, each with dual Intel Xeon E5-2690 @ 2.9 GHz (16 cores per node) and 128 GB RAM, connected with a Mellanox InfiniBand FDR fabric. The calculation was done in cycles in which the FE code and Fluent CFD ran alternately, exchanging their results. As can be seen in Figure 3, only the CFD part of the calculation ran on all 32 cores, while the FE part ran sequentially on one core. The figure also indicates that the whole calculation spent most of its time in the CFD component, which was expected.

Figure 3: Cluster load during calculation cycles.
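The alternating cycle in Figure 3 can be pictured as a simple coupling loop. The sketch below uses toy, lumped stand-ins for both solvers; all functions and values are hypothetical placeholders, not Rolls-Royce's in-house code or the actual Fluent coupling scripts. It only illustrates the structure of exchanging heat flux and metal temperature at each time step.

```python
import numpy as np

# Toy, lumped (0-D) stand-ins for the real solvers; everything here is a
# hypothetical placeholder that only illustrates the alternating exchange.

def run_cfd_step(wall_temp, gas_temp=1200.0, h=500.0):
    """CFD side: heat flux [W/m^2] into the metal for the current wall temperature."""
    return h * (gas_temp - wall_temp)

def run_fe_step(wall_temp, heat_flux, dt, rho_cp_thickness=4.0e4):
    """FE side: update the lumped metal temperature from the applied heat flux."""
    return wall_temp + heat_flux * dt / rho_cp_thickness

def coupled_run(n_steps=50, dt=1.0, wall_temp=300.0):
    history = []
    for _ in range(n_steps):
        # CFD half-step (in the experiment: Fluent, parallel on all 32 cores).
        flux = run_cfd_step(wall_temp)
        # FE half-step (in the experiment: in-house FE code, sequential on one core).
        wall_temp = run_fe_step(wall_temp, flux, dt)
        history.append(wall_temp)
    return np.array(history)

temps = coupled_run()
print(f"lumped metal temperature after the coupling loop: {temps[-1]:.1f} K")
```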

BENEFITS From the end user’s point of view there were multiple advantages associated with using the external cluster resources:

• The outsourcing of the computational workload to an external cluster allowed the end user to distribute computing power in an efficient way – especially when the in-house computing resources were already at their limit.

• The problems that occurred during the experiment could be solved quickly thanks to good teamwork among all members of the project.

• We used dedicated Fluent licenses, which allowed the process to run independently of the in-house licensing and queuing system; therefore the process ran faster than it would have in-house.

• Bigger models usually give more detailed insights into the physical behavior of the system. Due to the enhanced computing power available on the cluster it is possible to run bigger jobs in a short time scale.

• We gained practical experience on how to run Fluent jobs. We used batch processing, which allowed us to solve certain sub steps with Fluent in combination with our in-house code.

In addition, the end user benefited from the HPC provider's knowledge of how to set up a cluster, run applications in parallel based on MPI, create a host file, handle the FlexNet licenses, and prepare everything needed for turn-key access to the cluster. During the entire process the resource provider stayed competently at the end user's side and provided comprehensive and expedient technical support.


CONCLUSIONS AND RECOMMENDATIONS • It has been shown that a speed-up factor of approximately 5x between the Windows PC and the cluster run could be achieved.

• It took only one month from the initial idea of doing this project to the first calculation done on the remote cluster, which is a very short period from an industrial point of view. This was possible because of the smooth collaboration between ANSYS and the CPU 24/7 team.

• The graphical performance was acceptable although, as expected, not comparable with running the jobs on local machines, especially taking into account the large size of the model. This could be improved by applying dedicated 3D remote-visualization tools. However, with the resources that were available the performance was quite satisfactory.

2014 – Case Study Authors – Hubert Dengg, Thomas Gropp, Alexander Heine, Wim Slagter, Marius Swoboda


Team 154

CFD Analysis of Geo-Thermal Perforation

in the Cloud

MEET THE TEAM End-User/FEA Expert: Devvrath Khatri, Foro Energy in Littleton, Colorado Software Provider: Wim Slagter, ANSYS Inc. Resource Provider and HPC Experts: John Van Workum, Sabalcore Computing Inc., Orlando, Florida

Foro Energy is commercializing high-power lasers for the oil, natural gas, geothermal, and mining industries. Its capability and hardware platform for transmitting high-power lasers over long-distance fiber-optic cables enable step-change performance in applications to drill, complete, and work over wells. Launched in 2009, Foro Energy is built upon a decade of academic work at the Colorado School of Mines and a novel approach to bust through the “sound barrier” of Stimulated Brillouin Scattering that previously made it impossible to transmit high-power lasers over long-distance fiber-optic cables. Sabalcore Computing, Inc. (www.sabalcore.com) provides High Performance Computing (HPC) Cloud services for government, commercial industry, and academic institutions, targeting life sciences, weather modelling, engineering and design, financial services, and oil and gas. Outsourcing HPC minimizes the daunting cost of administering and maintaining complicated Linux cluster systems. The company's business model allows on-demand, pay-as-you-go access to high-performance computing clusters and storage, which can be used to meet peak requirements or as a complete outsourced host for high-performance platforms.

USE CASE This experiment studied computational fluid dynamics (CFD) performed on various perforation setups using ANSYS Fluent software at Foro Energy, Littleton, CO. The scope is to characterize the flow propagation through the test setup and recommend the design changes needed to improve it. The whole setup is submerged in water at a given ambient pressure, i.e. the surrounding medium is filled with water at the given ambient pressure and is in abundance.

“The extra number of cores in the cloud we got access to helped us extremely in reducing the time needed to run all our simulations.”


The goal is to hit the target with the laser, and in order to do that we must reduce the dispersion of the laser in the medium. For that purpose a fluid (e.g. nitrogen or liquid CO2) is used as a guided flow: the path travelled by the laser is cleared of the water medium by a guided fluid jet at a high flow rate. The goal of the study is to determine the optimal flow rate of the guided fluid. The optimal flow rate ensures that the laser path is cleared of water and filled with the guided fluid, which is a better medium for laser propagation than water.

MODELING To determine an optimal flow rate, a numerical model is created and solved for different parameters. The first step is to create a solid model of the setup, which represents the actual laser operating system along with the walls and bottom of the tank/well used in the experiment or in actual field operation. This solid model is commonly generated in SolidWorks. From the detailed solid model of the actual laser operating system, a representative model is created and imported into ANSYS Design Modeler for further processing. Occasionally, importing the geometry from one software package (SolidWorks) to another (ANSYS Design Modeler) results in some discontinuities in the model; ANSYS Design Modeler is then used to clean and smooth the surfaces and patch any discontinuities that may have arisen during the import.

The fluid domain is then partitioned into smaller parts so that an efficient and optimal mesh can be generated in the next step. The fluid domain is transferred to the mesh generator and, with the tools and strategies available there, an optimal mesh for the fluid domain is generated. The final mesh is transferred to ANSYS Fluent for CFD analysis. Appropriate boundary conditions are applied at all faces of the domain, and a pressure field distribution is applied on the entire outlet boundary surface, taking into consideration the effect of gravity. All the outlet boundary surfaces also have water as the backflow condition. Based on the Reynolds and Prandtl numbers of the flow, an appropriate turbulence model, discretization schemes, and time iteration method were selected.

Figure 1: Sample steady state snap shot of guided fluid (e.g. nitrogen) phase distribution for all the corresponding inlet flow rate conditions.


A steady state model is then solved with ANSYS Fluent to understand the development of the nitrogen phase. Based on the total number of elements and the expected computational cost, the simulations are usually performed on a local desktop machine using parallel processing with a maximum of 8 cores, using the shared memory of the local machine. Fluent uses Intel MPI to divide the model equally across all cores, which helps reduce the overall time needed to finish the simulation.

In cases where the total number of elements is large and it is too expensive to run on the local system, i.e. the total time needed to finish the simulation is significantly long, we used Sabalcore servers to run our simulations. These servers are UNIX based, and the total number of cores available for a simulation also depends on the number of parallel-processing licenses we have, which is 2 in our case, allowing us to run a simulation on up to 32 cores. The simulation settings use an InfiniBand interconnect and ssh to communicate between nodes. After successful completion of the simulation, the data is transferred to CFD-Post for post processing and analysis, which can be performed directly on the Sabalcore HPC Cloud or locally.
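The 32-core limit follows from how ANSYS HPC Pack licenses combine: in the ANSYS releases of that period, each additional pack roughly quadrupled the enabled core count (8, 32, 128, and so on). A small sketch of that rule; the exact counts should be checked against the licensing terms of the installed release.

```python
def hpc_pack_cores(n_packs):
    """Approximate parallel cores enabled by n_packs ANSYS HPC Pack licenses (8, 32, 128, ...)."""
    return 0 if n_packs < 1 else 8 * 4 ** (n_packs - 1)

for packs in range(1, 5):
    print(f"{packs} HPC Pack(s) -> up to {hpc_pack_cores(packs)} cores")
```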

Figure 1 above presents a sample steady state snapshot of the guided fluid (e.g. nitrogen) phase distribution for the corresponding inlet flow rate conditions. The figure also shows the boundary conditions applied in this study and the monitoring zones. Pinlet is the inlet boundary condition where a fixed flow rate of nitrogen is applied. Vnozzle exit is the surface zone at the nozzle exit, where the flow velocity is computed and compared with the theoretical value. Ptank projected nozzle area is the projection of the nozzle on the wall surface/target, where the pressure value is computed during the simulation. Ymax nitrogen distance is the maximum distance the nitrogen is able to penetrate into the water domain for the given inlet flow rate. The red color is the guided fluid (nitrogen in this specific case) coming out of the nozzle and the blue color is the water body.

Figure 2: Flow field development inside the domain across the cross section; because the slot spans 360°, the nitrogen flow gets distributed.

CHALLENGES Most of the challenges we had to cope with were in the CAD/CAE process of setting up the geometry and physics, for example finding the right mesh for the geometry and the best-suited physical parameters such as the boundary conditions and the turbulence model. We did not face any challenges in accessing and using Sabalcore's cloud computing resources.


CONCLUSIONS In this study, we investigated the inlet guided fluid flow field for various scenarios and found the optimal inlet flow rate for each corresponding case. We studied cases where the guided fluid, primarily nitrogen in this study, is used to clear the path in the water medium. We also studied cases where the nitrogen flow is used to prevent debris and particles from entering the nozzle, and determined the effect of particle diameter and particle density on the penetration length of the particles.

ACKNOWLEDGEMENTS We would like to thank Sabalcore Computing for giving us generous access to their resources for running our simulations on their servers. The extra cores we got access to, compared with our local desktop, helped enormously in reducing the time needed to run all the simulations, as we had access to 32 cores per simulation compared with only 8 cores on our local desktop system. Special thanks go to Wolfgang Gentzsch at The UberCloud and John Van Workum at Sabalcore Computing Inc. and their team members, who came together and made this joint venture possible by obtaining all the necessary ANSYS (Fluent and HPC Pack) licenses, setting up the account on the servers, and handling various other tasks. With their timely support, we managed to run the simulations on the server without any interruptions; some of the large-scale models would not have been possible to simulate on our local desktop.

2014 – Case Study Author – Devvrath Khatri, Foro Energy


Team 160

Aerodynamics & Fluttering on an Aircraft Wing Using Fluid Structure Interaction

MEET THE TEAM End-User/CFD Expert – Praveen Bhat, Technology Consultant, India Software Provider – ANSYS, Inc. and UberCloud Container Resource Provider – ProfitBricks

USE CASE Fluid structure interaction problems are in general too complex to solve analytically, so they have to be analysed using experiments or numerical simulation. Studying this phenomenon requires modelling of both the fluid and the structure. In this case study, the aeroelastic behaviour and flutter instability of an aircraft wing in the subsonic incompressible flight speed regime are investigated. The project involved evaluating wing aerodynamic performance using computational fluid dynamics (CFD). The standard Goland wing was considered for this experiment. The CFD models were generated in an ANSYS environment. The simulation platform was built on a 62-core HPC cloud server with the ANSYS 15.0 modelling environment, accessed using a VNC viewer through a web browser. ProfitBricks provided the 62-core server with 240 GB RAM; CPU and RAM were dedicated to the single user, and this was the largest instance available at ProfitBricks. ProfitBricks uses an enhanced KVM (Kernel-based Virtual Machine) as its virtualization platform. Using KVM, the user can run multiple virtual machines; each virtual machine has private virtualized hardware: a network card, disks, graphics adaptor, etc. The following flow chart defines the fluid structure interaction framework for predicting the wing performance under aerodynamic loads:

Here is the step by step approach used to set up the Finite Element model using ANSYS Workbench 15.0 Environment.

“The whole user experience in the cloud was similar to accessing a website through the browser.”


1. Generate the Goland wing geometry using ANSYS Design Modeller, where the wing dimensions are defined by coordinates imported into the modelling environment as coordinate files (*.csv).

2. Develop the CFD model with atmospheric air volume surrounding the Goland wing in ANSYS Design Modeller.

3. Import the CFD model into the Fluent Computational Environment.

4. Define the model parameters, fluid properties, and boundary conditions.

5. Define solver setup and solution algorithm, mainly related to defining the type of solver, convergence criteria, and equations to be considered for solving the aerodynamic simulation.

6. Extract the pressure load on the wing surface, which is then coupled and applied on the structural wing geometry while solving the structural problem.

The Fluent simulation setup was solved in the HPC cloud environment. The simulation model needed to be precisely defined using a large number of fine mesh elements around the wing geometry. The following snapshots highlight the wing geometry considered and the Fluent mesh models.

Figure 1: Finite Element mesh model of the Goland wing

Figure 2: CFD mesh model for the wing geometry with surrounding air volume

The pressure load calculated from the CFD simulation was extracted and mapped on the Goland wing while evaluating the structural integrity of the wing. The following steps define the procedure for the structural simulation setup in ANSYS Mechanical:

1. Goland wing was meshed with ANSYS Mesh Modeller. Hexahedral mesh models were created.

2. The generated mesh was imported in the ANSYS Mechanical Environment where the material properties, boundary conditions etc. were defined.

3. The solution methods and solver setups were defined. The analysis setup mainly involves defining the type of simulation (steady state in this case), output result type (stress and displacement plots, strain plots etc.).

4. The pressure load extracted from the CFD simulation was mapped on the wing structure to evaluate the wing behaviour under aerodynamic loads (a simple sketch of this mapping follows below).
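Step 4 above is a one-way load transfer: nodal pressures on the CFD surface mesh are interpolated onto the structural mesh. ANSYS Workbench handles this mapping internally; the sketch below only illustrates the idea with a simple nearest-neighbour interpolation on hypothetical arrays, not data from this experiment.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical CFD surface nodes and pressures (what the CFD solver would export).
cfd_nodes = np.random.rand(5000, 3)          # x, y, z on the wetted wing surface
cfd_pressure = 1.0e5 + 5.0e3 * np.sin(cfd_nodes[:, 0] * np.pi)

# Hypothetical structural (FE) surface nodes that need a mapped pressure load.
fe_nodes = np.random.rand(1200, 3)

# Nearest-neighbour load transfer: each FE node takes the pressure of the closest
# CFD node. Real tools use conservative or profile-preserving mapping instead.
tree = cKDTree(cfd_nodes)
_, idx = tree.query(fe_nodes, k=1)
fe_pressure = cfd_pressure[idx]

print(f"mapped pressure range on FE mesh: {fe_pressure.min():.0f} - {fe_pressure.max():.0f} Pa")
```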

Figure 3: Pressure distribution, mid-section of wing Figure 4: Velocity distribution at mid-section of wing


Figure 5: Aerodynamic loads acting on wing wall load Figure 6: Wing deflection due to aerodynamic load

Figure 3 shows the pressure distribution at the mid-section of the Goland wing; the pressure distribution across the section is uniform. The velocity plot in Figure 4 shows that the air velocity varies near the edge of the wing. The air particle velocity is uniform, with particles following a streamlined path near the wing wall. Figures 5 and 6 show the aerodynamic loads on the wing, which are calculated from the pressure distribution on the wing wall. These aerodynamic loads are mapped on the wing structure and the deformation of the wing is simulated; the wing behaviour under the aerodynamic loads indicates its flutter stability.

HPC Performance Benchmarking The flutter stability study of the aircraft wing was carried out in an HPC environment built on a 62-core server with the CentOS operating system and the ANSYS Workbench 15.0 simulation package. The server performance was evaluated by submitting simulation runs for different numbers of elements: the higher the element count, the more time required to run the simulation, and the run time can be reduced by using more cores. The following table shows the solution times captured for 8-, 16-, and 32-core configurations with element counts ranging from 750K to 12 million:

Table 1: Comparison of solution time (min) for different mesh density

No. of elements | Memory utilized (GB) | Solving time, 8 cores (min) | Solving time, 16 cores (min) | Solving time, 32 cores (min)
750K | 7.92 | 13.00 | 7.00 | 4.00
2.0M | 9.46 | 66.08 | 35.58 | 20.33
3.1M | 11.02 | 119.17 | 64.17 | 36.67
4.3M | 12.55 | 172.25 | 92.75 | 53.00
5.4M | 14.11 | 225.33 | 121.33 | 69.33
6.6M | 15.65 | 278.42 | 149.92 | 85.67
7.7M | 17.21 | 331.50 | 178.50 | 102.00
9.0M | 18.74 | 384.58 | 207.08 | 118.33
11M | 20.30 | 437.67 | 235.67 | 134.67
12M | 21.84 | 490.75 | 264.25 | 151.00


Figure 7: Comparison of Solution time (min) for different element density.

The simulation time reduces considerably as the number of CPU cores increases. The solution time for the finest mesh on 8 cores is about 3.5 times higher than on the 32-core configuration with the same mesh. For a moderate number of elements (~750K), the 32-core server performance is 5.2 times better than a normal dual-core system in terms of the total number of simulation jobs completed in a day.
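The scaling statement can be checked directly against Table 1. A short sketch using the solving times of the finest (12M-element) model from the table above:

```python
# Solving times in minutes from Table 1 for the 12M-element model.
times = {8: 490.75, 16: 264.25, 32: 151.0}

base = times[8]
for cores, t in sorted(times.items()):
    speedup = base / t
    # speed-up of each configuration relative to the 8-core run
    print(f"{cores:>2} cores: {t:7.2f} min  speed-up vs 8 cores: {speedup:.2f}x")
```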

Person Effort Invested End user/Team Expert: 100 hours for simulation setup, technical support, reporting and overall management of the project. UberCloud support: 16 hours for monitoring and administration of host servers and guest containers, managing container images (building and installing container images during any modifications/ bug fixes) and improvements (such as tuning memory parameters, configuring Linux libraries, usability enhancements). Most of the efforts were one time only and will benefit future users. Resources: 1110 core hours were used for performing various iterations in the simulation experiments.

CHALLENGES The project started with setting up the ANSYS 15.0 Workbench environment with the Fluent modelling software on the 62-core server. The initial behaviour of the application was evaluated and the issues encountered during execution were reported. Once the server performance had been improved based on this feedback, the next challenge was technical: accurately predicting the flutter behaviour of the wing, which requires an appropriate element size for the mesh model. The finer the mesh, the longer the simulation time; the challenge was therefore to perform the simulation within the stipulated timeline.

BENEFITS

1. The HPC cloud computing environment with ANSYS 15.0 Workbench made the process of model generation easier with process time reduced drastically because of the HPC resource.


2. The mesh models were generated for different cell counts, and the experiments were performed using mesh models ranging from coarse to very fine. The HPC computing resource helped achieve smooth completion of the simulation runs without re-trials or resubmission of the same runs.

3. The computational requirement for a very fine mesh (12 million cells) is high and nearly impossible to meet on a normal workstation. The HPC cloud made it feasible to solve very fine mesh models with drastically reduced simulation time. This allowed the team to obtain the simulation results within an acceptable run time (2.25 hrs).

4. The use of ANSYS Workbench helped in performing different iterations in the experiments by varying the simulation models within the workbench environment. This helped increase the productivity of the simulation setup effort by providing a single platform to perform an end-to-end simulation setup.

5. The successful experiments performed in the HPC cloud environment gave the team the confidence to setup and run additional simulations remotely in the cloud. The different simulation setup tools required were installed in the HPC environment and this enabled users to access the tool without any prior installations.

6. With the use of VNC Controls in the web browser, HPC cloud access was very easy, requiring minimal or no installation of any pre-requisite software. The whole user experience was similar to accessing a web site through a browser.

7. The UberCloud containers helped with smooth execution of the project and easy access to the server resources, and the regular UberCloud auto-update emails provided a huge advantage by enabling continuous monitoring of the job in progress on the server without having to log in and check the status.

CONCLUSION AND RECOMMENDATIONS 1. The selected HPC cloud environment with UberCloud containerized ANSYS on ProfitBricks cloud resources was a very good fit for performing advanced computational experiments that involve high technical challenges and require substantial hardware resources.

2. There are different high-end software applications which can be used to perform fluid- structure interaction simulations. ANSYS 15.0 Workbench environment helped us to solve this problem with minimal effort in setting up the model and performing the simulation trials.

3. The combination of HPC Cloud, UberCloud Containers, and ANSYS 15.0 Workbench helped in speeding up the simulation trials and also allowed us to complete the project within the stipulated time frame.

2014 – Case Study Author – Praveen Bhat


Team 163

Finite Element Analysis for 3D Microelectronic Packaging in the Cloud

MEET THE TEAM End User – Dazhong Wu, Iowa State University, Xi Liu, Georgia Tech Resource Provider – Steve Hebert, Nimbix Other Cloud Resources – NephoScale and Microsoft Azure Software Provider – Wim Slagter, ANSYS Inc.

USE CASE Although both academia and industry have shown increasing interest in exploring CAE in the cloud, little work has been reported on systematically evaluating the performance of running CAE applications in public clouds against that of workstations and traditional in-house supercomputers. In particular, an important question to answer is: “Is the performance of cloud computing services sufficient for large-scale and complex engineering applications?” As an initial step towards an answer, the experiment evaluated the performance of HPC clouds via quantitative and comparative case studies, running an FE analysis simulation model in three public clouds. Specifically, the experiment evaluated the performance of the Nimbix Cloud using multiple nodes, the NephoScale Cloud using a single node, and the Azure Cloud using a single node.

Figure 1: Schematic view of 3D microelectronic package

“Our experiment showed that the performance of the HPC cloud is sufficient for solving large FE analysis problems.”


The application used in the experiment was the thermo-mechanical warpage analysis of a 3D stacked die microelectronic package integrated with through silicon vias (TSVs), as shown in Figure 1. Over the last decade, digital information processing devices for HPC systems have required an increasing level of computing power while using less power and space. 3D integrated logic devices with stacked memory using through-silicon vias have the potential to meet this demand. The shorter and highly parallel connection between logic and high-capacity memory can avoid the von Neumann bottleneck, reduce power consumption, and realize the highest device density. However, the challenges pertaining to 3D packaging with TSVs include yield, assembly, test, and reliability issues. In particular, the warpage of the 3D stacked die package is one of the key challenges for 3D package assembly. Understanding the package warpage behavior is crucial to achieving high package stack yield because the different warpage directions of the top and bottom packages impact the yield of package stacking. To address these issues, we created an FE model with detailed package features, shown in Figure 1, to investigate the warpage behavior of stacked die 3D packages, which is not well understood for various reasons. One is that 3D stacked dies interconnected with TSVs are still in the development stage, so only a very limited number of prototype samples is available for investigating the warpage problem. The other reason is that numerical simulation of 3D packages is computationally intensive. For example, in 3D packages the in-plane dimensions are at millimeter scale, whereas the out-of-plane dimensions, TSVs, and microbumps are at micrometer scale, which results in a significantly increased finite-element mesh density to meet element aspect ratio requirements. In addition, there are generally hundreds or thousands of TSVs/microbumps between each stacked die.

PROCESS OVERVIEW
1. Define the end-user project
2. Contact the assigned resources and set up the project environment
3. Initiate the end-user project execution
4. Monitor the project
5. Review results
6. Document findings

RESULT OF THE ANALYSIS

Table 1: Hardware specifications for the workstation

Processor Model Two Intel® Xeon® CPU E5530@ 2.40GHz

Number of Cores 8

Memory 24 GB

Parallel Processing Method Shared Memory Parallel (SMP)


Figure 2: Solution scalability on a workstation

Figure 3: Relative speedup versus number of nodes

Table 1 lists the workstation hardware specifications. Figure 2 shows the solution scalability on the workstation. Figure 3 compares the run time of the 6-core configuration on a standard workstation with the mean run times using 8 to 16 nodes in the cloud. As expected, we found that distributed ANSYS performance significantly exceeds that of SMP ANSYS. Specifically, the maximum speed-up over the workstation is 8.37 times, achieved using 16 nodes in the cloud.

Table 2: Hardware specifications for the NephoScale Cloud

Processor Model Intel Xeon CPU E5-2690 v2 @ 3.00 GHz

Number of Cores 20

Memory 256 GB

Parallel Processing Method Shared Memory Parallel (SMP)

Figure 4: Solution scalability on the NephoScale Cloud

Table 2 lists the hardware specifications of the NephoScale Cloud. The single node on the NephoScale Cloud has 20 CPU cores and 256GB memory. Figure 4 shows the solution scalability on the NephoScale Cloud using 4, 8, 16, and 20 CPU cores on a single node.

Table 3: Hardware specifications for the Azure Cloud

Processor Model Intel Xeon CPU E5-2698B v3@ 2.00GHz

Core 32

Memory 448 GB


Parallel Processing Method Shared Memory Parallel (SMP)

Figure 5: Solution scalability on the Azure Cloud

Table 3 lists the hardware specifications of the Azure Cloud. The single node on the Azure Cloud has 32 CPU cores and 448 GB memory. Figure 5 shows the solution scalability on the Azure Cloud using 4, 8, 16, and 32 CPU cores on a single node.

Table 4: Hardware specifications for the dedicated on-premise supercomputer

Processor Model Two 8-Core Intel Xeon CPU E5-2650 v2 @ 2.6 GHz

Cores per Node 16

Memory per Node 128 GB

Interconnect 40 Gbps QDR Infiniband

File system Lustre parallel file system

Parallel Processing Method Distributed Memory Parallel (DMP)

Table 4 lists the hardware specifications of the dedicated on-premise supercomputer. Figure 6 shows the solution scalability on this supercomputer using 8, 10, 12, 14, and 16 nodes. This study has shown that performance bottlenecks may exist in HPC clouds when using multiple nodes. The two single machines on the two public clouds significantly outperformed the workstation, as shown in Figures 2, 4, and 5. In addition, the run times using SMP on a single node were very stable. Moreover, when using multiple nodes, low efficiency in cloud-based HPC is due to severe system imbalance from factors such as CPU and memory size limitations, slow I/O rates, slow interconnects between processors on distributed memory systems, and slow solver computational rates.

Figure 6: Solution scalability on the dedicated on premise supercomputer
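One way to quantify the system imbalance discussed above is to fit Amdahl's law to measured multi-node speed-ups and read off an effective serial (non-scaling) fraction. A minimal sketch with illustrative node counts and speed-ups, not the measurements from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def amdahl(n, serial_frac):
    """Amdahl's law: speed-up on n nodes for a given non-scaling fraction."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n)

# Illustrative node counts and measured speed-ups (hypothetical values).
nodes = np.array([1, 2, 4, 8, 16])
speedup = np.array([1.0, 1.8, 3.1, 4.9, 6.5])

(serial_frac,), _ = curve_fit(amdahl, nodes, speedup, p0=[0.05], bounds=(0, 1))
print(f"effective serial fraction ~ {serial_frac:.2%}")
print(f"predicted ceiling: {1 / serial_frac:.1f}x no matter how many nodes are added")
```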


Based on the experimental results, some of the general observations for cloud-based HPC simulations for finite element analysis are as follows:

• Node count – Adding more nodes does not always accelerate simulations.

• Memory – Having powerful nodes with sufficient memory is more desirable than having nodes with limited memory.

• Interconnects – Poor interconnects between CPU cores result in run times going up as CPU cores are added.

• Scalability – Cloud-based HPC typically cannot achieve linear scalability for FEA applications using the ANSYS Mechanical software package.

BENEFITS

The benefits of applying cloud-based HPC in FEA applications include:

• Anytime, anywhere access – Cloud-based HPC enables users to access state-of-the-art FEA software package from ANSYS and HPC computing hardware from Nimbix via a web portal and/or application program interfaces (APIs) anytime, anywhere.

• Cost efficiency – Cloud-based HPC allows users to solve complex problems with FEA simulations that typically require high-bandwidth, low-latency networking, many CPU cores, and large memory. In particular, cloud-based HPC enables users not only to achieve computing performance comparable to dedicated on-premise HPC clusters, but also to reduce costs by using on-demand computing resources and the pay-per-use pricing model without large capital investments.

• High flexibility – Cloud-based HPC has the potential to transform dedicated HPC clusters into flexible HPC clouds that can be shared and adapted for rapidly changing customer requirements through private, hybrid, and public clouds.

• High throughput – Cloud-based HPC can significantly increase simulation throughput as opposed to standard workstations by allowing globally dispersed engineering teams to perform complex engineering analysis and simulations concurrently and collaboratively.

CONCLUSIONS AND RECOMMENDATIONS In this experiment, we evaluated the performance of an HPC cloud for an FE analysis application. As opposed to traditional computing paradigms such as standard workstations, HPC in the cloud enables scalable acceleration of computationally expensive FE simulations. In response to the initial research question, our experimental results show that the performance of the HPC cloud is sufficient for solving the large example FE analysis problem. In the future, it will be worthwhile to compare the performance of an HPC cloud with that of a supercomputer. Such a comparison can help users make decisions when facing a tradeoff between performance and cost for a large and complex computer-aided engineering analysis. Finally, evaluating the performance of an HPC cloud for different problem sizes and solvers would also be an important direction for future research. By identifying the largest scale of engineering problem that cloud-based HPC can address, we can identify the boundary between cloud-based HPC and supercomputers.

2014 – Case Study Author – Dazhong Wu


Team 165

Wind Turbine Aerodynamics with UberCloud ANSYS Container in the Cloud

MEET THE TEAM End-User/CFD Expert – Praveen Bhat, Technology Consultant, INDIA Software Provider – ANSYS, Inc. and UberCloud Container Resource Provider – ProfitBricks

USE CASE With an ever-increasing energy crisis in the world, it is important to investigate alternative methods of generating power other than fossil fuels. Wind energy is an abundant resource in comparison with other renewable resources and, unlike solar energy, its use is not affected by climate and weather. A wind turbine is a device that extracts energy from the wind and converts it into electric power. This case study describes the evaluation of wind turbine performance using a computational fluid dynamics (CFD) approach. Standard wind turbine designs were considered for this UberCloud experiment. The CFD models were generated with ANSYS CFX. The simulation platform was built on a 62-core, 240 GB HPC cloud server at ProfitBricks, the largest instance at ProfitBricks. The cloud environment was accessed using a VNC viewer through a web browser. The CPU and RAM were dedicated to the single user. The ANSYS software ran in UberCloud's new application containers.

Process Overview The following defines the step-by-step approach in setting up the CFD model in the ANSYS Workbench 15.0 environment.

1 Import the standard wind turbine designs, which are in 3D CAD geometry format, into ANSYS Design Modeler. The model was modified by creating the atmospheric air volume around the wind turbine design.

2 Develop the CFD model with an atmospheric air volume surrounding the wind turbine in ANSYS Mesh Modeler.

3 Import the CFD model into the ANSYS CFX Computational Environment.

4 Define the model parameters, fluid properties, and boundary conditions.

“The HPC cloud provided a service to solve very fine mesh models and thus reduced the simulation time drastically.”


5 Define the solver setup and solution algorithm. This portion of setup was mainly related to defining the type of solver, convergence criteria, and equations to be considered for solving the aerodynamic simulation.

6 Perform the CFD analysis and review the results.

The simulation model needed to be precisely defined with a large amount of fine mesh elements around the turbine blade geometry. The following snapshot highlights the wind turbine geometry considered and ANSYS CFX mesh models.

Figure 1: Wind turbine Geometry

Figure 2: CFD model of wind turbine

The CFD simulation evaluated the pressure distribution and velocity profiles around the wind turbine blades. The wind turbine blades are subjected to an average wind speed of 7 to 8 m/min. The following plots highlight the pressure and velocity distribution around the wind turbine blades.

Figure 3: Plot of pressure distribution

on the wind turbine blades

Figure 4: Vector plot of velocity profiles

around the wind turbine blades

HPC Performance Benchmarking The aerodynamic study of the wind turbine blades was carried out in an HPC environment built on a 62-core server with the CentOS operating system and the ANSYS Workbench 15.0 simulation package. Server performance was evaluated by submitting simulation runs for different parallel computing environments and mesh densities. The simulation runs were performed with ANSYS CFX by varying the mesh densities and submitting the jobs to different numbers of CPU cores. Three parallel computing environments were evaluated: Platform MPI, Intel MPI, and PVM Parallel. Each parallel computing platform was evaluated for total compute time and successful completion of the submitted jobs, and the best parallel computing environment was proposed based on the experiments conducted and the results achieved.
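The sweep over core counts and MPI flavours can be scripted around the cfx5solve command line. A hedged sketch: the definition file name is illustrative, and the flags and start-method strings vary between ANSYS releases, so they are assumptions to verify against the installed version.

```python
import itertools
import subprocess
import time

DEF_FILE = "wind_turbine.def"          # hypothetical CFX solver definition file
CORE_COUNTS = [8, 16, 32]
START_METHODS = [                       # illustrative names; check the installed release
    "Platform MPI Local Parallel",
    "Intel MPI Local Parallel",
    "PVM Local Parallel",
]

results = {}
for cores, method in itertools.product(CORE_COUNTS, START_METHODS):
    t0 = time.time()
    # -def: solver input, -part: number of partitions/processes, -start-method: MPI flavour
    subprocess.run(
        ["cfx5solve", "-def", DEF_FILE, "-part", str(cores), "-start-method", method],
        check=True,
    )
    results[(cores, method)] = time.time() - t0

for (cores, method), seconds in sorted(results.items()):
    print(f"{method:<30} {cores:>2} cores: {seconds / 60:6.1f} min")
```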


Figure 5: Solution time for different element density using Intel MPI

Figure 5 shows the solution time required for different mesh densities when the simulation models are solved using Intel MPI. The Intel MPI parallel computing platform shows stable performance, with the solution time decreasing as the number of CPU cores increases (see Figure 5).

Figure 6: Performance comparison for a mesh density of 250K

Effort Invested End user/Team expert: 75 hours for simulation setup, technical support, reporting and overall management of the project. UberCloud support: 16 hours for monitoring & administration of host servers and guest containers, managing container images (building and installing container images during any modifications/bug fixes) and improvements (such as tuning memory parameters, configuring Linux libraries, usability enhancements). Most of this effort is one time occurrence and will benefit future users. Resources: ~600 core hours were used for performing various iterations in the simulation experiments.

CHALLENGES The project started with setting up the ANSYS 15.0 Workbench environment with the ANSYS CFX modeling software on the 62-core server. The initial behaviour of the application was evaluated and the issues encountered during execution were reported. Once the server performance was enhanced, the next set of challenges was related to technical complexity: accurate prediction of the wind turbine blade behavior under aerodynamic loads, which is achieved by defining an appropriate element size for the mesh model. The finer the mesh, the longer the simulation time; therefore the challenge was to perform the simulation within the stipulated timeline.

BENEFITS 1 The HPC cloud environment with ANSYS 15.0 Workbench made the process of model generation easier, with process time reduced drastically because of the HPC resource.

2 The mesh models were generated for different cell numbers where the experiments were performed using coarse-to-fine to highly fine mesh models. The HPC computing resource helped in achieving smoother completion of the simulation runs without re-trials or resubmission of the same simulation runs.

3 The computational requirement for a very fine mesh (2.5 million cells) is high, which is next to impossible to meet on a normal workstation. The HPC cloud provided the ability to solve very fine mesh models and drastically reduced the simulation time. This allowed us to obtain simulation results within an acceptable run time (~1.5 hours).

4 The use of ANSYS Workbench helped in performing different iterations in the experiments by varying the simulation models within the workbench environment. This further helped to increase the productivity of the simulation setup effort and provided a single platform to perform the end-to-end simulation setup.

5 The experiments performed in the HPC Cloud environment showed the possibility and provided the extra confidence required to setup and run the simulations remotely in the cloud. The different simulation setup tools required were installed in the HPC environment and this enabled the user to access the tool without any prior installations.

6 With the use of VNC Controls in the web browser, the HPC Cloud access was very easy with minimal or no installation of any pre-requisite software. The whole user experience was similar to accessing a website through the browser.

7 The UberCloud containers helped with smooth execution of the project and easy access to the server resources, and the regular UberCloud auto-update emails provided a huge advantage by enabling continuous monitoring of the job in progress without having to log in to the server and check the status.

CONCLUSION AND RECOMMENDATIONS 1. The selected HPC cloud environment with UberCloud containerized ANSYS on ProfitBricks cloud resources was a very good fit for performing advanced computational experiments that involve high technical challenges and require substantial hardware resources.

2. There are different high-end software applications that can be used to perform wind turbine aerodynamics study. ANSYS 15.0 Workbench environment helped us to solve this problem with minimal effort in setting up the model and performing the simulation trials.

3. The combination of HPC Cloud, UberCloud Containers, and ANSYS 15.0 Workbench helped in speeding up the simulation trials and also completed the project within the stipulated time frame.

2015 – Case Study Author – Praveen Bhat


Team 171

Dynamic Study of Frontal Car Crash with UberCloud ANSYS Container in the Cloud

MEET THE TEAM End-User/FEA Expert – Praveen Bhat, Technology Consultant, INDIA Software Provider – ANSYS Inc. and UberCloud Container Resource Provider – Nephoscale

USE CASE Vehicle compatibility has been investigated in many studies using different approaches such as real-world crash statistics, crash testing, and computer simulations. Fundamental physics and field studies have clearly shown a higher fatality risk for occupants of smaller and lighter vehicles when colliding with heavier ones. The consensus is that the significant parameters influencing compatibility in frontal crashes are geometric interactions, vehicle mass, and vehicle stiffness. Physical crash testing requires a number of test vehicles to be destroyed, which is time consuming and uneconomical. An efficient alternative is virtual crash testing, with crash tests performed in computer simulation using finite element methods. The current study focused on the frontal crash simulation of a representative car model against a rigid plane wall. The computational models were solved using the finite element software LS-DYNA, which simulates the vehicle dynamics during the crash test. The cloud computing environment was accessed using a VNC viewer through a web browser. The 40-core server with 256 GB RAM was installed at NephoScale. The LS-DYNA solver was installed and run on the ANSYS platform with multi-CPU allocation settings. The representative car model was travelling at a speed of 80 km/hr. The effect of the frontal impact on the car was studied, during which component-to-component interactions were analysed and the car impact behaviour was visualized.

PROCESS OVERVIEW The following steps detail the step by step approach that was taken in setting up the frontal crash simulation modelling environment in LS-DYNA:

1. The 3D CAD model of the car was considered for this case study. The mesh model was based on the 3D CAD model.

“The HPC cloud played a very important role in reducing the total simulation run time and thereby helped in performing a quick turnaround in different simulation iterations.”


2. The mesh model of the representative car was developed where the shell and solid elements were used to model the car geometry. Each component in the car assembly was meshed and the connections between each component in the car assembly were defined either through contact definitions or through 1 dimensional link / rigid elements.

3. The material properties were defined for all the components in the car assembly along with the definition of type of material model that was used. All components were defined using Isotropic elastic plastic material models.

4. The environment for the crash was defined. A wall / vertical surface was defined as a rigid surface. The car moving forward with a velocity was made to impact the rigid wall.

5. The solver setup, solution steps, and total simulation time were defined.

6. The output results required for comparison plots and graphs were defined.

7. The impact analysis was performed, and the results were reviewed.

The LS-DYNA simulation models were solved in the HPC cloud computing environment. The simulation model needed to be precisely defined with a good number of finely meshed elements in the car assembly. The following snapshots show the representative car design, with the 3D CAD geometry and the finite element mesh model.

Figure 1: 3D CAD model of representative car geometry

Figure 2: Finite Element mesh model of representative car geometry

The LS-DYNA simulation was performed to evaluate the damage caused to the car assembly when it impacted a rigid wall. The car was travelling at an average speed of 80 km/hr. The following plots highlight the damage caused to the car.

Figure 3: Physical damage due to the impact with the wall along with the energy variation in the system

Figure 3 highlights the physical damage to the car due to the impact with the wall, along with the variation in the kinetic and internal energy of the car during the impact. As the car impacted the wall, the kinetic energy of the car (the energy due to the car's speed) was absorbed by the car, thereby increasing its internal energy during the impact. Figure 4 highlights the stress distribution in the car assembly due to the impact; the graph on the right shows the variation of the maximum stress in the car assembly during the impact.

Figure 4: Stress distribution in the car assembly with plot of variation of the stress with time (sec)

HPC PERFORMANCE BENCHMARKING The impact study on the car was carried out in the HPC environment, built on a 40-core cloud server with the CentOS operating system and the LS-DYNA simulation package running on the ANSYS platform. HPC performance was evaluated by building simulation models of different mesh densities, starting with a coarse mesh and refining it to a fine mesh with 1.5 million cells.

Figure 5: Simulation time for different numbers of cores for a mesh model with 17K elements

Figure 5 shows the graph of run time required for solving the car impact simulation model using different numbers of CPU cores for a coarse mesh model.


Figure 6: Simulation time for different numbers of cores for a fine mesh model with 1.5M elements

Figure 7: Simulation time for mesh models of different density using different numbers of CPU cores

Figure 7 compares the simulation time required for different mesh models on different numbers of CPU cores. We observed that the simulation time for a higher cell count on more CPU cores was much less than the solution time required for the same mesh model on fewer CPU cores. The total simulation time is the combination of the time required to run the simulation and the time required to write the results and output files. Overall performance of the LS-DYNA server is optimal when using 8 and 16 cores; performance is more efficient up to 70-80% of system usage, and the simulation time increases if the CPU resource is utilized 100%.


Figure 8 shows the comparison of the CPU time required for the different mesh models with different numbers of CPU cores. The total CPU clock time is the time accumulated across the individual CPU resources. The more CPU cores LS-DYNA uses, the higher the accumulated CPU clock time, even though the wall-clock simulation time is reduced. The CPU clock time also depends on the memory allotted to LS-DYNA for performing the simulation.

Figure 8: CPU time for mesh models with different mesh densities using different numbers of CPU cores

The advantage of the HPC cloud resource is that it increases the power of the LS-DYNA solver so that the simulation model is solved in a shorter run time. The use of the HPC cloud has enabled simulations that include complex physics and geometrical interactions. Such problems require high-end computing hardware and resources, which is not possible using a normal workstation.

EFFORT INVESTED

End user/Team Expert: 125 hours for simulation setup, technical support, reporting and overall management. UberCloud support: 20 hours for monitoring & administration of host servers and guest containers, managing container images (building and installing container images during any modifications/bug fixes) and improvements (such as tuning memory parameters, configuring Linux libraries, usability enhancements). Most of this is one-time effort that will benefit future users. Resources: ~2400 core hours for performing the various iterations of the simulation experiments.

CHALLENGES

The project was executed for simulation models with different mesh densities. During execution in ANSYS APDL with LS-DYNA, a memory error appeared that was mainly related to defining the memory for solving the LS-DYNA simulation. This error is seen especially when the simulation model is very fine and requires more HPC resources to run. The solution was to define and allocate the required memory (in words) when starting the simulation and to make sure it overrode the default memory allocation settings. The second challenge was file size handling. The LS-DYNA solver writes the simulation results in *.d3plot format, and the frequency at which the files are written is based on the time step defined for this output. Each step writes a separate *.d3plot file, and these files are physically stored on disk, thereby increasing the overall folder size. The major challenge was the transfer of the simulation files after the runs were completed. The time required for transferring the result files is very high, as the file sizes start at 1 to 2 GB for a coarse mesh model and may extend to 35 to 40 GB for a fine mesh model. Conventional file compression software was able to compress the files to 5 - 6%.
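Both workarounds described above, the explicit memory allocation and the packaging of the d3plot output before transfer, can be scripted. The sketch below is illustrative only: the MPP executable name, the input deck name, and the memory size are assumptions that depend on the particular ANSYS LS-DYNA installation in the container, not details taken from the case study.

# Illustrative sketch: launch LS-DYNA with an explicit memory allocation (given in
# words, overriding the default), then pack the d3plot states for transfer.
# Executable, deck, and memory size are assumptions; adjust to the actual setup.
import glob
import subprocess
import tarfile

cmd = ["mpirun", "-np", "16", "mppdyna",   # hypothetical MPP executable name
       "i=car_impact.k",                   # hypothetical input deck
       "memory=400m"]                      # explicit allocation in words
subprocess.run(cmd, check=True)

# Compress all d3plot states into a single archive before downloading the results.
with tarfile.open("results_d3plot.tar.gz", "w:gz") as archive:
    for state in sorted(glob.glob("d3plot*")):
        archive.add(state)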

BENEFITS

1. The HPC cloud computing environment with LS-DYNA has made it possible to simulate complex physics problems that require complex geometries and models. The simulation time was reduced drastically because of the HPC resource.

2. The computation requirement for a fine mesh (1.5 million cells) is high and nearly impossible to satisfy on a normal workstation. The HPC cloud made it feasible to solve very fine mesh models, and the simulation time was drastically reduced, providing the advantage of getting the simulation results within an acceptable run time (~12 hrs), which on a normal workstation would take over 50 hours.

3. Since the simulation model is a dynamic, time-dependent analysis, the HPC resource played an important role in reducing the total number-crunching time and helped in achieving a quick turnaround between the different simulation iterations.

4. The experiments performed in the HPC Cloud environment demonstrated the possibilities and gave extra confidence to set up and run the simulations remotely in the cloud. The simulation setup tools, such as LS-PrePost for pre- and post-processing, were installed in the HPC environment, which enabled the user to access them without any prior installation.

5. With the use of VNC controls in the web browser, the HPC Cloud access was very easy, with minimal or no installation of any pre-requisite software. The whole user experience was similar to accessing a website through the browser.

6. The UberCloud containers helped with smoother execution of the project through easy access to the server resources. The regular UberCloud auto-update module via email provided a huge advantage, enabling continuous monitoring of the job in progress without any need to log in to the server and check the status.

CONCLUSION AND RECOMMENDATIONS

1. The selected HPC Cloud environment, UberCloud containerized ANSYS LS-DYNA running on Nephoscale cloud resources, was a very good fit for performing advanced computational experiments that involve high technical challenges and require substantial hardware resources to perform the simulation experiments.

2. There are different high-end software applications which can be used to perform complete system modeling. LS-DYNA in the HPC environment helped us solve this problem with minimal effort in setting up the model and performing the simulation trials.

3. The power of the HPC Cloud, UberCloud Containers, and LS-DYNA helped in speeding up the simulation trials and in completing the project within the stipulated time frame.

2015 – Case Study Author – Praveen Bhat


Team 177

Combustion Training in the Cloud

MEET THE TEAM

End-user: A. de Jong Group, Energy and Environmental Technologies, The Netherlands
Combustion Expert: Ferry Tap, Dacolt, The Netherlands
Software Provider: Wim Slagter, ANSYS Inc. and UberCloud Containers
Ansys Container Provider: Fethican Coskuner, UberCloud
Resource Provider and HPC Experts: Thomas Gropp, Alexander Heine, Christian Unger, CPU 24/7, Potsdam, Germany.

USE CASE

Dacolt has been providing highly appreciated combustion CFD training for ANSYS Fluent since 2012. When delivering such training on-site, a number of challenges arise:

• Does the end-user have sufficient CFD licenses available?

• Does the end-user have sufficient HPC resources available?

• How are the HPC resources accessed?

Not so long ago, some training sessions involved running on laptop computers which had to be physically moved around and kept up-to-date from both the operating system and the CFD software perspective, involving substantial logistics and potential IT headaches. In this UberCloud Experiment, the ANSYS software is provided in a Linux (Docker-based) container from UberCloud, which runs on CPU 24/7 HPC cloud resources. The trainer accesses the HPC system via a web browser, using the end-user company's guest WIFI network. The four end-user trainees each access the HPC system from their local workstations, also directly in the web browser.

“UberCloud container technology harnessing Ansys Fluent CFD software on CPU24/7 HPC resources, accessed from a browser on a laptop computer, provided a light and seamless user experience.”


USER EXPERIENCE

The end-users and the trainer accessed Fluent from their own workstations. The login process is simple, and getting files in and out of the HPC cloud system worked without any problem using a web-based file exchange system, in this case Dacolt's Basecamp account. The whole experience was so natural that it seemed as if this way of working were daily routine. For the trainer, the UberCloud container technology provides a very simple and scalable solution for delivering training in the field of HPC, having to bring only a laptop computer.

BENEFITS

1. The UberCloud Ansys container is very intuitive to use; it is a remote desktop running within the web browser. Even non-Linux users had no trouble running their tutorials.

2. For the end-user, the company did not have to prepare any logistics to host the training.

3. For the trainer, the logistics only consisted of being on time, knowing the required resources were up and running in the cloud.

CHALLENGES

1. The only real challenge encountered was on the back-end: letting the UberCloud Ansys containers with Fluent check out a license from the CPU 24/7 license server. Through very effective team work and excellent support from Ansys, UberCloud and CPU 24/7 resolved this issue swiftly. A typical container licensing configuration is sketched below.
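For readers facing a similar setup, the usual mechanism is to point the containerized ANSYS tools at the external license server through environment variables before launching Fluent. The sketch below is an assumption about a typical configuration, not a record of what the team actually did; the host name and port numbers are placeholders.

# Hypothetical sketch: point a containerized ANSYS installation at an external
# license server; host name and ports are placeholders, not the CPU 24/7 setup.
import os
import subprocess

os.environ["ANSYSLMD_LICENSE_FILE"] = "1055@license.example.com"   # FlexNet port@host
os.environ["ANSYSLI_SERVERS"] = "2325@license.example.com"         # licensing interconnect

# Minimal smoke test: start Fluent in batch mode (3D double precision, 8 cores,
# no GUI) with a journal that exits immediately, forcing a license check-out.
with open("license_check.jou", "w") as jou:
    jou.write("exit\nyes\n")
subprocess.run(["fluent", "3ddp", "-g", "-t8", "-i", "license_check.jou"], check=True)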

CONCLUSION & RECOMMENDATIONS

1. The selected HPC Cloud environment with the UberCloud Ansys container was a very good combination for providing the training to multiple trainees at a customer site.

2. The HPC resources from CPU24/7 were more than sufficient to allow the users to run their tutorials.

3. The light-weight web-access to the training environment is very comfortable for both trainer and trainees.

2015 – Case Study Author – Dr. Ferry Tap, Dacolt


Team 184

Spray modeling of PMDI dispensing device on the Microsoft Azure Cloud

MEET THE TEAM

End-User/CFD Expert: Praveen Bhat, Technology Consultant, INDIA
Software Provider: ANSYS INC. with CFX software and UberCloud CFX Container
Resource Provider: Microsoft Azure with the UberCloud ANSYS Container
HPC Expert: Burak Yenier, Co-Founder, CEO, UberCloud

USE CASE

Pressurized Metered Dose Inhalers (PMDI) are widely used to deliver aerosolized medications to the lungs, most often to relieve the symptoms of asthma. Numerical simulations are used increasingly to predict the flow and deposition of particles at various locations inside the respiratory tract as well as in PMDIs and add-on devices. These simulations require detailed information about the spray generated by a PMDI to ensure the validity of their results. The main objective of this project is to characterize the fluid dispensed by a PMDI device, which forms a spray cone shape. The simulation framework was developed and executed on UberCloud HPC resources to achieve good accuracy in result prediction as well as acceptable solution time and resource utilization.

PROCESS OVERVIEW

Figure 1: Geometry & Mesh model for air volume around the dispenser.

“UberCloud Containers with ANSYS CFX on the Microsoft Azure Cloud provides a powerful platform to develop and run virtual simulation models efficiently for a technically complex physics application.”


1. The finite volume mesh model is generated, followed by the fluid property definitions. The volume surrounding the spray nozzle is air, which is treated as incompressible.

2. The fluid properties are defined as Newtonian, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid (written out after this list).

3. The atmospheric air is used as the medium for the spray fluid motion. When dispensed from the spray nozzle, the fluid spreads into the air, breaking up into smaller particles and thereafter forming a typical 'spray cone shape' in the atmospheric air.

4. The next step in the model setup is defining the model boundary conditions and assigning the initial pressure and velocity values. Wall boundary conditions are assigned on the outer surface of the air volume. The surface of the nozzle where the fluid is injected is treated as the inlet, and the opposite surface of the cylindrical volume is treated as the outlet, where the pressure boundary conditions are defined.

5. The solution algorithm and convergence criteria are defined for the simulation to solve and find out the accuracy of the results.

6. The model is solved in parallel; once the solution has converged, the partitioned results are recombined to obtain the final simulation results. The final result is used to visualize the output of the spray modelling, and the respective result components are captured using the post-processing tool in ANSYS.
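For reference, the Newtonian relation mentioned in step 2 can be written compactly (in LaTeX notation) as

\tau = \mu \, \dot{\gamma}

where \tau is the shear stress, \mu the constant dynamic viscosity of the fluid, and \dot{\gamma} the rate of strain (the velocity gradient).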

Figure 2: Pressure distribution in the atmospheric volume.

Figure 3: Pressure distribution around the dispensing nozzle.


Figure 4: Velocity contour of streamline of air flow in the atmospheric volume.

Figure 5: Velocity contour with air flow vectors around the dispensing nozzle.

HPC PERFORMANCE BENCHMARKING

The HPC system is a Microsoft Azure GS5 instance: 32 cores, 448 GB RAM, maximum OS disk size of 1023 GB, local SSD of 896 GB, cache size 4224, and a Linux operating system. The software used to develop the spray model is ANSYS Workbench with CFX in an UberCloud HPC container, which is integrated with the Microsoft Azure cloud platform. The model is evaluated for the accuracy of predicting the spray dispensed from the nozzle and the formation of the 'spray cone shape'. Finite volume models are developed for both fine and coarse meshes and submitted to ANSYS CFX in the container. The time required for solving the model with the different meshes is then captured to benchmark the HPC performance. The boundary conditions, solution algorithm, solver setup and convergence criteria remain the same for all models.
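A benchmarking study of this kind can be driven by a small script that launches the CFX solver on the same definition file with an increasing partition count and records the wall-clock time. The sketch below is an assumption about how such a sweep could be scripted; the definition file name is a placeholder, and the cfx5solve option names should be checked against the installed ANSYS release.

# Hedged sketch of a core-count sweep with the CFX solver. The .def file name is
# a placeholder and the option names may differ between ANSYS releases.
import subprocess
import time

def_file = "spray_fine.def"              # hypothetical CFX definition file
for cores in (1, 2, 4, 8, 16, 32):
    cmd = ["cfx5solve", "-def", def_file]
    if cores > 1:
        cmd += ["-par-local", "-partition", str(cores)]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"{cores} cores: {time.time() - start:.0f} s wall-clock")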

Figure 6: Solution time required for different mesh density running on a single CPU core.

EFFORT INVESTED

End User/Team Expert: 10 hours for the simulation setup, technical support, reporting and overall management of the project. UberCloud support: 1 hour for monitoring & administration of the performance of the host server. Resources: ~500 core hours were used for performing the various simulation experiments.


Figure 7: Solution time required for different mesh densities running on 8 CPU cores.

Figure 8: Solution time for a model with 1.29M elements running on different numbers of HPC cores.


Figure 9: Comparison of solution time for different mesh densities running on different core configurations.

CHALLENGES

The project challenges were related to the technical complexity of the application, namely the accurate prediction of the flow behavior in the atmospheric volume with correct 'spray cone shape' formation. The finer the mesh, the better the spray cone shape formation, but the higher the simulation runtime; hence the challenge was to perform the simulation within the stipulated timeline. Getting familiar with the Azure cloud platform and its features consumed some time at first, as this required going through and following the instructions provided by Azure.

BENEFITS

1. UberCloud's HPC container cloud environment with ANSYS Workbench & CFX made the process of model generation much easier, with processing time reduced drastically, along with result viewing & post-processing.

2. The mesh models were generated for different cell counts, with experiments performed using coarse to fine to very fine mesh models. The HPC computing resource helped in achieving smooth completion of the simulation runs without re-trials or resubmission of the same runs, thereby helping the user to achieve highly accurate simulation results.

3. The computation requirement for a fine mesh (~1.5 million cells) is high and nearly impossible to satisfy on a normal workstation. The HPC cloud made it feasible to solve very fine mesh models, and the simulation time was drastically reduced, providing the advantage of getting the simulation results within an acceptable run time (~10 min).

4. The experiments performed in the HPC Cloud environment demonstrated the possibilities and gave extra confidence to set up and run the simulations remotely in the cloud. The different simulation setup tools were pre-installed in UberCloud's application container, which enabled the user to access them without any prior installation.

5. With the use of VNC controls in the web browser, the HPC Cloud access was very easy with minimal or no installation of any pre-requisite software. The whole user experience was similar to accessing a website through the browser.


6. The UberCloud containers helped with smoother execution of the project through easy access to the server resources, and the regular UberCloud auto-update module via email provided a huge advantage, enabling continuous monitoring of the job in progress without any need to log in to the server and check the status.

7. The UberCloud environment integrated with the Microsoft Azure platform proved to be powerful, as it facilitates running parallel containers and viewing system performance and usage on the dashboard in the Azure environment.

CONCLUSION & RECOMMENDATIONS

1. The selected HPC Cloud environment with UberCloud containerized ANSYS Workbench with CFX on the Microsoft Azure platform was an excellent fit for performing complex simulations that involved huge hardware resource utilization with a large number of simulation experiments.

2. The combination of Microsoft Azure, HPC Cloud, UberCloud Containers, and ANSYS Workbench with CFX helped in speeding up the simulation trials and in completing the project within the stipulated time frame.

2015 - Case Study Author – Praveen Bhat, Technology Consultant, INDIA


Team 185

Air flow through an engine intake manifold on Microsoft Azure

MEET THE TEAM

End-User/CFD Expert: Praveen Bhat, Technology Consultant, INDIA
Software Provider: ANSYS INC., Fluent, and UberCloud Fluent Container
Resource Provider: Microsoft Azure
HPC Expert: Burak Yenier, Co-Founder, CEO, UberCloud

USE CASE

Increasingly stringent legislation aimed at reducing pollutant emissions from vehicles has intensified efforts to gain a better understanding of the various processes involved in internal combustion (IC) engines. For spark ignition engines, one of the most important processes is the preparation of the air-fuel mixture. This mixture travels to the intake port through a complicated path including the air cleaner, intake pipe, and intake manifold. Hence the design of the intake manifold is an important factor in determining engine performance. The main objective of this project is to understand the flow characteristics in an intake manifold. The simulation framework was developed and executed on Azure Cloud resources running the ANSYS Fluent UberCloud container to achieve good accuracy in result prediction as well as acceptable solution time and resource utilization.

PROCESS OVERVIEW

Figure 1: Geometry & Mesh model for air intake manifold.

“Combination of Microsoft Azure with UberCloud ANSYS FLUENT Container provided a strong platform to develop an accurate virtual simulation model that involved complex geometries.”


1. The internal volume representing the flow path of the intake manifold is extracted. A finite volume mesh is generated, followed by the fluid property definitions. The entire internal volume of the intake manifold is defined as air.

2. The fluid properties are defined as Newtonian, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid.

3. The air enters the manifold at a certain flow rate and then moves into the different hoses at the exit of the intake manifold.

4. The next step in the model setup is defining the model boundary conditions and assigning the initial pressure and velocity values. Wall boundary conditions are assigned on the outer surface of the air volume. The top surface of the intake manifold where the air enters is defined as the inlet, and the cylindrical faces are defined as outlets.

5. The solution algorithm and the convergence criteria are defined for the simulation to solve and find out the accuracy of the results.

6. The model is solved in parallel (a batch-mode sketch is shown below). The final result is used to view the air flow inside the intake manifold, and the respective result components are captured using the post-processing tool in ANSYS.
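Step 6 can be reproduced in batch mode by driving Fluent with a short journal file inside the container, as sketched below. This is a hedged illustration: the case file name, core count, and iteration count are placeholders, and the journal uses common Fluent TUI commands whose exact form may vary slightly between versions.

# Hedged sketch: run the intake-manifold case in Fluent batch mode inside the
# container. Case file name, core count, and iteration count are placeholders.
import subprocess

journal = """\
/file/read-case manifold.cas
/solve/initialize/initialize-flow
/solve/iterate 500
/file/write-data manifold.dat
exit
yes
"""
with open("run_manifold.jou", "w") as jou:
    jou.write(journal)

# 3D double precision, no GUI, 8 parallel processes, journal-driven.
subprocess.run(["fluent", "3ddp", "-g", "-t8", "-i", "run_manifold.jou"], check=True)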

Figure 2: a) Contour plot of the pressure distribution; b) contour plot of the velocity distribution; c) streamline plot of the air flow velocity.

Figure 3: Solution time required for different mesh densities on a single CPU core.


HPC PERFORMANCE BENCHMARKING

The HPC system is a Microsoft Azure GS5 instance: 32 cores, 448 GB RAM, maximum OS disk size of 1023 GB, local SSD of 896 GB, cache size 4224, and a Linux operating system. The software used to develop the air flow model for the intake manifold is ANSYS Workbench with FLUENT in an UberCloud HPC container, which is integrated with the Microsoft Azure cloud platform. The model is evaluated for the accuracy of predicting the air circulation within the intake manifold and for determining whether there is any recirculation resulting in blockage or smoother flow of air. Different finite volume models are developed for fine and coarse meshes. The time required for solving the models with different mesh densities is then captured to benchmark the HPC performance in solving high-density mesh models. Boundary conditions, solution algorithm, solver setup and convergence criteria remain the same for all models.

Figures 4 & 5 compare the solution time required for different mesh densities with and without parallel processing. The comparison of the solution time on a single core and on 8 cores shows that the time for the parallel run is significantly lower than for the same simulations on a single core.

Figure 4: Solution time required for different mesh densities using 8 CPU cores.

EFFORT INVESTED

End user/Team Expert: 10 hours for simulation setup, technical support, reporting and overall management of the project. UberCloud support: 1 hour for monitoring & administration of the performance of the host server. Resources: ~600 core hours were used for performing the various iterations of the simulation experiments.

CHALLENGES

The project challenges faced were related to technical complexity, namely the use of an appropriate mesh model and solution algorithm that accurately capture the flow behaviour. Hence it was necessary to perform trials with models of different mesh densities. The finer the mesh, the better the flow behaviour is captured, but the higher the simulation runtime; hence the challenge was to perform the simulation within the stipulated timeline. Getting familiar with the Azure cloud platform and its features consumed some time, as this required going through, learning and following the written instructions provided by Azure.

Figure 5: Solution time for a model with 770K elements solved using different HPC core configurations.

Figure 6: Comparison of the solution time for models with different mesh densities solved using different HPC core configurations.


BENEFITS

1. The HPC cloud computing environment with ANSYS Workbench & FLUENT made the process of model generation easier, with processing time reduced drastically, along with result viewing & post-processing, because of the HPC resource.

2. The mesh models were generated for different cell counts, with experiments performed for coarse to fine to very fine mesh models. The HPC resource helped in achieving smooth completion of the simulation runs without re-trials or resubmission of the same runs, thereby helping the user to achieve highly accurate simulation results.

3. The computation requirement for fine meshes (~770K cells) is high and nearly impossible to satisfy on a normal workstation. The HPC cloud made it feasible to solve very fine mesh models, and the simulation time was drastically reduced, providing the advantage of getting the simulation results within an acceptable run time (~5 min).

4. The experiments performed in the HPC Cloud environment demonstrated the possibilities and provided extra confidence to set up and run the simulations remotely in the cloud. The required simulation setup tools were pre-installed in the HPC container, which enabled the user to access them without any prior installation.

5. With the use of VNC controls in the web browser, the HPC Cloud access was very easy with minimal or no installation of any pre-requisite software. The whole user experience was similar to accessing a website through the browser.

6. The UberCloud containers helped with smooth execution through easy access to the server resources. The UberCloud environment integrated with the Microsoft Azure platform proved to be powerful, as it facilitates running parallel UberCloud containers, with a dashboard in the Azure environment that helped in viewing system performance and usage.

CONCLUSION & RECOMMENDATIONS

1. Microsoft Azure with UberCloud HPC resources was a very good fit for performing advanced computational experiments that involve high technical challenges with complex geometries and cannot be solved on a normal workstation.

2. The combination of Microsoft Azure, HPC Cloud resources, UberCloud Containers, and ANSYS Workbench with FLUENT helped in speeding up the simulation trials and in completing the project within the stipulated time frame.

2016 – Case Study Author – Praveen Bhat, Technology Consultant, INDIA


Team 186

Airbag simulation with ANSYS LS-DYNA in the Microsoft Azure Cloud

MEET THE TEAM

End-User/FEA Expert: Praveen Bhat, Technology Consultant, INDIA
Software Provider: ANSYS INC. and UberCloud LS-DYNA Container
Resource Provider: Microsoft Azure with UberCloud Containers
HPC Expert: Burak Yenier, Co-Founder, CEO, UberCloud

USE CASE

Automobile airbags are the result of some incredible engineering. In a high-speed crash the driver can be hurled into the steering wheel, but in an airbag-equipped car a small electronic sensor triggers inflation of the airbag, providing enough cushion to protect the driver from the impact. Fatality and serious injury rates have been reduced since the widespread installation of airbags. The main objective of this project is to understand the airbag inflation behaviour under dynamic conditions. The simulation framework was developed and executed with ANSYS LS-DYNA in an UberCloud container on Microsoft Azure computing resources to achieve good accuracy in result prediction as well as acceptable solution time and resource utilization.

PROCESS OVERVIEW

“Microsoft Azure resources with UberCloud Containers and ANSYS LS-DYNA provide an excellent platform to develop and run accurate simulation models that involve complex impact physics.”


Figure 1: Geometry & mesh model of the steering wheel with the folded airbag.

1. The steering wheel with the folded airbag is meshed using 2D quad elements. The contacts and interactions between the different components of the steering wheel assembly and the airbag are defined.

2. The material properties for the steering wheel assembly with the airbag are defined. The section properties are defined, which involves thickness definitions for the different components in the assembly.

3. The next step of the model setup is defining the model boundary conditions and assigning load curves. The steering wheel geometry is fixed, and the load curve providing the airbag opening forces is defined on the airbag component.

4. Solution algorithm and convergence criteria are defined along with the output parameters and results to be used for post-processing.

5. The model is solved in ANSYS LS-DYNA with parallel computing on 1 to 16 cores. The final result is used to view the simulation output, and the respective result components are captured using the post-processing tool in ANSYS.

Figure 2: Deformation plots of the airbag: (a) opening sequence of the airbag; (b) contour plot of the steering wheel and airbag assembly.

HPC PERFORMANCE BENCHMARKING

The HPC system is a Microsoft Azure GS5 instance: 32 cores, 448 GB RAM, maximum OS disk size of 1023 GB, local SSD of 896 GB, cache size 4224, and a Linux operating system. The airbag model is simulated using ANSYS LS-DYNA in an UberCloud container on the Microsoft Azure cloud platform. The model is evaluated for the airbag behaviour, and it also determines the rate of airbag opening and the stresses developed in the airbag material.

Different finite element models are developed for fine and coarse meshes. The model data are submitted to the ANSYS LS-DYNA container, and the time for solving the models with different mesh densities is captured to benchmark the performance in solving high-density mesh models. Boundary conditions, solution algorithm, solver setup and convergence criteria remain the same for all models.
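The benchmarking matrix described here, several mesh densities each run at several core counts, lends itself to a small driver script. The sketch below is illustrative only; the deck file names and the MPP executable name are assumptions, not details from the case study, and must be adapted to the actual container setup.

# Illustrative sketch of the benchmarking matrix: each airbag deck (mesh density)
# is run at several core counts and the wall-clock times are collected.
# Deck names and the MPP executable name are assumptions.
import subprocess
import time

decks = {"coarse": "airbag_coarse.k", "fine": "airbag_132k.k"}   # hypothetical decks
results = {}

for label, deck in decks.items():
    for cores in (1, 2, 4, 8, 16, 32):
        cmd = ["mpirun", "-np", str(cores), "mppdyna", f"i={deck}", "memory=200m"]
        start = time.time()
        subprocess.run(cmd, check=True)
        results[(label, cores)] = time.time() - start

for (label, cores), seconds in sorted(results.items()):
    print(f"{label:>6} mesh, {cores:>2} cores: {seconds:8.1f} s")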


Figures 3 & 4 provide a comparison of the solution times required for different mesh density models with and without parallel processing. The comparison of the solution time on a single core and on 32 cores shows that the time required using parallel computing is significantly lower than for the same simulations on a single core.

Figure 3: Solution time required for different mesh densities on a single CPU core.

Figure 4: Solution time required for different mesh densities using 8 CPU cores.


Figure 5 shows the comparison of the solution time required for a model with 132K elements submitted on different numbers of CPU cores. Figure 6 provides a comparison of the solution time for different mesh models using different numbers of CPU cores. The comparison of the solution time on a single core and on 32 cores again demonstrates that the time with parallel computing is significantly lower than for the same simulations on a single core.

Figure 5: Solution time for a model with 132K elements solved using different HPC core configurations.

Figure 6: Solution time for models with different mesh densities using different HPC core configurations.

EFFORT INVESTED

End User/Team Expert: 10 hours for the simulation setup, technical support, reporting and overall management of the project. UberCloud support: 1 hour for monitoring & administration of the performance of the host server. Resources: ~1000 core hours used for performing the various iterations of the simulation experiments.

CHALLENGES

The project challenges were related to the technical complexity of the application and the need to run the dynamic simulation within a very short execution time. Hence it was necessary to perform trials with models of different mesh densities to accurately capture the airbag behaviour. The finer the mesh, the better the simulation accuracy, but the higher the simulation runtime; hence the challenge was to perform the simulation within the stipulated timeline. Getting familiar with the Azure cloud platform and its features consumed some time at first, as this required going through, learning and following the written instructions provided by Azure.

BENEFITS

1. The HPC cloud computing environment with ANSYS Workbench & LS-DYNA made the process of model generation easier, with processing time reduced drastically, along with result viewing & post-processing, because of the ANSYS / Azure / UberCloud HPC set-up.

2. The mesh models were generated for different cell counts, with experiments performed using coarse to fine to very fine mesh models. The HPC computing resource helped in achieving smooth completion of the simulation runs without re-trials or resubmission of the same runs, thereby helping the user to achieve highly accurate simulation results.

3. The computation time required for a fairly fine mesh (~132K cells) is quite high and nearly impossible to achieve on a normal workstation. The HPC Cloud made it feasible to solve such fine mesh models, and the simulation time was drastically reduced, providing the advantage of getting the simulation results within an acceptable time (~30 min).

4. The experiments in the HPC Cloud demonstrated the possibilities and gave extra confidence to set up and run the simulations remotely in the cloud. The different simulation setup tools were pre-installed in the HPC container, which enabled the user to access them without any prior installation.

5. With the use of VNC controls in the web browser, the HPC Cloud access was very easy with no installation of any pre-requisite software. The whole user experience was similar to accessing a website through the browser.

6. The UberCloud containers helped with smooth execution of the project through easy access to the server resources. The UberCloud ANSYS container integrated with the Microsoft Azure platform proved to be powerful, as it facilitates running parallel UberCloud containers. A dashboard in the Azure environment helped in viewing system performance and usage.

CONCLUSION & RECOMMENDATIONS

1. The selected HPC Cloud environment with UberCloud containerized ANSYS Workbench with LS-DYNA on Microsoft Azure was an excellent fit for performing complex simulations that involved huge hardware resource utilization with a high number of simulation experiments.

2. Microsoft Azure with UberCloud Containers enabled performing advanced computational experiments that involve high technical challenges with complex geometries and which cannot be solved on a normal workstation.

2016 – Case Study Author – Praveen Bhat, Technology Consultant, INDIA


Team 193

Implantable Planar Antenna Simulation with ANSYS HFSS in the Cloud

MEET THE TEAM

End user – Mehrnoosh Khabiri, Ozen Engineering, Inc., Sunnyvale, California
Team Expert – Metin Ozen, Ozen Engineering, Inc. and Burak Yenier, UberCloud, Inc.
Software Provider – Ozen Engineering, Inc. and UberCloud, Inc.
Resource Provider – Nephoscale Cloud, California.

USE CASE

In recent years, with the rapid development of wireless communication technology, Wireless Body Area Networks (WBANs) have drawn great attention. WBAN technology links electronic devices on and in the human body with exterior monitoring or controlling equipment. Common applications of WBAN technology are biomedical devices, sport and fitness monitoring, body sensors, mobile devices, and so on. All of these applications are categorized into two main areas, medical and non-medical, by the IEEE 802.15.6 standard. For medical applications, wireless telemetric links are needed to transmit diagnostic, therapy, and vital information to the outside of the human body. The wide and fast-growing use of wireless devices raises many concerns about safety standards related to the effects of electromagnetic radiation on the human body. The interaction between human body tissues and Radio Frequency (RF) fields is important, and much research has been done to investigate the effects of electromagnetic radiation on the human body. The Specific Absorption Rate (SAR), which measures the electromagnetic power density absorbed by human body tissue, is used as an index by standards to regulate the amount of exposure of the human body to electromagnetic radiation. In this case study, implantable antennas are used for communication purposes in medical devices. Designing antennas for implanted devices is an extremely challenging task: the antennas need to be small, low profile, and multiband, and they need to operate in complex environments. Factors such as small size, low power requirements, and impedance matching play a significant role in the design procedure.

“ANSYS HFSS in UberCloud’s application software container provided an extremely user-friendly on-demand computing environment very similar to my own desktop workstation.”


Although several antennas have been proposed for implantable medical devices, an accurate full human body model has rarely been included in the simulations. An implantable Planar Inverted F Antenna (PIFA) is proposed for communication between implanted medical devices in the human body and outside medical equipment. The main aim of this work is to optimize the proposed implanted antenna inside the skin tissue of the human body model and to characterize the electromagnetic radiation effects on human body tissues as well as the SAR distribution. Simulations have been performed using ANSYS HFSS (High-Frequency Structural Simulator), which is based on the Finite Element Method (FEM), along with ANSYS Optimetrics and High-Performance Computing (HPC) features.

ANSYS HUMAN BODY MODEL AND ANTENNA DESIGN

ANSYS offers adult-male and adult-female body models at several levels of geometrical accuracy on a millimeter scale [17]. Figure 1 shows a general view of the models. The ANSYS human body model contains over 300 muscles, organs, tissues, and bones. The objects of the model have a geometrical accuracy of 1-2 mm. The model can be modified by users for specific applications and parts, and model objects can simply be removed if not needed. For high frequencies, the body model can be electrically large, resulting in a huge number of mesh elements, which makes the simulation very time-consuming and computationally complex. The ANSYS HPC technology enables parallel processing, so that one can model and simulate very large and detailed geometries with complex physics. The implantable antenna is placed inside the skin tissue of the left upper chest, where most pacemakers and implanted cardiac defibrillators are located, see Figure 1. Incorporating the ANSYS Optimetrics and HPC features, optimization iterations can be performed in an efficient manner to simulate the implantable antenna inside the human body model.

Figure 1: Implanted antenna in ANSYS male human body model.

The antenna is simulated in ANSYS HFSS, an FEM electromagnetic solver. The top and side views of the proposed PIFA are illustrated in Figure 2 (left), and the 3D view of the implantable PIFA is shown in Figure 2 (right). The thickness of the dielectric layer of both substrate and superstrate is 1.28 mm. The length and width of the substrate and superstrate are Lsub=20mm and Wsub=24mm, respectively. The width of each radiating strip is Wstrip=3.8mm. The other antenna parameters are varied within the solution space in order to improve the PIFA performance. HFSS Optimetrics, an integrated tool in HFSS for parametric sweeps and optimizations, is used for tuning and improving the antenna characteristics inside the ANSYS human body model.


Figure 2: Top and side view of PIFA (left) and 3D view of PIFA geometry in HFSS (right).

RESULTS AND ANALYSIS

Figure 3 illustrates the far-field radiation pattern of the proposed PIFA at 402 MHz. Since the antenna is electrically small and the human body provides a lossy environment, the antenna gain is very small (~ -44 dBi) and the EM fields are reactively stored in the nearby parts of the human body.

Figure 3: 3D Radiation pattern of implanted PIFA inside the human body model.

Figure 4 shows the simulated electric field distribution around the male human body model at the 402 MHz center frequency. The electric field magnitude is large at the upper side of the body and becomes weaker with increasing distance from the chest.

The electromagnetic power absorbed by the tissues surrounding the antenna inside the human body model is a critical parameter. Hence, a SAR analysis is required to evaluate the antenna performance. SAR measures the electromagnetic power density absorbed by human body tissue, and SAR measurements make it possible to evaluate whether a wireless medical device satisfies the safety limits.
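For reference, the local (point) SAR evaluated in such analyses is commonly defined from the local field and tissue properties as, in LaTeX notation,

\mathrm{SAR} = \frac{\sigma \, |E|^{2}}{\rho} \quad [\mathrm{W/kg}]

where \sigma is the tissue conductivity (S/m), |E| the RMS electric field magnitude (V/m), and \rho the tissue mass density (kg/m^3). The values compared against regulatory limits are obtained by averaging this quantity over a reference mass of tissue, as described below.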


Figure 4: Electric field distribution around the male body model at 402 MHz.

SAR is averaged either over the whole body or over a small volume (typically 1 g or 10 g of tissue). ANSYS HFSS offers SAR calculations according to these standards. The 3D plots of the local SAR distribution are shown in Figure 5 and Figure 6. In Figure 5, the detailed male body model with heart, lungs, liver, stomach, intestines, and brain is included. It can be observed that the left upper chest region where the SAR is significant is relatively small, and the peak SAR of the PIFA is below the regulated SAR limit. Figure 6 shows the SAR distribution on the skin tissue of the full human body model.

Figure 5: Local SAR distribution on upper side of male body model at 402 MHz.


Figure 6: Local SAR distribution on the skin tissue of the male body model at 402 MHz.

A more detailed discussion of this use case by Mehrnoosh Khabiri can be found in the Ozen Engineering white paper about “Design and Simulation of Implantable PIFA in Presence of ANSYS Human Body Model for Biomedical Telemetry Using ANSYS HFSS”, http://www.ozeninc.com/wp-content/uploads/2015/05/OEI_Biomedical_WhitePaper_Final.pdf.

CONCLUSIONS

Design modification and tuning of the antenna performance were studied with the implantable antenna placed inside the skin tissue of the ANSYS human body model. The resonance, radiation, and Specific Absorption Rate (SAR) of the implantable PIFA were evaluated. Simulations were performed with ANSYS HFSS (High-Frequency Structural Simulator), which is based on the Finite Element Method (FEM). All simulations were performed on a 40-core Nephoscale cloud server with 256 GB RAM and were about 4 times faster than on the local 16-core desktop workstation.

ANSYS HFSS has been packaged in an UberCloud HPC software container, a ready-to-execute package of software designed to deliver the tools an engineer needs to complete the task at hand. In this experiment, ANSYS HFSS was pre-installed, configured, and tested, running on bare metal without loss of performance. The software was ready to execute literally in an instant, with no need to install software, deal with complex OS commands, or configure anything. This technology also provides hardware abstraction: the container is not tightly coupled with the server (the container and the software inside it are not installed on the server in the traditional sense). This abstraction between the hardware and software stacks provides the ease of use and agility that bare-metal environments lack.

2016 – Case Study Author: Mehrnoosh Khabiri, Ozen Engineering


Thank you for your interest in the free and voluntary UberCloud Experiment. If you, as an end-user, would like to participate in this Experiment to explore hands-on the end-to-end process of on-demand Technical Computing as a Service, in the Cloud, for your business, then please register at: http://www.theubercloud.com/hpc-experiment/

If you, as a service provider, are interested in promoting your services on the UberCloud Marketplace, then please send us a message at https://www.theubercloud.com/help/

1st Compendium of case studies, 2013: https://www.theubercloud.com/ubercloud-compendium-2013/
2nd Compendium of case studies, 2014: https://www.theubercloud.com/ubercloud-compendium-2014/
3rd Compendium of case studies, 2015: https://www.theubercloud.com/ubercloud-compendium-2015/
4th Compendium of case studies, 2016: https://www.theubercloud.com/ubercloud-compendium-2016/

HPCwire Readers Choice Award 2013: http://www.hpcwire.com/off-the-wire/ubercloud-receives-top-honors-2013-hpcwire-readers-choice-awards/
HPCwire Readers Choice Award 2014: https://www.theubercloud.com/ubercloud-receives-top-honors-2014-hpcwire-readers-choice-award/
Gartner Names The UberCloud a 2015 Cool Vendor in Oil & Gas: https://www.hpcwire.com/off-the-wire/gartner-names-ubercloud-a-cool-vendor-in-oil-gas/
HPCwire Editors Choice Award 2017: https://www.hpcwire.com/2017-hpcwire-awards-readers-editors-choice/
IDC/Hyperion Innovation Excellence Award 2017: https://www.hpcwire.com/off-the-wire/hyperion-research-announces-hpc-innovation-excellence-award-winners-2/

If you wish to be informed about the latest developments in technical computing in the cloud, then please register at http://www.theubercloud.com/ and you will get our free monthly newsletter.

Please contact UberCloud at [email protected] before distributing this material in part or in full.

© Copyright 2018 TheUberCloud™. UberCloud is a trademark of TheUberCloud Inc.