
*Professor and Chair, PhD, Senior Member AIAA. †Manager, PhD, Aerodynamics Department, Member AIAA. ‡Graduate Student.

PARALLEL COMPUTING FOR AEROELASTICITY PROBLEMS

Hasan U. Akay*, Erdal Oktay†, Zhenyin Li‡ and Xiaoyin He‡

*‡Department of Mechanical Engineering

Indiana University - Purdue University Indianapolis (IUPUI) Indianapolis, IN 46202, USA

†Roketsan Inc.

Elmadag, 06780 Ankara, Turkey

ABSTRACT

A code-coupling approach is presented for the solution of aeroelastic problems. A Computational Fluid Dynamics (CFD) code developed for the unsteady solution of the Euler equations on unstructured moving meshes is coupled with a Computational Solid Dynamics (CSD) code for coupled solutions of solid-fluid interaction problems to predict aeroelastic flutter. A loosely coupled approach is employed for the transfer of fluid pressures from the CFD code to the CSD code and the transfer of solid surface displacements from the CSD code to the CFD code. A cell-centered, parallelized finite volume solver with implicit time integration is used for the solution of the flow equations. The CFD mesh is dynamically deformed based on a spring analogy and an Arbitrary Lagrangian Eulerian (ALE) approach. While the CFD solver uses three-dimensional tetrahedral elements, the CSD solver uses quadrilateral shell elements with a mid-surface representation of wing-like geometries. The dynamic response of the wing is solved via a mode-superposition algorithm using a few of the smallest natural frequencies and mode shapes of the structure. Because the CFD solution takes much longer than the CSD solution, the CFD domain is subdivided into subdomains for parallel computations to speed up the coupled solutions. The results obtained for the AGARD 445.6 wing, a standard test case, showed good correlation between the experiments and the computed results. The parallel efficiency of the coupled solver is demonstrated by running cases with a varying number of CFD blocks.

NOMENCLATURE

a = speed of sound
a = acceleration vector
C = structural damping matrix
e = total energy per unit volume
f = aerodynamic load vector
F = flux vector
K = structural stiffness matrix
M = Mach number
M = structural mass matrix
n = surface normal unit vector
p = pressure
q = displacement vector
Q = vector of conservation variables
R = residual vector
S_p = speedup
t = time
u, v, w = velocity components in the x, y, and z directions, respectively
V = flow velocity vector
V = volume
W = mesh velocity vector
W_n = face speed in normal direction
x, y, z = Cartesian coordinates
X_i = ith generalized displacement
α = angle of attack, deg
γ = ratio of specific heats
ρ = density
φ_i = ith mode shape
ω_i = ith natural frequency

INTRODUCTION

21st Applied Aerodynamics Conference, 23-26 June 2003, Orlando, Florida. AIAA 2003-3511. Copyright © 2003 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

It is well known that the prediction of nonlinear aeroelastic phenomena requires the coupled solution of fluid and solid dynamics problems. While this phenomenon can be predicted by using a strongly coupled approach, where the coupled field equations for fluid and solid may be solved together, such an approach introduces a great deal of numerical complexity due to the vast differences in the scales of the spatial and temporal variations of each medium. Strong coupling at the interfaces usually dictates matching grid points, which would result in major computational inefficiencies. Since the fluid medium typically requires very dense meshes on the solid boundaries, the use of such dense meshes in the solid medium would lead to unnecessary inefficiencies. As an alternative, the use of a loosely coupled approach has been promoted for the solution of such problems, where the fluid and solid media are solved separately and the information between the two media is exchanged at the solid-fluid interface by interpolation. While this approach is more approximate, since there is typically a one-step stagger in time between the transient solutions of the fluid and solid media, it provides a practical and sufficiently accurate alternative (see, e.g., references 1 and 2). In that case, the fluid problem is solved via a computational fluid dynamics (CFD) code and the solid problem is solved via a computational solid dynamics (CSD) code. The codes communicate with each other at the solid-fluid interfaces, where the nodes may not match, since typically coarser meshes are used for the solid and denser meshes for the fluid. The loosely coupled approach adopted herein requires coupling of the meshes at the interfaces, which can be achieved by interpolation of nodal quantities from the nodes of one mesh to the nodes of the other. In spite of the relative efficiency of this approach, one is faced with the difficult task of coupling two codes and their meshes, which are usually developed independently. The problem becomes even more difficult if a domain-decomposition based parallel computing approach is utilized for large-scale computations.
In this study, we chose to utilize a third-party program called MPCCI,3 which allows the coupling of codes as well as the interpolation of data from one mesh to another. The message passing library MPI4 is used for parallelization of the CFD code. With libraries like MPCCI, multidisciplinary codes and their independent meshes may be coupled with little code modification. Hence, the major effort may go into making each code more efficient and developing better transient solution schemes.

THE FLOW SOLVER

The flow solver is an unsteady Euler solver,5 based on cell-centered tetrahedral finite volumes, with:

1. Dynamically deforming mesh in an Arbitrary Lagrangian Eulerian (ALE) reference frame.6 The mesh deformations are done using a spring analogy consisting of a linear spring network representing the edges of the tetrahedra.

2. Geometric conservation law, applied to provide a self-consistent solution for the cell volumes.

3. Backward-Euler implicit time integration algorithm.

4. Parallelized solver based on domain partitioning, to work on distributed computer systems.
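The spring-analogy deformation in item 1 can be sketched as follows. This is a minimal 2-D illustration with hypothetical function and variable names (the actual solver works on the edges of 3-D tetrahedra): each edge is a linear spring with stiffness inversely proportional to its length, boundary nodes are prescribed, and interior nodes are relaxed to equilibrium.

```python
import numpy as np

def deform_mesh(nodes, edges, fixed, n_iter=200):
    """Spring-analogy mesh deformation via simple relaxation sweeps.

    nodes : (N, 2) array of current node coordinates (2-D for illustration)
    edges : list of (i, j) node-index pairs, one per mesh edge
    fixed : dict {node index: new position} for boundary nodes, e.g. the
            deflected wing surface and the unmoving farfield boundary
    """
    # Edge stiffness ~ 1/length, so short edges resist collapse more strongly.
    stiff = {e: 1.0 / np.linalg.norm(nodes[e[0]] - nodes[e[1]]) for e in edges}
    new = nodes.copy()
    for i, pos in fixed.items():
        new[i] = pos                      # prescribed boundary motion
    nbrs = {i: [] for i in range(len(nodes))}
    for (i, j), k in stiff.items():
        nbrs[i].append((j, k))
        nbrs[j].append((i, k))
    for _ in range(n_iter):               # relax interior nodes to equilibrium
        for i in range(len(nodes)):
            if i in fixed:
                continue
            num = sum(k * new[j] for j, k in nbrs[i])
            den = sum(k for _, k in nbrs[i])
            new[i] = num / den
    return new
```

Each free node settles at the stiffness-weighted average of its neighbors, so interior nodes follow the boundary motion smoothly while mesh connectivity is preserved.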

More details on the flow solver and its accuracy may be found in Oktay et al.5,7 Here we summarize the equations solved.

Euler Equations

The three-dimensional unsteady and inviscid flow equations for a finite-volume cell in ALE form are expressed in the following form (see, e.g., Singh et al.8):

$$ \frac{\partial}{\partial t}\int_{\Omega}\mathbf{Q}\,dV \;+\; \oint_{\partial\Omega}\mathbf{F}\cdot\mathbf{n}\,dS \;=\; 0 \qquad (1) $$

where $\mathbf{Q} = [\rho,\ \rho u,\ \rho v,\ \rho w,\ e]^{\mathrm{T}}$ is the vector of conserved flow variables,

$$ \mathbf{F}\cdot\mathbf{n} \;=\; (\mathbf{V}\cdot\mathbf{n} - W_n)\begin{bmatrix}\rho\\ \rho u\\ \rho v\\ \rho w\\ e+p\end{bmatrix} \;+\; p\begin{bmatrix}0\\ n_x\\ n_y\\ n_z\\ W_n\end{bmatrix} \qquad (2) $$

is the convective flux vector; $\mathbf{n} = n_x\mathbf{i} + n_y\mathbf{j} + n_z\mathbf{k}$ is the normal vector to the boundary $\partial\Omega$; $\mathbf{V} = u\mathbf{i} + v\mathbf{j} + w\mathbf{k}$ is the fluid velocity; $\mathbf{W} = (\partial x/\partial t)\mathbf{i} + (\partial y/\partial t)\mathbf{j} + (\partial z/\partial t)\mathbf{k}$ is the mesh velocity; and $W_n = \mathbf{W}\cdot\mathbf{n} = n_x\,\partial x/\partial t + n_y\,\partial y/\partial t + n_z\,\partial z/\partial t$ is the face speed of the finite-volume cells in the normal direction. The pressure $p$ is given by the equation of state for a perfect gas:

$$ p = (\gamma - 1)\left[e - \tfrac{1}{2}\rho\,(u^2 + v^2 + w^2)\right] \qquad (3) $$

These equations have been nondimensionalized by the freestream density $\rho_\infty$, the freestream speed of sound $a_\infty$, and a reference length $l$.
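The ALE flux of Eq. (2) can be evaluated directly from the conserved state; the sketch below (hypothetical helper, not taken from the actual solver) shows the face speed $W_n$ subtracting the mesh motion from the convective transport while the pressure terms remain:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def ale_flux(Q, n, W):
    """Convective flux F·n of Eq. (2) through a moving face.

    Q : conserved variables [rho, rho*u, rho*v, rho*w, e]
    n : unit normal of the face
    W : mesh (face) velocity vector
    """
    rho, ru, rv, rw, e = Q
    V = np.array([ru, rv, rw]) / rho                # fluid velocity
    p = (GAMMA - 1.0) * (e - 0.5 * rho * (V @ V))   # equation of state, Eq. (3)
    Wn = W @ n                                      # face speed W_n
    rel = V @ n - Wn                                # flow speed relative to the face
    return rel * np.array([rho, ru, rv, rw, e + p]) \
        + p * np.array([0.0, n[0], n[1], n[2], Wn])
```

For a static mesh (W = 0) this reduces to the standard Euler flux, and when the face moves with the flow (W·n = V·n) the convective part vanishes, leaving only the pressure contribution.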


Time Integrations

A cell-centered finite-volume formulation is employed. The flow variables are volume-averaged values; hence, the governing equations are rewritten in the following form:

$$ V^{n+1}\,\frac{\Delta\mathbf{Q}^n}{\Delta t} \;=\; -\oint_{\partial\Omega}\mathbf{F}(\mathbf{Q}^{n+1})\cdot\mathbf{n}\,dS \;-\; \frac{\Delta V^n}{\Delta t}\,\mathbf{Q}^n \qquad (4) $$

where $\Delta\mathbf{Q}^n = \mathbf{Q}^{n+1} - \mathbf{Q}^n$, $V^n$ is the cell volume at time step $n$, $V^{n+1}$ is the cell volume at time step $n+1$, $\Delta V^n = V^{n+1} - V^n$, and $\Delta t$ is the time increment. Since an implicit time-integration scheme is employed, fluxes are evaluated at time step $n+1$. The integrated flux vector is linearized according to

$$ \mathbf{R}^{n+1} = \mathbf{R}^n + \left(\frac{\partial\mathbf{R}}{\partial\mathbf{Q}}\right)^{\!n}\Delta\mathbf{Q}^n \qquad (5) $$

Hence the following system of linear equations is solved at each time step:

$$ \mathbf{A}^n\,\Delta\mathbf{Q}^n = \mathbf{R}^n - \frac{\Delta V^n}{\Delta t}\,\mathbf{Q}^n \qquad (6) $$

where

$$ \mathbf{A}^n = \frac{V^{n+1}}{\Delta t}\,\mathbf{I} - \left(\frac{\partial\mathbf{R}}{\partial\mathbf{Q}}\right)^{\!n} \qquad (7) $$

$$ \mathbf{R}^n = -\oint_{\partial\Omega}\mathbf{F}(\mathbf{Q}^n)\cdot\mathbf{n}\,dS \qquad (8) $$
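The structure of the linearized implicit update, Eqs. (5)-(7), can be illustrated with a scalar model. This is a sketch under stated assumptions, not the solver itself: a linear residual R(Q) = -c·Q and a non-deforming cell (ΔV = 0), so the system of Eq. (6) collapses to one scalar equation.

```python
def implicit_step(Q, V, dt, c=1.0):
    """One linearized backward-Euler step for the scalar model R(Q) = -c*Q.

    Scalar analogue of Eqs. (6)-(7) on a non-deforming cell:
        A dQ = R(Q^n),  A = V/dt - dR/dQ = V/dt + c.
    """
    R = -c * Q          # residual at time level n (Eq. 8 analogue)
    A = V / dt + c      # Eq. (7): (V/dt)*I minus the residual Jacobian
    dQ = R / A          # solve the linear system, Eq. (6) with dV = 0
    return Q + dQ
```

Because the residual is linear here, the step is exactly backward Euler; the update stays stable (and positive for positive data) even for very large time steps, which is the motivation for the implicit treatment.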

Parallelization of the Code

For parallelization of the code, a domain-decomposition approach is used, where the flow domain is subdivided into a number of subdomains equal to or greater than the number of available processors. The subdomains, also called solution blocks, are interconnected by means of interfaces, which are of the matching and overlapping type, following the methodology proposed in Akay et al.9

The overlaps are of one cell between two blocks. The interfaces serve to exchange data between the blocks. Each block has both a block solver and an interface solver. The governing equations for the flow or mesh movements are solved in the block solver, which updates its interface solver with newly calculated nodal variables at each iteration step. The interface solver in each block, in turn, communicates with the interface solver of the neighboring block for the same interface. Each interface solver also updates its block after receiving information from its neighbor.
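The block-solver/interface-solver interplay can be sketched in one dimension. This is a serial stand-in for the MPI exchange, with made-up function names: each block carries a one-cell overlap (ghost cell) that the "interface solver" fills from the neighboring block before the "block solver" sweeps the interior.

```python
import numpy as np

def exchange_overlaps(blocks):
    """Interface-solver step: fill each block's one-cell overlap from the
    neighboring block (1-D illustration; in the real code this is message
    passing between distributed blocks)."""
    for left, right in zip(blocks[:-1], blocks[1:]):
        right[0] = left[-2]   # left block's last owned cell -> right ghost
        left[-1] = right[1]   # right block's first owned cell -> left ghost
    return blocks

def smooth(block):
    """Block solver stand-in: one Jacobi averaging pass over interior cells."""
    block[1:-1] = 0.5 * (block[:-2] + block[2:])
    return block
```

Alternating `exchange_overlaps` and `smooth` reproduces, block by block, what a single-domain sweep would compute, which is exactly the role of the matching overlapping interfaces.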

CSD SOLVER

The CSD solver is based on a finite element discretization of the solid dynamics equations expressed as follows10:

$$ \mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + \mathbf{K}\mathbf{q} = \mathbf{f} \qquad (9) $$

where $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ are the mass, damping, and stiffness matrices, respectively, $\mathbf{q} = \mathbf{q}(t)$ is the nodal displacement vector, $\mathbf{f} = \mathbf{f}(t)$ is the nodal (aerodynamic) load vector, and

$$ \dot{\mathbf{q}} = \frac{d\mathbf{q}}{dt} \quad\text{and}\quad \ddot{\mathbf{q}} = \frac{d^2\mathbf{q}}{dt^2} \qquad (10) $$

Equation (9) is a system of ordinary differential equations, the solution of which, assuming linear small-deformation theory, is obtained using a modal superposition algorithm. In this method, a set of orthogonal vectors corresponding to the natural mode shapes is evaluated first, and the large global dynamic equilibrium equations are reduced to a small set of uncoupled second-order differential equations. This way, the computational cost per time step is much lower than solving Eq. 9 using direct time integrations. The first step in the mode superposition analysis is to solve the following generalized eigenvalue problem:

$$ \mathbf{K}\boldsymbol{\phi}_i = \omega_i^2\,\mathbf{M}\boldsymbol{\phi}_i \qquad (11) $$

where $\omega_i$ is the $i$th free vibration frequency and $\boldsymbol{\phi}_i$ is the $i$th mode shape. Usually, the lowest few vibration modes, say $n$, are sufficient to obtain the structural response. Hence, the displacement vector $\mathbf{q}(t)$ is expressed as follows:

$$ \mathbf{q}(t) = [\boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \ldots, \boldsymbol{\phi}_n]\,\mathbf{X}(t) \qquad (12) $$

where $\mathbf{X}(t)$ is the vector of generalized displacements. Substitution of Eq. 12 into Eq. 9, and use of the orthogonality property of the mode shape vectors, yields

$$ \ddot{X}_i + 2\xi_i\omega_i\dot{X}_i + \omega_i^2 X_i = f_i^{*} \qquad (13) $$

where $\xi_i$ is the damping ratio of the $i$th mode, and $f_i^{*} = \boldsymbol{\phi}_i^{\mathrm{T}}\mathbf{f}$ is the generalized aerodynamic load. The above equations form a set of $n$ linear uncoupled second-order ordinary differential equations, which may be solved using an appropriate differencing technique, such as a Newmark-type method.10 Thus, the CSD solver reduces to solving Eq. 13, once the fundamental mode shapes and frequencies of the structure have been obtained before starting the integration of Eq. 13. The initial conditions of Eq. 13 are:

$$ X_i\big|_{t=0} = X_i^{o} \quad\text{and}\quad \frac{dX_i}{dt}\bigg|_{t=0} = \dot{X}_i^{o} \qquad (14) $$

where $X_i^{o}$ and $\dot{X}_i^{o}$ are known initial values.
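The whole modal pipeline, Eqs. (11)-(13), can be sketched as below. This is a minimal illustration under simplifying assumptions, not the actual CSD code: a lumped (diagonal) mass matrix so the generalized eigenproblem reduces to a symmetric one, and a simple semi-implicit time march in place of the Newmark-type method the paper uses.

```python
import numpy as np

def modal_response(K, m, f, dt, n_steps, n_modes=2, zeta=0.0):
    """Mode-superposition sketch with lumped mass vector m and stiffness K.

    Solves K*phi = w^2*M*phi (Eq. 11) via the symmetric transform
    M^{-1/2} K M^{-1/2}, keeps the lowest n_modes, integrates the
    uncoupled modal equations (Eq. 13), and recovers q from Eq. (12).
    """
    Minv_sqrt = np.diag(1.0 / np.sqrt(m))
    w2, psi = np.linalg.eigh(Minv_sqrt @ K @ Minv_sqrt)  # ascending w_i^2
    phi = Minv_sqrt @ psi[:, :n_modes]    # mass-normalized mode shapes phi_i
    w = np.sqrt(w2[:n_modes])             # natural frequencies w_i
    fstar = phi.T @ f                     # generalized loads f_i* = phi_i^T f
    X = np.zeros(n_modes)                 # generalized displacements X_i
    Xdot = np.zeros(n_modes)
    for _ in range(n_steps):              # march the uncoupled Eq. (13)
        Xddot = fstar - 2.0 * zeta * w * Xdot - w2[:n_modes] * X
        Xdot += dt * Xddot
        X += dt * Xdot
    return phi @ X                        # Eq. (12): q = [phi_1 ... phi_n] X
```

With all modes retained and heavy damping, the long-time response converges to the static solution q = K⁻¹f, which is a convenient sanity check on the orthogonality and mass-normalization steps.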

COUPLING ALGORITHM

The two codes are coupled using the code coupling software MPCCI, which allows communication between the CFD and CSD mesh interfaces. The components of the coupled approach are:

• The code and mesh coupling library MPCCI,3 which in turn uses the message passing library MPI4
• An unstructured CFD code5 for the solution of the Euler equations, with:
  § Cell-centered finite volume formulation
  § Arbitrary Lagrangian Eulerian (ALE) form of the flow equations
  § Implicit time integrations
  § A domain-decomposition based parallel algorithm
  § A dynamically deforming mesh
• A CSD code10 for the solution of the structural response, with:
  § Finite element formulation
  § Lagrangian formulation of the solid mechanics equations
  § Structured or unstructured meshes with mid-surface (shell) elements for flexible structures, such as wings, or solid elasticity elements for non-slender structures
  § Modal superposition algorithm for the dynamic response of structures

The coupling scheme for the codes is shown in Figure 1. Since the CSD mesh is usually coarser and the structural analysis is linear, the computational burden it adds is much smaller than the computational requirements of the CFD meshes. Hence, only the CFD mesh is partitioned in this study for parallel computing. For parallelization, the CFD mesh is subdivided into subdomains called solution blocks, with one-cell overlaps at the interfaces, as illustrated in Figure 2. The message passing across the interfaces of the solution blocks of the CFD mesh is handled by MPCCI via the MPI message passing library. For problems with p solution blocks, p + 1 processes are utilized. The extra process is reserved for MPCCI, which controls the flow of data and computations. Since the structural computations are very fast, the structural code for modal superpositions is assigned to one of the processors used for the CFD code. For the algorithms developed here, more than one process may be assigned to each processor.9 Once the interface meshes are defined, MPCCI manages the information exchange, such as pressures and displacements, between the meshes via bilinear interpolations. For solid models with a mid-plane representation of the structure using shell-like finite elements, virtual meshes are also needed to facilitate the information transfer. The schematic in Figure 3 depicts the CFD and CSD meshes and their virtual counterparts. A sequential time-integration approach is used, where the structural analysis succeeds the flow analysis as follows:

1. Start with an initial condition (typically a steady state flow) at 0t = .

2. Compute pressures on the nodes of the CFD mesh from flow calculations.

3. Pass the load information to the mesh points of the CSD domain via the virtual structural mesh.

4. Calculate the nodal displacements with the CSD code.

5. Feed the structural deformation information back to the CFD domain via the virtual flow mesh.

6. Deform the CFD mesh.
7. Advance time: t = t + Δt.
8. Repeat steps 2 through 7.
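The staggered loop above can be sketched as follows. The object and method names here (`cfd`, `csd`, `coupler`, and their methods) are placeholders for this illustration, not the actual solver or MPCCI API:

```python
def coupled_time_loop(cfd, csd, coupler, t_end, dt):
    """Loosely coupled CFD-CSD staggered time loop (steps 1-8).

    cfd, csd, coupler stand in for the flow solver, the modal structural
    solver, and the MPCCI-style interpolation layer, respectively.
    """
    t = 0.0                                # step 1: start from an initial state
    while t < t_end:
        p = cfd.surface_pressures()        # step 2: pressures on CFD surface nodes
        loads = coupler.to_csd(p)          # step 3: interpolate loads to CSD mesh
        q = csd.advance(loads, dt)         # step 4: modal structural response
        disp = coupler.to_cfd(q)           # step 5: displacements back to CFD side
        cfd.deform_mesh(disp)              # step 6: spring-analogy mesh deformation
        cfd.advance(dt)                    # implicit flow step on the deformed mesh
        t += dt                            # step 7: advance time
    return t                               # step 8 is the loop itself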

The flow code uses an implicit backward-Euler solver for time integrations, while the CSD code uses a modal superposition algorithm. The first few natural frequencies and mode shapes of the structural mesh are computed only once, before the coupled time-integrations start. Transient structural responses are computed via modal superposition of the generalized displacements. Since the computational effort for the structural analysis is generally a small fraction of that for the flow analysis per time step, both codes advance with the same time increment, governed by the accuracy and stability of the CFD solution. This paper focuses on the coupling approach used in this research. The results obtained and the parallel efficiency of the computations on different systems, including Unix and Linux operating systems, are studied.

RESULTS

A coupled solid-fluid interaction test case has been analyzed to study the validity and effectiveness of the approach for both static and dynamic aeroelasticity problems. The AGARD 445.6 wing, a well-known and experimentally documented test case,11 was studied here. For comparisons with the experiments and with the results of other investigators (e.g., references 12-14) for flutter predictions, the farfield flow velocity is controlled via a flutter speed index defined as:

$$ V_f = \frac{U_\infty}{b\,\omega_\alpha\sqrt{\mu}} \qquad (15) $$

where $U_\infty$ is the freestream velocity, $b$ is the half chord, $\omega_\alpha$ is the natural circular frequency of the wing in the first uncoupled torsion mode, and $\mu$ is the solid-to-fluid mass ratio. Starting with a small generalized initial velocity (see Eq. 14) as a disturbance to the structural dynamics equations, the structural responses of the generalized displacements are computed and plotted in time. The flutter point is determined as the point where the responses of all modes are near critical equilibrium. Shown in Figure 4 are the CFD and CSD meshes used in this case. Two levels of refinement were considered for the CFD mesh to study the convergence of the solutions. The coarse CFD mesh has 147,547 tetrahedral elements, while the fine mesh has 307,559. A CSD mesh of 400 shell elements was used with both CFD meshes. As an example, the results obtained with the coarse mesh at Mach number 0.338 and different speed indices are summarized in Figures 5-7. As may be observed, the speed index V_f = 0.337 gives a decaying (thus stable) response, V_f = 0.620 gives a growing (thus unstable) response, and V_f = 0.499 yields a neutral (thus critical) response. Thus, V_f = 0.499 is defined as the flutter point at M∞ = 0.338. Carrying out such calculations for farfield Mach numbers varying from low subsonic to supersonic, the results shown in Figure 8 are obtained. As may be observed, both the coarse and the fine mesh results compare favorably with the experiment, with the fine mesh results being more favorable, even though viscous effects are not accounted for in the present approach. While the accuracy is good at low Mach numbers, the flutter index is underestimated by 13% at high Mach numbers. This is attributed to flow separations, which are not accounted for in this analysis. Shown in Figure 9 is the comparison of the flutter index with representative solutions in the literature. As may be observed, all deviate from the experiments at high Mach numbers, including the Navier-Stokes solver of Lee-Rausch et al.12 It is interesting to note that the small-disturbance transonic flow solver of Bennett and Batina13 yields the best results, except near Mach number one. This may be attributed to the higher resolution of the small-disturbance transonic mesh near the wing. The parallel efficiency of the code on Unix (IBM SP2) and Linux (Intel PC) systems is illustrated in Figure 10 in terms of the speedup, computed from S_p = T_1/T_p, where T_1 and T_p are the elapsed times needed to solve the problem with one and p processors, respectively. A case with 1.3M CFD cells and 900 CSD elements was used in this study. As may be observed, close to 85% efficiency is reached with both systems for up to 20 processors.
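The speedup and efficiency figures quoted above follow directly from the elapsed-time definition; the small helper below (the timing values in the usage note are illustrative, not measurements from the paper):

```python
def parallel_metrics(t1, tp, p):
    """Speedup S_p = T_1/T_p and parallel efficiency S_p/p.

    t1 : elapsed time to solve the problem on one processor
    tp : elapsed time on p processors
    """
    speedup = t1 / tp       # S_p as plotted in Figure 10
    efficiency = speedup / p  # fraction of ideal (linear) speedup
    return speedup, efficiency
```

For example, a hypothetical run taking 100 s serially and 100/17 s on 20 processors yields S_p = 17, i.e., 85% efficiency, the level reported for both systems.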

CONCLUSIONS

A loosely coupled, parallelized code-coupling approach is presented for solid-fluid interaction problems and aeroelastic flutter. The third-party software MPCCI, which allows the coupling of codes and meshes, proved to be a convenient tool. The flutter results obtained showed good accuracy even with the Euler equations (ignoring viscous effects), as well as good parallel efficiencies.

ACKNOWLEDGEMENT

The access provided on Indiana University's IBM SP and Linux computer systems by the University Information Technology Services (UITS) is gratefully acknowledged.

REFERENCES

1. Bhardwaj, M.K., Kapania, R.K., Reichenbach, E., and Guruswamy, G.P., “Computational Fluid Dynamics/Computational Structural Dynamics Interaction Methodology for Aircraft Wings,” AIAA Journal, Vol. 36, No. 12, 1998, pp. 2179-2186.


2. Farhat, C., and Lesoinne, M., “Two Efficient Staggered Algorithms for the Serial and Parallel Solution of Three-Dimensional Nonlinear Transient Aeroelastic Problems,” Comp. Meth. Appl. Mech. Eng., Vol. 182, 2000, pp. 499-515.

3. “MPCCI: A Mesh-based Parallel Code Coupling Interface, User’s Guide,” Institute of Algorithms and Scientific Computing (SCAI), http://www.mpcci.org , Sankt Augustin, Germany, 2002.

4. “MPI: A Message Passing Interface Standard – Message Passing Interface Forum,” The International Journal of Supercomputer Applications and High Performance Computing, Vol. 8, 1994.

5. Oktay, E., and Akay, H.U. and Uzun, A., “A Parallelized 3D Unstructured Euler Solver for Unsteady Aerodynamics,” Journal of Aircraft, Vol. 40, No. 2, 2003, pp. 348-354.

6. Trepanier, J.Y., Reggio, M., Zhang, H., and Camarero, R., “A Finite Volume Method for the Euler Equations on Arbitrary Lagrangian-Eulerian Grids,” Computers and Fluids, Vol. 20, No. 4, 1991, pp. 399-409.

7. Oktay, E., “USER3D, 3-Dimensional Unstructured Euler Solver,” ROKETSAN Inc., SA-RS-RP-R 009/442, Ankara, Turkey, May 1994.

8. Singh, K.P., Newman, J. C., and Baysal, O., “Dynamic Unstructured Method for Flows Past Multiple Objects in Relative Motion,” AIAA Journal, Vol. 33, No. 4, 1995, pp. 641-649.

9. Akay, H.U., Blech, R., Ecer, A., Ercoskun, D., Kemle, B., Quealy, A., and Williams, A., “A Database Management System for Parallel Processing of CFD Algorithms,” Proceedings of Parallel CFD ’92, Edited by R.B. Pelz, et al., Elsevier Science, Amsterdam, 1993, pp. 9-23.

10. Bathe, K.J., Wilson, E.L., and Peterson, F.E., “SAPIV, A Structural Analysis Program for Static and Dynamic Response of Linear Systems,” College of Engineering, University of California, Berkeley, CA, 1975.

11. Yates, Jr., E.C., Land, N.S., and Foughner, Jr., J.T., “Measured and Calculated Subsonic and Transonic Flutter Characteristics of a 45° Sweptback Wing Planform in Air and in Freon-12 in the Langley Transonic Dynamics Tunnel,” Langley Research Center, NASA TN D-1616, 1963.

12. Lee-Rausch, E.M., and Batina, J.T., “Wing Flutter Computations Using an Aerodynamic Model Based on the Navier-Stokes Equations,” Journal of Aircraft, Vol. 33, No. 6, 1996, pp. 1139-1148.

13. Bennett, R.M., Batina, J.T., and Cunningham, H.J., “Wing Flutter Calculations with the CAP-TSD Unsteady Transonic Small-Disturbance Program,” Journal of Aircraft, Vol. 26, No. 9, 1989, pp. 876-882.

14. Liu, F., Cai, J., and Zhu, Y., “Calculation of Wing Flutter by a Coupled CFD-CSD Method,” AIAA Paper 2000-0907, 38th Aerospace Sciences Meeting & Exhibit, Reno, NV, January 2000.

Figure 1. CFD-CSD coupling scheme: Code I (CFD solver) and Code II (CSD solver), each with an application interface, communicate through MPCCI.

Figure 2. Partitioning of the CFD mesh into solution blocks for parallel solutions.


Figure 3. Schematics of the CFD surface mesh, the virtual CSD surface mesh, and the mid-surface structural mesh (the CFD surface mesh matches the virtual CSD surface mesh).

Figure 4. Computational meshes used for the AGARD 445.6 wing: a) CFD mesh on the symmetry plane and wing surface (coarse mesh); b) CFD mesh on the symmetry plane and wing surface (fine mesh); c) CSD mesh (virtual); d) CSD mesh (mid-surface).


Figure 5. Time history of the generalized displacements (modes 1-4) for the AGARD 445.6 wing for M∞ = 0.338 and V_f = 0.337 (damped, i.e., stable response).

Figure 6. Time history of the generalized displacements (modes 1-4) for the AGARD 445.6 wing for M∞ = 0.338 and V_f = 0.620 (unstable response).


Figure 7. Time history of the generalized displacements (modes 1-4) for the AGARD 445.6 wing for M∞ = 0.338 and V_f = 0.499 (neutral response, i.e., the flutter point).

Figure 8. Variation of the flutter speed index with Mach number: comparison of coarse and fine mesh results with the experiment.11


Figure 9. Variation of the flutter speed index with Mach number: comparison with representative solutions in the literature (experiment,11 present fine-mesh results, Bennett,13 Liu,14 and the Euler and Navier-Stokes results of Lee-Rausch12).

Figure 10. Parallel speedup on Unix versus Linux systems compared with the ideal speedup (1.3M CFD cells, 900 CSD elements).