
Development of a Quadtree Based Agglomeration Method for a Multigrid Viscous Flow Solver on Unstructured Grids

Emel MAHMUTYAZICIOĞLU1 TUBITAK-SAGE, Defense Industry Research and Development Institute, Ankara, Turkey

İsmail H. TUNCER2, Haluk AKSEL 3 METU, Middle East Technical University, Ankara, Turkey

In this study, a new cell agglomeration technique is developed and applied to 1st and 2nd order inviscid/viscous multigrid flow solutions on unstructured/hybrid grids. The grid coarsening required by the multigrid scheme is achieved by agglomerating the unstructured/hybrid cells based on their distribution on a quadtree data structure. The paper presents the coarsening strategy and the multigrid adaptation on inviscid/viscous 2D solutions over a NACA 0012 airfoil section. It is shown that the quadtree based agglomeration and grid coarsening provides well defined, nested, body fitted coarse grids with optimum aspect ratio cells at all coarse grid levels. In the cases studied, it is observed that the multigrid flow solutions obtained in this study provide convergence accelerations of about 5 to 36 times for low speed inviscid flows and in a range of 3.5 to 71 fold for viscous flows.

Nomenclature
2D = 2 dimensional
α = angle of attack
A = area
$C_D$ = drag force coefficient
$C_m$ = pitch moment coefficient
$C_N$ = normal force coefficient
$I_h^H$ = restriction process
$I_H^h$ = prolongation process
FMG = full multigrid
FAS = full approximation storage scheme
L = nonlinear differential space operator
MG = multigrid
$r_h$ = the residual
$u_h$ = the converged discrete solution
$\tilde{u}_h$ = the iterative discrete solution
$u_h^{new}$ = the iterative discrete solution at the new step

1 Chief Research Engineer, Aerodynamics Division, [email protected], Senior member 2 Prof, Aerospace Engineering Department, [email protected], AIAA Member, Senior member 3 Prof, Mechanical Engineering Department, [email protected], AIAA Member, Senior member

48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, 4 - 7 January 2010, Orlando, Florida

AIAA 2010-316

Copyright © 2010 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.


I. Introduction
The multigrid (MG) technique is likely the most effective technique for reducing the CPU cost of a single computation for flow solvers (Ref. 1,5). The basic idea of the MG strategy is to employ multilevel grids which are coarsened subsets of the original grid. An explicit solver rapidly reduces the high frequency errors on the computational grids. In an MG solution, the high frequency errors at each grid level are therefore reduced rapidly. Since high frequency errors on coarse grids correspond to low frequency errors on fine grids, cycling through the coarse grid levels rapidly reduces all the errors, ranging from high to low frequency, on the original fine grid.

In MG applications on structured grids, a coarse grid can easily be derived from a given fine grid by omitting every other point in each coordinate direction. Recursive application of this procedure results in a sequence of coarse grids. The main difficulty with unstructured MG methods is the construction of the coarse grid levels. A variety of techniques have been proposed for unstructured MG coarse grid constructions (Ref. 9). The agglomeration technique is widely used since it is fully nested, easily automated, involves no geometry loss and retains high solution accuracy (Ref. 8). In agglomeration, grid cells are fused together to form a smaller set of larger polygonal (or polyhedral in three dimensions) control volumes. The main difficulty in agglomeration is the selection of the cells to be agglomerated so that the new cells formed have acceptable aspect ratios (Ref. 9).

In this study, a new grid coarsening technique is developed. This technique relies on agglomerating unstructured/hybrid cells based on their distribution on a quadtree data structure (Ref. 10). It is shown that the new grid agglomeration method provides well defined, nested, body fitted and optimum aspect ratio cells at all coarse grid levels. The method is integrated into the flow solver SENSE2D and applied to the solution of 2D viscous flows over a NACA 0012 airfoil section using various MG cycling strategies, such as the classical V-cycle and Full MG. It is observed that the MG flow solutions provide convergence accelerations in a range of 3.5 to 71 fold in the flow cases studied.

II. Method
The basic idea of the MG strategy is to employ multilevel grids which are coarser subsets of the original grid. The main objective of this study is to create coarser subsets of the original grid by cell agglomeration. On structured grids, a coarser grid can easily be obtained from a given fine mesh by omitting every other point in each coordinate direction. However, on unstructured grids, the construction of coarser grid levels is not an easy task, and a variety of techniques have been proposed for coarse grid constructions (Ref. 2,4,5,6,9). Cell agglomeration techniques are widely used, since they are fully nested, easily automated, involve no geometry loss and retain high solution accuracy (Ref. 8). In this method, the methodology of selecting the cells to be agglomerated is critical in keeping the aspect ratio of the cells acceptable (Ref. 9). In this study, a new grid coarsening method is developed which is based on the representation of the grid cells in a quadtree hierarchical data structure.

In the MG solution method, the flow solution is mostly based on flux computations on the active edges, and at each coarse level the active edges are already defined in the agglomeration process. Therefore, the baseline flow solver, SENSE2D, is modified to carry out the flux calculations along the edges, and the data structure and solution algorithm are modified to accommodate the edge based solution method. The modifications are summarized in the "Modification on Flow Solver" section.

Finally, the MG routines are prepared and embedded into the baseline code. The Full Approximation Storage scheme (FAS) is chosen to directly handle non-linear problems. In FAS, the flow variables themselves and the solution residuals at each level are transferred to the coarser level, which is known as the restriction process. The corrections obtained at the coarser levels are then transferred back to the finer level grid solution, which is known as the prolongation process. The MG strategies known as the V-cycle and the Full MG cycle are implemented. The MG algorithm is summarized in the "Multigrid Adaptation" section.

A. Grid Coarsening Based on Quadtree Hierarchy
Quadtree and octree data structures are well known for storing data associated with locations in 2- and 3-dimensional space, respectively (Ref. 10). The term quadtree describes a hierarchical data structure based on the successive subdivision of the space into four equal-sized quadrants, or children. In this study, a quadtree data structure is used first for grouping a maximum of four cells together in a quadrant, and then for merging the cells in a quadrant with the cells in the parent quadrant successively. A computational grid is first represented by the nodes corresponding to the cell centers; these nodes are placed in the quadtree structure. The quadrants are formed by a maximum of four nodes. It should be noted that the coarse grid cells formed by agglomerating the neighboring cells in such a way have an aspect ratio of about one. The coarser grids are obtained by agglomerating the cells in the quadrants into the parent quadrants successively. In the grid coarsening process, the coarser grids are nested, that is, no new edges or nodes are generated between the grid levels. This property also assures that the solid boundaries are not removed at the coarser grid levels.
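As an illustration of the grouping described above, a minimal sketch is given below. It builds a point quadtree over the cell centroids (each leaf holding at most four of them) and extracts one agglomerated cell per quadrant at a chosen tree depth. All class and function names are hypothetical; this is not the SENSE2D implementation, only a schematic of the quadtree based grouping.

```python
# Hypothetical sketch of quadtree based cell agglomeration (names are illustrative,
# not the SENSE2D data structures).  Fine cells are grouped by the quadrant that
# contains their centroid; coarser levels merge child quadrants into their parents.

class QuadNode:
    def __init__(self, xmin, ymin, xmax, ymax, depth=0):
        self.xmin, self.ymin, self.xmax, self.ymax = xmin, ymin, xmax, ymax
        self.depth = depth
        self.cells = []        # (cell_id, x, y) centroids stored in this leaf
        self.children = None   # four child quadrants once the leaf is split

    def _child_for(self, x, y):
        xm, ym = 0.5 * (self.xmin + self.xmax), 0.5 * (self.ymin + self.ymax)
        return self.children[(1 if x > xm else 0) + (2 if y > ym else 0)]

    def insert(self, cell_id, x, y, max_cells=4):
        if self.children is not None:
            self._child_for(x, y).insert(cell_id, x, y, max_cells)
            return
        self.cells.append((cell_id, x, y))
        if len(self.cells) > max_cells:            # split and push the centroids down
            xm, ym = 0.5 * (self.xmin + self.xmax), 0.5 * (self.ymin + self.ymax)
            d = self.depth + 1
            self.children = [QuadNode(self.xmin, self.ymin, xm, ym, d),
                             QuadNode(xm, self.ymin, self.xmax, ym, d),
                             QuadNode(self.xmin, ym, xm, self.ymax, d),
                             QuadNode(xm, ym, self.xmax, self.ymax, d)]
            stored, self.cells = self.cells, []
            for cid, cx, cy in stored:
                self._child_for(cx, cy).insert(cid, cx, cy, max_cells)

def collect(node):
    """All fine-cell ids stored in the subtree rooted at 'node'."""
    if node.children is None:
        return [cid for cid, _, _ in node.cells]
    ids = []
    for child in node.children:
        ids.extend(collect(child))
    return ids

def coarse_cells(node, level):
    """One agglomerated (coarse) cell per quadrant at tree depth 'level': shallower
    leaves keep their own cells, deeper subtrees are merged into the ancestor
    quadrant at that depth.  Decreasing 'level' gives successively coarser grids."""
    if node.children is None or node.depth >= level:
        ids = collect(node)
        return [ids] if ids else []
    groups = []
    for child in node.children:
        groups.extend(coarse_cells(child, level))
    return groups
```

In such a sketch, inserting all cell centroids into a root node covering the domain bounding box and calling coarse_cells with a decreasing level argument would yield a nested sequence of agglomerated grids; since each leaf holds at most four centroids, the agglomerated cells tend toward aspect ratios of about one, as noted above.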

B. Modification on Flow Solver
The viscous flow solver, SENSE2D, is taken as the baseline flow solver. It is an unstructured finite volume method (FVM) solver in which the flow variables are stored at cell centers and second order Roe upwind flux computations are employed. The time dependent equations are solved explicitly using the third order Runge-Kutta method with variable time-stepping (Ref. 9). The solution algorithm proceeds along the cells, and the flux computations are carried out along the edges of each triangular/quadrilateral cell. In an MG solution method, the cells are no longer regular cells and may have a large number of edges. The solution algorithm therefore proceeds along the active edges rather than the cells. The active edges in each grid level are already defined in the agglomeration process. In this study, the data structure and solution algorithm are therefore modified to accommodate an edge based solution method rather than a cell based one. An edge based solution algorithm can easily accommodate varying cell structures in the MG levels. The implementation of boundary conditions is kept the same, since the boundary edges are not removed in the agglomeration process.
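A minimal sketch of what such an edge based residual loop could look like is given below; the array layout and the flux interface are illustrative assumptions, not the actual SENSE2D data structures. Because the flux is computed once per active edge and scattered to its two neighbouring control volumes, the same loop serves the original triangular/quadrilateral cells and the agglomerated polygonal cells without change.

```python
# Illustrative edge based residual accumulation (hypothetical arrays, not SENSE2D code).
import numpy as np

def accumulate_residuals(edges, normals, states, flux_function):
    """edges:   (n_edges, 2) integer array of [left_cell, right_cell] ids,
    normals: (n_edges, 2) face normals, outward w.r.t. the left cell, scaled by edge length,
    states:  (n_cells, 4) conservative variables,
    flux_function(uL, uR, n): a numerical flux, e.g. a Roe type upwind flux."""
    residual = np.zeros_like(states)
    for (left, right), n in zip(edges, normals):
        flux = flux_function(states[left], states[right], n)  # one flux per active edge
        residual[left]  += flux   # flux leaves the left control volume ...
        residual[right] -= flux   # ... and enters the right one
    return residual
```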

C. Multigrid Adaptation
For the MG adaptation, the Full Approximation Storage scheme (FAS) is chosen to directly handle non-linear problems (Ref. 5,8). In FAS, the flow variables themselves and the residuals at each level are transferred to the coarser levels; this process is known as restriction. The corrections obtained at the coarser levels are then transferred back to the finer level grid solution, which is known as prolongation. Finally, V-cycles and Full MG cycles are embedded into the code.

Full Approximation Storage Scheme: The discrete problem at the fine grid level is defined as

$L_h u_h = 0$    (1)

where the subscript represents the grid level, L is the nonlinear differential space operator, and $u_h$ is the converged discrete solution. For an unconverged solution at the fine level, using the current estimate of the solution obtained by the iterative technique, $\tilde{u}_h$, Equation (1) is not satisfied exactly; therefore,

$L_h \tilde{u}_h = r_h$    (2)

$L_h u_h - L_h \tilde{u}_h = -r_h$    (3)

where $r_h$ is the residual at the fine grid level. At the coarser grid level, $L_H u_H = 0$ similarly to Equation (1); for the unconverged solution at the coarser level, Equation (3) can be written as

$L_H u_H - L_H \tilde{u}_H = -r_H$    (4)

where $r_H$ is the residual at the coarse grid level. It can be seen that the exact solution can be transferred as

$L_H \tilde{u}_H = I_h^H L_h \tilde{u}_h$    (5)

and both sides give zero. If the fine level solution is substituted into the coarse level equation by using the equality given in Equation (5), the coarse level equation is obtained as

$L_H u_H = L_H I_h^H \tilde{u}_h - I_h^H r_h + r_H$    (6)

Since the first two terms on the right-hand side are constant and transferred from the fine level solution, they are sometimes called the forcing function for the coarse level solution.

Once the coarse grid equations are solved, the corrections on the flow variables are prolonged to the fine grid level:

$u_h^{new} = u_h^{old} + I_H^h \left( u_H^{new} - I_h^H u_h^{old} \right)$    (7)
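The two-level structure of the FAS correction, Equations (6) and (7), can be summarized by the following schematic sketch. The smoothers, the discrete operators and the transfer operators passed in are placeholders (the transfer operators correspond to the restriction and prolongation described next); this is an outline of the scheme, not the SENSE2D implementation.

```python
# Schematic two-level FAS coarse grid correction following Eqs. (6)-(7).
# All operators are placeholders supplied by the caller (hypothetical interface).

def fas_two_level(u_h, smooth_h, smooth_H, L_h, L_H, restrict_u, restrict_r, prolong,
                  nu1=20, nu2=20):
    for _ in range(nu1):                      # pre-smoothing on the fine level
        u_h = smooth_h(u_h)

    r_h = L_h(u_h)                            # fine level residual, Eq. (2)
    u_H0 = restrict_u(u_h)                    # restricted flow variables
    forcing = L_H(u_H0) - restrict_r(r_h)     # forcing function of Eq. (6)

    u_H = u_H0
    for _ in range(nu2):                      # relax L_H(u_H) = forcing on the coarse level
        u_H = smooth_H(u_H, forcing)

    return u_h + prolong(u_H - u_H0)          # coarse grid correction, Eq. (7)
```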

Transfer Operators: One of the key elements for the success of such a method is the development of efficient transfer mechanisms for the flow variables and the residuals between grids, which are known as the restriction $I_h^H$ and prolongation $I_H^h$ processes. In the restriction of the flow variables, an area weighted averaging is used:

$I_h^H (u_h) = \dfrac{\sum_{fine} A_h u_h}{\sum_{coarse} A_h}$    (8)

The residuals are restricted by simple summation of the fine level residuals:

$I_h^H (r_h) = \sum r_h$    (9)
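A small sketch of these two restriction operators is given below, assuming the agglomeration provides, for each coarse cell, the indices of the fine cells it contains; the array names are illustrative, not the SENSE2D data structures.

```python
# Sketch of the restriction operators of Eqs. (8) and (9) (hypothetical arrays).
import numpy as np

def restrict_variables(u_fine, area_fine, coarse_to_fine):
    """Area weighted average of the fine level flow variables, Eq. (8).
    u_fine: (n_fine, n_var), area_fine: (n_fine,), coarse_to_fine: list of index lists."""
    u_coarse = np.zeros((len(coarse_to_fine), u_fine.shape[1]))
    for H, fine_ids in enumerate(coarse_to_fine):
        A = area_fine[fine_ids]
        u_coarse[H] = (A[:, None] * u_fine[fine_ids]).sum(axis=0) / A.sum()
    return u_coarse

def restrict_residuals(r_fine, coarse_to_fine):
    """Simple summation of the fine level residuals, Eq. (9)."""
    return np.array([r_fine[fine_ids].sum(axis=0) for fine_ids in coarse_to_fine])
```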

The corrections on the flow variables on the coarse level cells are prolonged to the fine grid level by $I_H^h$, simply by assigning the corrections to all the fine level cells forming the coarse cell.

Cycling: The V-cycle and Full MG cycle patterns are applied in the MG application. The V-cycle begins on the finest grid of the sequence, where one relaxation or time-step is performed. The solution and residuals are then interpolated to the next coarser grid, where another time-step is performed. This procedure is repeated on each coarser grid until the coarsest grid of the sequence is reached. Then the refinement phase starts and the coarse grid corrections are prolongated back to each successively finer grid. In the classical V-cycle strategy, single or multiple time-steps are performed on each grid level (Figure 1-a). This refinement procedure is repeated until the finest grid of the sequence is reached. The combination of grid sequencing with an MG method (where the solution on the current grid is initiated from a previously computed solution on a coarser grid) results in a strategy known as the full MG procedure. The basic cycling strategy is depicted in Figure 1-b. Beginning with an initial sequence of grids, the solution on the finest grid of the sequence is obtained by an MG procedure operating on this sequence. A new finer grid is then added to the sequence, the solution is interpolated onto this new grid, and MG cycling is resumed on this new larger sequence of grids. The procedure is repeated, each time adding a new finer grid to the sequence, until the desired solution accuracy has been achieved or the finest available grid has been reached.
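The two cycling strategies can be outlined as the drivers sketched below, written against a hypothetical per-level interface (smooth, residual, restrict_u, restrict_r, prolong, set_forcing, initial_guess) with the levels ordered from fine to coarse. This is a schematic of the cycling logic of Figure 1, not the SENSE2D code.

```python
# Schematic V-cycle and FMG drivers over a fine-to-coarse list of levels
# (hypothetical level interface, placeholder operators).

def v_cycle(u, levels, lvl=0, nsteps=20):
    level = levels[lvl]
    for _ in range(nsteps):                          # relax on the current grid
        u = level.smooth(u)
    if lvl == len(levels) - 1:                       # coarsest grid of the sequence
        return u
    r = level.residual(u)
    u_c0 = level.restrict_u(u)
    coarse = levels[lvl + 1]
    coarse.set_forcing(coarse.residual(u_c0) - level.restrict_r(r))
    u_c = v_cycle(u_c0, levels, lvl + 1, nsteps)     # recurse toward the coarsest grid
    u = u + level.prolong(u_c - u_c0)                # prolong the coarse grid correction
    for _ in range(nsteps):                          # relax again during the refinement phase
        u = level.smooth(u)
    return u

def fmg(levels, n_vcycles=1):
    """Start on the coarsest grid and repeatedly add the next finer grid, using the
    previously computed coarser solution (prolonged) as the initial guess."""
    u = levels[-1].initial_guess()
    for lvl in range(len(levels) - 1, -1, -1):
        if lvl < len(levels) - 1:
            u = levels[lvl].prolong(u)               # interpolate onto the newly added finer grid
        for _ in range(n_vcycles):
            u = v_cycle(u, levels[lvl:], 0, nsteps=20)
    return u
```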

Figure 1. Cycling strategies: (a) classical V-cycle; (b) FMG.


III. Results and Discussions
In this section we present results for a set of 1st order inviscid and 1st and 2nd order viscous flow solutions in two dimensions. The flows are computed over the NACA 0012 airfoil at M = 0.1 and α = 3.0°. A sequence of four coarse hybrid grids is generated using the quadtree based agglomeration method. In the solution process, the FMG and V-cycle strategies are applied with 20 time steps at each grid level.

A. Grid Coarsening and Cell Agglomeration
An automated quadtree based data structure is used for coarsening the unstructured hybrid grid around the NACA 0012 airfoil. The fine grid level, shown in Figure 2, contains 42,227 nodes and 65,540 triangular/quadrilateral cells. The coarser grids for the MG levels are shown in Figure 4-a to Figure 4-d together with their sizes. In this case, a maximum coarsening ratio of 60% between the coarse grid levels is used. The connectivity information between the grid levels is obtained from the data tree via the parent/child relationship. It should be noted that the quadtree based coarser grids have high quality cells with aspect ratios of about one.

Figure 2. Hybrid unstructured grid for NACA 0012 airfoil.

B. Case 1: The First Order Accurate Inviscid Flow Solution
The first validation case is an inviscid flow over the NACA 0012 airfoil at a low Mach number of 0.1, a Reynolds number of $10^5$ and a 3° angle of attack. The solution is obtained using the first order flux computations with a CFL number of 0.5. The density distribution, which is computed to be the same in both the single grid and MG solutions, is presented in Figure 3.
The same solution is then obtained by applying the V-cycle MG and Full MG algorithms. Although the residual histories give the same solution characteristics, the aerodynamic loads converge more rapidly with the FMG algorithm, as seen in Figure 5. The convergence histories of the solutions, in terms of the variation of the density residual and the aerodynamic loads, are given in Figure 6 and Figure 7. It is seen from Figure 6 that the FMG solution and the single grid solution without MG exhibit about the same convergence rate, as expected. A similar convergence rate is observed in the variation of the drag, lift and pitching moment coefficients when the solutions with and without MG are compared, as shown in Figure 7.

Figure 3. Density distribution for Case 1.


Figure 4. NACA 0012 grid levels (close up view): (a) grid level 2, 31,523 cells; (b) grid level 3, 18,110 cells; (c) grid level 4, 10,519 cells; (d) grid level 5, 4,352 cells.


Figure 5. Variation of lift coefficient for Case1.

Figure 6. Residual history for Case 1.

The speed-up in the drag coefficient ($C_D$) convergence, in terms of CPU time (clock time), for the fine grid solution without MG and for the FMG solution is tabulated in Table 1. The solution based on FMG is taken as the reference, and the CPU times to reach the 20%, 10%, 5%, 1% and 0.1% error bands are recorded. The runtime needed to reach the same error bands is also documented for the solution without MG. It is seen in Table 1 that the MG solutions converge approximately 5 to 36 times faster than the baseline solution. Such convergence acceleration is in agreement with the findings in the literature (Ref. 2). However, the single grid solution cannot approach the 0.1% error band in 435,000 time steps, which takes 70,000 seconds.

Table 1. Drag coefficient ($C_D$) convergence for Case 1.

Error Band    CPU Time Without MG (sec)    CPU Time FMG (sec)    Ratio
20%           6,118                        178                   34.37
10%           7,745                        215                   36.02
5%            10,107                       1,228                 8.23
1%            12,673                       2,379                 5.33
0.1%          NA                           17,517                -


Figure 7. Variation of aerodynamic coefficients for Case 1.


C. Case 2: The First Order Accurate Viscous Flow Solution
The second validation case is a laminar flow over the NACA 0012 airfoil at the same flow conditions as Case 1. The solution is again obtained using the first order flux computations with a CFL number of 0.5. The computed normalized density distribution around the airfoil is presented in Figure 8.
The solution is then obtained by applying the Full MG algorithm. The convergence histories of the solutions, in terms of the variation of the density residual and the aerodynamic loads, show similar trends to Case 1 and are given in Figure 9 and Figure 10. In Figure 9, the FMG solution and the single grid solution without MG exhibit about the same convergence rate, as expected. A similar convergence rate is observed in the variation of the drag, lift and pitching moment coefficients when the solutions with and without MG are compared, as shown in Figure 10.

Figure 9. Residual history for Case 2.

The speed-up in the steady-state convergence of the drag coefficient, in terms of CPU time (clock time), for the fine grid solution without MG and with FMG is tabulated in Table 2. The solution based on FMG is taken as the reference, and the CPU times to reach the 20%, 10%, 5%, 1% and 0.1% error bands are recorded. It is seen in Table 2 that the MG solution converges approximately 3.5 to 71 times faster than the single grid solution. Similar to the previous case, the single grid solution cannot approach the 1% error band in 260,000 time steps, which now takes 73,000 seconds.

Figure 8. Density distribution for Case 2.

Table 2. Drag coefficient ($C_D$) convergence for Case 2.

Error Band    CPU Time Without MG (sec)    CPU Time FMG (sec)    Ratio
20%           9,696                        137                   70.77
10%           11,742                       1,715                 6.85
5%            16,062                       4,602                 3.49
1%            NA                           28,617                -
0.1%          NA                           57,763                -


Figure 10. Variation of aerodynamic coefficients for Case 2.


D. Case 3: The Second Order Accurate Viscous Flow Solution
The last validation case involves a laminar flow over the NACA 0012 airfoil at the same conditions as Case 2. The solution is now obtained using the second order flux computations with a CFL number of 0.5.
The solution is then obtained by applying the Full MG algorithm. The convergence histories of the solutions, in terms of the variation of the density residual and the aerodynamic loads, show similar trends to Case 1 and are given in Figure 11 and Figure 12. In Figure 11, the FMG solution and the single grid solution without MG exhibit about the same convergence rate, as expected. A similar convergence rate is observed in the variation of the drag, lift and pitching moment coefficients when the solutions with and without MG are compared, as shown in Figure 12.

Figure 11. Residual history for Case 3.

The speed-up in the drag coefficient ($C_D$) convergence, in terms of CPU time (clock time), for the fine grid solution without MG and for the FMG solution is tabulated in Table 3. The solution based on FMG is taken as the reference, and the CPU times to reach the 20%, 10%, 5%, 1% and 0.1% error bands are recorded. The runtime needed to reach the same error bands is also documented for the solution without MG. It is seen in Table 3 that the MG solutions converge approximately 17 to 23 times faster than the baseline solution. In addition, the single grid solution cannot approach even the 5% error band in 200,000 time steps, which now takes 70,000 seconds.

IV. Conclusion
In this paper, we have presented a new grid coarsening technique based on the placement of unstructured hybrid grid cells on a quadtree data structure. The grid coarsening technique developed and the multigrid scheme are successfully implemented in the unstructured flow solver SENSE2D. The multigrid solutions are obtained for inviscid and viscous flows over an airfoil with first and second order flux computations. Numerical results show that the multigrid convergence rates are about 5 to 36 times faster for low speed inviscid flows and in a range of 3.5 to 71 times faster for viscous flows. The multigrid algorithm developed is currently being extended to three dimensional flows.

Table 3. Drag coefficient ($C_D$) convergence for Case 3.

Error Band    CPU Time Without MG (sec)    CPU Time FMG (sec)    Ratio
20%           24,422                       1,463                 16.7
10%           53,810                       2,311                 23.28
5%            NA                           5,355                 -
1%            NA                           12,347                -
0.1%          NA                           40,131                -


Figure 12. Variation of aerodynamic coefficients for Case 3.


References
1. Ahlawat, S., Johnson, R., and Vanka, S.P., "Object Based Library for 3D Unstructured Grid Representation and Volume Based Agglomeration," AIAA 2006-888, 2006.
2. Lambropoulos, N.K., Koubogiannis, D.G., and Giannakoglou, K.C., "Agglomeration Multigrid and Parallel Processing for the Numerical Solution of the Euler Equations on 2D and 3D Unstructured Grids," 4th GRACM Congress on Computational Mechanics, 27-29 June 2002.
3. Waltz, J., and Löhner, R., "A Grid Coarsening Algorithm for Unstructured Multigrid Applications," AIAA 2000-0925, January 2000.
4. Ollivier-Gooch, C.F., "Robust Coarsening of Unstructured Meshes for Multigrid Methods," AIAA 99-3250, 1999.
5. Mavriplis, D.J., "On Convergence Acceleration Techniques for Unstructured Meshes," AIAA 1998-2966, 1998.
6. Francescatto, J., and Dervieux, A., "A Semi-coarsening Strategy for Unstructured Multigrid Based on Agglomeration," International Journal for Numerical Methods in Fluids, Vol. 26, pp. 927-957, 1998.
7. Frink, N.T., and Pirzadeh, S.Z., "Tetrahedral Finite-Volume Solutions to the Navier-Stokes Equations on Complex Configurations," International Journal for Numerical Methods in Fluids, Vol. 31, pp. 175-187, 1999.
8. Mavriplis, D.J., "Multigrid Techniques for Unstructured Meshes," ICASE Report No. 95-27, NASA Contractor Report 195070, April 1995.
9. Bergen, B.K., "A Framework for Efficient Multigrid on High Performance Architectures," Student Paper, Friedrich-Alexander-Universität Erlangen-Nürnberg.
10. Samet, H., "The Quadtree and Related Hierarchical Data Structures," Computing Surveys, Vol. 16, No. 2, June 1984.
11. "Experimental Data Base for Computer Program Assessment," AGARD-AR-138, pp. A1-1 - A1-13, May 1979.