Introduction Parallel SAMR Fluid-structure interaction Conclusions
Massively parallel fluid-structure interaction simulation of blast and explosions impacting on realistic building structures with a block-structured AMR method

Ralf Deiterding & Stephen Wood

Computer Science and Mathematics Division, Oak Ridge National Laboratory
P.O. Box 2008 MS6367, Oak Ridge, TN 37831, USA
E-mail: [email protected]

SIAM Conference on Parallel Processing for Scientific Computing, February 16, 2012
R. Deiterding, S. Wood – Parallel FSI simulation with a block-structured AMR method 1
Outline
Introduction
Parallel SAMR
  Structured adaptive mesh refinement
  Complex geometry embedding
  Parallelization
  Performance data from AMROC

Fluid-structure interaction
  Coupling to a solid mechanics solver
  Verification and validation configurations
  Blast-driven deformation
  Detonation-driven deformations
Conclusions
This work was sponsored by the Office of Advanced Scientific Computing Research, U.S. Department of Energy (DOE), and was performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725.
The Virtual Test Facility
I Developed for the first DOE ASC Center at Caltech under Dan Meiron

I Overall idea: use a Cartesian embedded boundary approach based on level sets, in combination with AMR, to enable generic fluid-structure interaction coupling to numerous explicit solid mechanics solvers

I Targets strongly driven problems (shocks, blast, detonations)

I http://www.cacr.caltech.edu/asc

I Papers: [Deiterding, 2011, Deiterding et al., 2009, Deiterding et al., 2007, Deiterding et al., 2006], etc.: http://www.csm.ornl.gov/∼r2v

I AMROC V2.0 plus some solid mechanics solvers (SFC, beam solver)

I ∼430,000 lines of code total in C++, C, Fortran-77, Fortran-90

I autoconf / automake environment with support for typical parallel high-performance systems

I Used here: AMROC V2.1 (not yet released)

I New interface to DYNA3D; first prototype by Patrick Hung and Julian Cummings (CACR)
Structured adaptive mesh refinement
Block-structured adaptive mesh refinement (SAMR)
For simplicity, consider ∂t q(x, t) + ∇ · f(q(x, t)) = 0
I Refined blocks overlay coarser ones
I Refinement in space and time by factor r_l

I Block (aka patch) based data structures
+ Numerical scheme

  Q_{j,k}^{n+1} = Q_{j,k}^n − ∆t/∆x1 [ F1_{j+1/2,k} − F1_{j−1/2,k} ] − ∆t/∆x2 [ F2_{j,k+1/2} − F2_{j,k−1/2} ]

  necessary only for a single patch

+ Efficient cache-reuse / vectorization possible

− Cluster algorithm necessary
[Figure: SAMR grid hierarchy with grids G_{0,1}; G_{1,1}, G_{1,2}, G_{1,3}; G_{2,1}, G_{2,2}]
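As a minimal illustration of the patch-wise update above, the following sketch (not AMROC code; the function name, linear-advection fluxes, and periodic wrap standing in for ghost cells are assumptions of this example) applies one conservative finite-volume step on a single 2D patch:

```python
import numpy as np

def update_patch(Q, dt, dx1, dx2, a1=1.0, a2=1.0):
    """One conservative step of
    Q^{n+1}_{jk} = Q^n_{jk} - dt/dx1 (F1_{j+1/2,k} - F1_{j-1/2,k})
                           - dt/dx2 (F2_{j,k+1/2} - F2_{j,k-1/2})
    on a single patch, with first-order upwind fluxes for linear
    advection (a1, a2 > 0)."""
    F1 = a1 * Q                       # upwind: F1_{j+1/2,k} = a1 * Q_{j,k}
    F2 = a2 * Q                       # upwind: F2_{j,k+1/2} = a2 * Q_{j,k}
    return (Q - dt / dx1 * (F1 - np.roll(F1, 1, axis=0))
              - dt / dx2 * (F2 - np.roll(F2, 1, axis=1)))
```

Because the update is in flux-difference form, the total of Q over the (periodic) patch is conserved exactly, which is the property the scheme needs when coarse-fine flux corrections are applied later.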
Complex geometry embedding
Level-set method for boundary embedding
I Implicit boundary representation via distance function ϕ, normal n = ∇ϕ/|∇ϕ|

I Complex boundary moving with local velocity w; treat interface as a moving rigid wall

I Construction of values in embedded boundary cells by interpolation / extrapolation

Interpolate / constant-value extrapolate values at x̃ = x + 2ϕn

Velocity in ghost cells:

  u′ = (2w · n − u · n) n + (u · t) t = 2 ((w − u) · n) n + u
[Figure: 1D ghost-cell construction at a wall moving with velocity w. Interior / ghost values: ρ_{j−1}, ρ_j | ρ_j, ρ_{j−1}; u_{j−1}, u_j | 2w − u_j, 2w − u_{j−1}; p_{j−1}, p_j | p_j, p_{j−1}]
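The ghost-cell velocity formula is easy to check numerically; this sketch (function name assumed for the illustration) mirrors the normal velocity component about the wall velocity w and keeps the tangential part:

```python
import numpy as np

def ghost_velocity(u, w, n):
    """u' = 2 ((w - u) . n) n + u : mirror the normal component of the
    fluid velocity u about the wall velocity w along the unit normal n,
    leaving the tangential component unchanged."""
    u, w, n = map(np.asarray, (u, w, n))
    n = n / np.linalg.norm(n)
    return 2.0 * np.dot(w - u, n) * n + u
```

The average of u and u′ then has the wall's normal velocity, so the reconstructed interface behaves as a moving rigid wall.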
Verification: shock reflection
I Reflection of a Mach 2.38 shock in nitrogen at a 43° wedge

I 2nd order MUSCL scheme with Roe solver, 2nd order multidimensional wave propagation method

Cartesian base grid 360 × 160 cells on a domain of 36 mm × 16 mm with up to 3 refinement levels with r_l = 2, 4, 4 and ∆x_{1,2} = 3.125 µm, 38 h CPU

GFM base grid 390 × 330 cells on a domain of 26 mm × 22 mm with up to 3 refinement levels with r_l = 2, 4, 4 and ∆x_{e,1,2} = 2.849 µm, 200 h CPU
Shock reflection: SAMR solution for Euler equations
∆x = 25 µm   ∆x = 12.5 µm   ∆x = 3.125 µm
∆x_e = 22.8 µm   ∆x_e = 11.4 µm   ∆x_e = 2.849 µm

2nd order MUSCL scheme with Van Leer FVS, dimensional splitting

∆x = 12.5 µm   ∆x = 3.125 µm
Parallelization
Parallelization strategies
I Data of all levels resides on the same node

I Grid hierarchy defines unique "floor-plan"

I Workload estimation

  W(Ω) = Σ_{l=0}^{l_max} [ N_l(G_l ∩ Ω) · Π_{κ=0}^{l} r_κ ]

I Parallel operations
  I Synchronization of ghost cells
  I Redistribution of data blocks within regridding operation
  I Flux correction of coarse grid cells

[Figure: decomposition of the SAMR hierarchy between Processor 1 and Processor 2]

I Clip grid lists with a properly chosen quadratic bounding box before using ∩, \

I All topological operations in Recompose(l) involving global lists can be reduced to local ones

I Present code still uses MPI_Allgather() to communicate global lists to all nodes

I Global view useful to evaluate the new local portion of the hierarchy and for data redistribution
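The workload estimate W(Ω) translates directly into code; in this sketch, cells_per_level[l] stands for N_l(G_l ∩ Ω) and factors[l] for r_l (with r_0 = 1), names chosen for the illustration:

```python
def workload(cells_per_level, factors):
    """W(Omega) = sum_l N_l(G_l ∩ Omega) * prod_{kappa=0}^{l} r_kappa.
    With time refinement, a level-l cell is updated prod r_kappa times
    per level-0 step, hence the cumulative weight."""
    w, cum = 0, 1
    for n_cells, r in zip(cells_per_level, factors):
        cum *= r                 # cumulative refinement factor prod r_kappa
        w += n_cells * cum       # level-l cells weighted by their update count
    return w
```

For example, 1000 base cells plus 200 cells on a level refined by 2 plus 50 cells on a further level refined by 4 give W = 1000 + 400 + 400 = 1800 cell updates per coarse step.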
Parallelized construction of space-filling curve
Computation of the space-filling curve

I Partition-Init
  1. Compute aggregated workload for the new grid hierarchy and project the result onto level 0
  2. Construct SFC-units recursively until the work in each unit is homogeneous; GuCFactor defines the minimal coarseness relative to the level-0 grid

I Partition-Calc
  1. Compute the entire workload and the new work for each processor
  2. Go sequentially through the SFC-ordered list of partitioning units and assign units to processors; refine the partition if necessary and possible

I Ensure scalability of Partition-Init by creating SFC-units strictly locally

I Currently still uses MPI_Allgather() to create globally identical input for Partition-Calc (can be a bottleneck for weak scalability)
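A toy version of the two phases can be sketched as follows (the function names and the Z-order key are illustrative assumptions; AMROC's actual SFC construction and partition refinement are more elaborate): order the workload units along a space-filling curve, then cut the ordered list into contiguous chunks of roughly equal work.

```python
def morton2d(x, y, bits=16):
    """Z-order (Morton) key: interleave the bits of the level-0 cell
    coordinates, giving a locality-preserving 1D ordering."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

def partition_calc(units, nproc):
    """Walk the SFC-ordered unit list and greedily assign contiguous
    chunks of about total/nproc workload to each processor.
    units: list of ((x, y), work)."""
    units = sorted(units, key=lambda u: morton2d(*u[0]))
    target = sum(w for _, w in units) / nproc
    chunks, cur, acc = [], [], 0.0
    for coords, w in units:
        if cur and acc + w > target and len(chunks) < nproc - 1:
            chunks.append(cur)            # close this processor's chunk
            cur, acc = [], 0.0
        cur.append(coords)
        acc += w
    chunks.append(cur)
    return chunks
```

Because the SFC ordering keeps nearby cells nearby in the 1D list, the contiguous chunks correspond to compact subdomains, which keeps ghost-cell communication local.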
Partitioning example
I Cylinders and spheres in supersonic flow

I Predict force on secondary body

I Right: 200 × 160 base mesh, 3 levels, factors 2, 2, 2, 8 CPUs
[Laurence et al., 2007]
Performance data from AMROC
First performance assessment
I Test run on 2.2 GHz AMD Opteron quad-core cluster connected with Infiniband

I Cartesian test configuration

I Spherical blast wave, Euler equations, 3rd order WENO scheme, 3-step Runge-Kutta update

I AMR base grid 64³, r_{1,2} = 2, 89 time steps on coarsest level

I With embedded boundary method: 96 time steps on coarsest level

I Redistribute in parallel every 2nd base level step

I Uniform grid 256³ = 16.8 · 10⁶ cells

Grids and cells used on 16 CPUs:

  Level   Grids   Cells
  0         115     262,144
  1         373   1,589,808
  2        2282   5,907,064
Cost of SAMR and ghost-fluid method
I Flux correction is negligible

I Clustering is negligible (already a local approach). For the complexities of a scalable global clustering algorithm see [Gunney et al., 2007]

I Costs for GFM constant around ∼36%

I Main costs: Regrid(l) operation and ghost cell synchronization

Without GFM:

  CPUs               16        32        64
  Time per step      32.44 s   18.63 s   11.87 s
  Uniform            59.65 s   29.70 s   15.15 s
  Integration        73.46%    64.69%    50.44%
  Flux Correction     1.30%     1.49%     2.03%
  Boundary Setting   13.72%    16.60%    20.44%
  Regridding         10.43%    15.68%    24.25%
  Clustering          0.34%     0.32%     0.26%
  Output              0.29%     0.53%     0.92%
  Misc.               0.46%     0.44%     0.47%

With GFM:

  CPUs               16        32        64
  Time per step      43.97 s   25.24 s   16.21 s
  Uniform            69.09 s   35.94 s   18.24 s
  Integration        59.09%    49.93%    40.20%
  Flux Correction     0.82%     0.80%     1.14%
  Boundary Setting   19.22%    25.58%    28.98%
  Regridding          7.21%     9.15%    13.46%
  Clustering          0.25%     0.23%     0.21%
  GFM Find Cells      2.04%     1.73%     1.38%
  GFM Interpolation   6.01%    10.39%     7.92%
  GFM Overhead        0.54%     0.47%     0.37%
  GFM Calculate       0.70%     0.60%     0.48%
  Output              0.23%     0.52%     0.74%
  Misc.               0.68%     0.62%     0.58%
AMROC scalability tests
Basic test configuration

I Spherical blast wave, Euler equations, 3D wave propagation method

I AMR base grid 32³ with r_{1,2} = 2, 4. 5 time steps on coarsest level

I Uniform grid 256³ = 16.8 · 10⁶ cells, 19 time steps

I Flux correction deactivated

I No volume I/O operations

I Tests run on IBM BG/P (mode VN)

Weak scalability test

I Reproduction of configuration for every 64 CPUs

I On 1024 CPUs: 128 × 64 × 64 base grid, > 33,500 grids, ∼61 · 10⁶ cells; uniform 1024 × 512 × 512 = 268 · 10⁶ cells

  Level   Grids   Cells
  0         606      32,768
  1         575     135,312
  2         910   3,639,040

Strong scalability test

I 64 × 32 × 32 base grid, uniform 512 × 256 × 256 = 33.6 · 10⁶ cells

  Level   Grids   Cells
  0        1709      65,536
  1        1735     271,048
  2        2210   7,190,208
Weak scalability test
[Figure: time per highest level step (s) vs. CPUs (64-1024) for SAMR and Uniform; breakdown of time per step with SAMR: Integration, Syncing, Partition, Recompose, Misc]
I Costs for Syncing basically constant

I Partitioning, Recompose, Misc (origin not clear) increase

I 1024 CPUs required usage of the -DUAL option due to global list data structures in Partition-Calc and Recompose
Strong scalability test
[Figure: time per highest level step (s, log scale) vs. CPUs (16-1024) for SAMR and Uniform; breakdown of time per step with SAMR: Integration, Syncing, Partition, Recompose, Misc]
I Uniform code has basically linear scalability (explicit method)

I SAMR visibly loses efficiency for > 512 CPUs, i.e. below ∼15,000 finite volume cells per CPU
Strong scalability test - II
[Figure: scaling of main operations (Integration, Syncing, Partition, Recompose, Misc) vs. CPUs (16-1024, log scale); breakdown of time per step with SAMR in %]
I Perfect scaling of Integration, reasonable scaling of Syncing
I Strong scalability of Partition needs to be addressed (eliminate global lists)
Coupling to a solid mechanics solver
Construction of coupling data
I Moving boundary/interface is treated as a moving contact discontinuity and represented by a level set [Fedkiw, 2002] [Arienti et al., 2003]

I Efficient construction of the level set from triangulated surface data with the closest-point-transform (CPT) algorithm [Mauch, 2003]

I One-sided construction of mirrored ghost cell and new FEM nodal point values

I FEM ansatz-function interpolation to obtain intermediate surface values

I Explicit coupling possible if geometry and velocities are prescribed for the more compressible medium [Specht, 2000]

Coupling iteration:

  u^F_n := u^S_n(t)|_I
  UpdateFluid(∆t)
  σ^S_nn := −p^F(t + ∆t)|_I
  UpdateSolid(∆t)
  t := t + ∆t

Coupling conditions on the interface I:

  u^S_n = u^F_n,   σ^S_nn = −p^F,   σ^S_nm = 0
Coupling elements
Usage of SAMR
I Eulerian SAMR + non-adaptive Lagrangian FEM scheme

I Exploit SAMR time step refinement for effective coupling to the solid solver

I Lagrangian simulation is called only at level l_c ≤ l_max

I SAMR refines the solid boundary at least at level l_c

I Additional levels can be used to resolve geometric ambiguities

I Nevertheless: inserting sub-steps accommodates time step reductions from the solid solver within an SAMR cycle

I Communication strategy:
  I Updated boundary info from the solid solver must be received before the regridding operation
  I Boundary data is sent to the solid when the highest level is available
  I Inter-solver communication (point-to-point or global) managed on-the-fly by a special Eulerian-Lagrangian coupling (ELC) module [Deiterding et al., 2006]
SAMR algorithm for FSI coupling

[Figure: time line of fluid level updates F1-F7 on levels l = 0, 1, 2 with coupling level l_c = 1, interleaved with solid sub-steps S1-S8]

AdvanceLevel(l)
  Repeat r_l times
    Set ghost cells of Q^l(t)
    CPT(ϕ^l, C^l, I, δ^l)
    If time to regrid: Regrid(l)
    UpdateLevel(l)
    If level l+1 exists:
      Set ghost cells of Q^l(t + ∆t_l)
      AdvanceLevel(l+1)
      Average Q^{l+1}(t + ∆t_l) onto Q^l(t + ∆t_l)
    If l = l_c:
      SendInterfaceData(p^F(t + ∆t_l)|_I)
      If (t + ∆t_l) < (t_0 + ∆t_0): ReceiveInterfaceData(I, u^S|_I)
    t := t + ∆t_l

I Call the CPT algorithm before Regrid(l)

I Also include a call to CPT(·) in Recompose(l) to ensure consistent level set data on levels that have changed

I Communicate boundary data on coupling level l_c
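The recursion in AdvanceLevel(l) can be illustrated with a stripped-down sketch (regridding, ghost cells, and coupling omitted; names assumed) that only records the order and times of the level updates:

```python
def advance_level(l, t, dt, factors, l_max, log):
    """Recursive SAMR cycle: level l takes factors[l] sub-steps of size
    dt / factors[l]; each sub-step first updates level l at time t, then
    advances level l+1 over the same interval, mirroring AdvanceLevel(l)."""
    sub = dt / factors[l]
    for _ in range(factors[l]):
        log.append((l, round(t, 12)))    # stands in for UpdateLevel(l) at time t
        if l < l_max:
            advance_level(l + 1, t, sub, factors, l_max, log)
        t += sub
    return t
```

With factors = [1, 2, 2], one coarse step produces one level-0 update, two level-1 updates, and four level-2 updates, i.e. the r_l subcycling pattern of the time line above.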
Fluid and solid update / exchange of time steps

FluidStep( )
  ∆τ^F := min_{l=0,...,l_max} (R_l · StableFluidTimeStep(l), ∆τ^S)
  ∆t_l := ∆τ^F / R_l for l = 0, ..., l_max
  ReceiveInterfaceData(I, u^S|_I)
  AdvanceLevel(0)

SolidStep( )
  ∆τ^S := min(K · R_{l_c} · StableSolidTimeStep(), ∆τ^F)
  Repeat R_{l_c} times
    t_end := t + ∆τ^S / R_{l_c},  ∆t := ∆τ^S / (K R_{l_c})
    While t < t_end
      SendInterfaceData(I(t), u^S|_I(t))
      ReceiveInterfaceData(p^F|_I)
      UpdateSolid(p^F|_I, ∆t)
      t := t + ∆t
      ∆t := min(StableSolidTimeStep(), t_end − t)

I Time step stays constant for R_{l_c} steps, which corresponds to one fluid step at level 0, with R_l = Π_{ι=0}^{l} r_ι
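The time-step negotiation between the two solvers can be sketched as two small helpers (function names assumed; in the actual code the values travel through the ELC interface messages): each side caps its step with the other's last offer.

```python
def fluid_step_size(stable_fluid_per_level, R, d_tau_S):
    """dtau_F := min_l (R_l * StableFluidTimeStep(l), dtau_S): each level's
    stable step scaled to level-0 units by the cumulative refinement
    factor R_l, additionally capped by the solid's offer dtau_S."""
    return min(min(R[l] * s for l, s in enumerate(stable_fluid_per_level)),
               d_tau_S)

def solid_step_size(stable_solid, K, R_lc, d_tau_F):
    """dtau_S := min(K * R_lc * StableSolidTimeStep(), dtau_F): the solid
    plans K sub-steps per coupled exchange over R_lc exchanges, capped
    by the fluid's offer dtau_F."""
    return min(K * R_lc * stable_solid, d_tau_F)
```

Taking the minimum on both sides guarantees that neither solver ever steps past the other's stability limit, while the K sub-steps absorb reductions of the solid step inside one coupled exchange.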
Verification and validation configurations
Shock-driven elastic panel motion
I Thin steel plate (thickness h = 1 mm, length 50 mm), clamped at the lower end

I ρ_s = 7600 kg/m³, E = 220 GPa, I = h³/12, ν = 0.3 [Giordano et al., 2005]

I SAMR base mesh 320 × 64 (×2), r_{1,2} = 2, l_c = 2, 4 solid sub-iterations

I Intel 3.4 GHz Xeon dual processors, GB Ethernet interconnect

I ∼450 h CPU on 15 fluid CPUs + 1 solid CPU for DYNA3D [Hallquist and Lin, 2005]

t ≈ 0.06 ms   t ≈ 0.24 ms

[Figure: tip displacement in simulation and experiment [Deiterding, 2010]]
Plastic deformation of reinforced concrete column
I Column of 6.4 m with 500 × 900 mm cross-section as in [Ngo et al., 2007]

I DYNA3D elastic-plastic concrete model: strength σ_max = 80 MPa

I ρ_s = 2010 kg/m³, E = 21.72 GPa, ν = 0.2, yield stress σ_y = 910 kPa, E_T = 11.2 GPa, β = 0.03

I Spherical energy deposition ≡ 150 kg TNT, 0.5 m distance, 2 m above the ground

I 297 h CPU on 33+1 CPUs, 3.4 GHz Intel Xeon
Blast-driven deformation
Highway bridge

I Case follows [Agrawal and Yi, 2009]: 150 kg TNT 0.5 m in front of the high middle column, 2 m above the ground

I Concrete modeled with DYNA3D plastic concrete model, 3365 solid hexahedron elements

I SAMR: 240 × 40 × 80 base level, three additional levels r_{1,2,3} = 2, l_c = 2, R_{l_c} = 1

I 487 h CPU on 63+1 CPUs, 3.4 GHz Intel Xeon, 1504 coupled time steps to t_end = 20 ms
Highway bridge - meshing detail
Coupled FSI - strong scalability
I SAMR: 240× 40× 80, two levels: r1 = 2, r2 = 4; coupling: lc = 2, Rlc = 1
I Timing done on fluid side for 24 steps on finest level
I ∼56,500,000 cells instead of 393,216,000
[Figure: time per highest level step (s, log scale) vs. CPUs (8-256), SAMR vs. ideal]

Times per operation in seconds:

  CPUs            8     16    32    64    128   256
  Integration     290   133   62    29    14    6.9
  Syncing         50    41    15    14    6.6   4.5
  Interpolation   20    9.2   4.4   2.1   1.0   0.5
  GFM             15    6.5   3.0   1.5   0.7   0.4
  Regridding      4.8   2.2   1.1   1.4   0.7   0.6
  Coupling        4.3   1.7   0.8   0.4   0.2   0.1
  Level set       3.4   1.5   0.7   0.4   0.2   0.1
  ELC             0.7   0.4   0.3   0.5   1.1   2.2
  Misc            12    8.1   7.3   6.5   4.1   2.8
I Interpolation: 1/3 SAMR interpolation, 2/3 GFM extrapolation/interpolation
I Regridding: Partition (negligible in this case) + Recompose + Clustering
I Coupling: Computation of coupling data on fluid side
I Level set: Overhead + CPT algorithm
I ELC: waiting to receive solid data on fluid side
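From a timing table like the one above, strong-scaling efficiency can be computed relative to the smallest run; a small helper (name assumed) for that arithmetic:

```python
def strong_scaling_efficiency(times, cpus):
    """E_p = (T_ref * p_ref) / (T_p * p): the fraction of ideal speedup
    retained when scaling from the first (reference) run to p CPUs."""
    t_ref, p_ref = times[0], cpus[0]
    return [t_ref * p_ref / (t * p) for t, p in zip(times, cpus)]
```

For a perfectly scaling operation the efficiency stays at 1; values well below 1 (as for ELC, whose time grows with the CPU count) flag the operations that limit strong scalability.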
Coupled FSI scalability - main operations
[Figure: time per highest level step (s, log scale) vs. CPUs (8-256) for Integration, Syncing, Interpolation, GFM, Regridding, Coupling, Level set, ELC, Misc]
Coupled FSI scalability - main operations II
[Figure: breakdown of CPU time with SAMR in % vs. CPUs (8-256): Integration, Syncing, Interpolation, GFM, Regridding, Coupling, Level set, ELC, Misc]
I ELC (waiting for solid data) increases disproportionately in the strong scalability test

I Problem: only the serial DYNA3D version was easily available → change the splitting approach slightly and evaluate fluid and solid simultaneously for the same time step
Detonation-driven deformations
Prototypical hydrogen explosion in nuclear reactor
Chapman-Jouguet detonation in hydrogen-air mixture at atmospheric pressure. Euler equations with single exothermic reaction A → B:

  ∂t ρ + ∂x_n (ρ u_n) = 0,   ∂t (ρ u_k) + ∂x_n (ρ u_k u_n + δ_kn p) = 0,  k = 1, ..., d
  ∂t (ρE) + ∂x_n (u_n (ρE + p)) = 0,   ∂t (Yρ) + ∂x_n (Yρ u_n) = ψ

with

  p = (γ − 1)(ρE − ½ ρ u_n u_n − ρ Y q_0)   and   ψ = −k Y ρ exp(−E_A ρ / p)

modeled with the heuristic detonation model of [Mader, 1979]:

  V := ρ⁻¹, V_0 := ρ_0⁻¹, V_CJ := ρ_CJ⁻¹
  Y′ := 1 − (V − V_0)/(V_CJ − V_0)
  If 0 ≤ Y′ ≤ 1 and Y > 10⁻⁸ then
    If Y < Y′ and Y′ < 0.9 then Y′ := 0
    If Y′ < 0.99 then p′ := (1 − Y′) p_CJ else p′ := p
    ρ_A := Y′ ρ
    E := p′/(ρ(γ − 1)) + Y′ q_0 + ½ u_n u_n

Parameters used for H2-air, stoichiometry 0.5, induction length 3.2 mm, d_CJ ≈ 1620 m/s:

  ρ_0    0.985 kg/m³
  p_0    100 kPa
  ρ_CJ   1.951 kg/m³
  p_CJ   1378 kPa
  γ      1.266
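A direct transcription of the burn model above into code (function name and argument order chosen for this sketch):

```python
def mader_burn(rho, p, Y, rho0, rho_cj, p_cj):
    """Heuristic CJ volume-burn model of [Mader, 1979] as on the slide:
    burn fraction Y' interpolated in specific volume between V0 and VCJ,
    pressure driven toward (1 - Y') * pCJ while burning.
    Returns the updated (Y', p')."""
    V, V0, Vcj = 1.0 / rho, 1.0 / rho0, 1.0 / rho_cj
    Yp = 1.0 - (V - V0) / (Vcj - V0)
    if 0.0 <= Yp <= 1.0 and Y > 1e-8:
        if Y < Yp and Yp < 0.9:
            Yp = 0.0                      # snap nearly-burnt cells to burnt
        p_new = (1.0 - Yp) * p_cj if Yp < 0.99 else p
        return Yp, p_new
    return Y, p
```

With the H2-air parameters from the table (ρ_0 = 0.985 kg/m³, ρ_CJ = 1.951 kg/m³, p_CJ = 1378 kPa), a cell compressed to ρ_CJ returns Y′ = 0 and p′ = p_CJ, while an undisturbed cell at ρ_0 stays unreacted.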
H2-Air detonation in reactor building

Four materials used:

I orange: high strength
I yellow: low strength
I dark gray: concrete, girders
I light gray: paneling

19502 solid hexahedron elements
Exemplary ignition in center plane
H2-Air detonation in reactor building - meshing details

I SAMR base mesh: 120 × 120 × 180, two levels r_{1,2} = 2

I Coupling level l_c = 2, R_{l_c} = 15 sub-iterations, 2852 coupled time steps to t_end = 50 ms

I 3742 h CPU on 63+1 CPUs, 3.4 GHz Intel Xeon

I ∼16,300,000 cells (t = 0) to ∼50,300,000 cells (t_end) instead of 165,888,000
Conclusions
I Developed and demonstrated the parallel coupling of AMROC with DYNA3D for sophisticated, real-world engineering scenarios

I Future directions
  I Increase level of detail and realism on the structural side
  I Investigate more sophisticated detonation models on the fluid side

I Parallelization
  I Rigorous domain decomposition scales acceptably for hierarchies that are not too deep and will scale fully in the weak sense
  I Improved strong and weak scalability requires complete elimination of global data for recomposition and partitioning
  I Recomposition and partitioning bottlenecks will be reduced by implementing hybrid MPI-OpenMP parallelization in AMROC
  I Improved scalability for the coupled FSI application requires a slight algorithmic change to enable overlapping of computation on the fluid and solid sides
References
References I
[Agrawal and Yi, 2009] Agrawal, A. K. and Yi, Z. (2009). Blast load effects on highway bridges. Technical report, University Transportation Research Center, City College of New York.

[Arienti et al., 2003] Arienti, M., Hung, P., Morano, E., and Shepherd, J. E. (2003). A level set approach to Eulerian-Lagrangian coupling. J. Comput. Phys., 185:213-251.

[Deiterding, 2010] Deiterding, R. (2010). Parallel adaptive Cartesian upwind methods for shock-driven multiphysics simulation. In Anais do CNMAC - 33rd Brazilian National Congress for Applied and Computational Mathematics, pages 1048-1057.

[Deiterding, 2011] Deiterding, R. (2011). Block-structured adaptive mesh refinement - theory, implementation and application. European Series in Applied and Industrial Mathematics: Proceedings, 34:97-150.

[Deiterding et al., 2009] Deiterding, R., Cirak, F., and Mauch, S. P. (2009). Efficient fluid-structure interaction simulation of viscoplastic and fracturing thin-shells subjected to underwater shock loading. In Hartmann, S., Meister, A., Schäfer, M., and Turek, S., editors, Int. Workshop on Fluid-Structure Interaction. Theory, Numerics and Applications, Herrsching am Ammersee 2008, pages 65-80. kassel university press GmbH.
References II
[Deiterding et al., 2007] Deiterding, R., Cirak, F., Mauch, S. P., and Meiron, D. I. (2007). A virtual test facility for simulating detonation- and shock-induced deformation and fracture of thin flexible shells. Int. J. Multiscale Computational Engineering, 5(1):47-63.

[Deiterding et al., 2006] Deiterding, R., Radovitzky, R., Mauch, S. P., Noels, L., Cummings, J. C., and Meiron, D. I. (2006). A virtual test facility for the efficient simulation of solid materials under high energy shock-wave loading. Engineering with Computers, 22(3-4):325-347.

[Fedkiw, 2002] Fedkiw, R. P. (2002). Coupling an Eulerian fluid calculation to a Lagrangian solid calculation with the ghost fluid method. J. Comput. Phys., 175:200-224.

[Giordano et al., 2005] Giordano, J., Jourdan, G., Burtschell, Y., Medale, M., Zeitoun, D. E., and Houas, L. (2005). Shock wave impacts on deforming panel, an application of fluid-structure interaction. Shock Waves, 14(1-2):103-110.

[Gunney et al., 2007] Gunney, B. T., Wissink, A. M., and Hysom, D. A. (2007). Parallel clustering algorithms for structured AMR. J. Parallel and Distributed Computing, 66(11):1419-1430.
References III
[Hallquist and Lin, 2005] Hallquist, J. and Lin, J. I. (2005). A nonlinear explicit three-dimensional finite element code for solid and structural mechanics. Technical Report UCRL-MA-107254, Lawrence Livermore National Laboratory. Source code (U.S. export controlled) available for a licensing fee from http://www.osti.gov/estsc.

[Laurence et al., 2007] Laurence, S. J., Deiterding, R., and Hornung, H. G. (2007). Proximal bodies in hypersonic flows. J. Fluid Mech., 590:209-237.

[Mader, 1979] Mader, C. L. (1979). Numerical modeling of detonations. University of California Press, Berkeley and Los Angeles, California.

[Mauch, 2003] Mauch, S. P. (2003). Efficient Algorithms for Solving Static Hamilton-Jacobi Equations. PhD thesis, California Institute of Technology.

[Ngo et al., 2007] Ngo, T., Mendis, P., Gupta, A., and Ramsay, J. (2007). Blast loading and blast effects on structures - an overview. Electronic Journal of Structural Engineering.

[Specht, 2000] Specht, U. (2000). Numerische Simulation mechanischer Wellen an Fluid-Festkörper-Mediengrenzen. Number 398 in VDI Reihe 7. VDI Verlag, Düsseldorf.