
Page 1: Multi-Scale Parallel computing in Laboratory of Computational Geodynamics, Chinese Academy of Sciences Yaolin Shi 1, Mian Liu 2,1, Huai Zhang 1, David

Multi-Scale Parallel computing in Laboratory of Computational Geodynamics, Chinese

Academy of Sciences

Yaolin Shi1, Mian Liu2,1, Huai Zhang1, David A. Yuen3, Hui Wang1, Shi Chen1, Shaolin Chen1, Zhenzhen Yan1

Laboratory of Computational Geodynamics,

Graduate University of Chinese Academy of Sciences

July 22nd, 2006

1 Graduate University of Chinese Academy of Sciences

2 University of Missouri-Columbia

3 University of Minnesota, Twin Cities

Page 2:

Computational Geodynamics

• Huge amounts of data from GIS, GPS, and other observations;
• Large-scale parallel machines;
• Fast development of networks, and of high-speed interconnections between HPCCs and between institutes;
• Middleware for grid computing;
• Developments in computational mathematics for large-scale linear systems and nonlinear algorithms for parallel computing;
• Problems are becoming more and more complex.

Page 3:

There is more than one way to do parallel computing and grid computing

• Now, we are thinking about ways to do parallel computing:
• Developing state-of-the-art source-code packages – www.geodynamics.org;
• Specific types of models can be plugged into a general supporting system (wave, fluid, structure, etc.) – GeoFEM;
• Developing a platform that can generate parallel and grid computing source code according to the user's model – a modeling-language-based computing environment.

Page 4:

∂σxx/∂x + ∂σxy/∂y + ∂σxz/∂z + fx = 0
∂σxy/∂x + ∂σyy/∂y + ∂σyz/∂z + fy = 0
∂σxz/∂x + ∂σyz/∂y + ∂σzz/∂z + fz = 0

Automatic source-code generator

func
funa=+[u/x]
………
funf=+[u/y]+[v/x]
………
dist =+[funa;funa]*d(1,1)+[funa;funb]*d(1,2)+[funa;func]*d(1,3)
     +[funb;funa]*d(2,1)+[funb;funb]*d(2,2)+[funb;func]*d(2,3)
     +[func;funa]*d(3,1)+[func;funb]*d(3,2)+[func;func]*d(3,3)
     +[fund;fund]*d(4,4)+[fune;fune]*d(5,5)+[funf;funf]*d(6,6)
load = +[u]*fu+[v]*fv+[w]*fw
     -[funa]*f(1)-[funb]*f(2)-[func]*f(3)
     -[fund]*f(4)-[fune]*f(5)-[funf]*f(6)
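In these expressions, each pair [funX;funY]*d(i,j) contributes one weak-form energy term; collecting the funX rows into a strain-displacement matrix B reduces the whole sum to BᵀdB. A minimal numpy sketch of that reduction (the function name and array shapes are illustrative assumptions, not the generator's actual output):

```python
import numpy as np

# Sketch (assumed, not the generator's actual output): each "fun" row holds the
# coefficients of one strain component with respect to the element's nodal
# unknowns, so a term like [funa;funb]*d(1,2) contributes
# fun_rows[0]^T * d[0,1] * fun_rows[1] to the element stiffness.
def element_stiffness(fun_rows, d, weight=1.0):
    """fun_rows: (nstrain, ndof) strain-displacement matrix B; d: (nstrain, nstrain)."""
    B = np.asarray(fun_rows)
    return weight * B.T @ np.asarray(d) @ B  # the full sum of [fun_i;fun_j]*d(i,j)

# Tiny check: with B = I the element stiffness is just d itself.
B = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[2.0, 0.0], [0.0, 3.0]])
K = element_stiffness(B, d)
```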

[Flow diagram: physical model → PDEs → FEM modeling language → automatic source-code generator → complete source code → HPCC → model results; input data supplied by data grids (GEON and others).]

Page 5:

Why high-performance computing

[Figure: comparison of numerical results at three resolutions: 200, 5000, and 20000.]

Not all numerical results are reliable: even the first-order stress pattern needs high-precision numerical simulation.

Page 6:

disp u v
coor x y
func funa funb func
shap %1 %2
gaus %3
mass %1
load = fu fv
$c6 pe = prmt(1)
$c6 pv = prmt(2)
$c6 fu = prmt(3)
$c6 fv = prmt(4)
$c6 fact = pe/(1.+pv)/(1.-2.*pv)
func
funa=+[u/x]
funb=+[v/y]
func=+[u/y]+[v/x]
stif
dist =+[funa;funa]*fact*(1.-pv)+[funa;funb]*fact*(pv)
     +[funb;funa]*fact*(pv)+[funb;funb]*fact*(1.-pv)
     +[func;func]*fact*(0.5-pv)
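The $c6 lines above define plane-strain elastic constants: fact = E/((1+ν)(1−2ν)), with direct terms fact·(1−ν), coupling terms fact·ν, and a shear term fact·(0.5−ν). A short Python check of the material matrix implied by the stif expression (the helper name is illustrative):

```python
import numpy as np

def plane_strain_d(pe, pv):
    """Material matrix implied by the script above: pe is Young's modulus E,
    pv is Poisson's ratio nu, and fact = E/((1+nu)(1-2nu))."""
    fact = pe / (1.0 + pv) / (1.0 - 2.0 * pv)
    return fact * np.array([
        [1.0 - pv, pv,       0.0],        # [funa;funa], [funa;funb]
        [pv,       1.0 - pv, 0.0],        # [funb;funa], [funb;funb]
        [0.0,      0.0,      0.5 - pv],   # [func;func] (shear)
    ])

D = plane_strain_d(pe=1.0, pv=0.25)  # fact = 1.6
```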

*es,em,ef,Estifn,Estifv,

*es(k,k),em(k),ef(k),Estifn(k,k),Estifv(kk),

      goto (1,2), ityp
1     call seuq4g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
2     call seugl2g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
3     continue

      DO J=1,NMATE
        PRMT(J) = EMATE((IMATE-1)*NMATE+J)
      END DO
      PRMT(NMATE+1)=TIME
      PRMT(NMATE+2)=DT
      prmt(nmate+3)=imate
      prmt(nmate+4)=num

Other element-matrix computing subroutines.

PDE expression: contains the information of the physical model, such as the variables and equations, used to generate the element stiffness matrix.

Fortran segments: code that realizes the physical model at the element level.

variables

equation

Automated Code Generator

Step 1: From PDE expression to Fortran segments

Segment 1

Segment 2

Segment 3

Segment 4

Page 7:

Step 2: From algorithm expression to Fortran segments

      do i=1,k
        do j=1,k
          estifn(i,j)=0.0
        end do
      end do
      do i=1,k
        estifn(i,i)=estifn(i,i)
        do j=1,k
          estifn(i,j)=estifn(i,j)+es(i,j)
        end do
      end do

      U(IDGF,NODI)=U(IDGF,NODI)
     *+ef(i)
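The generated loops accumulate the element matrix es into estifn and the element right-hand side ef into U. At the global level, classic FE codes scatter each element matrix into the global stiffness matrix through a location array; a sketch of that pattern (the Python names and the lm convention are assumptions, not taken from the generator):

```python
import numpy as np

def assemble(K, es, lm):
    """Scatter element matrix es into global matrix K: lm maps each local
    degree of freedom to its global index (an assumed LM-array convention)."""
    for a, ga in enumerate(lm):
        for b, gb in enumerate(lm):
            K[ga, gb] += es[a, b]
    return K

K = np.zeros((4, 4))
es = np.ones((2, 2))          # a toy 2-dof element matrix
assemble(K, es, [1, 3])       # element dofs live at global dofs 1 and 3
```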

defi
stif S
mass M
load F
type emdty
lstep 0
equation
matrix = [S]
FORC=[F]
SOLUTION U
write(s,unod) U
end

Algorithm expression: contains the information for forming the global stiffness matrix for the model.

Fortran segments: code that realizes the physical model at the global level.

Stiffness matrix Segment 5

Segment 6

Page 8:

      SUBROUTINE ETSUB(KNODE,KDGOF,IT,KCOOR,KELEM,K,KK,
     *NUMEL,ITYP,NCOOR,NUM,TIME,DT,NODVAR,COOR,NODE,
#SUBET.sub
     *U)
      implicit double precision (a-h,o-z)
      DIMENSION NODVAR(KDGOF,KNODE),COOR(KCOOR,KNODE),
     *U(KDGOF,KNODE),EMATE(300),
#SUBDIM.sub
     *R(500),PRMT(500),COEF(500),LM(500)
#SUBFORT.sub
#ELEM.sub
C     WRITE(*,*) 'ES EM EF ='
C     WRITE(*,18) (EF(I),I=1,K)
#MATRIX.sub
      L=0
      M=0
      I=0
      DO 700 INOD=1,NNE
      ………
      U(IDGF,NODI)=U(IDGF,NODI)
#LVL.sub
      DO 500 JNOD=1,NNE
      ………
500   CONTINUE
700   CONTINUE
      ………
      return
      end

Program stencil and the generated Fortran segments

Step 3: Plug Fortran segments into a stencil, forming final FE program
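The #NAME.sub markers in the stencil above are the substitution points. Step 3 can be sketched as a plain text substitution (the Python helper and the sample strings are illustrative, not the generator's actual implementation):

```python
import re

def fill_stencil(stencil, segments):
    """Replace every '#NAME.sub' marker with the generated segment for NAME
    (empty string when no segment was generated for that marker)."""
    return re.sub(r"#(\w+)\.sub", lambda m: segments.get(m.group(1), ""), stencil)

# Toy stencil and segment table (illustrative, not the real stencil)
stencil = "      DIMENSION U(K)\n#SUBDIM.sub\n      RETURN\n      END"
segments = {"SUBDIM": "     *,R(500),PRMT(500)"}
out = fill_stencil(stencil, segments)
```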

Segment 1

Segment 2

Segment 4

Segment 3

Segment 5

Segment 6

…………..

Page 9:

Grid computing profile


Data grids

Computing grids

clusters

High-speed interconnection and middleware for grid computing

Is there one computing environment which can use these facilities as a WHOLE?

Page 10:

Asian tectonics: from theoretical samples to large-scale simulation

The Asian tectonics problem

Page 11:

Parallel investigation of Asian plate deformation

Page 12:

Investigating Asian plate deformations

Page 13:

Developing a full 3-D model of tsunamis

Page 14:

Tsunami Modeling 2

Page 15:

Tsunami modeling 5: details of the finite element mesh

Data from GTOPO30, and generation of finite element meshes: more than 2 million nodes for the parallel version.

Local zoom-in

Page 16:

Uplift formation around islands

Tsunami propagation is governed by the shallow-water equations in spherical coordinates (λ longitude, φ latitude, R the Earth's radius, f the Coriolis parameter, h the water-column height, d the still-water depth, u and v the horizontal velocities):

∂h/∂t + 1/(R cos φ) [ ∂(hu)/∂λ + ∂(hv cos φ)/∂φ ] = 0
∂u/∂t + u/(R cos φ) ∂u/∂λ + (v/R) ∂u/∂φ − f v = −g/(R cos φ) ∂(h − d)/∂λ
∂v/∂t + u/(R cos φ) ∂v/∂λ + (v/R) ∂v/∂φ + f u = −(g/R) ∂(h − d)/∂φ

Tsunami Modeling 3

Three dimensional simulation of tsunami generation

Page 17:

Full 3D simulation of tsunami propagation

Our formulation allows the tracking and simulation of three stages: principally the formation, propagation, and run-up stages of the tsunami and of waves coming ashore. The sequential version of this code can run on a workstation with 4 Gbytes of memory in less than 2 minutes per time step for one million grid points. The code has also been parallelized with MPI-2 and scales linearly. We have employed actual ocean seafloor topographical data to construct the oceanic volume and have attempted to make the coastline as realistic as possible, using 11 levels of structured meshes in the radial direction of the Earth. In order to understand the intricate dynamics of the wave interactions, we have implemented a visualization overlay based on Amira, a 3-D volume-rendering visualization tool for massive data post-processing.
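The figure of 2 minutes per time step is tied to the grid spacing: for explicit shallow-water schemes the stable time step is itself bounded by the CFL condition, roughly Δt ≤ C·Δx/√(g·d) with d the largest water depth. A back-of-the-envelope sketch with illustrative numbers (not taken from the model above):

```python
import math

def cfl_timestep(dx, depth, g=9.81, safety=0.5):
    """CFL bound for shallow-water waves: phase speed c = sqrt(g*depth)."""
    return safety * dx / math.sqrt(g * depth)

# e.g. 1 km grid spacing over a 4 km deep ocean -> dt of a few seconds
dt = cfl_timestep(dx=1000.0, depth=4000.0)
```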

Employed the Amira visualization package (www.amiravis.com)

Visualization of tsunami wave propagation