Data-Driven Time-Parallelization in the AFM Simulation of Proteins
L. Ji, H. Nymeyer, A. Srinivasan, and Y. Yu
Florida State University
http://www.cs.fsu.edu/~asriniva
Aim: Simulate for long time spans
Solution features: Use data from prior simulations to parallelize the time domain
Acknowledgments: NSF, ORNL, NERSC, NCSA
Outline
• Background
  – Limitations of Conventional Parallelization
• Time Parallelization
  – Other Time Parallelization Approaches
  – Data-Driven Time Parallelization
• Nano-Mechanics Application
• Time Parallelization of AFM Simulation of Proteins
  – Prediction
  – Experimental Results
    • Scaled to an order of magnitude larger number of processors when combined with conventional parallelization
• Conclusions and Future Work
Background
• Molecular dynamics
  – In each time step, the forces of atoms on each other are modeled using some potential
  – After the forces are computed, the positions are updated
  – Repeat for the desired number of time steps (a minimal loop is sketched below)
• Time step size ~ 10^-15 s, due to physical and numerical considerations
  – Desired time range is much larger
    • A million time steps are required to reach 10^-9 s
    • ~ 500 hours of computing for ~ 40K atoms using GROMACS
• MD uses unrealistically large pulling speeds
  – 1 to 10 m/s instead of 10^-7 to 10^-5 m/s
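A minimal sketch of the time-stepping loop described above, in Python; the toy `compute_forces` potential and the constants are placeholders for illustration, not the actual GROMACS force field or setup:

```python
import numpy as np

def compute_forces(pos):
    """Placeholder potential (harmonic pull toward the centroid), not a real force field."""
    center = pos.mean(axis=0)
    return -(pos - center)

def md_run(pos, vel, mass, dt=1e-15, n_steps=1_000_000):
    """Velocity-Verlet time stepping: compute forces, update positions, update velocities."""
    forces = compute_forces(pos)
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * (forces / mass) * dt**2
        new_forces = compute_forces(pos)
        vel = vel + 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return pos, vel

# Example: 100 atoms, femtosecond time step as quoted on the slide
pos = np.random.rand(100, 3)
vel = np.zeros((100, 3))
pos, vel = md_run(pos, vel, mass=1.0, n_steps=10)
```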
Limitations of Conventional Parallelization
• Results on IBM Blue Gene
  – Does not scale efficiently beyond ~10 ms per iteration
• If we want to simulate to a ms
  – Time step 1 fs → 10^12 iterations; at 10 ms per iteration, ~10^10 s ≈ 300 years
• If we scaled to 10 μs per iteration
  – ~4 months of computing time
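The wall-clock estimates above follow directly from the iteration counts; a worked version of the arithmetic:

```latex
\[
\frac{1\,\mathrm{ms}}{1\,\mathrm{fs}} = 10^{12}\ \text{iterations}
\]
\[
10^{12}\ \text{iterations} \times 10\,\mathrm{ms/iteration} = 10^{10}\,\mathrm{s} \approx 300\ \text{years}
\]
\[
10^{12}\ \text{iterations} \times 10\,\mu\mathrm{s/iteration} = 10^{7}\,\mathrm{s} \approx 4\ \text{months}
\]
```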
[Figure: published strong-scaling results: NAMD, 327K-atom ATPase PME (IPDPS 2006); NAMD, 92K-atom ApoA1 PME (IPDPS 2006); IBM Blue Matter, 43K-atom Rhodopsin (Tech Report 2005); Desmond, 92K-atom ApoA1 (SC 2006)]
Time Parallelization
• Other Time Parallelization Approaches
  – Dynamic Iterations / Waveform Relaxation
    • Slow convergence
  – Parareal Method
    • Related to shooting methods
    • Not shown effective in realistic settings
• Data-Driven Time-Parallelization
  – Nano-Mechanics Application
    • Tensile test on a Carbon Nanotube
    • Achieved granularity of 13.6 μs/iteration in one simulation
Other Time Parallelization Approaches
• Special case: Picard iterations
  – Ex: dy/dt = y, y(0) = 1 becomes
    • dy^{n+1}/dt = y^n(t), y^0(t) = 1
• In general
  – dy/dt = f(y, t), y(0) = y_0 becomes
    • dy^{n+1}/dt = g(y^n, y^{n+1}, t), y^0(t) = y_0
    • g(u, u, t) = f(u, t)
    • g(y^n, y^{n+1}, t) = f(y^n, t): Picard (see the code sketch below)
    • g(y^n, y^{n+1}, t) = f(y^{n+1}, t): converges in 1 iteration
  – Jacobi, Gauss-Seidel, and SOR versions of g can be defined
• Many improvements
  – Ex: DIRM combines the above with reduced order modeling
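As an illustration of the Picard iteration above, a minimal Python sketch for dy/dt = y, y(0) = 1; the grid size and iteration counts are arbitrary choices, not values from the slides:

```python
import numpy as np

def picard_iterate(f, y0, t, n_iters):
    """Picard/waveform relaxation: integrate dy^{n+1}/dt = f(y^n, t) over the whole
    time window, starting from the constant initial guess y^0(t) = y0."""
    y_prev = np.full_like(t, y0, dtype=float)
    for _ in range(n_iters):
        # Each sweep integrates the *previous* iterate, so the right-hand side is
        # known over the whole window and could be split across processors in time.
        rhs = f(y_prev, t)
        increments = np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(t))  # trapezoid rule
        y_prev = y0 + np.concatenate(([0.0], increments))
    return y_prev

t = np.linspace(0.0, 1.0, 101)
for n in (1, 2, 3, 4):
    approx = picard_iterate(lambda y, t: y, 1.0, t, n)
    print(n, approx[-1], np.exp(1.0))   # iterates approach e as n grows
```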
[Figure: Waveform Relaxation Variants: exact solution compared with iterates N = 1 through N = 4]
Data-Driven Time Parallelization
• Each processor simulates a different time interval
• Initial state is obtained by prediction, using prior data (except for processor 0)
• Verify if prediction for end state is close to that computed by MD
• Prediction is based on dynamically determining a relationship between the current simulation and those in a database of prior results
• If the time interval is sufficiently large, then the communication overhead is small
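A minimal sketch of this predict-and-verify scheme, written serially; the helpers `predict_state`, `md_simulate`, and `states_equivalent` are hypothetical names introduced for illustration, not functions from the slides:

```python
def time_parallel_run(initial_state, n_intervals, predict_state, md_simulate, states_equivalent):
    """Each 'processor' i starts interval i from a predicted state; an interval's
    result is accepted only if its predicted start matches the verified MD result."""
    # Predicted start states for every interval (processor 0 uses the exact initial state).
    starts = [initial_state]
    for i in range(1, n_intervals):
        starts.append(predict_state(starts[i - 1]))

    # In a real run these MD intervals execute concurrently, one per processor.
    ends = [md_simulate(s) for s in starts]

    # Accept intervals as long as each predicted start agrees with the MD end state
    # of the previous interval; the first mismatch is where the computation restarts.
    verified_state = ends[0]
    accepted = 1
    for i in range(1, n_intervals):
        if not states_equivalent(starts[i], verified_state):
            break
        verified_state = ends[i]
        accepted += 1
    return verified_state, accepted   # last verified state and number of accepted intervals
```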
Nano-Mechanics Application: Carbon Nanotube Tensile Test
• Pull the CNT
• Determine the stress-strain response and yield strain (when the CNT starts breaking) using MD
• Use dimensionality reduction for prediction (a sketch follows the figure below)
[Figure: dominant basis vectors from dimensionality reduction: u1 (blue) and u2 (red) for z; u1 (green) for x is not "significant"]
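A minimal sketch of one way such a low-dimensional basis could be extracted from prior trajectory snapshots, using an SVD of the centered data; the slides do not specify the exact reduction method, so treat this as an assumption:

```python
import numpy as np

def dominant_basis(snapshots, k=2):
    """snapshots: (n_frames, n_coords) array of coordinates (e.g. z components) from
    prior simulations. Returns the mean and the top-k basis vectors u1..uk."""
    mean = snapshots.mean(axis=0)
    centered = snapshots - mean
    # Right singular vectors are the dominant spatial modes of the prior data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(state, mean, basis):
    """Coefficients of a state in the reduced basis; prediction can then operate on
    these few coefficients instead of on all atomic coordinates."""
    return basis @ (state - mean)
```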
[Figure: CNT speedup: red line: ideal speedup; blue: v = 0.1 m/s; green: v = 1 m/s, using v = 10 m/s data]
[Figure: validation: blue: exact, 450K; red: 200 processors]
Problems with multiple time-scales
• Fine-scale computations (such as MD) are more accurate, but more time consuming
  – Much of the detail at the finer scale is unimportant, but some of it matters
[Figure: a simple schematic of multiple time scales]
Time-Parallelization of AFM Simulation of Proteins
• Example System: Muscle Protein, Titin
  – Around 40K atoms, mostly water
  – Na+ and Cl- added for charge neutrality
  – NVT conditions, Langevin thermostat, 400 K
  – Force constant on springs: 400 kJ/(mol nm^2)
  – GROMACS used for MD simulations
Verification of prediction
• Definition of equivalence of two states
  – Atoms vibrate around their mean positions
  – Consider states equivalent if differences are within the normal range of fluctuations (sketched after the figure below)
[Figure: mean position and displacement from the mean; differences between trajectories that differ only in the random number sequence]
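A minimal sketch of this equivalence test, assuming the per-atom fluctuation scale has already been estimated from conventional runs; the threshold logic is an assumption for illustration, not the authors' exact criterion:

```python
import numpy as np

def states_equivalent(pos_a, pos_b, fluctuation_scale, tolerance=1.0):
    """Treat two states as equivalent if per-atom displacements between them stay
    within the normal thermal fluctuation range observed in prior conventional runs."""
    displacements = np.linalg.norm(pos_a - pos_b, axis=1)   # per-atom distance
    return np.all(displacements <= tolerance * fluctuation_scale)
```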
Prediction
• Use prior results with higher velocity
  – Trajectories with different random number sequences
  – Predict based on the prior result closest to the current states
    • Use only the last verified state
    • Use several recent verified states
• Fit parameters to the log-Weibull distribution
  – Density: (1/b) e^{(a-x)/b - e^{(a-x)/b}}
  – Location: a = 0.159
  – Scale: b = 0.0242
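The reconstructed log-Weibull (Gumbel) density above, evaluated directly in Python with the quoted parameters, in case the formula is easier to read as code:

```python
import numpy as np

def log_weibull_pdf(x, a=0.159, b=0.0242):
    """Gumbel / log-Weibull density (1/b) * exp((a-x)/b - exp((a-x)/b)) with the
    location and scale values quoted on the slide."""
    z = (a - x) / b
    return (1.0 / b) * np.exp(z - np.exp(z))

x = np.linspace(0.0, 0.4, 5)
print(log_weibull_pdf(x))   # density of the fitted fluctuation distribution
```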
Speedup
[Figures: speedup on the Xeon/Myrinet cluster at NCSA; speedup with combined space (8-way) and time parallelization]
• One time interval is 10K time steps, ~5 hours of sequential time
• The parallel overheads, excluding prediction errors, are relatively insignificant
• The above results use the last verified state to choose the prior run
• Using several verified states, parallelization is almost perfect on 32 processors
Validation
[Figure: validation: spatially parallel, time parallel, mean (spatial) of time parallel, and experimental data compared]
Typical Differences
[Figure: RMSD: solid: between the exact and a time-parallel run; dashed: between conventional runs using different random number sequences]
[Figure: Force: dashed: time-parallel runs; solid: conventional runs]
Conclusions and Future Work
• Conclusions
  – Data-driven time parallelization promises an order of magnitude improvement in speed when combined with conventional parallelization
• Future Work
  – Better prediction
  – Satisfy detailed balance