The University of Chicago
Center on Astrophysical Thermonuclear Flashes
Terascale Computing for FLASH
Rusty Lusk, Ian Foster, Rick Stevens, Bill Gropp
Center for Astrophysical Thermonuclear Flashes
The University of Chicago
Outline
- Goals: requirements and objectives for FLASH computations
- Strategy: experiments, development, and research
- Accomplishments: results, tools, prototypes, and demonstrations
- Interactions: universities, ASCI labs, other ASCI centers, students
Why does FLASH Need Terascale Computing?
- Complex non-linear physics on 10^9 zones
- Problem size determined by:
  - 3D nature of the physical problem (required by turbulence and magnetic field evolution)
  - Extended dynamic range required to distinguish microphysics from large-scale physics
- Current methods require multiple teraflops per time step on grids of this size, for tens of thousands of time steps
- 1 Tflop/s sustained is required to complete a full 1024^3 calculation (50,000 time steps) in ~60 hours, and such a run will generate TBs of output data
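These figures are roughly self-consistent. As an illustrative back-of-envelope check (not from the original slides; the ~4 kflop per zone per step cost is an assumed round number):

    #include <stdio.h>

    int main(void) {
        double zones = 1024.0 * 1024.0 * 1024.0;   /* ~1.07e9 zones in a 1024^3 grid */
        double steps = 5.0e4;                      /* ~50,000 time steps */
        double flops_per_zone_step = 4.0e3;        /* assumed cost: ~4 kflop per zone per step */
        double total = zones * steps * flops_per_zone_step;  /* ~2.1e17 flop for the full run */
        double rate  = 1.0e12;                     /* 1 Tflop/s sustained */
        printf("estimated wall clock: %.0f hours\n", total / rate / 3600.0);  /* ~60 hours */
        return 0;
    }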
Requirements for Scientific Progress
- Apply a scientific approach to code development for FLASH-1
- Scalable performance of the astrophysics simulation code in a next-generation computing environment
  - Develop and test on high-end machines
  - Use scalable system and math libraries
  - Use scalable I/O and standard data formats
- Scalable tools for converting output into scientific insight through advanced visualization and data management
- Ease of use for scientists in an environment with distributed resources
(These areas are led by Rusty Lusk, Ian Foster, and Rick Stevens.)
Near-Term Strategy for Code Development
- Capitalize on an existing sophisticated astrophysics simulation code: ASTRO3D from U. of C. Astrophysics
  - Already 3D, parallel, and producing visualization output
  - Not portable, not instrumented for performance studies
- Use ASTRO3D as an immediate tool for experimentation, to connect astrophysicists and computer scientists
  - "Probe" the ASCI machines
  - Use as template and data source for new visualization work and the distributed computing framework
  - Use as test case for portability and code management experiments and for performance visualization tools
Long-Term Strategy for Scientific Code Development
- Tools work in preparation for the FLASH-1 code
  - Scalable performance visualization
  - Convenient and secure distributed computing
  - Advanced visualization and standard data representations
  - Adapt numerical libraries (e.g., PETSc) as necessary
  - Adaptive mesh refinement research
  - Studies and implementation for standard parallel I/O
- Research into fundamental questions for the future code
  - Meshes, AMR schemes, and discretization strategies
  - Multiresolution volume visualization
  - Programming models for near-future architectures
FY98 Accomplishments
- ASTRO3D message-passing component ported to MPI (Andrea Malagoli, Paul Plassmann, Bill Gropp, Henry Tufo)
- I/O ported to MPI-I/O, for portability and performance (Rajeev Thakur); a minimal MPI-I/O sketch follows this list
- Testing of source code control and configuration management (Bill Gropp)
- Using large machines (more on Tuesday)
  - Use of all three ASCI machines (Henry Tufo, Lori Freitag, Anthony Chan, Debbie Swider)
  - Use of large machines at ANL, NCSA, Pittsburgh, others
- Scalability studies on ASCI machines using ASTRO3D and SUMAA3d (scalable unstructured mesh computations) (Lori Freitag)
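For flavor, a minimal sketch of the kind of MPI-I/O call such a port relies on; this is illustrative only, not the actual ASTRO3D code, and the file name and data layout are placeholders:

    #include <mpi.h>

    /* Each rank writes its contiguous block of doubles into one shared restart file. */
    void write_restart(double *local, int nlocal, MPI_Comm comm)
    {
        MPI_File fh;
        int rank;
        MPI_Comm_rank(comm, &rank);
        MPI_File_open(comm, "restart.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_Offset offset = (MPI_Offset)rank * nlocal * (MPI_Offset)sizeof(double);
        MPI_File_write_at_all(fh, offset, local, nlocal, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    }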
Accomplishments (cont.)
- MPI-related work
  - MPICH, a portable implementation of MPI, with extra features
  - Improving handling of datatypes (a small illustration follows this list)
  - Parallel part of MPI-2 on all ASCI machines
  - MPICH-G, integrating MPICH and Globus
- Program visualization for understanding performance in detail
  - Jumpshot, a new Web-based system for examining logs
  - New effort in scalability of program visualization
  - Joint project with IBM, motivated by Livermore requirements
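As an illustration of why datatype handling matters for a mesh code (a generic sketch, not MPICH internals or FLASH code; the function and its arguments are hypothetical):

    #include <mpi.h>

    /* Describe one rank's nl[0] x nl[1] x nl[2] block inside the global ng[] array,
       so the noncontiguous block can be communicated or written in a single call. */
    MPI_Datatype block_type(int ng[3], int nl[3], int start[3])
    {
        MPI_Datatype t;
        MPI_Type_create_subarray(3, ng, nl, start, MPI_ORDER_C, MPI_DOUBLE, &t);
        MPI_Type_commit(&t);
        return t;
    }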
FY99 Plans
- Apply lessons learned with ASTRO3D to the emerging FLASH-1 code
- Incorporate multigrid computations in PETSc (see the sketch after this list)
- Continue research into discretization issues
- Explore a component approach to building the FLASH code
- Use the FLASH code as a motivator for flexible experimentation:
  - with multiple meshing packages (DAGH, Paramesh, SUMAA3d, MEGA)
  - with a variety of discretization approaches
  - with multiple solvers
  - with multiple physics modules
- MPI-2: beyond the message-passing model
- Scalable performance visualization (with IBM and LLNL)
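A minimal sketch of what "multigrid in PETSc" can look like, using the present-day KSP/PCMG interface rather than the 1998-era one; the function name and the assumption that A, b, and x are already assembled are illustrative:

    #include <petscksp.h>

    /* Solve A x = b with a Krylov method preconditioned by multigrid.
       Error checking omitted for brevity. */
    void solve_with_mg(Mat A, Vec b, Vec x, PetscInt nlevels)
    {
        KSP ksp;
        PC  pc;
        KSPCreate(PETSC_COMM_WORLD, &ksp);
        KSPSetOperators(ksp, A, A);
        KSPGetPC(ksp, &pc);
        PCSetType(pc, PCMG);
        PCMGSetLevels(pc, nlevels, NULL);  /* interpolation/restriction operators set elsewhere */
        KSPSetFromOptions(ksp);            /* allow -pc_mg_* options at run time */
        KSPSolve(ksp, b, x);
        KSPDestroy(&ksp);
    }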
FLASH Center Computer Science Interactions
- With ASCI labs
  - LLNL: MPICH development, MPICH-G, MPI-IO for HPSS, PETSc with PVODE
  - LANL: MPICH with TotalView, MPI-IO on SGI, visualization
  - SNL: SUMAA3d with CUBIT, URB with Allegra, MPI-IO
- With other ASCI centers
  - Caltech Level 1 center: parallel I/O
  - Utah Level 1 center and AVTC: visualization
  - Princeton Level 2 center: visualization
  - Northwestern Level 2 center: data management
  - Old Dominion Level 2 center: parallel radiation transport
- With university groups
  - NCSA: HDF5 data formats group, parallel I/O for DMF
  - ISI: Globus
- With vendors: IBM, SGI, HP, Dolphin
A Course in Tools for Scientific Computing
- CS-341: Tools for High-Performance Scientific Computing
- Graduate and advanced undergraduate students
- Expected 10 students, got 35
- Students from the Chicago departments of Physics, Chemistry, Computer Science, Social Sciences, Astrophysics, Geophysical Sciences, Mathematics, and Economics
- Hands-on (half of each class is in the computer lab)
- Taught primarily by the Argonne team
- Features tools used by, and in many cases written by, Argonne computer scientists
The University of Chicago
Center on Astrophysical Thermonuclear Flashes
Visualization and Data Management
Mike Papka, Randy Hudson, Rick Stevens, Matt Szymanski
Futures Laboratory, Argonne National Laboratory, and FLASH Center
Visualization and Data Management
- Requirements for FLASH-1 simulation output
  - Large-scale 3D datasets: 256^2 x 128 today, 1024^3 eventually (:-))
  - Variety of data formats and data management scenarios: binary restart files, HDF5, and MPI-IO (an HDF5 sketch follows this list)
- Our strategy for FLASH scientific visualization
  - Scale visualization performance and function: parallelism, faster surface and volume rendering; higher-resolution displays and immersion tests; improved ability to visualize multiresolution data
  - Improve ability to manage TB-class datasets: standard data and I/O formats, interfaces to hierarchical storage managers, strategies for high-speed navigation
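For scale, a single 1024^3 double-precision variable is already ~8.6 GB, so a multi-variable run with many outputs reaches terabytes quickly. Below is a minimal serial sketch of writing one such field with the HDF5 1.8+ C API (illustrative only; the 1998-era HDF4/5 interfaces differed, and the file and dataset names are placeholders):

    #include <hdf5.h>

    /* Write an nz x ny x nx double array as one HDF5 dataset. */
    void write_field_hdf5(const char *fname, const double *data,
                          hsize_t nz, hsize_t ny, hsize_t nx)
    {
        hsize_t dims[3] = { nz, ny, nx };
        hid_t file  = H5Fcreate(fname, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(3, dims, NULL);
        hid_t dset  = H5Dcreate(file, "/density", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
    }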
FLASH Visualization and Data Management Accomplishments
- Taught UC course on visualization: CS-334, Scientific Visualization Tools and Technologies
- Developed Parallel Multipipe Volume Renderer*
- Developed Scalable Isosurface Renderer*
- Developed HDF/netCDF I/O Exchange Module
- Leveraging AVTC developments for FLASH
- Integrated the VTK library with the CAVE environment*
- Desktop integration with high-end visualization tools*
- Developed a prototype tiled wall display*
- Captured FLASH seminars with FL-Voyager
* Funded in part by the ASCI Advanced Visualization Technology Center
UC Course on Visualization
- CS-334, Spring Quarter 1998
- 17 students, about half undergraduate and half graduate
- Course provides a base for more advanced work in VR and visualization
- Students constructed VR and visualization applications
- Students used the high-end environment at ANL and workstations at UC
- Taught by the Argonne FL group
Scientific Visualization for FLASH
- Created a FLASH dataset repository
  - Currently five datasets in the repository
  - Used as challenge problems for rendering and visualization research
- Rendered all FLASH-related datasets
  - ASTRO3D (multiple runs)
  - PROMETHEUS (current largest-scale dataset)
- Provided design input on visualization interfaces for the FLASH-1 code design
- FY99: work closely with FLASH groups to produce visualizations of all large-scale computations
Developed Parallel Multipipe Volume Renderer
- Accelerates volume rendering of 3D datasets using multiple InfiniteReality hardware pipes (a generic compositing sketch follows below)
- Integrated into the CAVE/Idesk environment
- Providing software for use of
  - SGI Reality Monster (FY98)
  - commodity graphics cluster (FY99)
- Performance experiments
- FY99 goals
  - real-time exploration at ~256^3
  - offline movies up to ~1024^3
[Image: ASTRO3D jet]
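The core operation behind any volume renderer is compositing color and opacity samples along each viewing ray. The following front-to-back compositing loop is purely illustrative; the renderer described above uses InfiniteReality graphics hardware rather than a software loop like this:

    /* Front-to-back compositing of the samples along one viewing ray. */
    typedef struct { double r, g, b, a; } RGBA;

    RGBA composite_ray(const RGBA *samples, int n)
    {
        RGBA acc = { 0.0, 0.0, 0.0, 0.0 };
        for (int i = 0; i < n && acc.a < 0.99; i++) {  /* stop early once nearly opaque */
            double w = (1.0 - acc.a) * samples[i].a;
            acc.r += w * samples[i].r;
            acc.g += w * samples[i].g;
            acc.b += w * samples[i].b;
            acc.a += w;
        }
        return acc;
    }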
Developed Scalable Isosurface Renderer
- Designed to scale to 1024^3 x N datasets and surface-rendered movies
- Uses remote compute resources to compute isosurfaces in real time
- Uses Globus
- FY99 plans
  - integrate with ASCI compute resources via Globus
  - test with FLASH datasets and other ASCI data
[Image: ASTRO3D]
Integrated VTK Library with CAVE Environment
- Enables high-functionality visualizations
- Builds on the 600+ classes in the VTK library
- Enables exploration of immersive visualization within VTK applications
- Enables very high-resolution offline rendering
- FY98: basic prototype (collaboration with LANL); demonstrated on ASTRO3D/PROMETHEUS runs
- FY99: parallel objects, performance tuning
HDF4/5 and netCDF I/O Exchange Modules
- FY98: developed two prototype interface modules supporting portable I/O for visualization
- FY99: plans to integrate these modules with the FLASH codes to ease visualization
[Diagrams: Visualization/Data Management Chain, FY98 (Sim, restart file, Filter A / Filter B, Viz) and FY00 (Sim, FDF, Viz)]
Integrated Visualization Tools with Desktop Tools for Remote Visualization
- Provides a desktop video view of immersive visualizations
- Enables remote desktop/CAVE/Idesk collaboration
- FY99: plans to tie into the high-end visualization suite
Developed a Prototype High-Resolution Tiled Wall Display
- ActiveMural project (AVTC funded), in collaboration with Princeton (Kai Li's group)
- Eight-projector prototype, 2500 x 1500 pixels (up to date)
- Twenty-projector design, 4000 x 3000 pixels (up in January '99)
- FY99: tie into visualization tools; validate on high-resolution output
Use of Voyager Media Recorder to Capture FLASH Seminars
- Enables remote collaboration (ANL-UC)
- Asynchronous playback for FLASH members
- FY99: make FLASH seminars available to the ASCI labs
[Diagram: Voyager architecture. A Java-based Voyager user interface communicates with the Voyager server (recording metadata, database calls) via RTSP control streams over CORBA, while RTP-encoded audio/video streams flow over the network to and from distributed multimedia filesystem nodes.]
The University of Chicago
Center on Astrophysical Thermonuclear Flashes
Terascale Distance and Distributed Computing
Ian Foster, Joe Insley, Jean Tedesco, Steve Tuecke
Distributed Systems Laboratory & FLASH ASAP Center
Argonne National Laboratory
Distance and Distributed Computing
- Future simulation science (including FLASH & ASCI) requires "virtual" computers that integrate distant resources
  - Scientists, computers, storage systems, etc., are rarely colocated!
- Hence the need for a "simulation grid" to overcome barriers of distance, heterogeneity, and scale
- Argonne, via its Globus toolkit and GUSTO efforts, provides access to considerable expertise and technology
- Many opportunities for productive interactions with ASCI
  - Access to distant terascale compute and data resources
  - End-to-end resource management ("distance corridors")
  - Security, instrumentation, communication protocols, etc.
  - High-performance execution on distributed systems
FLASH Distance and Distributed Computing Strategy
- Build on capabilities provided by the Globus grid toolkit and the GUSTO grid testbed
- Use desktop access to ASTRO3D as the initial model problem
  - Resource location, allocation, authentication, data access
- Use remote navigation of terabyte datasets as an additional research and development driver
  - Data-visualization pipelines, protocols, scheduling
- Outreach effort to the DP labs: LANL, LLNL, SNL-A, SNL-L
Globus Project Goals (joint with USC/ISI [Caltech ASAP])
- Enable high-performance applications that use resources from a "computational grid": computers, databases, instruments, people
- Via:
  - Research in grid-related technology
  - Development of the Globus toolkit: core services for grid-enabled tools and applications
  - Construction of a large grid testbed: GUSTO
  - Extensive application experiments
[Diagram: layered Globus architecture. Applications (astrophysics, shock tube) sit on tools (DUROC, Nimrod, MPICH-G, PAWS, RIO, PPFS, Metro, CAVERNsoft), which build on core Globus services (resource allocation, resource location, security, QoS, code management, communication, remote I/O, instrumentation, directory, fault detection), which map onto platforms (IP, MPI, shm; SGI, SP; Kerberos, PKI; LSF, PBS, NQE).]
Model Problem: Remote Execution of ASTRO3D
- Prototype "global shell" that allows us to
  - Sign on once via public-key technology
  - Locate available computers
  - Start the computation on an appropriate system
  - Monitor the progress of the computation
  - Get [subsampled] output files
  - Manipulate them locally
Performance Driver: Remote Browsing of Large Datasets
- Problem: interactive exploration of very large (TB+) datasets
- Interactive client VRUI with view management support
- Data reduction at the remote client (subsampling)
- Use of Globus to authenticate, transfer data, and access data
- Future driver for protocol and quality-of-service issues
Outreach to ASCI Labs
- Globus deployed at LANL, LLNL, and SNL-L
  - Pete Beckman, LANL: remote visualization
  - Mark Seager and Mary Zosel, LLNL: multi-method MPICH
  - Robert Armstrong and Robert Clay, SNL-L: clusters
- Visits between ANL and LANL, LLNL, SNL-A, SNL-L
- DP lab participation in the Globus user meeting
- Extensive work on multi-cluster MPI for Blue Pacific (MPICH-G)
  - Multi-method communication (shared memory, MPI, IP): demonstrated better performance than IBM MPI
  - Scalable startup for thousands of nodes
Challenges and Next Steps
- Can we use Globus to obtain access to DP lab resources?
  - Numerous enthusiasts within the labs
  - But clearly "different" and requires buy-in
  - Smart card support may help with acceptance
- Push further on the "desktop ASTRO3D" driver; use it to drive deployment
- Use interactive analysis of remote TB datasets as a performance driver
- Incorporate additional Globus features: quality of service, smart cards, instrumentation, etc.
END