Cactus Grid Computing
Gabrielle Allen, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)


Page 1

Cactus Grid Computing

Gabrielle Allen

Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

Page 2

Cactus

THE GRID: dependable, consistent, pervasive access to high-end resources

CACTUS is a freely available, modular, portable and manageable environment for collaboratively developing parallel, high-performance multi-dimensional simulations

www.CactusCode.org

Page 3

What is Cactus?
- Flesh (ANSI C) provides the code infrastructure: parameter, variable and scheduling databases, error handling, APIs, make, parameter parsing
- Thorns (F77/F90/C/C++/[Java/Perl/Python]) are plug-in, swappable modules or collections of subroutines providing both the computational infrastructure and the physical application. Well-defined interface through 3 configuration files (see the sketch below)
- Just about anything can be implemented as a thorn: driver layer (MPI, PVM, SHMEM, …), black hole evolvers, elliptic solvers, reduction operators, interpolators, web servers, grid tools, IO, …
- User driven: easy parallelism, no new paradigms, flexible
- Collaborative: thorns borrow concepts from OOP, thorns can be shared, lots of collaborative tools
- Computational Toolkit: existing thorns for (parallel) IO, elliptic solvers, an MPI unigrid driver, …
- Integrates other common packages and tools: HDF5, Globus, PETSc, PAPI, Panda, FlexIO, GrACE, Autopilot, LCAVision, OpenDX, Amira, …
- Trivially Grid enabled!
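For a feel of what a thorn's source looks like, here is a minimal sketch of an evolution routine using the standard CCTK headers and macros. The thorn name WaveSketch, the grid function phi and its entries in the thorn's interface.ccl/schedule.ccl files are assumptions made for the example, not code from the talk, and the routine only builds inside a Cactus configuration.

```c
/* Illustrative sketch only: a thorn routine as the flesh would call it.
 * Assumes a grid function "phi" declared in this thorn's interface.ccl
 * and this routine scheduled in its schedule.ccl. */
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

void WaveSketch_Evolve(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;   /* grid functions, local grid sizes, time, ... */
  DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl */

  /* Loop over the processor-local grid; the driver thorn handles the
   * parallel decomposition and ghost-zone exchange. */
  for (int k = 0; k < cctk_lsh[2]; k++)
    for (int j = 0; j < cctk_lsh[1]; j++)
      for (int i = 0; i < cctk_lsh[0]; i++)
      {
        const int idx = CCTK_GFINDEX3D(cctkGH, i, j, k);
        phi[idx] = phi[idx];   /* update rule would go here */
      }

  CCTK_INFO("WaveSketch evolution step done");
}
```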

Page 4

Modularity of Cactus (diagram): applications, sub-apps, legacy apps and symbolic manipulation apps connect to the Cactus Flesh through abstractions; the user selects the desired functionality and the code is created from swappable layers such as AMR (GrACE, etc.), I/O layers, MPI layers, remote steering, MDS/remote spawn, unstructured-mesh support and Globus Metacomputing Services.

Page 5

Motivation: Grand Challenge Simulations

- NSF Black Hole Grand Challenge: 8 US institutions, 5 years, towards colliding black holes
- NASA Neutron Star Grand Challenge: 5 US institutions, towards colliding neutron stars
- EU Network Astrophysics: 10 EU institutions, 3 years, trying to finish these problems …
- The entire community is becoming Grid enabled

Examples of the future of science and engineering:
- Require large scale simulations, beyond the reach of any single machine
- Require large geo-distributed cross-disciplinary collaborations
- Require Grid technologies, but are not yet using them!
- Both apps and Grids are dynamic …

Page 6

Why Grid Computing? The AEI Numerical Relativity Group has access to high-end resources in over ten centers in Europe/USA. They want:
- Bigger simulations, more simulations and faster throughput
- Intuitive IO at the local workstation
- No new systems/techniques to master!!

How to make best use of these resources?
- Provide easier access … no one can remember ten usernames, passwords, batch systems, file systems, … a great start!!!
- Combine resources for larger production runs (more resolution badly needed!)
- Dynamic scenarios … automatically use what is available
- Remote/collaborative visualization, steering, monitoring

Many other motivations for Grid computing ...

Page 7

Grand Picture (diagram): Grid-enabled Cactus runs on distributed machines (an Origin at NCSA, a T3E at Garching), launched from the Cactus Portal via Globus; remote steering and monitoring from an airport, remote viz and steering from Berlin, remote viz in St Louis, and viz of data from previous simulations in an SF café; data moves over HTTP and streamed HDF5 (isosurfaces, downsampling) through DataGrid/DPSS.

Page 8

Cactus Grid Projects:
- User Portal (KDI Astrophysics Simulation Collaboratory): efficient, easy access to resources … interfaces to everything else
- Collaborative working methods (KDI ASC)
- Large scale distributed computing (Globus): the only way to get the kind of resolution we really need
- Remote monitoring (TiKSL/GriKSL): direct access to the simulation from anywhere
- Remote visualization, live/offline (TiKSL/GriKSL): collaborative analysis during simulations, viz of large datasets
- Remote steering (TiKSL/GriKSL): live collaborative interaction with the simulation (e.g. IO/analysis)
- Dynamic, adaptive scenarios (GridLab/GrADS): the simulation adapts to a changing Grid environment
- GridLab: Grid Application Toolkit

Make Grid computing usable/accessible for application users!!

Page 9

Remote Monitoring/Steering: Thorn HTTPD
- A thorn which allows any simulation to act as its own web server
- Connect to the simulation from any browser, anywhere … collaborate
- Monitor the run: parameters, basic visualization, ...
- Change steerable parameters
- See a running example at www.CactusCode.org
- Wireless remote viz, monitoring and steering

Page 10

Developments
- VizLauncher: output data (remote files/streamed data) automatically launched into the appropriate local viz client (extending to include application-specific networks)
- Debugging information (individual thorns can easily provide their own information)
- Timing information (thorns, communications, IO) lets users steer their simulation for better performance (e.g. switch off analysis/IO)

Page 11

Remote Visualization (diagram): use a variety of local clients (Amira, LCA Vision, OpenDX) to view remote simulation data: isosurfaces and geodesics, all remote files via VizLauncher (download), grid functions via streaming HDF5 (downsampling to match bandwidth). Collaborative: colleagues can access from anywhere. Now adding matching of data to network characteristics.

Page 12

Remote Steering (diagram): remote viz data is streamed from the simulation via HDF5 to Amira and via XML over HTTP to any viz client, which can steer the run.

Page 13

Remote File Access (diagram): a viz client (Amira) reads through the HDF5 VFD on top of DataGrid (Globus), which fetches from a DPSS server, FTP server or web server at the remote data site. Downsampling and hyperslabs mean only what is needed is transferred, e.g. viz in Berlin of 4 TB of data at NCSA.

Page 14

Remote File Access (diagram): TB-scale data files in remote storage are read through the HDF5 VFD over GridFTP; clients just use a file URL and request downsampled or hyperslabbed views (e.g. grr and psi on timestep 2, or the lapse for r<1 at every other point). A network monitoring service between NCSA (USA, analysis of simulation data), AEI (Germany) and Chandos Hall (UK, visualization) shows where more bandwidth is available.
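To make the hyperslab/downsampling idea concrete, here is a minimal sketch using the plain HDF5 C API to read every other point of a 3-D dataset. The file name, dataset name and extents are made up for the example, and the Grid-specific VFD/GridFTP driver setup (normally chosen via a file-access property list) is omitted.

```c
/* Sketch: read a downsampled hyperslab (every other point) of an HDF5
 * dataset. Names and sizes are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include "hdf5.h"

int main(void)
{
  hsize_t start[3]  = {0, 0, 0};
  hsize_t stride[3] = {2, 2, 2};      /* every other point            */
  hsize_t count[3]  = {64, 64, 64};   /* number of points to transfer */

  hid_t file   = H5Fopen("lapse.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
  hid_t dset   = H5Dopen2(file, "/lapse", H5P_DEFAULT);
  hid_t fspace = H5Dget_space(dset);

  /* Select the strided hyperslab in the file ... */
  H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, stride, count, NULL);

  /* ... and a matching contiguous region in memory. */
  hid_t mspace = H5Screate_simple(3, count, NULL);
  double *buf  = malloc(64 * 64 * 64 * sizeof(double));

  /* Only the selected points cross the network/disk. */
  H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);
  printf("first value: %g\n", buf[0]);

  free(buf);
  H5Sclose(mspace); H5Sclose(fspace);
  H5Dclose(dset);   H5Fclose(file);
  return 0;
}
```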

Page 15

Computational Physics: Complex Workflow

(Flowchart) Acquire code modules → configure and build → if there are bugs, report/fix them and rebuild → set parameters and initial data → run many test jobs, steering, killing or restarting as needed → once correct, select the largest resource and run for a week with remote vis and steering → if the results are novel, archive TBs of data → select and stage data to a storage array → regression tests, remote vis, data mining, comparison with observation → papers and Nobel Prizes.

Page 16

Cactus/ASC Portal
- KDI ASC Project (Argonne, NCSA, AEI, LBL, WashU)
- Technology: web based (an end-user requirement): Globus, GSI, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold/Apache, SQL/RDBMS
- The Portal should hide/simplify the Grid for users: single point of access, locates resources, builds/finds executables, central management of parameter files and job output, submits jobs to local batch queues, tracks active jobs
- Submission/management of distributed runs
- Accesses the ASC Grid Testbed

Page 17

Grid Applications: Some Examples
- Dynamic staging: move to a faster/cheaper/bigger machine ("Cactus Worm")
- Multiple universe: create a clone to investigate a steered parameter
- Automatic convergence testing: from initial data or initiated during the simulation
- Look ahead: spawn off a coarser-resolution run to predict the likely future
- Spawn independent/asynchronous tasks: send them to a cheaper machine, the main simulation carries on
- Thorn profiling: best machine/queue, choose resolution parameters based on the queue
- Dynamic load balancing: inhomogeneous loads, multiple grids
- Intelligent parameter surveys: farm out to different machines

… Must get the application community to rethink algorithms …

Page 18

Dynamic Grid Computing (diagram): a physicist has a new idea! A Brill wave simulation is launched after finding the best resources; free CPUs appear at NCSA, SDSC, RZG and LRZ, so more resources are added, a clone job is spawned with a steered parameter, and when the queue time is over the run finds a new machine. Along the way the simulation looks for a horizon, calculates and outputs gravitational waves and invariants, and on finding a horizon tries out excision. Data are archived at SDSC and to the LIGO public database.

Page 19

Dynamic Distributed Apps with Grid-threads (gthreads)

The code should be aware of its environment:
- What resources are out there NOW, and what is their current state?
- What is my allocation?
- What is the bandwidth/latency between sites?

The code should be able to make decisions on its own:
- A slow part of my simulation can run asynchronously … spawn it off!
- New, more powerful resources just became available … migrate there!
- A machine went down … reconfigure and recover!
- Need more memory … get it by adding more machines!

The code should be able to publish this information to a Portal for tracking, monitoring, steering:
- Unexpected event … notify the users!
- Collaborators from around the world all connect and examine the simulation.

New paradigms. Two prototypical examples: Dynamic, Adaptive Distributed Computing, and the Cactus Worm (intelligent simulation migration).

Page 20

Large Scale Physics Calculation: for accuracy we need more resolution than the memory of any one machine can provide.

Dynamic Adaptive Distributed Computation (with Argonne/U. Chicago):
- SDSC IBM SP: 1024 procs, using 5x12x17 = 1020
- NCSA Origin Array: 256+128+128 procs, using 5x12x(4+2+2) = 480
- Sites connected by an OC-12 line (but only 2.5 MB/sec achieved), GigE at 100 MB/sec within each site
- This experiment: the Einstein equations (but could be any Cactus application)
- Achieved: first runs 15% scaling; with new techniques 70-85% scaling, ~250 GF

Page 21

Distributed Computing

Why do this?
- Capability: need more memory than any single machine has
- Throughput: for smaller jobs, it can still be quicker than the queues

Technology:
- Globus GRAM for job submission/authentication
- MPICH-G2 for communications (native MPI/TCP)
- Cactus is simply compiled with the MPICH-G2 implementation of MPI: gmake cactus MPI=globus

New Cactus communication technologies (see the sketch below):
- Overlap communication and computation
- The simulation dynamically adapts to the WAN: compression and buffer size for communication; extra ghostzones, communicating across the WAN only every N timesteps
- Available generically: all applications/grid topologies
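To illustrate the extra-ghostzone idea, here is a sketch in plain C/MPI of a 1-D decomposition that keeps G ghost points per side and exchanges them only every G timesteps, so the slow WAN link is used less often. This is not the Cactus driver code; the array size, step count and update rule are placeholders.

```c
/* Sketch (not Cactus driver code): wide ghost zones exchanged every G
 * steps. Between exchanges the region holding valid data shrinks by one
 * point per side per step; the next exchange refills it. */
#include <mpi.h>
#include <stdlib.h>

#define N 1000   /* interior points per process (placeholder)  */
#define G 4      /* ghost width = timesteps between exchanges  */

static void exchange_ghosts(double *u, int rank, int size)
{
  int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
  int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  /* Send my leftmost interior block left, receive my right ghosts. */
  MPI_Sendrecv(&u[G],     G, MPI_DOUBLE, left,  0,
               &u[G + N], G, MPI_DOUBLE, right, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  /* Send my rightmost interior block right, receive my left ghosts. */
  MPI_Sendrecv(&u[N],     G, MPI_DOUBLE, right, 1,
               &u[0],     G, MPI_DOUBLE, left,  1,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Physical boundaries are simply left at zero in this sketch. */
  double *u    = calloc(N + 2 * G, sizeof(double));
  double *unew = calloc(N + 2 * G, sizeof(double));

  for (int step = 0; step < 100; step++)
  {
    int s = step % G;                  /* steps since the last exchange  */
    if (s == 0)
      exchange_ghosts(u, rank, size);  /* hit the WAN only every G steps */

    /* With ghost width G and a radius-1 stencil we can take G steps
     * before the valid region shrinks down to the interior. */
    for (int i = s + 1; i < N + 2 * G - 1 - s; i++)
      unew[i] = 0.5 * (u[i - 1] + u[i + 1]);   /* placeholder update */

    double *tmp = u; u = unew; unew = tmp;     /* swap time levels */
  }

  free(u); free(unew);
  MPI_Finalize();
  return 0;
}
```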

Page 22

Dynamic Adaptation
- Adapt: 2 ghosts → 3 ghosts, compression on!
- Automatically adapt to bandwidth/latency issues
- The application has NO KNOWLEDGE of the machine(s) it is on, the networks, etc.
- The adaptive techniques make NO assumptions about the network
- Issues: a more intelligent adaption algorithm (see the sketch below), e.g. for when network conditions change faster than the adaption …
- Next: a real black hole run across Linux clusters for high-quality viz data.
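A minimal sketch of what such an adaptation policy might look like: every helper below (measure_exchange_seconds, set_ghost_width, set_compression) is a hypothetical stub standing in for whatever the driver really provides, and the thresholds are made up.

```c
/* Hypothetical adaptation policy, not Cactus code: widen the ghost zones
 * (so WAN exchanges happen less often) and switch compression on when the
 * measured time per ghost-zone exchange grows too large. */
#include <stdio.h>

/* Stubs for illustration only. */
static double measure_exchange_seconds(void) { return 0.8; }
static void   set_ghost_width(int g)  { printf("ghost width -> %d\n", g); }
static void   set_compression(int on) { printf("compression -> %d\n", on); }

static void adapt_communication(void)
{
  static int ghost_width = 2;
  const double slow = 0.5, very_slow = 2.0;   /* made-up thresholds (s) */

  double t = measure_exchange_seconds();

  if (t > slow && ghost_width < 4)
    set_ghost_width(++ghost_width);  /* exchange across the WAN less often  */

  set_compression(t > very_slow);    /* compress only when the link is slow */
}

int main(void) { adapt_communication(); return 0; }
```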

Page 23

Cactus Worm: Basic Scenario (live demo at http://www.cactuscode.org)
- The Cactus simulation starts
- It queries a Grid Information Server and finds resources
- It makes an intelligent decision to move
- It locates the new resource and migrates
- It registers its new location with the GIS
- … and continues around Europe

A basic prototypical example of many of the things we want to do!
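The Worm's decision loop might be sketched as follows; every helper here (query_gis, better_than_current, write_checkpoint, submit_job, register_location) is a hypothetical stub, not a real Cactus or Globus call, standing in for the GIS query, checkpointing and job-submission steps the slide describes.

```c
/* Hypothetical sketch of a Worm-style migration step; stubs only. */
#include <stdio.h>

typedef struct { char name[64]; int free_cpus; } resource_t;

static int  query_gis(resource_t *best)              { (void)best; return 0; }
static int  better_than_current(const resource_t *r) { (void)r;    return 0; }
static void write_checkpoint(const char *file)  { printf("checkpoint %s\n", file); }
static void submit_job(const resource_t *r, const char *ckpt)
                { printf("restart from %s on %s\n", ckpt, r->name); }
static void register_location(const resource_t *r)
                { printf("registered %s with the GIS\n", r->name); }

static void maybe_migrate(void)
{
  resource_t best;

  if (!query_gis(&best))            /* ask the Grid Information Server  */
    return;
  if (!better_than_current(&best))  /* intelligent decision to move?    */
    return;

  write_checkpoint("worm.chkpt");   /* save the simulation state        */
  submit_job(&best, "worm.chkpt");  /* restart on the new resource      */
  register_location(&best);         /* tell the GIS where we went       */
  /* ... the old instance would then shut itself down. */
}

int main(void) { maybe_migrate(); return 0; }
```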

Page 24

(Plot) Migration due to Contract Violation (Foster, Angulo, Cactus Team …): iterations/second versus clock time. The run starts at UIUC; a load is applied and three successive contract violations are detected, triggering resource discovery and migration (migration time not to scale), after which the run continues at UC.

Page 25

Grid Application Toolkit

The application developer should be able to build simulations such as these with tools that easily enable dynamic Grid capabilities. We want to build a programming API (see the sketch below) that easily incorporates:
- Querying an information server (e.g. a GIIS): what is available for me? what software? how many processors?
- Network monitoring
- Resource brokering
- Decision routines (thorns): how to decide? cost? reliability? size?
- Spawning routines (thorns): now start this up over here, and that up over there
- An authentication server: issues commands and moves files on your behalf (can't pass on a Globus proxy)
- Data transfer: use whatever method is desired (gsi-ssh, gsi-ftp, streamed HDF5, scp, …)
- Etc …

We need to be able to test Grid tools as swappable, plug-and-play components.
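To give a feel for what such an API could look like, here is a hypothetical C interface sketch. None of these names are the real GridLab GAT functions; they simply mirror the capabilities listed above.

```c
/* Hypothetical grid-toolkit interface sketch: illustrative names only,
 * not the actual GridLab GAT API. */
#ifndef GRID_TOOLKIT_SKETCH_H
#define GRID_TOOLKIT_SKETCH_H

#include <stddef.h>

typedef struct {
  char   host[256];
  int    processors;
  double est_queue_wait_hours;
} gts_resource;

/* Query an information server (e.g. a GIIS) for matching resources. */
int gts_find_resources(const char *requirements,
                       gts_resource *out, size_t max, size_t *found);

/* Network monitoring: current bandwidth/latency between two sites. */
int gts_link_status(const char *site_a, const char *site_b,
                    double *mbytes_per_sec, double *latency_ms);

/* Decision routine: pick a resource by a cost/reliability/size policy. */
int gts_choose(const gts_resource *candidates, size_t n,
               const char *policy, size_t *chosen);

/* Spawning: start a task (e.g. an analysis job) on another resource. */
int gts_spawn(const gts_resource *where, const char *executable,
              const char *parameter_file);

/* Data transfer by whatever method is available (gsi-ftp, scp, ...). */
int gts_transfer(const char *src_url, const char *dst_url);

#endif /* GRID_TOOLKIT_SKETCH_H */
```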

Page 26

GridLab: Enabling Dynamic Grid Applications

Large EU Project under negotiation with EC

Members: AEI, ZIB, PSNC, Lecce, Athens, Cardiff, Amsterdam, SZTAKI, Brno, ISI, Argonne, Wisconsin, Sun, Compaq

Grid Application Toolkit for application developers and infrastructure (APIs/Tools)

There will be around 20 new Grid positions in Europe!! See www.gridlab.org for details

Page 27

Credits