
DRAFT

ACES Alliance for Computational Earth Science

Areas for a possible MIT GRID computing collaboration with industry

Chris Hill ([email protected]) Sai Ravela ([email protected])

John Marshall ([email protected])

MIT March 2003

Summary

Researchers at MIT are currently building a distributed high-performance computing resource designed for researching computational technologies for application in advanced Earth simulation. The MIT group plans four 256-node clusters, connected by a dedicated fiber network linking the Laboratory for Computer Science, Environmental Engineering and the Department of Earth, Atmospheric and Planetary Science, as sketched in Fig. 1. Construction, based on low-cost commodity hardware that can deliver substantial performance for Earth simulation and related applications, is in progress. The groups involved, who have extensive experience in computer science, mathematics and physical science, are looking for partners with an interest in collaboration in relevant science and computer science areas. This near-term effort will establish an evolving architecture that can scale well beyond the initial plan. To this end the MIT group is approaching industry with a number of possible research areas that it believes could form the basis for research collaboration on both near-term and longer-term issues related to the application of high-performance, high-productivity computational technology to Earth simulation problems. This document describes possible areas of collaboration that are of interest to the MIT group and that connect to their overall ACES (Alliance for Computational Earth Science) goals. Before potential areas of collaborative research are outlined, we first present some background.


Background

The Earth - its atmosphere, oceans, cryosphere, land surface and interior - is continually monitored by numerous arrays of sophisticated sensor networks. Space-borne and in-situ sensors yield a real-time view of the state of the entire planet that is unprecedented in accuracy and in raw detail. However, making sense of this wealth of information, disseminating processed results and distilling clear, quantitative inferences from raw observational data-streams remains a formidable intellectual and computational challenge.

The ACES focus covers four distinct areas that together play pivotal roles in Earth simulation disciplines ranging from weather forecasting and climate change science to earthquake prediction and subsurface exploration: (i) the development of numerical Earth system models, (ii) the development of sensor processing algorithms to extract meaningful observations, (iii) the use of observations to constrain models and using model behavior to undertake adaptive observations and (iv) research in computer systems and theory to develop and deploy efficient and effective systems.

The members of the ACES team (listed in Appendix A) collectively possess expertise that spans the four focus areas listed above. The alliance members' research activities include highly visible national initiatives in geoscience and computational science, and offer excellent opportunities for collaboration. These initiatives include:

Earth System Models:

a. General circulation modeling: The climate modeling team at MIT is part of ACES. This group has produced an influential modeling tool (MITgcm [1]) that is today used by many researchers around the world. The applications of MITgcm include the Estimation of Ocean Circulation and Climate project (ECCO [2]), a joint NASA, ONR and NSF initiative.

b. Abstractions for weather and climate models: ACES members are actively developing the next-generation middleware for the US Climate and Weather Modeling community (the ESMF [3] framework). This effort is defining a standard set of high-level abstractions (middleware) for parallel Earth system codes that will allow significant flexibility in implementation detail, with room for innovative optimizations that improve application scaling.

c. Automatic Differentiation (AD): AD refers to source-to-source translation to synthesize code for computing partial derivatives from existing numerical code. ACES team members have been pioneers in the application of AD to ocean state-estimation, where compositions of adjoints, applied backwards in time, are necessary to solve the problem. An NSF-funded ITR project (the Adjoint Compiler Toolkit and Standards project) to develop next-generation adjoint compilers is being led by ACES members.
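To make the adjoint idea concrete, the following minimal sketch differentiates a toy time-stepping model by recording the forward trajectory and then composing the local derivatives backwards in time, the pattern that adjoint compilers generate automatically from model source code. The model, step size and cost function are hypothetical illustrations, not ACES code.

```python
# Minimal reverse-mode (adjoint) sketch for a toy time-stepping model.
# Illustration only -- real AD tools generate this kind of code automatically
# from the forward model source.

def forward(x0, r=2.5, dt=0.01, nsteps=100):
    """Integrate dx/dt = r*x*(1-x) with forward Euler, storing the trajectory."""
    traj = [x0]
    x = x0
    for _ in range(nsteps):
        x = x + dt * r * x * (1.0 - x)
        traj.append(x)
    return traj

def cost(x_end, target=0.8):
    return 0.5 * (x_end - target) ** 2

def adjoint(traj, r=2.5, dt=0.01, target=0.8):
    """Sweep the adjoint backwards in time over the stored forward trajectory."""
    lam = traj[-1] - target              # dJ/dx_N
    for x in reversed(traj[:-1]):        # visit stored states in reverse order
        # local derivative: d x_{n+1} / d x_n = 1 + dt * r * (1 - 2 x_n)
        lam = lam * (1.0 + dt * r * (1.0 - 2.0 * x))
    return lam                           # dJ/dx_0

if __name__ == "__main__":
    x0 = 0.1
    traj = forward(x0)
    grad = adjoint(traj)
    # Finite-difference check of the adjoint gradient.
    eps = 1e-6
    fd = (cost(forward(x0 + eps)[-1]) - cost(forward(x0 - eps)[-1])) / (2 * eps)
    print(f"adjoint dJ/dx0 = {grad:.6f}, finite difference = {fd:.6f}")
```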


Sensors, Signals and Observations:

d. Distributed Sensor Network Architectures: ACES members play a leading role in a global GPS-station-based seismic monitoring program [4]. The GPS transceiver network is truly physically distributed, being scattered over much of the Earth's surface.

e. Real-time Flux Tracking, Motion Estimation and Segmentation: ACES members have an active laboratory-based research and teaching program in which collections of cameras and in-situ sensors collect raw data that are ingested and used to constrain the trajectory of fluid algorithms in real time.

Data-assimilation and Adaptive Observations:

f. Probabilistic forecasting: Significant work is being undertaken to develop forecast methods centered on ensemble techniques. Ensemble-based data assimilation is being applied to both meteorology and oceanography, in the field and the laboratory.

g. Real-time now-casting and state estimation: ACES team members are key participants in a pioneering effort (ECCO [2]) to continually estimate, by combining observations and models, the three-dimensional state of the global ocean. Science goals are centered on understanding long-time-scale signals of climate change. MIT researchers provide most of the core applications and infrastructure software used by this project.

Computer Systems, Theory, Visualization and Retrieval:

h. Matlab*P: A key technological goal is the development of parallel environments that are easy to use and scale well. The Matlab*P [5] tool is being explored as a platform that supports a transparent view of a parallel system while maintaining good scalability.

i. Real-time Visualization: A particular goal is a fully three-dimensional rendition of a time-evolving virtual representation of a physical fluid (in laboratory studies) on a compute system.

j. High-performance compute engines: These include interprocessor communication (IPC) technologies and methodologies, algorithms research, and system-level optimizations such as compressed data transfer schemes, latency-hiding techniques and tertiary storage exploitation mechanisms. All of these have potential impact for Earth simulation by delivering higher-performing compute engines to applications.

k. Utility computing: We require flexible resources that can be rapidly reconfigured to serve the needs of different projects. It is planned to deploy Grid services to support virtualization and near real-time needs.


l. Grid computing: ACES aims to further the ties that already exist between several disciplines at MIT (Earth science, environmental engineering, computer science, mathematics). Grid technologies will be used to create a unified high-quality compute environment connecting these areas, as sketched in Fig. 1.


Figure 1: The MIT group plans four 256-node clusters, connected by a dedicated fiber network linking the Laboratory for Computer Science, Environmental Engineering and the Department of Earth, Atmospheric and Planetary Science. The system, already under construction, will use Grid services to provide virtualized, location independent access to ACES participants.


An Example: Instrumented Fluid Laboratory

While there are several projects that embody different elements of the research themes discussed in the previous section, we present one example here to demonstrate how these elements come together into a single application. In the instrumented fluid dynamics laboratory, we are studying a microcosm of the fluid Earth and related phenomena. Our motivation is both pedagogical and practical: the laboratory serves as a test bed for refining our techniques before deployment at the planetary scale. In particular, we are interested in examining the ability of fluid algorithms to capture the behavior of real fluids when constrained with observations.

[Figure 2 schematic: a 1 m laboratory tank, the sensor and camera network, the compute cluster, and visualization and retrieval.]

Figure 2: The 'instrumented fluid laboratory'. A rotating tank containing fluid and wavelength-sensitive fluorescent dye and particles is illuminated with a laser. Observations of the evolving fluid - in-situ measurements of temperature and remotely observed flow using a camera network - are interfaced with a numerical simulation of the experiment using a compute resource. Sensor processing, model propagation and assimilation are carried out in real time on multi-processor clusters interacting over a network, with forecasts, state estimates and other outputs produced for visualization and retrieval.


This setup consists of a rotating tank containing a fluid (see Fig. 2). The tank is instrumented with a sensory apparatus including cameras (to track dye and particles embedded in the fluid) and thermo-couples (to track temperature). These sensors are connected to nodes in a distributed computing backbone.

To simulate and observe the fluid in real time poses a significant computational challenge. Due to the immense sensing bandwidth necessary as resolution increases, a decentralized approach is adopted which utilizes a collection of low-cost sensors to make dense observations; computationally, the sensor interfaces are treated as distributed information sources. This approach has appeal from a systems integration and scalability viewpoint, and can form the basis for real-world implementations. Data acquisition from individual sensors employs automated techniques for sensor management, timing and synchronization across the nodes. The sensor data-stream is processed to extract fluid parameters. On a second compute platform, a multi-processor numerical model of the experiment executes. To constrain the model with observations and generate, for example, model forecasts, an ensemble-based approach can be used. Alternative approaches involve variational formulations and are also investigated.

The user is presented with two views. One is of the physical state of the system and the other is model output(s) over a desired time-scale, with facility for querying by the user.

The instrumented fluid laboratory integrates all four primary elements that are the focus of ACES, in a controlled environment. This includes the extraction of observations from sensor data, the propagation of a numerical fluid simulation model, the data-assimilation machinery and the underlying computer systems technology for high-speed, real-time parallel simulation and sensor processing, visualization and querying. Moreover, the laboratory is a microcosm of the real-world, planetary-scale observing systems used in Earth simulation; it can be configured to produce observations of phenomena of interest for constraining planetary-scale models.
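As a concrete sketch of the ensemble-based constraint mentioned above, the following numpy snippet implements a minimal stochastic ensemble Kalman filter analysis step under assumed dimensions, a linear observation operator and Gaussian observation noise. It illustrates the general technique; it is not the laboratory's assimilation code.

```python
# Minimal stochastic ensemble Kalman filter (EnKF) analysis step -- an
# illustrative sketch with a linear observation operator H and Gaussian
# observation noise, not the laboratory's actual assimilation system.
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n_state, n_members) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_obs, n_members = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # anomalies in observation space
    P_hh = HA @ HA.T / (n_members - 1) + R         # innovation covariance
    P_xh = A @ HA.T / (n_members - 1)              # state-observation covariance
    K = P_xh @ np.linalg.inv(P_hh)                 # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_members).T
    return X + K @ (Y - HX)                        # analysis ensemble

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_state, n_obs, n_members = 50, 5, 20
    X = rng.normal(size=(n_state, n_members))      # placeholder forecast ensemble
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0  # observe 5 points
    R = 0.1 * np.eye(n_obs)
    y = rng.normal(size=n_obs)                     # synthetic observations
    Xa = enkf_analysis(X, y, H, R, rng)
    print("analysis ensemble shape:", Xa.shape)
```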


Portfolio of Potential Research Collaborations

Here we outline some specific research areas that are possible points for collaboration between MIT and industry:

1. Sensor network data ingestion, processing and archival.
2. Innovative compute paradigms for Earth simulation.
3. Visualization.
4. Flexible utility compute models.

Topic 1: Sensor network data ingestion, processing and archival

A set of ambitious projects is being pursued in this area by the ACES group. These projects are linked by a common theme of bringing together computational and physical sensor technologies to understand and monitor the Earth system. The range of scenarios being addressed spans controlled laboratory situations to major national and international multi-institution planetary-scale efforts.

a) Data streams in the laboratory: As described above, ACES participants are developing a "sensorized" rotating fluids laboratory for use in teaching and research. A key ingredient of this system is the accompanying set of multi-processor compute engines that will be used for processing raw sensor data (for example, to identify and track features observed with a camera), for executing concurrent simulations of the time-evolving fluid tank, and for real-time rendering of volumetric images. Clear areas for collaboration exist in developing a responsive compute engine, with some real-time capacity, for both processing and simulation, along with accompanying storage and networking infrastructure. Two prototype x86, Pentium 4 based cluster systems are already in place for this project. However, a number of open issues remain. These include the provision of dedicated cycles needed for real-time data processing, development of effective network capacity for data streams and exploration of hybrid 32-bit/64-bit architectures for assimilation algorithms. (A sketch of one possible camera-processing step follows below.)
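The sketch below illustrates one way raw camera frames could be reduced to flow observations, using dense optical flow from OpenCV. The synthetic frames, the Farneback method and its parameters are assumptions chosen for illustration, not the laboratory's actual processing chain.

```python
# Illustrative sketch: estimate an apparent displacement field from two
# successive "camera frames" using dense optical flow (OpenCV, Farneback
# method). The synthetic frames and parameter choices are stand-ins for the
# laboratory camera stream, not the actual ACES pipeline.
import cv2
import numpy as np

rng = np.random.default_rng(0)
prev = (rng.random((240, 320)) * 255).astype(np.uint8)
prev = cv2.GaussianBlur(prev, (9, 9), 0)          # give the texture some structure
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))   # "advect" the pattern by (2, 3) px

# Dense Farneback optical flow; positional arguments are pyr_scale, levels,
# winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# flow[..., 0] / flow[..., 1] are per-frame pixel displacements in x and y;
# camera calibration plus the frame rate would convert them into velocities
# suitable for assimilation into the tank simulation.
print("median displacement (px):",
      float(np.median(flow[..., 0])), float(np.median(flow[..., 1])))
```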

b) Data streams in the field: Collaborations involving distributed networks of field measurement devices would focus on the automated collection of geographically dispersed data sources into a repository, on the provision of compute farms for processing observations and on the deployment of storage facilities, with appropriate structure and ontology, for disseminating processed data. In this arena the ACES team has been using the MOSIX extensions to Linux as a tool for transparent parallelism and load sharing, to great effect. A possible area of collaboration could involve further developing more sophisticated transparent parallelism technologies in this application domain.


c) Real-time, global assimilation and monitoring - "now-casting": ACES members participating in the ECCO [2] effort at MIT provide most of the core applications and infrastructure software used by the ECCO project. The upcoming phase in this effort involves developing the capability to provide up-to-the-minute state estimates of the surface and deep ocean climate on an ongoing and continual basis. Collaborations here would focus on developing a compute system with appropriate data handling capacity to support a nationally available, continually updated database of ocean climate. Estimates based on the ECCO project work to date suggest that a 256-processor cluster facility could sustain the compute throughput (roughly 150 Gflop/s sustained arithmetic processing rate) needed to keep up with real time for a nominal ocean resolution of 1/4° over the globe. Prototyping for the design of a balanced system for this purpose is already being undertaken.
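A quick back-of-envelope check of these figures, assuming the sustained load is spread evenly over the 256 processors (an assumption made purely for illustration):

```python
# Back-of-envelope check of the quoted real-time throughput target.
sustained_total = 150e9          # flop/s sustained, from the estimate above
processors = 256
per_processor = sustained_total / processors
print(f"required sustained rate per processor: {per_processor / 1e9:.2f} Gflop/s")
# ~0.6 Gflop/s per processor, i.e. roughly 10-15% of peak for a ~2-3 GHz
# Pentium 4 class node -- a plausible sustained fraction for ocean model code.
```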

Topic 2: Innovative compute paradigms for Earth simulation

Earth simulation is a good test environment for driving innovation in overall compute paradigms. While the compute systems available today have enabled great advances, the field of Earth simulation still has many needs that current-generation compute platforms do not meet. Many of the desired innovations would have broad impact and not be confined to Earth simulation alone.

a) Interactive Supercomputing with Matlab*P: A centerpiece of the ACES project will be the deployment of the Matlab*P [5] toolkit on the ACES systems and a demonstration of its efficacy as a technology for fully transparent parallel processing. Collaborations involving the development of optimized and extended libraries supporting more features in Matlab*P would be of interest.

b) Probabilistic Forecasting: The primary focus of this work is on synoptic time-scale meteorological phenomena (storm forecasts, hurricane tracking, etc.). Ensemble-based data assimilation makes parallel integration of ensemble members trivial (each ensemble member is independent of the others during its processing). However, to construct the covariance between ensemble members, every ensemble member must be compared with every other ensemble member. This entails a large amount of interprocessor bandwidth, and implementing this step generally and efficiently is challenging (a sketch of the communication pattern follows below). Collaborative efforts to develop and integrate the appropriate methodologies into standard linear algebra toolkits would also be of significant interest to the ACES group.
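The following mpi4py sketch assumes one ensemble member per MPI rank and a placeholder state vector. It shows why gathering every member's state to form ensemble statistics is the bandwidth bottleneck described above; it is illustrative only.

```python
# Illustrative mpi4py sketch of the communication-heavy step in ensemble
# assimilation: each rank integrates one ensemble member independently, but
# ensemble statistics require every member's state on hand.
# Run with e.g. `mpiexec -n 8 python ensemble_stats.py` (names hypothetical).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, n_members = comm.Get_rank(), comm.Get_size()
n_state = 10_000

# Each rank's "forecast": an independent ensemble member (placeholder state).
member = np.random.default_rng(rank).normal(size=n_state)

# Gather every member's state onto every rank: the traffic scales with
# (ensemble size) x (state size), the bandwidth cost the text refers to.
ensemble = np.empty((n_members, n_state))
comm.Allgather(member, ensemble)

# Every member compared with every other member: the m x m Gram matrix of
# anomalies that underlies ensemble covariance estimates.
anomalies = ensemble - ensemble.mean(axis=0)
pairwise = anomalies @ anomalies.T
if rank == 0:
    print("pairwise member-comparison matrix shape:", pairwise.shape)
```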

c) Automatic Differentiation: Application of automatic differentiation to Earth science problems often entails traversing the computational trajectory of the original numerical code in reverse. The efficiency of this reverse traversal could be enhanced by a number of possible system-level innovations, including pre-staging I/O subsystems, specialized distributed storage architectures, and integrated program-state checkpointing. Collaborative projects that examine these system-level issues in the context of large processor count parallel computations are of interest to the ACES team.
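To illustrate why checkpointing and I/O staging matter here, the following toy sketch stores only every k-th model state during the forward run and recomputes intermediate states on demand during the backward sweep. The model and checkpoint interval are hypothetical, and production adjoint codes use considerably more sophisticated schemes.

```python
# Toy checkpoint-and-recompute scheme for reversing a model trajectory:
# store every k-th state during the forward run, then regenerate the states
# inside each interval on demand during the backward sweep. Illustrative only;
# production adjoint codes use far more elaborate (and I/O-aware) schemes.

def step(x):
    return x + 0.01 * 2.5 * x * (1.0 - x)          # one forward-Euler model step

def forward_with_checkpoints(x0, nsteps, k):
    checkpoints = {0: x0}
    x = x0
    for n in range(1, nsteps + 1):
        x = step(x)
        if n % k == 0:
            checkpoints[n] = x                     # keep only one state in k
    return checkpoints, x

def states_in_reverse(checkpoints, nsteps, k):
    """Yield x_{nsteps-1}, ..., x_0, recomputing from the nearest checkpoint."""
    for n in range(nsteps - 1, -1, -1):
        base = (n // k) * k                        # nearest stored checkpoint <= n
        x = checkpoints[base]
        for _ in range(n - base):                  # recompute forward inside interval
            x = step(x)
        yield x                                    # an adjoint sweep would use x here

if __name__ == "__main__":
    ckpts, x_final = forward_with_checkpoints(0.1, nsteps=100, k=10)
    rev = states_in_reverse(ckpts, nsteps=100, k=10)
    print("first three states revisited in reverse:",
          [round(next(rev), 5) for _ in range(3)])
    # Storage drops from nsteps states to nsteps/k, at the cost of roughly
    # k/2 extra forward steps per state during the reverse traversal.
```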


d) Modeling Frameworks: Collaborative investigations into how to design the ESMF abstractions to allow for good performance on a wide range of target hardware would be of great interest. A collaboration that examines the possibilities of implementing hardware support in the network interface for some of these abstractions would also be of interest. Applications and the middleware under development in ESMF make extensive use of Fortran 90 and Fortran 95 as well as C++. Collaborations on the development of suitable testbed problems for compiler development would also be of interest, as would collaborations that focus on tests of the behavior and performance of the MPI-2 parallel library.

e) Innovative parallelism support: The latency and bandwidth of interprocessor communication remain a major limit on application scalability for many problems of significant scientific interest in Earth system modeling. In many situations the interprocessor communication patterns of Earth system applications are known in advance and are highly repetitive. Collaboration focused on system designs that can exploit synchronous, pre-programmed communication patterns to reduce aggregate start-up latencies of messaging protocols to less than a microsecond would be of interest (a simple software-level sketch of such a pre-programmed pattern follows below). Related performance-modeling studies of the balance between density of processing elements and multiplexed communication channels in a range of Earth system modeling situations would also be of interest. These studies could include investigations into self-optimizing parallel application configurations that could adapt to platform characteristics, such as achieved interconnect latency and bandwidth, in multi-tiered architectures. Such collaborations could also extend to innovative studies of the role of semi-custom compute elements within commodity architectures, drawing on the experience of domains such as the graphics processor community.
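One existing software-level expression of a fixed, repetitive communication pattern is MPI persistent requests, sketched below with mpi4py for a one-dimensional halo exchange. The process layout and message sizes are hypothetical, and reaching sub-microsecond start-up costs would require the hardware-level support discussed above rather than this software mechanism alone.

```python
# Sketch: exploiting a fixed, repetitive communication pattern with MPI
# persistent requests (mpi4py). The one-dimensional halo-exchange partners and
# message sizes are set up once; each timestep only starts and completes the
# pre-built requests. Illustrative layout, not ACES code; run with e.g.
# `mpiexec -n 4 python halo.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1024
field = np.full(n + 2, float(rank))            # interior cells plus one halo each side
send_left, recv_left = field[1:2], field[0:1]  # buffer views reused every step
send_right, recv_right = field[n:n + 1], field[n + 1:n + 2]

# The pre-programmed pattern: build persistent requests once.
reqs = [comm.Send_init(send_right, dest=right, tag=0),
        comm.Recv_init(recv_left, source=left, tag=0),
        comm.Send_init(send_left, dest=left, tag=1),
        comm.Recv_init(recv_right, source=right, tag=1)]

for _ in range(100):                           # timestep loop
    MPI.Prequest.Startall(reqs)                # restart the fixed exchange pattern
    MPI.Request.Waitall(reqs)
    # ... interior update of field[1:n+1] would go here ...

if rank == 0:
    print("halo values received:", float(recv_left[0]), float(recv_right[0]))
```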

Topic 3: Visualization

Visual tools are an invaluable aid for both teaching and research. However, Earth simulation applications are notoriously challenging to visualize effectively and easily. Areas of collaboration that could be of benefit are:

a) Real-time visualization: A collaborative focus in this area would be centered on real-time visualization of a fully three-dimensional rendition of a time-evolving virtual representation of the physical fluid laboratory. We are interested in scalable approaches to three-dimensional volume rendering and visualization that can accommodate significant real-time demands and data rates. This system will allow fluid property behaviors (for example potential-vorticity iso-surfaces) to be "observed" in real time in a way that is not possible in a laboratory experiment or numerical simulation alone. This effort links to the work under other topics on computational and sensor technology integration.
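As a minimal starting point for the iso-surface part of this work, the sketch below extracts a triangulated iso-surface from a synthetic 3-D scalar field with marching cubes; scikit-image is one assumed library choice. Streaming such surfaces at real-time rates from a running simulation is the much harder problem the collaboration would address.

```python
# Sketch: extract a triangulated iso-surface from a 3-D scalar field with
# marching cubes (scikit-image as one assumed library choice). The synthetic
# field stands in for, e.g., potential vorticity from the tank simulation;
# real-time streaming rendering is the harder problem and is not addressed here.
import numpy as np
from skimage import measure

coords = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
q = np.exp(-4.0 * (x**2 + 2.0 * y**2 + 3.0 * z**2))   # smooth synthetic "PV" blob

# Triangulate the q = 0.5 iso-surface.
verts, faces, normals, values = measure.marching_cubes(q, level=0.5)
print(f"iso-surface: {len(verts)} vertices, {len(faces)} triangles")
# The verts/faces arrays could then be handed to a renderer (VTK, OpenGL, ...)
# each timestep; doing so at camera frame rates is the research question here.
```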


b) Tools for analysis in high dimensions: A second visualization focus will be on techniques for graphically visualizing and examining ensemble simulation predictions for both laboratory and large-scale planetary scenarios. In this project the goal will be to define techniques and graphical tools for representing high-dimensional data that express the probability distribution of a forecast. Ensemble forecasting is today used extensively by operational Numerical Weather Prediction (NWP) centers worldwide. The huge amount of model output generated by these ensemble approaches is typically made available to forecast groups that use it as guidance for issuing official forecast estimates. The forecast time window is often too short for a forecaster to truly evaluate all the ensemble members in detail, and so forecasts are often made using the control, or "best", estimate forecast alone. Much of the difficulty in evaluating a full ensemble of forecasts stems from the lack of an efficient visual means for querying and displaying the relevant information from the ensembles. An effective technology for analyzing these forecasts could contribute to increased forecast skill and enhance the value of near-term weather risk mitigation endeavors.
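One very simple way to summarize an ensemble visually, sketched below, is to project the members onto their two leading principal components so that spread and clustering can be seen at a glance. The synthetic ensemble and the choice of a principal-component projection are assumptions for illustration, not the visualization techniques the ACES group proposes to develop.

```python
# Sketch: compress an ensemble of forecasts into a 2-D scatter by projecting
# each member onto the two leading principal components of the ensemble.
# The synthetic ensemble and the choice of PCA are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_state = 50, 5000
ensemble = rng.normal(size=(n_members, n_state))    # stand-in forecast ensemble
ensemble[25:] += 0.5                                 # inject a second "cluster"

anomalies = ensemble - ensemble.mean(axis=0)
# SVD of the anomaly matrix gives the principal directions of ensemble spread.
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
coords = U[:, :2] * s[:2]                            # member coordinates in PC space

explained = s[:2] ** 2 / (s ** 2).sum()
print("fraction of variance in two PCs:", explained.round(3))
print("first member's PC coordinates:", coords[0].round(2))
# Plotting `coords` (e.g. with matplotlib) gives an at-a-glance map of which
# members cluster together and which are outliers relative to the control run.
```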

Topic 4: Flexible utility compute models

The near-term ACES goals envisage a distributed compute resource that is closely aligned physically with the individual ACES research groups but can also act as a communal pool for large, full-system capability experiments (see Fig. 1). This architecture was chosen to support diverse application requirements (including real-time sensor network processing), to satisfy the need for responsive, interactive high-end compute environments in groundbreaking research, and to allow incremental ongoing system evolution to take advantage of new technologies. We are interested in collaborations that would lead to a utility model of computing that is consistent with these first-generation ACES system capabilities.

a) Utility-based continuous state estimation: A utility-based compute environment could potentially meet the throughput needs of the ongoing large-scale ocean monitoring system that ACES members are currently prototyping. We would be interested in a collaborative project that demonstrated the viability of a utility approach for this problem and evaluated its long-term potential. Developing the mechanics for sharing large volumes of results from such a system with the broad community would also be of interest.

b) Optimum resource selection: The application of utility resources to problems involving matching distributed sources of data and compute horsepower in a transparent way is also of interest. The ACES facility is anticipated to contain a mix of 32-bit and 64-bit hardware and an associated set of applications that can exploit, for example, large-memory systems. Exploring utility models that can bring together, or appear to bring together, application, compute, storage and data resources in one place as needed would be of interest to the ACES group. Examining the role of high-speed network technologies, utility software and other middleware layers in making a distributed resource appear concentrated would be of particular interest.


c) Utility computing in real-time laboratory and teaching environments: Research into the application of a utility model to near real-time compute requirements, such as laboratory data ingestion and visualization or classroom teaching situations, would also be of interest. In these scenarios critical resources must be made available at a given time in an orchestrated fashion. Exploring cost-effective strategies for handling these demands in a utility-based compute environment would be of interest.

d) Continual technological evolution: Developing appropriate standards and layered interfaces that give a utility compute resource the capability of transparently adopting evolving technologies would also be of interest. In particular, developing appropriate layering that provides the ability to readily track successive generations of technology without disruption would be of interest, and has direct relevance to the middleware development projects involving ACES members.

e) Grids as a tool for cross-discipline collaboration: The main goal of the ACES effort is to exploit the synergies between different areas of Earth simulation and computer science in exploring common technology solutions to common challenges. A natural extension of the virtualization of compute technology envisioned by the utility computing model is the development of cross-disciplinary research collaborations. Showcasing the role of a utility compute model in a collaborative research and teaching environment is therefore of paramount interest.

Educational and Research Benefits

The cost-effectiveness of commodity-based parallel compute resources is likely to herald significant new developments in computational-technology-based learning. Taken together with "sensorized" physical systems, we can now deploy simulation systems that are high-fidelity virtual representations of the physical world. This permits a new, more accessible way to explore and explain complex fluid and solid Earth problems.

Broader Impacts

Technical innovations on a number of fronts make this an ideal time to pursue the projects described above. Satellite and field measurement technologies have improved vastly in the last decade thanks to advances in microelectronics and materials engineering. Today we can make raw data scans of the planet in real time with unprecedented fidelity. However, the computational technologies available to help make sense of this raw data need continual improvement to make the most of these increasingly rich data streams. Even with the most advanced simulation tools, we are still unable to answer basic questions such as whether sea level has risen or fallen globally over the last century, or when the next major earthquake will occur in a given region. More effectively connecting a virtual, simulated Earth on a computer with the physically observed Earth remains a vital missing link for many local, regional and planetary-scale environmental and climate questions.


Benefits to industry

The collaboration proposed here could have the following direct benefits to industry:

• The proposal emphasizes collaborative, multi-disciplinary research directed at common core compute technologies for planetary scale simulation and monitoring, rather than a narrow focus on the needs of a specific science discipline. To our knowledge there are no other university-industry partnerships that are taking this approach. The opportunities for identifying common standards and broadly applicable innovations in system level support for high-end geo-informatics are likely to be numerous. These could include identifying emerging software standards for which built-in, optimized, system support could be provided or identifying communication protocol optimizations that would deliver application performance and scaling advantages.

• The project would bring world-class researchers in environmental engineering, physical sciences and computer science at MIT together with computational technology developers. A broad community of students would be involved in the project at both the graduate and undergraduate level, providing ready opportunities for internships and recruitment.

• The project would help to define and validate an on-demand compute model that can deliver satisfactory service to a suite of applications with significant real-time needs. This presents an excellent opportunity for showcasing utility computing in action in advanced scenarios.

• The distributed grid of compute resources proposed would allow resources to be exploited seamlessly throughout the system. This would set the stage for subsequent extensions to demonstrate the value of resource virtualization.


Appendix A – ACES members

Earth, Atmospheric and Planetary Sciences

Solid Earth

Brad Hager [email protected] (geophysics and seismology)
Tom Herring [email protected] (geophysics)
Maria Zuber [email protected] (planetary geophysics)

Earth Resources Laboratory

Nafi Toksöz [email protected] (exploration geophysics)
Dan Burns [email protected] (geophysics)

Fluid Earth

Meteorology: Jim Hansen [email protected], Kerry Emanuel [email protected] (ensemble forecasting)
Climate: Ron Prinn [email protected], Peter Stone [email protected] (coupled climate modeling)
Oceanography: John Marshall [email protected], Carl Wunsch [email protected], Raffaele Ferrari [email protected] (fluid modeling, ocean state estimation)
Biogeochemical cycles: Mick Follows [email protected]

Civil and Environmental Engineering

Dennis McLaughlin [email protected] (ensemble forecasting)
Dara Entekhabi [email protected] (land surface)

Electrical Engineering

Jacob White [email protected] (optimization algorithms)

LCS/Math

Alan Edelman [email protected] (parallel algorithms)

LCS


Arvind [email protected] (computer systems and architecture)
Larry Rudolph [email protected] (computer systems and architecture)
Charles Leiserson [email protected] (algorithms and systems)


References

1. Further information can be found at the MITgcm web site, http://mitgcm.org
2. Further information can be found at the ECCO project web site, http://www.ecco-group.org
3. Further information can be found at the ESMF project web site, http://www.esmf.ucar.edu
4. Further information can be found at the UNAVCO global GPS network web site, http://www.unavco.ucar.edu
5. Further information can be found at the Matlab*P web site, http://supertech.lcs.mit.edu/~cly/matlabp.html