Towards the CrossGrid Architecture

Marian Bubak, Maciej Malawski, and Katarzyna Zajac

Institute of Computer Science & ACC CYFRONET, AGH, Kraków, Poland

Workshop CESGA - HPC’2002 - A Coruna, May 30, 2002

www.eu-crossgrid.org



Overview

– X# (CrossGrid) and other Grid projects
– Collaboration and objectives
– Applications and their requirements
– New Grid services
– Tools for X# application development
– X# architecture
– Work packages
– Collaboration with other Grid projects
– Conclusions


A new IST Grid project space (Kyriakos Baxevanidis)

[Diagram: the IST Grid project space, spanning science and industry/business]

– Applications: DATAGRID, CROSSGRID, DATATAG, EGSO, GRIA, GRIDLAB
– Middleware & tools: EUROGRID, GRIP, DAMIEN
– Underlying infrastructures
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, …)


CrossGrid Collaboration

Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
Portugal: LIP Lisbon
Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
Ireland: TCD Dublin
Italy: DATAMAT
Netherlands: UvA Amsterdam
Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
Slovakia: II SAS Bratislava
Greece: Algosystems, Demo Athens, AuTh Thessaloniki
Cyprus: UCY Nikosia
Austria: U. Linz


Main Objectives

– A new category of Grid-enabled applications:
  • computing- and data-intensive
  • distributed
  • near-real-time response (a person in the loop)
  • layered
– New programming tools
– A Grid that is more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards


Layered Structure of X#

Interactive and Data-Intensive Applications (WP1):
– interactive simulation and visualization of a biomedical system
– flooding crisis team support
– distributed data analysis in HEP
– weather forecast and air pollution modeling

Grid Application Programming Environment (WP2):
– MPI code debugging and verification
– metrics and benchmarks
– interactive and semiautomatic performance evaluation tools

Application-specific services: Grid Visualization Kernel, data mining, HLA

New CrossGrid Services (WP3): portals and roaming access, Grid resource management, Grid monitoring, optimization of data access

Globus middleware; services from DataGrid, GriPhyN, …

Fabric infrastructure (testbed, WP4)


Biomedical Application

– Input: a 3-D model of the arteries

– Simulation: lattice-Boltzmann (LB) simulation of blood flow

– Results: presented in a virtual-reality environment

– User: analyses the results in near real time, interacts, and changes the structure of the arteries


Interaction in Biomedical Application


Biomedical Application Use Case

CT / MRI scan → medical DB → segmentation → medical DB → LB flow simulation → virtual environment (VE): visualisation and interaction


Asynchronous Execution of Biomedical Application


Current Architecture of the Biomedical Application


Modules of the Biomedical Application

– Medical scanners: data acquisition system
– Segmentation software, to obtain 3-D images
– Database with medical images and metadata
– Blood-flow simulator with interaction capability
– History database
– Visualization for several interactive 3-D platforms
– Interactive measurement module
– Interaction module
– User interface coupling visualization, simulation, and steering
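
How these modules are coupled can be pictured with a short sketch (below, in Python). It is a toy illustration only: the real CrossGrid modules are distributed Grid services, and every function and name here is invented. The point is the interaction loop, where the user changes the artery structure while the simulation keeps running.

```python
# Toy sketch of the interactive biomedical pipeline (hypothetical names).
# The simulation and the user run concurrently: edits to the artery
# geometry are picked up between lattice-Boltzmann steps.
import queue
import threading

interactions = queue.Queue()          # interaction module: user events

def visualize(frame):                 # stand-in for the 3-D visualization
    print("render step", frame["step"], "on", frame["geometry"])

def lb_flow_simulation(geometry, steps=5):
    """Stand-in for the lattice-Boltzmann blood-flow solver."""
    for step in range(steps):
        try:                          # apply a pending geometry change
            geometry = interactions.get_nowait()
        except queue.Empty:
            pass
        visualize({"step": step, "geometry": geometry})

sim = threading.Thread(target=lb_flow_simulation, args=("arteries-v1",))
sim.start()
interactions.put("arteries-v2")       # the user restructures the arteries
sim.join()
```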


Flooding Crisis Team Support

Data sources:
– storage systems, databases
– surface automatic meteorological and hydrological stations
– systems for acquisition and processing of satellite information
– meteorological radars
– external sources of information: global and regional centres (GTS), EUMETSAT and NOAA, hydrological services of other countries

Models (on high-performance computers and the Grid infrastructure):
– meteorological models
– hydrological models
– hydraulic models

Users:
– flood crisis teams: meteorologists, hydrologists, hydraulic engineers
– river authorities, energy, insurance companies, navigation
– media, public


Flood Simulation Cascade

Data sources → meteorological simulation → hydrological simulation → hydraulic simulation → portal


Basic Characteristics of Flood Simulation

– Meteorological:
  • intensive simulation (1.5 h per run), possibly HPC
  • large input/output data sets (50-150 MB per event)
  • high availability of resources (24/365)
– Hydrological:
  • parametric simulations (HTC)
  • each sub-catchment may require a different model (heterogeneous simulation)
– Hydraulic:
  • many 1-D simulations (HTC)
  • 2-D hydraulic simulations need HPC
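
The HPC/HTC mix of the cascade can be sketched as follows; the stage functions and the per-sub-catchment model table are placeholders, not the project’s actual codes.

```python
# Illustrative flood cascade: one HPC meteorological run feeds many
# independent HTC hydrological jobs (one model per sub-catchment),
# whose outputs drive the hydraulic stage. All functions are toys.
from concurrent.futures import ThreadPoolExecutor

def meteo_forecast(event):                  # HPC: ~1.5 h, 50-150 MB output
    return f"rainfall fields for {event}"

MODELS = {"upper": "model-A", "middle": "model-B", "lower": "model-C"}

def hydro_run(catchment, model, rainfall):  # HTC: independent parametric job
    return f"discharge({catchment}, {model})"

def hydraulic_run(discharges):              # many 1-D runs; 2-D needs HPC
    return f"water levels from {len(discharges)} discharge series"

rain = meteo_forecast("storm-2002-05")
with ThreadPoolExecutor() as pool:          # fan out over sub-catchments
    futures = [pool.submit(hydro_run, c, m, rain) for c, m in MODELS.items()]
    discharges = [f.result() for f in futures]
print(hydraulic_run(discharges))
```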


Distributed Data Analysis in HEP

Complementarity with the DataGrid HEP application package:

• CrossGrid will develop an interactive final-user application for physics analysis; it will use the products of the non-interactive simulation and data-processing stages of DataGrid that precede it.
• Beyond the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases. Two possible implementations will be tested in running experiments:
  – a three-tier model accessing an OODBMS or O/R DBMS
  – a more HEP-specific solution such as ROOT
• User friendly thanks to dedicated portal tools


Distributed Data Analysis in HEP (continued)

Several challenging points:
– access to large distributed databases in the Grid
– development of distributed data-mining techniques
– definition of a layered application structure
– integration of user-friendly interactive access

Focus on the LHC experiments (ALICE, ATLAS, CMS and LHCb).
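
The contrast between the two service levels can be sketched like this. Both classes and their methods are hypothetical; the sketch only shows why shipping selected objects beats shipping whole files for interactive analysis.

```python
# File-level vs object-level access (all interfaces invented).
class FileLevelService:
    """DataGrid-style: the unit of transfer is the whole file."""
    def fetch(self, logical_file_name):
        return f"<entire contents of {logical_file_name}>"

class ObjectLevelService:
    """CrossGrid-style middle tier: only the selected event objects
    are extracted from the (possibly remote) store and shipped."""
    def __init__(self, store):
        self.store = store                # OODBMS, O/R DBMS, or ROOT files
    def query(self, selection):
        return [e for e in self.store if selection(e)]

events = [{"id": i, "pt": 0.01 * i} for i in range(100_000)]
print(FileLevelService().fetch("lfn:run42.events"))   # moves everything
rare = ObjectLevelService(events).query(lambda e: e["pt"] > 999.0)
print(len(rare), "events shipped to the interactive user")  # ~100 objects
```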

Weather Forecast and Air Pollution Modeling

– Distributed/parallel codes on the Grid:
  • COAMPS, the Coupled Ocean/Atmosphere Mesoscale Prediction System
  • the STEM-II air pollution code
– Integration of distributed databases
– Data mining applied to downscaling weather forecasts


COAMPS – Coupled Ocean/Atmosphere Mesoscale Prediction System: Atmospheric Components

– Complex data quality control
– Analysis:
  • multivariate optimum interpolation (MVOI) analysis of winds and heights
  • univariate analyses of temperature and moisture
  • OI analysis of sea-surface temperature
– Initialization:
  • variational hydrostatic constraint on analysis increments
  • digital filter
– Atmospheric model:
  • numerics: nonhydrostatic, scheme C, nested grids, sigma-z, flexible lateral BCs
  • physics: PBL, convection, explicit moist physics, radiation, surface layer
– Features:
  • globally relocatable (5 map projections)
  • user-defined grid resolutions, dimensions, and number of nested grids
  • 6- or 12-hour incremental data assimilation cycle
  • usable for idealized or real-time applications
  • single configuration-managed system for all applications
  • operational at FNMOC: 7 areas, twice daily, on 81/27/9 km or 81/27 km grids; forecasts to 72 hours
  • operational at all Navy regional centers (with GUI interface)


Air Pollution Model – STEM-II

– Species: 56 chemical (16 long-lived, 40 short-lived), 28 radicals (e.g. OH, HO2)
– Chemical mechanisms:
  • 176 gas-phase reactions
  • 31 aqueous-phase reactions
  • 12 aqueous-phase solution equilibria
– The equations are integrated with a locally one-dimensional finite element method (LOD-FEM)
– Transport equations are solved with a Petrov-Crank-Nicolson-Galerkin FEM
– Chemistry and mass-transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods
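
In generic form (the precise STEM-II formulation may differ in detail), each species concentration $c_i$ obeys a transport-chemistry equation that is split between the two solvers above:

```latex
\frac{\partial c_i}{\partial t} + \nabla \cdot (\mathbf{u}\, c_i)
  = \nabla \cdot (K \nabla c_i) + R_i(c_1, \dots, c_n) + E_i
```

Here $\mathbf{u}$ is the wind field, $K$ the eddy-diffusivity tensor, $R_i$ the chemical production and loss terms, and $E_i$ the emissions; the advection-diffusion part goes to the Petrov-Crank-Nicolson-Galerkin FEM, while the stiff chemistry and mass-transfer part goes to the semi-implicit Euler / pseudo-analytic step.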


Key Features of X# Applications

– Data:
  • data generators and databases geographically distributed
  • selected on demand
– Processing:
  • needs large processing capacity, both HPC and HTC
  • interactive
– Presentation:
  • complex data require versatile 3-D visualisation
  • support for interaction and feedback to other components


Problems to be Solved

– How to build an interactive Grid environment? (Globus is more batch-oriented than interactive-oriented; performance is an issue)

– How to build on Globus and DataGrid software, and how to define the interfaces?


User Interaction Services

[Diagram: the User Interaction Services cooperate with the Resource Broker, Scheduler (3.2), Condor-G, Nimrod, GIS / MDS (Globus), and Grid Monitoring (3.3)]

• advance reservation
• start an interactive application
• steer the simulation: cancel, restart
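
A client-side view of these three operations might look as follows. The class and method names are hypothetical; a real implementation would go through the Resource Broker, Scheduler, and Globus services shown in the diagram.

```python
# Hypothetical client API for the User Interaction Services.
class UserInteractionService:
    def __init__(self):
        self.jobs = {}
    def advance_reserve(self, resources, start_time):   # advance reservation
        print("reserved", resources, "from", start_time)
        return "reservation-42"
    def start_interactive(self, reservation, application):
        self.jobs[reservation] = application
        print("started", application, "under", reservation)
    def steer(self, reservation, command):              # cancel / restart
        print(command, "->", self.jobs[reservation])

uis = UserInteractionService()
res = uis.advance_reserve({"cpus": 32}, "2002-05-30T10:00")
uis.start_interactive(res, "blood-flow-simulation")
uis.steer(res, "restart")
```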


Roaming Access

[Diagram: Applications and Portals (3.1) reach the Grid through the Roaming Access Server (3.1), which talks to the Scheduler (3.2), GIS / MDS (Globus), and Grid Monitoring (3.3)]

• Roaming Access Server: user profiles, authentication, authorization, job submission
• Migrating Desktop
• Application portal
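
A minimal sketch of the roaming idea, assuming an invented server API: the profile and job state live on the server, so a user logging in from a different terminal sees the same environment.

```python
# Toy Roaming Access Server: profiles and jobs are kept server-side.
class RoamingAccessServer:
    def __init__(self):
        self.profiles = {"alice": {"proxy": "x509-credentials", "jobs": []}}
    def login(self, user, credentials):
        # real authentication/authorization would use GSI certificates
        return self.profiles[user]
    def submit(self, user, job):                 # job submission
        self.profiles[user]["jobs"].append(job)

server = RoamingAccessServer()
server.login("alice", "passphrase-from-desktop")
server.submit("alice", "flood-forecast")
# later, from another terminal: same profile, same job list
print(server.login("alice", "passphrase-from-laptop")["jobs"])
```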


Grid Monitoring

– OMIS-based application monitoring system

– Jiro-based service for monitoring of the Grid infrastructure

– Additional service for non-invasive monitoring


Monitoring of Grid Applications

– To monitor = to obtain information on or manipulate the target application (e.g. read the status of the application’s processes, suspend the application, read / write memory, etc.)
– A monitoring module is needed by tools:
  • debuggers
  • performance analyzers
  • visualizers
  • ...


OMIS Approach to Grid Monitoring

– Application-oriented:
  • on-line
  • collected data is delivered to tools immediately
  • normally no storing for later processing
– Data collection based on run-time instrumentation:
  • enables dynamic selection of the data to be collected
  • reduced monitoring overhead
– Standardized interface between tools and the monitoring system: OMIS
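
The tool/monitor split can be illustrated with a toy request interface. The request strings below are deliberately simplified; they are not the actual OMIS syntax.

```python
# Schematic tool <-> monitor exchange in the spirit of OMIS: the tool
# sends string requests, the monitor executes them on the application.
class Monitor:
    def __init__(self, processes):
        self.processes = processes              # pid -> status
    def request(self, req):
        service, _, arg = req.partition(":")
        if service == "proc_get_status":        # information service
            return self.processes.get(arg, "unknown")
        if service == "proc_suspend":           # manipulation service
            self.processes[arg] = "suspended"
            return "ok"
        return "unsupported"

mon = Monitor({"p_1": "running"})
print(mon.request("proc_get_status:p_1"))       # -> running
print(mon.request("proc_suspend:p_1"))          # -> ok
```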


Monitoring as an Autonomous System

• Separate monitoring system

• Tool / Monitor interface – OMIS


Grid-enabled OMIS-compliant Monitoring System – OCM-G

– Scalable:
  • distributed
  • decentralized
– Efficient:
  • local buffers
– Three types of components:
  • local monitors (LM)
  • service managers (SM)
  • application monitors (AM)


Service Managers and Local Monitors

– Service Managers:
  • one or more in the system
  • request distribution
  • reply collection
– Local Monitors:
  • one per node
  • handle local objects
  • actual execution of requests


Application Monitors

– Embedded in the application processes
– Handle some actions locally:
  • buffering data
  • filtering of instrumentation
  • monitoring requests
– Example: REQ: read variable a → REP: value of a
  • asynchronous
  • no OS mechanisms involved
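
A toy rendering of the three-component hierarchy (all classes invented for illustration): the service manager fans a request out to the per-node local monitors, which consult the application monitors embedded in the processes, and replies are collected on the way back.

```python
# Request distribution and reply collection across SM -> LM -> AM.
class ApplicationMonitor:                  # embedded in one process
    def __init__(self, variables):
        self.variables = variables
    def read(self, name):                  # REQ: read variable a
        return self.variables.get(name)    # REP: value of a

class LocalMonitor:                        # one per node
    def __init__(self, ams):
        self.ams = ams
    def execute(self, name):
        return [am.read(name) for am in self.ams]

class ServiceManager:                      # one or more per system
    def __init__(self, lms):
        self.lms = lms
    def read_everywhere(self, name):       # distribute, then collect
        return sum((lm.execute(name) for lm in self.lms), [])

node1 = LocalMonitor([ApplicationMonitor({"a": 1})])
node2 = LocalMonitor([ApplicationMonitor({"a": 2})])
print(ServiceManager([node1, node2]).read_everywhere("a"))   # [1, 2]
```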


Optimization of Grid Data Access

– Different storage systems and different application requirements
– Optimization by selection of data handlers
– The service consists of:
  • a component-expert system
  • a data-access estimator
  • a GridFTP plugin
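
Handler selection by the component-expert system can be pictured as a rule table; the rules and handler names below are invented for illustration.

```python
# Pick the data handler that matches the storage type and access pattern.
RULES = [
    {"storage": "tape", "pattern": "sequential", "handler": "staged-bulk"},
    {"storage": "disk", "pattern": "random",     "handler": "direct-io"},
    {"storage": "disk", "pattern": "sequential", "handler": "gridftp-stream"},
]

def select_handler(storage, pattern):
    for rule in RULES:                       # first matching rule wins
        if rule["storage"] == storage and rule["pattern"] == pattern:
            return rule["handler"]
    return "default-handler"

print(select_handler("tape", "sequential"))  # -> staged-bulk
```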


Optimization of Grid Data Access

[Diagram: Applications and Portals (3.1) call the Optimization of Grid Data Access service (3.4), which cooperates with the Scheduling Agents (3.2), the Replica Manager (DataGrid / Globus), Grid Monitoring (3.3), and GridFTP]


Modules of Tool Environment

[Diagram: the G-PM tool environment. Applications (WP1) executing on the Grid testbed are observed by Grid Monitoring (task 3.3), which delivers raw monitoring data (RMD) to the G-PM performance measurement component. Together with benchmark results (task 2.3) and manually supplied information from the application source code, the high-level analysis and performance prediction components turn RMD into performance measurement data (PMD) for the user interface and visualization component.]
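
The RMD-to-PMD step can be illustrated with a toy metric. The event format is invented; the point is deriving a user-level measurement from raw, time-stamped monitoring events.

```python
# Turn raw monitoring data (time-stamped events) into a performance
# measurement (time spent inside MPI send calls). Format is invented.
events = [
    {"proc": "p1", "kind": "mpi_send_start", "t": 0.00},
    {"proc": "p1", "kind": "mpi_send_end",   "t": 0.25},
    {"proc": "p1", "kind": "mpi_send_start", "t": 1.00},
    {"proc": "p1", "kind": "mpi_send_end",   "t": 1.10},
]

def time_in_mpi_send(events):
    total, start = 0.0, None
    for e in events:
        if e["kind"] == "mpi_send_start":
            start = e["t"]
        elif e["kind"] == "mpi_send_end" and start is not None:
            total += e["t"] - start
            start = None
    return round(total, 6)

print("time in MPI send:", time_in_mpi_send(events), "s")   # 0.35 s
```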


Tools for Application Development

[Diagram: Applications and Portals (3.1) use the G-PM performance measurement tools (2.4), MPI debugging and verification (2.2), and metrics and benchmarks (2.3); all rely on Grid Monitoring (3.3) (OCM-G, R-GMA)]


Building Blocks of the CrossGrid

[Diagram: nested building blocks]

– To be developed in X# (CrossGrid)
– From DataGrid
– Globus Toolkit
– Other (external)


Overview of the CrossGrid Architecture

[Diagram: the CrossGrid architecture, layer by layer]

Applications:
– 1.1 BioMed
– 1.2 Flooding
– 1.4 Meteo Pollution

Applications development support (supporting tools):
– 2.2 MPI Verification
– 2.3 Metrics and Benchmarks
– 2.4 Performance Analysis

Application-specific services:
– 1.1 Grid Visualisation Kernel
– 1.1 User Interaction Services
– 1.3 Data Mining on Grid (NN)
– 1.3 Interactive Distributed Data Access
– 1.3 Interactive Session Services
– 1.1, 1.2 HLA and others

Generic services:
– 3.1 Portal & Migrating Desktop
– 3.1 Roaming Access
– 3.2 Scheduling Agents
– 3.3 Grid Monitoring
– 3.4 Optimization of Grid Data Access
– MPICH-G
– DataGrid Replica Manager, DataGrid Job Submission Service, Globus Replica Manager
– Globus: GRAM, GSI, Replica Catalog, GIS / MDS, GridFTP, Globus-IO

Fabric:
– Resource Manager (CE): CPUs
– Resource Manager (SE): secondary storage
– Resource Manager: instruments (satellites, radars)
– 3.4 Optimization of Local Data Access, tertiary storage, Replica Catalog


Components for Biomedical Application

[The same architecture diagram, with the components used by the biomedical application highlighted.]


Components for Flooding Crisis Team Support

[The same architecture diagram, with the components used by the flooding crisis team support application highlighted.]


Components for Distributed Data Analysis in HEP

[The same architecture diagram, with the components used by distributed data analysis in HEP highlighted.]


Components for Weather Forecasting/Pollution Modeling

[The same architecture diagram, with the components used by weather forecasting and air pollution modelling highlighted.]


Rules for X# SW Development

– Iterative improvement: development, testing on the testbed, evaluation, improvement
– Modularity
– Open-source approach
– Well-documented software
– Collaboration with other Grid projects


Project Phases

M 1-3: requirements definition and merging

M 4-12: first development phase: design, first prototypes, refinement of requirements

M 13-24: second development phase: integration of components, second prototypes

M 25-32: third development phase: complete integration, final code versions

M 33-36: final phase: demonstration and documentation


Tasks

WP1 – CrossGrid Application Development

1.0 Co-ordination and management (Peter M.A. Sloot, UvA)

1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)

1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)

1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)

1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)


Tasks

WP2 – Grid Application Programming Environments

2.0 Co-ordination and management (Holger Marten, FZK)

2.1 Tools requirement definition (Roland Wismueller, TUM)

2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)

2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)

2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)

2.5 Integration, testing and refinement (Roland Wismueller, TUM)


Tasks

WP3 – New Grid Services and Tools

3.0 Co-ordination and management (Norbert Meyer, PSNC)

3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)

3.2 Grid resource management (Miquel A. Senar, UAB)

3.3 Grid monitoring (Brian Coghlan, TCD)

3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)

3.5 Tests and integration (Santiago Gonzalez, CSIC)


Partners in WP4

WP4 – International Testbed Organization, led by CSIC (Spain)

Testbed sites:
– CYFRONET Cracow
– ICM & IPJ Warsaw
– PSNC Poznan
– II SAS Bratislava
– UvA Amsterdam
– FZK Karlsruhe
– TCD Dublin
– UAB Barcelona
– LIP Lisbon
– CSIC Santander, CSIC Valencia, CSIC Madrid
– USC Santiago
– DEMO Athens
– AuTh Thessaloniki
– UCY Nikosia


Tasks

WP4 – International Testbed Organization

4.0 Coordination and management (Jesus Marco, CSIC, Santander):
– coordination with WP1, WP2, WP3
– collaborative tools
– integration team

4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander):
– define the installation
– deploy testbed releases
– trace security issues

Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins


Tasks

WP4 – International Testbed Organization (continued)

4.2 Integration with DataGrid (Marcel Kunze, FZK):
– coordination of testbed setup
– knowledge exchange
– participation in WP meetings

4.3 Infrastructure support (Josep Salt, CSIC, Valencia):
– fabric management
– HelpDesk
– installation kit
– network support

4.4 Verification & quality control (Jorge Gomes, LIP):
– feedback
– improving the stability of the testbed


Tasks

WP5 – Project Management

5.1 Project coordination and administration (Michal Turala, INP)

5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)

5.3 Central dissemination (Yannis Perros, ALGO)


Architecture Team - Activity

– Merging of requirements from WP1, WP2, WP3
– Specification of the X# architecture (i.e. new protocols, services, SDKs, APIs)
– Establishing standard operational procedures
– Specification of the structure of deliverables
– Improvement of the X# architecture according to experience from software development and testbed operation


Person-months

WP    Title                                       PM funded   PM total
WP1   CrossGrid Applications Development                365        537
WP2   Grid Application Programming Environment         156        233
WP3   New Grid Services and Tools                      258        421
WP4   International Testbed Organization               435        567
WP5   Project Management                               102        168
      Total                                           1316       1926


Collaboration with Other Grid Projects

– Objective: exchange of information and software components
– Partners: DataGrid, DataTag, GridLab, EUROGRID and GRIP
– GRIDSTART
– Participation in GGF


X# - EDG: Grid Architecture

Similar layered structure, similar functionality of components

– Interoperability of Grids

– Reuse of Grid components

– Joint proposals to GGF

– Participation of chairmen of EDG ATF and X# AT in meetings and other activities


X# - EDG: Applications

– Interactive applications:
  • methodology
  • generic structure
  • Grid services
  • data security for medical applications
– HEP applications:
  • X# will extend the functionality of EDG software


X# - EDG: Testbed

– Goal: Interoperability of EDG and X# testbeds

– Joint Grid infrastructure for HEP applications

– X# members from Spain, Germany and Portugal are already taking part in the EDG testbed

– Collaboration of testbed support teams

– Mutual recognition of Certification Authorities

– Elaboration of common access/usage policy and procedures

– Common installation/configuration procedures


Summary

– Layered structure of all the X# applications
– Reuse of software from DataGrid and other Grid projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of software:
  • 12 partners in applications
  • 14 partners in middleware
  • 15 partners in testbeds


Thanks to

– Michal Turala

– Peter M.A. Sloot

– Roland Wismueller

– Wlodek Funika

– Marek Garbacz

– Ladislav Hluchy

– Bartosz Balis

– Jacek Kitowski

– Norbert Meyer

– Jesus Marco


Thanks to CESGA for the invitation to HPC’2002.

For more about the X# Project see

www.eu-crossgrid.org