
Page 1

e-Science Technology/Middleware (Grid, Cyberinfrastructure)

Gap Analysis

e-Science Town Meeting, Strand Palace Hotel, May 14 2003
Geoffrey Fox, Indiana University
David Walker, Cardiff University

Note: for this report the terms e-Science Technology/Middleware, Grid, and Cyberinfrastructure are NOT distinguished.

Page 2

Features of Study

• Draft report distributed to TAG April 28 2003
  – A: Summary
  – B: Technology/Project/Worldwide Service Context
  – C: Gaps by Category
  – D: Appendix of UK activities of relevance
  – E: Action Plan for OMII
• Interviewed 80 people (reasonably complete within the UK)
• Extracted and categorized over 120 comments (gaps)
• Developed an action plan that could be used to guide the Core e-Science effort (UK OMII, Open Middleware Infrastructure Initiative) to produce robust, usable e-Science (Grid) infrastructure by 2006
• Interview part of the project ran from mid-February to early April
  – Currently adding TAG comments and completing the worldwide Service context (largely literature/web-based, not interviews)
  – Integrating UK and Worldwide Service studies with uniform terminology/classification

“85% finished”

Page 3

Features of Gap Analysis

• Examined requirements and services already understood/developed for e-Science (reasonably broad coverage) and for e-Business, e-Government and e-Services (inevitably rather spotty coverage)
• Gaps divided into four broad areas
  – Near-term Technical
  – Education and Support
  – Research (not well separated from Near-term Technical)
  – Perception and Organization
• Appendix listed over 60 significant UK services (perhaps clustered together) and tools, in the context of a total of some 150 worldwide Grid services

Page 4

[Figure: Categorization of Technical Gaps and Grid Services. Architecture and Style (8.1) and the Runtime and Hosting Environment (8.2) form the basic technology layer. Above them sit the Grid Services: Security (8.3), Workflow (8.4), Notification (8.5), Meta-data (8.6), Information (8.7), Compute/File (8.8) and Other (8.9), spanning generic, resource-specific and application-specific services over information and compute resources. Portals/PSEs (8.10) sit on top, with the Network (8.11) underneath.]

Page 5

Taxonomy of Grid Functionalities

• Compute/File Grid: run multiple jobs with distributed compute and data resources (a global “UNIX shell”)
• Desktop Grid: “Internet Computing” and “cycle scavenging” with a secure sandbox on large numbers of untrusted computers
• Information Grid: Grid service access to distributed information, data and knowledge repositories
• Complexity or Hybrid Grid: hybrid combination of Information and Compute/File Grid, emphasizing integration of experimental data, filters and simulations
• Campus Grid: Grid supporting University community computing
• Enterprise Grid: Grid supporting a company’s enterprise infrastructure

Note: the term “Data Grid” is not used consistently in the community, so it is avoided here.

Page 6

[Figure: Complexity Grid Computing Model. Distributed filters massage data from multiple sources for an HPC simulation; OGSA-DAI Grid services, other Grid and Web services, analysis/control and visualization are all linked through the Grid. This type of Grid integrates with parallel computing, e.g. HPC(x).]

Page 7

Taxonomy of Grid Operational Style

• Semantic Grid: integration of Grid and Semantic Web meta-data and ontology technologies
• Peer-to-peer Grid: Grid built with peer-to-peer mechanisms
• Lightweight Grid: Grid designed for rapid deployment and minimum life-cycle support costs
• Collaboration Grid: Grid supporting collaborative tools like the Access Grid, whiteboards and shared applications
• R3 or Autonomic Grid: fault-tolerant and self-healing Grid (R3 = Robust, Reliable, Resilient)

Page 8

“Central” Architecture/Functionality/Style Gaps

• Substantial comments on “hosting environments”, OGSI and the “permeating principles”
  – Agreement on Web service model

[Figure: Layered architecture. The “Central Services and Architecture”, where the Central Gaps lie, comprises 1: Hosting Environment (Web services), 2: OGSI Web service Enhancements, and 3: Permeating Principles and Policies. Above these, 4: Key OGSA Services, 5: OGSA-compliant System Grid Services, and 6: Domain-Specific (Application) Grid Services are “modular” services natural for distributed teams, and hold the Specific Gaps.]

Page 9

[Figure: An OGSA Grid Architecture in detail (from GGF GPA).]

Page 10

Permeating Principles and Policies

• Meta-data rich, Message-linked Web Services as the permeating paradigm
• “User” Component Model such as Enterprise JavaBeans (EJB) or .NET
• Service Management framework, including a possible Factory mechanism
• High-level Invocation Framework describing how you interact with system components
  – This could for example be used to allow the system to be built from either W3C or GGF style (OGSI) Web Services, and to protect the user from changes in their specifications

• Security is a service, but the need for fine-grain selective authorization encourages a Policy context that sets the rules for each particular Grid
  – Currently OGSA supports policies for routing, security and resource use

• The Grid Fabric, or set of resources, needs mechanisms to manage it, including automatic recording of meta-data and configuration of software
• Quality of service (QoS) for the Network, which implies performance monitoring and bandwidth reservation services
  – Challenging, as end-to-end and not just backbone QoS is needed
• Messaging systems like MQSeries from IBM provide robustness through asynchronous delivery, can abstract the destination, and allow customization of content such as converting between different interface specifications (a sketch of this idea follows below)
• Messaging is built on transport mechanisms, which can be used to implement QoS and to virtualize ports
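The two messaging bullets above describe a mechanism rather than a product. Here is a minimal sketch of the idea in Java against the JMS API that products like MQSeries implement; the JNDI names (“jms/GridConnectionFactory”, “jms/GridRequests”) and the job payload are hypothetical, not from the report.

```java
// Minimal JMS sketch: the producer hands a request to a logical queue and
// returns immediately; the listener processes it whenever the consumer is
// available. Decoupling sender and receiver in time is the robustness point.
import javax.jms.*;
import javax.naming.InitialContext;

public class AsyncGridMessaging {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) jndi.lookup("jms/GridConnectionFactory"); // hypothetical name
        Queue requests = (Queue) jndi.lookup("jms/GridRequests");             // hypothetical name
        Connection connection = factory.createConnection();

        // Consumer side: asynchronous delivery via a message listener.
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = consumerSession.createConsumer(requests);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("processing: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();

        // Producer side: the sender names only a logical destination; the
        // broker hides where, and when, the message is actually consumed.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = producerSession.createProducer(requests);
        producer.send(producerSession.createTextMessage("run job: hypothetical-job-42"));
    }
}
```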

Page 11

World Wide Grid Service Activities I

• This was implicit in the original report for TAG and is now being made explicit, based on interviews plus a survey of major worldwide activities
• Commercial activities, especially those of IBM, Avaki, Platform, Sun, Entropia and United Devices
• The GT2 and GT3 Globus Toolkits. Here we are effectively covering not just the Globus team but the major projects, such as the NASA Information Power Grid, that have blazed the trail of “productizing” Grids
  – Note that we can “already” see GT3 (Grid Service) like functionality from GT2 wrapped with the various (Java, Perl, Python, CORBA) CoG kits, so GT2 capabilities can be classified as Services
• Trillium (GriPhyN, iVDGL and PPDG) and NEESgrid: the major NSF (DoE for PPDG) projects in the USA
  – Condor from the University of Wisconsin, which is being integrated into Grid services through the Trillium and NMI activities
• The NSF Middleware Initiative (NMI), packaging a suite of Globus, Condor and Internet2 software
  – This has overlaps with the VDT (Virtual Data Toolkit, from GriPhyN)

Page 12

World Wide Grid Service Activities II

• Unicore (GRIP), GridLab, the European Data Grid (EDG) and LCG (LHC Computing Grid)
  – There are many other (some 20) EU projects, but these have most of the technology development
• Storage Resource Broker (SRB-MCAT) from SDSC
• The DoE Science Grid and related activities such as the Common Component Architecture (CCA) project
• Examination of services from a collection of portal projects in the US from Argonne, Indiana, Michigan, NCSA and Texas
  – This includes best-practice discussion on portals from the Global Grid Forum
• Review of contributions to the recent book Grid Computing: Making the Global Infrastructure a Reality, edited by Fran Berman, Geoffrey Fox and Tony Hey, John Wiley & Sons, Chichester, England, ISBN 0-470-85319-0, March 2003
  – This includes other major projects like Cactus, NetSolve and Ninf
• Some 6 Core and other application-specific UK e-Science projects

Page 13

Categories of Worldwide Grid Services

• Types of Grid
  – R3
  – Lightweight
  – P2P
  – Federation and Interoperability
• Core Infrastructure and Hosting Environment
  – Service Management
  – Component Model
  – Service wrapper/Invocation
  – Messaging
• Security Services
  – Certificate Authority
  – Authentication
  – Authorization
  – Policy
• Workflow Services and Programming Model
  – Composition/Development
  – Languages and Programming
  – Compiler
  – Enactment Engines (Runtime)
• Notification Services
• Metadata and Information Services
  – Basic, including Registry
  – Semantically rich Services and meta-data
  – Information Aggregation (events)
  – Provenance
• Information Grid Services
  – OGSA-DAI/DAIT
  – Integration with compute resources
  – P2P and database models
• Compute/File Grid Services
  – Job Submission
  – Job Planning, Scheduling, Management
  – Access to Remote Files, Storage and Computers
  – Replica (cache) Management
  – Virtual Data
  – Parallel Computing
• Other services, including
  – Grid Shell
  – Accounting
  – Fabric Management
  – Visualization, Data-mining and Computational Steering
  – Collaboration
• Portals and Problem Solving Environments
• Network Services
  – Performance
  – Reservation
  – Operations

Page 14

Features of Worldwide Grid Services

• UK activities have a strong Web service and Information Grid emphasis
  – Important compute/file activities as well (White Rose, RealityGrid, the UK part of EDG, etc.)
• Non-UK activities are dominantly focused on compute/file Grids
  – Submit jobs in distributed UNIX shell (Gridshell) fashion
  – Gather data from instruments (accelerator, satellite, medical device); process in batch mode, mapping between filesets
• Little emphasis on lightweight or R3 Grids, but NSF in the USA and EDG have aimed at better support and software quality
  – EDG has a useful “tension” between technology-focused and application-focused working groups
  – NMI and even GT3 have changed packaging and added a service view, but have not changed the “underlying” architecture for robustness
• Coordinated set of Portal activities in the USA
• Little work on integrating parallel computing and the Grid, although TeraGrid in the USA could change this
• Gaps are omissions/deficiencies in UK or worldwide Grid services of importance to UK e-Science

Page 15

Central Gaps: Gaps in Grid Styles and Execution Environment

• Need identified for both robust (fault-tolerant) and lightweight (suitable for small groups) Grid styles
  – The peer-to-peer style supports smaller, decentralized virtual organizations
• Noted opportunities for modern middleware ideas to be used: lightweight, message-based
• Noted that Enterprise JavaBeans are not optimized for Science, which has high-volume dataflow
• A Federated Grid Architecture is natural for integration of heterogeneous functionality, style and security
• Bioinformatics and other fields require integration of Information and Compute/File Grids

Page 16

[Figure: Overlapping Heterogeneous Dynamic Grid Islands. Information, Enterprise, Compute and Campus Grids overlap and share resources (R1, R2); a teacher and students form a dynamic, lightweight, peer-to-peer Collaboration/Training Grid spanning the islands.]

Page 17

[Figure: (a) A Layered OGSA Grid, in which application services sit above core services behind a single OGSA Interface. (b) A Federated OGSA Grid, in which Grid-1 and Grid-2 each expose their own (OGSA or non-OGSA) interface over their core and application services, joined by an OGSA Mediation layer.]

Page 18

Many Gaps in Generic Services

• Some gaps, like Workflow and Notification, are about making production versions of current projects
  – The Appendix shows workflow from DAME, DiscoveryNet, EDG, Geodise, ICENI, myGrid, Unicore plus Cardiff, NEReSC ...
• R-GMA and the Semantic Grid offer improved meta-data and Information services compared to UDDI and MDS (Globus)
  – Need a comprehensive federated Information service
• Security requires an architecture supporting dynamic fine-grain authorization (a sketch follows this list)
• UK e-Science has pioneered Information Grids, but the gap is continuation of OGSA-DAI, integration with other services, and P2P decentralized models
• Functionality of Compute/File Grids is quite advanced, but services are probably not robust enough for LCG or Campus Grids
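To make “dynamic fine-grain authorization” concrete, here is a small hypothetical sketch; the PolicyDecisionPoint class and its grant/revoke operations are invented for illustration and are not taken from the report or any Grid toolkit.

```java
// Hypothetical sketch: decisions are made per subject/action/resource, and
// the rule set can change at runtime, e.g. when a virtual organisation adds
// or removes a member, without restarting the service.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PolicyDecisionPoint {
    // subject -> set of "action:resource" grants; concurrent so policies can
    // be updated while requests are being authorized.
    private final Map<String, Set<String>> grants = new ConcurrentHashMap<>();

    public void grant(String subject, String action, String resource) {
        grants.computeIfAbsent(subject, s -> ConcurrentHashMap.newKeySet())
              .add(action + ":" + resource);
    }

    public void revoke(String subject, String action, String resource) {
        Set<String> g = grants.get(subject);
        if (g != null) g.remove(action + ":" + resource);
    }

    /** Fine-grain check: a specific action on a specific resource. */
    public boolean isAuthorized(String subject, String action, String resource) {
        Set<String> g = grants.get(subject);
        return g != null && g.contains(action + ":" + resource);
    }

    public static void main(String[] args) {
        PolicyDecisionPoint pdp = new PolicyDecisionPoint();
        pdp.grant("cn=alice", "read", "dataset/proteins");   // illustrative names
        System.out.println(pdp.isAuthorized("cn=alice", "read", "dataset/proteins"));  // true
        System.out.println(pdp.isAuthorized("cn=alice", "write", "dataset/proteins")); // false
    }
}
```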

Page 19

Gaps in Other Grid services

• Portals and User Interfaces: noted gap that projects are not using Grid Computing Environment “best practice” of component-based user interfaces matching component-based middleware (a sketch of this idea follows this list)
• Programming Models (using the workflow runtime)
• Fabric Management (should be integrated with central service management and the Information system), Computational Steering, Visualization, Data-mining, Accounting, Gridmake, Debugging, Semantic Grid tools (consistent with the Information system), Collaboration, Provenance
• Application-specific services
• Note: the new production central Infrastructure can support both research and production services of this type
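A hedged sketch of the component-based portal idea in the first bullet: each middleware component pairs with a small user-interface component that the portal aggregates. The GridPortlet interface below is invented for illustration; it is not the Jetspeed or JSR 168 portlet API.

```java
// Each Grid service supplies a portlet-like UI fragment; the portal page
// composes independently developed fragments, mirroring the component-based
// middleware underneath.
import java.util.ArrayList;
import java.util.List;

interface GridPortlet {
    String title();
    String renderHtmlFragment(); // each middleware component supplies its own view
}

class JobSubmissionPortlet implements GridPortlet {
    public String title() { return "Job Submission"; }
    public String renderHtmlFragment() {
        return "<form action='/grid/submit'>...</form>"; // illustrative endpoint
    }
}

class PortalPage {
    private final List<GridPortlet> portlets = new ArrayList<>();
    void add(GridPortlet p) { portlets.add(p); }

    String render() {
        StringBuilder page = new StringBuilder("<html><body>");
        for (GridPortlet p : portlets) {
            page.append("<h2>").append(p.title()).append("</h2>")
                .append(p.renderHtmlFragment());
        }
        return page.append("</body></html>").toString();
    }
}
```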

Page 20

[Figure: NCSA Jetspeed Computing Portal.]

Page 21

Some Non-Technical Gaps (Sections 9 and 11)

• Some confusion as to the “future” of Grid software and how projects should evolve to match the evolution of Globus, OGSA, etc.
• Correspondingly, need special attention to education (training) in rapidly changing technologies
• Need dedicated testbeds and repositories
• Current e-Science projects are typically aimed at “demonstrator” rather than broadly deployable “production” software
  – This was the correct initial strategy, and it supports a new focus for the next phase of core e-Science

[Figure: ACTION PLAN. Three teams: a Technology Repository and Testbed Team, Architecture and Project Coordination, and Distributed Sub-project Teams.]

Page 22

Action Plan (OMII) Structure

• Technology Repository and Testbed Team
  – Compliance testing
  – Track technology status/directions, with training coordination and pro-active alerting
  – Approximately 6 people
• Architecture and Project Coordination
  – Agile Software Engineering and Project Management
  – Central technology architecture and development
  – Work with an Advisory Board, meeting about once per month initially
  – 6-12 “professional” people in 1-2 sites
  – Clear relationship to application requirements
• Distributed Sub-project Teams
  – “Independent” activities as now, but aiming at deployable production software
• Set of focused workshops to refine key services and architecture
  – e.g. service management, messaging, workflow, integration of OGSA-DAI with Compute/File Grids (just a representative set)

Page 23

Central Action Plan Projects

• Develop Grid infrastructure supporting
  – Robust Reliable Resilient (R3) style: Essential
  – Lightweight style: Desirable
  – Peer-to-peer style: Desirable
• Could involve asynchronous messaging, federated security (fine-grain authorization), an “e-ScienceBean”, notification (as part of service management), and invocation frameworks “virtualizing” the service component structure (a sketch of the invocation-framework idea follows this list)
• Integrate network monitoring/reservation/management, including end-to-end network operations
• Support critical policies like security and provenance
• Powerful Service Management (research needed here)
• Need to either federate and/or interoperate a world of “Grid Islands”
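A minimal sketch of how an invocation framework could “virtualize” the service component structure, assuming (hypothetically) two interchangeable bindings: one for plain W3C-style Web services and one for OGSI-style Grid services. All names are illustrative, not from the report.

```java
// Client code invokes through one interface; a pluggable binding decides
// whether the target is a plain Web service or an OGSI Grid service, so
// specification changes do not ripple into user code.
import java.util.Map;

interface ServiceBinding {
    Object invoke(String endpoint, String operation, Map<String, Object> args)
            throws Exception;
}

class PlainWebServiceBinding implements ServiceBinding {
    public Object invoke(String endpoint, String operation, Map<String, Object> args) {
        // ... marshal args to SOAP and post to the endpoint ...
        return "w3c-result"; // placeholder
    }
}

class OgsiGridServiceBinding implements ServiceBinding {
    public Object invoke(String endpoint, String operation, Map<String, Object> args) {
        // ... resolve the Grid service handle, then invoke the port type ...
        return "ogsi-result"; // placeholder
    }
}

class InvocationFramework {
    private final ServiceBinding binding;
    InvocationFramework(ServiceBinding binding) { this.binding = binding; }

    // Callers depend only on this method; swapping the binding (e.g. when a
    // specification changes) requires no change to client code.
    Object call(String endpoint, String operation, Map<String, Object> args)
            throws Exception {
        return binding.invoke(endpoint, operation, args);
    }
}
```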

Page 24

Essential Services in Action Plan (layer 4)

• Workflow runtime supporting transactions and high-volume dataflow (a sketch of the shared-runtime idea follows this list)
  – Different e-Science programming models/languages can use the same runtime and be developed independently
• Federated Distributed Information System
  – From low-level service registration through high-level semantic metadata (separated or integrated)
  – Support of service semantics is the most quoted “gap” (Semantic Grid leadership important)
  – Support P2P, Central (MDS style) and service-based (SDE) metadata
  – Here, as elsewhere, can collaborate with GT3, EDG ...
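A minimal sketch of the shared-runtime idea in the first bullet: different workflow front ends compile to one common task graph that a single enactment engine runs. Task, WorkflowFrontEnd and EnactmentEngine are invented names, and a real runtime would add the transactions and high-volume dataflow the bullet calls for.

```java
// Common runtime representation: a dependency graph of tasks. Each
// programming model supplies its own compiler into this graph, so front
// ends evolve independently of the engine.
import java.util.*;

class Task {
    final String name;
    final List<Task> dependsOn = new ArrayList<>();
    Task(String name) { this.name = name; }
}

/** One implementation per workflow language (BPEL-like, scripting, graphical). */
interface WorkflowFrontEnd {
    List<Task> compile(String workflowSource);
}

class EnactmentEngine {
    // Execute tasks in dependency order; detects cycles instead of spinning.
    void run(List<Task> tasks) {
        Set<Task> done = new HashSet<>();
        boolean progress = true;
        while (done.size() < tasks.size() && progress) {
            progress = false;
            for (Task t : tasks) {
                if (!done.contains(t) && done.containsAll(t.dependsOn)) {
                    System.out.println("running " + t.name);
                    done.add(t);
                    progress = true;
                }
            }
        }
        if (done.size() < tasks.size())
            throw new IllegalStateException("cycle in workflow graph");
    }
}
```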

Page 25

Specific Grid Services (layers 5, 6)

• Core Domain Grid Services cover the critical Services for the major Grid functionalities
  – Information Grid: OGSA-DAIT
  – Compute/File Grid: work with LCG, EDG (follow-on) and Trillium (USA) on robust infrastructure
    • The new central (R3) architecture affects strategy
    • Include Campus Grid support
  – Hybrid Grids (Complexity Grids) integrating computing (filters, transformations), possibly on major parallel computing facilities, with data repository access for Bioinformatics, Environmental (Earth) Science, Virtual Observatories ...
• Other Services as identified in the Gap Analysis, with distributed teams working on different services in concert with the central team for software engineering and OGSA interfaces as appropriate