Page 1

Data Grids for Next Generation Experiments

Harvey B Newman, California Institute of Technology

ACAT2000, Fermilab, October 19, 2000

http://l3www.cern.ch/~newman/grids_acat2k.ppt

Page 2

Physics and Technical Goals

The extraction of small or subtle new “discovery” signals from large and potentially overwhelming backgrounds; or “precision” analysis of large samples

Providing rapid access to event samples and subsets from massive data stores: from ~300 Terabytes in 2001, to Petabytes by ~2003, ~10 Petabytes by 2006, and ~100 Petabytes by ~2010.

Providing analyzed results with rapid turnaround, by coordinating and managing the LIMITED computing, data handling and network resources effectively

Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability, using heterogeneous resources.

Page 3

Four LHC Experiments: The Petabyte to Exabyte Challenge

ATLAS, CMS, ALICE, LHCb

Higgs + New particles; Quark-Gluon Plasma; CP Violation

Data written to tape: ~25 Petabytes/Year and UP (CPU: 6 MSi95 and UP)

0.1 to 1 Exabyte (1 EB = 10^18 Bytes) (~2010) (~2020?) Total for the LHC Experiments

Page 4

LHC Vision: Data Grid Hierarchy

[Diagram: LHC Data Grid Hierarchy]

Experiment and Online System (1 bunch crossing: ~17 interactions per 25 nsec; 100 triggers per second; each event is ~1 MByte), generating data at ~PBytes/sec and feeding the Tier 0+1 Offline Farm / CERN Computer Ctr (> 30 TIPS) at ~100 MBytes/sec.

Tier 1: national centres (FNAL Center, France Center, Italy Center, UK Center), linked to CERN at ~2.5 Gbits/sec.

Tier 2: regional Tier2 Centers, linked at ~0.6-2.5 Gbits/sec and ~622 Mbits/sec.

Tier 3: Institutes (~0.25 TIPS, physics data cache); physicists work on analysis “channels”, with each institute having ~10 physicists working on one or more channels.

Tier 4: Workstations, connected at 100-1000 Mbits/sec.
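To make the hierarchy concrete, here is a minimal back-of-the-envelope sketch, using one plausible reading of the link rates in the diagram above; the tier pairings, the 50% achieved-efficiency factor and the 1 TB sample size are illustrative assumptions, not figures from the slide.

```python
# Rough transfer-time estimates down the tier hierarchy.
# Link rates follow one reading of the diagram above; everything else
# (sample size, 50% achieved efficiency) is an illustrative assumption.

TIER_LINKS_GBPS = [
    ("Tier 0+1 (CERN) -> Tier 1 national centre", 2.5),
    ("Tier 1          -> Tier 2 regional centre", 0.622),
    ("Tier 2          -> Tier 3 institute",       0.622),
    ("Tier 3          -> Tier 4 workstation",     0.1),
]

def transfer_hours(sample_tb: float, rate_gbps: float, efficiency: float = 0.5) -> float:
    """Hours to move sample_tb terabytes over a link achieving rate_gbps * efficiency."""
    bits = sample_tb * 8e12                      # 1 TB = 8e12 bits
    return bits / (rate_gbps * 1e9 * efficiency) / 3600.0

if __name__ == "__main__":
    sample_tb = 1.0                              # a 1 TB analysis sample
    for link, gbps in TIER_LINKS_GBPS:
        print(f"{link}: {transfer_hours(sample_tb, gbps):6.1f} h for {sample_tb} TB")
```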

Page 5

Why Worldwide Computing? Regional Center Concept: Advantages

Managed, fair-shared access for Physicists everywhere

Maximize total funding resources while meeting the total computing and data handling needs

Balance between proximity of datasets to appropriate resources, and to the users: Tier-N Model

Efficient use of network: higher throughput; per flow: local > regional > national > international

Utilizing all intellectual resources, in several time zones: CERN, national labs, universities, remote sites; involving physicists and students at their home institutions

Greater flexibility to pursue different physics interests, priorities, and resource allocation strategies by region, and/or by common interests (physics topics, subdetectors, …)

Manage the System’s Complexity: partitioning facility tasks, to manage and focus resources

Page 6

SDSS Data Grid (In GriPhyN): A Shared Vision

Three main functions:

Raw data processing on a Grid (FNAL): rapid turnaround with TBs of data; accessible storage of all image data

Fast science analysis environment (JHU)

Combined data access + analysis of calibrated data

Distributed I/O layer and processing layer; shared by whole collaboration

Public data access: SDSS data browsing for astronomers and students; complex query engine for the public

Page 7

US-CERN BW Requirements Projection (PRELIMINARY)

Year                                        2001    2002    2003    2004    2005    2006
Installed Link BW (Mbps, incl. new SLAC)     310     622    1600    2400    4000    6500 [#]
Throughput [*] (Mbps)                       (120)   (250)   (400)   (600)  (1000)  (1600)

[#] Includes ~1.5 Gbps each for ATLAS and CMS, plus BaBar, Run2 and other
[*] D0 and CDF at Run2: needs presumed to be comparable to BaBar
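As a quick sanity check on what the throughput row means in practice, the sketch below computes how many days a bulk transatlantic transfer would take in each year. The throughput values are copied from the table; the 10 TB sample size is only an example.

```python
# Days to move a bulk sample across the US-CERN link at the projected
# sustained throughputs. Throughputs are taken from the table above;
# the 10 TB sample size is only an example.

THROUGHPUT_MBPS = {2001: 120, 2002: 250, 2003: 400, 2004: 600, 2005: 1000, 2006: 1600}

def days_to_transfer(terabytes: float, mbps: float) -> float:
    megabits = terabytes * 8e6                   # 1 TB = 8e6 Mbit
    return megabits / mbps / 86400.0

for year, mbps in sorted(THROUGHPUT_MBPS.items()):
    print(f"{year}: 10 TB in {days_to_transfer(10, mbps):5.2f} days at {mbps} Mbps")
```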

Page 8

Daily, Weekly, Monthly and Yearly Statistics on the 45 Mbps US-CERN Link

Page 9

Roles of Projects for HENP Distributed Analysis

RD45, GIOD: Networked Object Databases

Clipper/GC: High speed access to Objects or File data

FNAL/SAM: Processing and analysis

SLAC/OOFS: Distributed File System + Objectivity Interface

NILE, Condor: Fault Tolerant Distributed Computing

MONARC: LHC Computing Models: Architecture, Simulation, Strategy, Politics

ALDAP: OO Database Structures & Access Methods for Astrophysics and HENP Data

PPDG: First Distributed Data Services and Data Grid System Prototype

GriPhyN: Production-Scale Data Grids; EU Data Grid

Page 10

Grid Services Architecture [*]

[Diagram: layered Grid Services Architecture, top to bottom]

Applns: a rich set of HEP data-analysis related applications

Appln Toolkits: remote viz toolkit, remote comp. toolkit, remote data toolkit, remote sensors toolkit, remote collab. toolkit, ...

Grid Services: protocols, authentication, policy, resource management, instrumentation, discovery, etc.

Grid Fabric: data stores, networks, computers, display devices, ...; associated local services

[*] Adapted from Ian Foster: there are computing grids, access (collaborative) grids, data grids, ...
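Read as a dependency rule, the layering says that applications call toolkits, toolkits call the common Grid Services, and only the services touch the fabric. A minimal sketch of that rule follows; all class and method names are illustrative, not taken from any actual grid middleware.

```python
# Illustrative layering: Application -> Appln Toolkit -> Grid Services -> Grid Fabric.
from dataclasses import dataclass

@dataclass
class FabricResource:                    # Grid Fabric: a concrete local resource
    name: str
    kind: str                            # e.g. "data store", "computer", "network"

class GridServices:                      # Grid Services: auth, policy, discovery, ...
    def __init__(self, resources):
        self._resources = resources

    def authenticate(self, user: str) -> bool:
        return bool(user)                # placeholder policy/authentication check

    def discover(self, kind: str):
        return [r for r in self._resources if r.kind == kind]

class RemoteDataToolkit:                 # Appln Toolkit: built only on the services
    def __init__(self, services: GridServices):
        self._svc = services

    def open_dataset(self, user: str, dataset: str) -> str:
        if not self._svc.authenticate(user):
            raise PermissionError(user)
        store = self._svc.discover("data store")[0]
        return f"{dataset} served from {store.name}"

# An HEP data-analysis application sits on top of the toolkit layer.
svc = GridServices([FabricResource("cern-data-store", "data store")])
print(RemoteDataToolkit(svc).open_dataset("physicist", "higgs-candidates"))
```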

Page 11

The Particle Physics Data Grid (PPDG)

First Round Goal: Optimized cached read access to 10-100 Gbytes drawn from a total data set of 0.1 to ~1 Petabyte

[Diagram: Site-to-Site Data Replication Service at 100 Mbytes/sec, between a PRIMARY SITE (Data Acquisition, CPU, Disk, Tape Robot) and a SECONDARY SITE (CPU, Disk, Tape Robot)]

[Diagram: Multi-Site Cached File Access Service, linking a PRIMARY SITE (DAQ, Tape, CPU, Disk, Robot), Satellite Sites (Tape, CPU, Disk, Robot) and University sites (CPU, Disk, Users)]

ANL, BNL, Caltech, FNAL, JLAB, LBNL, SDSC, SLAC, U.Wisc/CS

Matchmaking, Co-Scheduling: SRB, Condor, Globus services; HRM, NWS
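In miniature, the multi-site cached file access idea is: serve a read from the local disk cache if possible, otherwise replicate the file from the primary site and then serve it. The sketch below illustrates only that pattern; the class names and replication call are hypothetical, not PPDG interfaces.

```python
# Toy multi-site cached file access: local cache hit, otherwise replicate
# on demand from the primary site. Purely illustrative, not a PPDG API.

class Site:
    def __init__(self, name, files=()):
        self.name = name
        self.cache = set(files)                 # files held on local disk

class CachedFileService:
    def __init__(self, primary: Site):
        self.primary = primary                  # DAQ site holding the full data set

    def read(self, site: Site, filename: str) -> str:
        if filename in site.cache:
            return f"{filename}: local cache hit at {site.name}"
        if filename not in self.primary.cache:
            raise FileNotFoundError(filename)
        site.cache.add(filename)                # site-to-site replication on demand
        return f"{filename}: replicated {self.primary.name} -> {site.name}"

primary = Site("primary-site", files={"run100.db", "run101.db"})
service = CachedFileService(primary)
university = Site("university-site")
print(service.read(university, "run100.db"))    # miss: triggers replication
print(service.read(university, "run100.db"))    # second read is a cache hit
```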

Page 12

PPDG WG1: Request Manager

[Diagram: PPDG WG1 Request Manager]

A CLIENT submits a Logical Request (a logical set of files) to the REQUEST MANAGER.

Within the Request Manager, a Request Interpreter resolves the request against the Event-file Index, a Planner (Matchmaking) consults the Replica Catalog and the Network Weather Service, and a Request Executor issues physical file transfer requests to the GRID.

Storage is fronted by DRMs with disk caches and by an HRM in front of the tape system.
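The flow through the Request Manager can be sketched in a few lines: interpret the logical request against the event-file index, let the planner match each file to a replica using replica-catalog and network-weather information, then have the executor drive transfers into the disk cache. Everything below (catalog contents, rates, names) is invented for illustration.

```python
# Toy version of the WG1 request flow: logical request -> file list ->
# replica selection -> staged transfers. All data here is invented.

EVENT_FILE_INDEX = {"higgs-2e2mu-sample": ["evt001.root", "evt002.root"]}
REPLICA_CATALOG  = {"evt001.root": ["fnal", "slac"], "evt002.root": ["slac"]}
NETWORK_WEATHER  = {"fnal": 300.0, "slac": 80.0}        # estimated Mbps to the client

def interpret(logical_request: str):
    """Request Interpreter: map a logical set-of-files request onto physical files."""
    return EVENT_FILE_INDEX[logical_request]

def plan(files):
    """Planner (matchmaking): pick the best replica per file from NWS estimates."""
    return {f: max(REPLICA_CATALOG[f], key=NETWORK_WEATHER.get) for f in files}

def execute(transfer_plan, disk_cache: set):
    """Request Executor: issue (here, simulated) physical transfers into the cache."""
    for filename, site in transfer_plan.items():
        print(f"transfer {filename} from {site} via the DRM/HRM layer")
        disk_cache.add(filename)

cache = set()
execute(plan(interpret("higgs-2e2mu-sample")), cache)
print("disk cache now holds:", sorted(cache))
```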

Page 13

Earth Grid System Prototype Inter-communication Diagram

[Diagram: Client and Request Manager at LLNL (with local disk); storage sites at ISI (GSI-wuftpd, disk), SDSC (GSI-pftpd, HPSS), LBNL (GSI-wuftpd, disk; HRM in front of the Clipper HPSS), ANL (GSI-wuftpd, disk) and NCAR (GSI-wuftpd, disk); Replica Catalog at ANL; GIS with NWS. File transfers use GSI-ncftp; the catalogs are accessed via LDAP (script or C API); the HRM is driven via CORBA.]
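One detail worth pulling out of the diagram is that some replicas sit on disk behind a GSI-ftp server while others sit in HPSS behind an HRM, so a request manager must decide between a direct transfer and a stage-then-transfer. The sketch below shows one such selection rule, preferring disk copies; the catalogue contents and site names are entirely invented.

```python
# Toy replica selection for a layout like the one above: prefer a
# disk-resident copy reachable directly, otherwise ask the HRM to stage
# the file out of HPSS first. Catalogue contents are invented.

REPLICAS = {
    "climate-run42.nc": [
        {"site": "tape-site",   "storage": "hpss"},
        {"site": "disk-site-1", "storage": "disk"},
        {"site": "disk-site-2", "storage": "disk"},
    ],
}

def select_replica(filename: str) -> dict:
    copies = REPLICAS[filename]
    disk_copies = [c for c in copies if c["storage"] == "disk"]
    return disk_copies[0] if disk_copies else copies[0]

def fetch(filename: str) -> str:
    chosen = select_replica(filename)
    if chosen["storage"] == "disk":
        return f"GSI-ftp {filename} directly from {chosen['site']}"
    return f"ask the HRM at {chosen['site']} to stage {filename} from HPSS, then transfer it"

print(fetch("climate-run42.nc"))
```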

Page 14

Grid Data Management Prototype (GDMP)

Distributed Job Execution and Data Handling:

Goals: Transparency, Performance, Security, Fault Tolerance, Automation

[Diagram: jobs submitted to Sites A, B and C; each job writes its data locally, and the data are then replicated between the sites]

Jobs are executed locally or remotely

Data is always written locally

Data is replicated to remote sites

GDMP V1.1: Caltech + EU DataGrid WP2. Tests by Caltech, CERN, FNAL, Pisa for CMS “HLT” Production 10/2000;

Integration with ENSTORE, HPSS, Castor
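The replication pattern on this slide (run the job anywhere, always write its output locally, then push the new files to the other sites) is easy to sketch. The code below is a much simplified stand-in for that pattern, not GDMP's actual interface; all names are invented.

```python
# Write-locally-then-replicate, in miniature: a job's output lands in the
# local catalogue and is then pushed to every subscribed site.
# Illustrative only; this is not GDMP's API.

class GridSite:
    def __init__(self, name: str):
        self.name = name
        self.files = {}                 # filename -> contents held at this site
        self.subscribers = []           # sites that receive our new files

    def run_job(self, job_name: str) -> None:
        output = f"{job_name}.out"
        self.files[output] = b"simulated production output"    # data written locally
        self.replicate(output)                                  # then replicated out

    def replicate(self, filename: str) -> None:
        for site in self.subscribers:
            site.files[filename] = self.files[filename]
            print(f"replicated {filename}: {self.name} -> {site.name}")

site_a, site_b, site_c = GridSite("Site A"), GridSite("Site B"), GridSite("Site C")
site_a.subscribers = [site_b, site_c]
site_a.run_job("hlt-production-001")
```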

Page 15

EU-Grid Project Work Packages

Work Package   Title                                      Lead contractor
WP1            Grid Workload Management                   INFN
WP2            Grid Data Management                       CERN
WP3            Grid Monitoring Services                   PPARC
WP4            Fabric Management                          CERN
WP5            Mass Storage Management                    PPARC
WP6            Integration Testbed                        CNRS
WP7            Network Services                           CNRS
WP8            High Energy Physics Applications           CERN
WP9            Earth Observation Science Applications     ESA
WP10           Biology Science Applications               INFN
WP11           Dissemination and Exploitation             INFN
WP12           Project Management                         CERN

Page 16

GriPhyN: PetaScale Virtual Data Grids

Build the Foundation for Petascale Virtual Data Grids

[Diagram: GriPhyN architecture. Users (Production Team, Individual Investigator, Workgroups) work through Interactive User Tools; beneath these sit Virtual Data Tools, Request Planning & Scheduling Tools, and Request Execution & Management Tools; these rest on Resource Management Services, Security and Policy Services, and Other Grid Services; at the bottom are Transforms, the Raw data source, and the Distributed resources (code, storage, computers, and network).]
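The heart of the "virtual data" idea is that a requested data product is either found already materialised or re-derived on demand from its recorded transformation and inputs. The sketch below reduces that rule to one recursive function; the catalogue contents and the transformation are invented examples, not GriPhyN tools.

```python
# Virtual data in one function: deliver an existing replica if there is one,
# otherwise apply the registered transformation to its inputs and cache the
# result. Catalogue contents and the transform are illustrative only.

MATERIALISED = {"raw-run7": [3, 1, 4, 1, 5]}               # products already on storage
TRANSFORMS = {                                              # product -> (function, inputs)
    "calibrated-run7": (lambda raw: [2 * x for x in raw], ["raw-run7"]),
}

def materialise(product: str):
    if product in MATERIALISED:                             # replica exists: deliver it
        return MATERIALISED[product]
    transform, inputs = TRANSFORMS[product]                 # otherwise derive on demand
    result = transform(*(materialise(i) for i in inputs))
    MATERIALISED[product] = result                          # cache the derived product
    return result

print(materialise("calibrated-run7"))                       # derived, then cached
print(materialise("calibrated-run7"))                       # second call is a catalogue hit
```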

Page 17

Data Grids: Better Global Resource Use and Faster Turnaround

Build Information and Security Infrastructures across several world regions; Authentication: Prioritization, Resource Allocation

Coordinated use of computing, data handling and network resources through: data caching, query estimation, co-scheduling; network and site “instrumentation”: performance tracking, monitoring, problem trapping and handling

Robust Transactions: Agent Based (Autonomous, Adaptive, Network Efficient, Resilient)

Heuristic, Adaptive Load-Balancing, e.g. Self-Organizing Neural Nets (Legrand)
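As a flavour of what heuristic, adaptive load-balancing means in this context, the sketch below dispatches jobs to whichever site currently has the best smoothed response-time estimate and updates the estimate after each job. It is a deliberately simple stand-in, not the self-organizing neural-net approach cited above, and all numbers are simulated.

```python
# A deliberately simple adaptive dispatcher: keep an exponentially weighted
# average of each site's recent response times and send the next job to the
# current best site. Simulated feedback; a stand-in for richer approaches.
import random

class SiteScore:
    def __init__(self, name: str):
        self.name = name
        self.avg_response = 1.0                 # seconds, smoothed estimate

    def update(self, measured: float, alpha: float = 0.3) -> None:
        self.avg_response = (1 - alpha) * self.avg_response + alpha * measured

def dispatch(sites, n_jobs: int = 10) -> None:
    for job in range(n_jobs):
        best = min(sites, key=lambda s: s.avg_response)
        measured = random.uniform(0.5, 2.0)     # pretend response time for this job
        best.update(measured)
        print(f"job {job:02d} -> {best.name} (measured {measured:.2f} s)")

dispatch([SiteScore("site-1"), SiteScore("site-2"), SiteScore("site-3")])
```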

Page 18

GRIDs In 2000: Summary

Grids will change the way we do science and engineering: computation to large scale data

Key services and concepts have been identified, and development has started

Major IT challenges remain: an Opportunity & Obligation for HEP/CS Collaboration

Transition of services and applications to production use is starting to occur

In future, more sophisticated integrated services and toolsets (Inter- and IntraGrids+) could drive advances in many fields of science & engineering

HENP, facing the need for Petascale Virtual Data, is both an early adopter, and a leading developer of Data Grid technology