
Grid Computing Test beds in Europe and the Netherlands

David Groep, NIKHEF, 2001-10-01

Resource access: then & now

• In ye olde days (till approx. 1992):
  – Hardly any network security
  – All machines in a local net were created equal
  – All local users happy with remote shell, rlogin, rcp

• Now:
  – Want to communicate globally over Gigabit WAN, but
  – the Internet is full of firewalls and barriers, to retain local control and keep crackers out…

• The Grid:
  – Bring back single sign-on and trust
  – Use the WAN as the 80’s LANs: all global users happy

What is Grid computing?

• Dependable, consistent and pervasive access to (high-end) resources

• Combine resources from various organizations
• User-based view: `Virtual Organizations’

• Transparent decisions for the user

• Security is of paramount importance:
  – Authentication, Authorization, Accounting & Quota

The One-Liner

• Resource sharing and coordinated problem solving in dynamic multi-institutional virtual organisations

‘Virtual Organizations’

Grid Architecture

[Layered architecture diagram, top to bottom:]

• Applications: e.g. VLAM-G
• Application Toolkits: DUROC, MPICH-G2, Condor-G
• Grid Services: GRAM, GSI, GridFTP, MDS
• Grid Fabric: Condor, MPI, PBS, Internet, Linux, SUN

Grid Middleware

• Make all resources talk standard protocols
• Promote interoperability of application toolkits, similar to the interoperability of networks achieved by Internet standards

• Globus Project: started 1997
• De facto standard; reference implementation of Grid Forum standards
• Large community effort
• Basis of several projects, including the EU DataGrid
• Toolkit `bag-of-services’ approach
• Successful test beds, with single sign-on, etc.

Single Sign-on

• Based on PKI (Public Key Infrastructure), using X.509 `certificates’:
  – Grid credential name (~ login name for the Grid)
  – Private key, for signing and encryption
  – Certificate signed by trusted third parties (CAs)

(see http://certificate.nikhef.nl/)

• Co-operates with local security policies

• Exchange certificates, authenticate, delegate
• Limit vulnerability by using limited `proxies’
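To make the certificate structure concrete, here is a minimal Python sketch that reads the credential name, the issuing CA, and the validity window out of a user certificate. It assumes the third-party `cryptography` package; the file name usercert.pem is a hypothetical example, not from the slides.

    # Minimal sketch: inspect an X.509 user certificate.
    # Assumes the third-party `cryptography` package; the file name
    # "usercert.pem" is a hypothetical example.
    from cryptography import x509

    with open("usercert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # The subject DN acts as the Grid credential name (~login name).
    print("Subject:", cert.subject.rfc4514_string())
    # The issuer is the trusted third party (CA) that signed it.
    print("Issuer: ", cert.issuer.rfc4514_string())
    print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)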

Looking at Resources

• Per Virtual Organization (or test bed)

• Directory of Resources and their Characteristics

• Used to find `best resource out there’

DataGrid: http://marianne.in2p3.fr/
DutchGrid: ldap://giishost.nikhef.nl:30001/
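Since MDS exposes the directory over LDAP (as the DutchGrid URL above shows), any LDAP client can query it. A minimal Python sketch, assuming the third-party `ldap3` package; the `o=Grid` search base and the filter are illustrative assumptions, not stated on the slide:

    # Minimal sketch: query a GIIS/MDS information server over LDAP.
    # Assumes the third-party `ldap3` package; search base and filter
    # are illustrative assumptions.
    from ldap3 import Server, Connection, ALL

    server = Server("giishost.nikhef.nl", port=30001, get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous bind

    conn.search(
        search_base="o=Grid",             # assumed MDS base DN
        search_filter="(objectClass=*)",
        attributes=["*"],
    )
    for entry in conn.entries:
        print(entry.entry_dn)             # resources and their characteristics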

Submitting a Job
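The original slide illustrated this step with a figure. As a rough sketch of what submission looks like through the GRAM service, assuming the Globus Toolkit command-line tools are installed and a proxy has been created with grid-proxy-init; the gatekeeper host name is a hypothetical example:

    # Minimal sketch: run a command on a remote Grid resource via GRAM.
    # Assumes the Globus Toolkit CLI and a valid proxy (grid-proxy-init);
    # the gatekeeper host name is a hypothetical example.
    import subprocess

    contact = "gatekeeper.nikhef.nl"      # hypothetical gatekeeper
    result = subprocess.run(
        ["globus-job-run", contact, "/bin/hostname"],
        capture_output=True, text=True,
    )
    print(result.stdout)                  # name of the worker node that ran the job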

Sending your Data

• Tape robots, disks, etc. share the GridFTP interface
• Optimize for high-speed (>1 Gbit/s) networks

• In the future: automatic optimizations, bandwidth reservations, directory-enabled networking, …
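As an illustration, a GridFTP transfer driven from Python, assuming the globus-url-copy client is installed and a valid proxy exists; the host name and paths are made up:

    # Minimal sketch: fetch a file over GridFTP with globus-url-copy.
    # Host name and paths are hypothetical examples.
    import subprocess

    src = "gsiftp://tbn03.nikhef.nl/data/run001.dat"  # hypothetical source
    dst = "file:///tmp/run001.dat"
    subprocess.run(["globus-url-copy", src, dst], check=True)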

EU DataGrid Work Packages

[Same layered architecture diagram, annotated with EU DataGrid work packages:]

• Applications: WP8-10
• Application Toolkits (MPICH-G2, Condor-G): WP1 & 8-10 software
• Grid Services (GRAM, GridFTP, MDS): WP2, 3, 5, (7)
• Grid Fabric (Condor, PBS, Internet, Linux, SUN): WP4, 7

EU DataGrid Test bed 1

• DutchGrid embedded in DataGrid Test bed 1

• DataGrid TB1:
  – 14 countries
  – 21 major sites
  – Shared PKI
  – Mutual authorization
  – Major applications: HEP, Earth Obs, Bio-informatics

DutchGrid platform

[Map of DutchGrid sites: Amsterdam, Utrecht, KNMI, Delft, Leiden, Nijmegen, Enschede]

• DutchGrid aims:
  – Share experience
  – Test bed coordination
  – PKI security

• Participation by:
  – NIKHEF: FOM, VU, UvA, Utrecht, Nijmegen
  – KNMI, SARA, AMOLF
  – DAS (ASCI): TU Delft, Leiden, VU, UvA, Utrecht
  – Telematics Institute

DutchGrid: build-up now

• Current startup resources to be abused:
  – NIKHEF:
    • 50x2 CPUs: D0 cluster
    • 2x10x2 (=40) CPUs: LHCb at NIKHEF (WCW) & VU
    • 10x2 CPUs: Alice at NIKHEF (WCW)
    • ca. 4x2 CPUs: Alice, Utrecht
    • ca. 10x2 CPUs: D0, Nijmegen
    • Lots of disk & a dedicated 1.3 TByte cache server
  – DAS-II: 200 dual-PIII systems & some disk (~2 TByte)
    • Spread over 5 locations (NIKHEF is one!)
  – SARA: tape robot (>200 TByte), some clusters

Present and Future

• Plenty more systems (~5000 CPUs in 2003/4??)
• Mass storage (@ SARA)
• Network capacity to CERN and Fermi (SURFnet)

• Build-up of software infrastructure
• Support for changing Analysis/MC software
• Bringing the Grid to the (interactive) Desktop