Networks for HENP and ICFA SCIC


Slide 1

Networks for HENP and ICFA SCIC
Harvey B. Newman, California Institute of Technology
APAN High Energy Physics Workshop, January 21, 2003

Slide 2: Next Generation Networks for Experiments: Goals and Needs
- Providing rapid access to event samples, subsets, and analyzed physics results from massive data stores: from Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Providing analyzed results with rapid turnaround, by coordinating and managing the large but LIMITED computing, data handling, and NETWORK resources effectively
- Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability
- Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, monitored, quantifiable high performance
- Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams

Slide 3: Four LHC Experiments: The Petabyte to Exabyte Challenge
- ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP violation
- Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
- 0.1 Exabyte (2007) to ~1 Exabyte (~2012?) for the LHC experiments (1 EB = 10^18 bytes)

Slide 4: LHC Data Grid Hierarchy (diagram; recoverable figures below)
- Experiment to online system: ~PByte/sec into the physics data cache; online system to Tier 0 (+1) at CERN (700k SI95, ~1 PB disk, tape robot): ~100-400 MBytes/sec
- Tier 0 to Tier 1 centers (FNAL, IN2P3, INFN, RAL): ~2.5 Gbps each
- Tier 1 to Tier 2 centers: ~2.5 Gbps
- Tier 3 (institutes, ~0.25 TIPS) down to Tier 4 (workstations): 0.1 to 10 Gbps
- CERN/outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
- Tens of Petabytes by 2007-8; an Exabyte within ~5 years later (the link rates are put into daily volumes in the sketch below)
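To make the diagram's link rates more tangible, here is a minimal Python sketch. It is illustrative only: the dictionary name, the tier labels, and the choice of the lower bound of the 0.1-10 Gbps range are assumptions of mine; the rates themselves are the ones quoted above.

```python
# Minimal sketch: convert the sustained link rates quoted on the Data Grid
# Hierarchy slide into data volume moved per day. Labels and the choice of
# the 0.1 Gbps lower bound are illustrative assumptions, not from the talk.

TIER_LINKS_GBPS = {
    "Tier0 (CERN) -> Tier1": 2.5,   # ~2.5 Gbps per Tier1 center
    "Tier1 -> Tier2":        2.5,   # ~2.5 Gbps
    "Tier3 -> Tier4":        0.1,   # quoted range 0.1 to 10 Gbps; lower bound
}

def terabytes_per_day(gbps: float) -> float:
    """Data volume in TB moved in 24 h at a sustained rate of `gbps` Gbps."""
    return gbps * 1e9 / 8 * 86400 / 1e12

for link, gbps in TIER_LINKS_GBPS.items():
    print(f"{link:22s} {gbps:4.1f} Gbps ~= {terabytes_per_day(gbps):5.1f} TB/day")
```

At ~2.5 Gbps a Tier1 link sustains roughly 27 TB/day, about 10 PB/year, which frames the "tens of Petabytes by 2007-8" figure as years of continuous transfers per link.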
Slide 5: ICFA and Global Networks for HENP
- National and international networks, with sufficient (rapidly increasing) capacity and capability, are essential for:
  - The daily conduct of collaborative work in both experiment and theory
  - Detector development and construction on a global scale; data analysis involving physicists from all world regions
  - The formation of worldwide collaborations
  - The conception, design, and implementation of next-generation facilities as "global networks"
- Collaborations on this scale would never have been attempted if they could not rely on excellent networks

Slide 6: ICFA and International Networking
- ICFA Statement on Communications in International HEP Collaborations of October 17, 1996; see http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
- ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP collaborations should:
  - Review their operating methods to ensure they are fully adapted to remote participation
  - Strive to provide the necessary communications facilities and adequate international bandwidth

Slide 7: ICFA Network Task Force: 1998 Bandwidth Requirements Projection (Mbps)
- A 100-1000x bandwidth increase foreseen for 1998-2005
- See the ICFA-NTF Requirements Report: http://l3www.cern.ch/~newman/icfareq98.html

Slide 8: ICFA Standing Committee on Interregional Connectivity (SCIC)
- Created by ICFA in July 1998 in Vancouver, following the ICFA-NTF
- CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia, and Europe (and the network requirements of HENP). As part of the process of developing these recommendations, the committee should:
  - Monitor traffic
  - Keep track of technology developments
  - Periodically review forecasts of future bandwidth needs, and
  - Provide early warning of potential problems
- Create subcommittees when necessary to meet the charge
- The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors (Feb. 2003)
- Representatives: major labs, ECFA, ACFA, North American users, South America

Slide 9: ICFA-SCIC Core Membership
- Representatives from major HEP laboratories: W. von Rüden (CERN), Volker Guelzow (DESY), Vicky White (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
- User representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
- For Russia: Slava Ilyin (MSU)
- ECFA representatives: Denis Linglin (IN2P3, Lyon), Federico Ruggieri (INFN Frascati)
- ACFA representatives: Rongsheng Xu (IHEP Beijing); H. Park, D. Son (Kyungpook National University)
- For South America: Sergio F. Novaes (University of Sao Paulo)

Slide 10: SCIC Sub-Committees (web page: http://cern.ch/ICFA-SCIC/)
- Monitoring: Les Cottrell (http://www.slac.stanford.edu/xorg/icfa/scic-netmon), with Richard Hughes-Jones (Manchester), Sergio Novaes (Sao Paulo), Sergei Berezhnev (RUHEP), Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain Ravot (Caltech), Shawn McKee (Michigan)
- Advanced Technologies: Richard Hughes-Jones, with Vladimir Korenkov (JINR, Dubna), Olivier Martin (CERN), Harvey Newman
- The Digital Divide: Alberto Santoro (Rio, Brazil), with Slava Ilyin, Yukio Karita, David O. Williams; also Dongchul Son (Korea), Hafeez Hoorani (Pakistan), Sunanda Banerjee (India), Vicky White (FNAL)
- Key Requirements: Harvey Newman; also Charlie Young (SLAC)

Slide 11: Transatlantic Net WG (HN, L. Price) Bandwidth Requirements [*]
- [*] Bandwidth requirements are increasing faster than Moore's Law (see the sketch below)
- See http://gate.hep.anl.gov/lprice/TAN
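Slide 11's footnote, that bandwidth requirements outpace Moore's Law, can be sanity-checked against the ICFA-NTF projection on Slide 7. A short sketch, assuming the conventional 18-month Moore's-Law doubling time (that doubling time is my assumption, not a figure from the talk):

```python
import math

# ICFA-NTF (Slide 7): a 100-1000x bandwidth increase foreseen for 1998-2005.
years = 2005 - 1998

# Moore's Law benchmark: doubling every 18 months (assumed).
moore = 2 ** (years / 1.5)
print(f"Moore's Law over {years} years: ~{moore:.0f}x")          # ~25x

# Implied doubling time for the projected bandwidth growth.
for factor in (100, 1000):
    t_double_months = 12 * years / math.log2(factor)
    print(f"{factor}x over {years} years: doubling every {t_double_months:.0f} months")
```

A 100x increase over those seven years corresponds to doubling roughly every 13 months, and 1000x to roughly every 8 months; both outpace the 18-month benchmark, consistent with the slide's claim.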
Slide 12: History: One Large Research Site
- Current traffic ~400 Mbps; ESnet limitation
- Projections: 0.5 to 24 Tbps by ~2012
- Much of the traffic: SLAC to IN2P3/RAL/INFN, via ESnet + France and Abilene + CERN

Slide 13: Tier0-Tier1 Link Requirements Estimate for the Hoffmann Report (2001)
1) Tier1-Tier0 data flow for analysis: 0.5 - 1.0 Gbps
2) Tier2-Tier0 data flow for analysis: 0.2 - 0.5 Gbps
3) Interactive collaborative sessions (30 peak): 0.1 - 0.3 Gbps
4) Remote interactive sessions (30 flows peak): 0.1 - 0.2 Gbps
5) Individual (Tier3 or Tier4) data transfers: 0.8 Gbps (limited to 10 flows of 5 Mbytes/sec each)
- TOTAL per Tier0-Tier1 link: 1.7 - 2.8 Gbps
- NOTE: adopted by the LHC experiments, and given in the upcoming Hoffmann Steering Committee Report as 1.5 - 3 Gbps per experiment; this corresponds to ~10 Gbps baseline bandwidth installed on the US-CERN link
- The Hoffmann panel also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet across WANs

Slide 14: Tier0-Tier1 BW Requirements Estimate for the Hoffmann Report (2001)
- Does not include the more recent ATLAS data estimates:
  - 270 Hz at 10^33 instead of 100 Hz
  - 400 Hz at 10^34 instead of 100 Hz
  - 2 MB/event instead of 1 MB/event
- Does not allow fast download to Tier3+4 of small object collections. Example: downloading 10^7 events of AODs (10^4 bytes each) is 100 GBytes; at 5 Mbytes/sec per person (as above), that's 6 hours! (Both totals are re-derived in the sketch below.)
- This is still a rough, bottom-up, static, and hence conservative model; a dynamic distributed DB or Grid system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth
- Does not include virtual data operations: derived data copies; data-description overheads
- Further MONARC Computing Model studies are needed
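The totals on Slides 13 and 14 follow directly from the quoted figures; the sketch below just redoes the arithmetic (every number is taken from the slides; only the variable names are mine):

```python
# Slide 13: per-flow estimates in Gbps as (low, high) ranges.
flows = {
    "Tier1-Tier0 analysis":            (0.5, 1.0),
    "Tier2-Tier0 analysis":            (0.2, 0.5),
    "Interactive collaborative (30)":  (0.1, 0.3),
    "Remote interactive (30 flows)":   (0.1, 0.2),
    "Tier3/4 transfers (10 x 5 MB/s)": (0.8, 0.8),
}
low = sum(lo for lo, _ in flows.values())
high = sum(hi for _, hi in flows.values())
print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")    # 1.7 - 2.8

# Slide 14: the AOD download example.
size_bytes = 1e7 * 1e4            # 10^7 events x 10^4 bytes = 100 GB
hours = size_bytes / 5e6 / 3600   # at 5 Mbytes/sec per person
print(f"{size_bytes/1e9:.0f} GB at 5 MB/s takes {hours:.1f} hours")  # ~5.6 h
```

The ~5.6 hours rounds to the "6 hours" quoted on Slide 14.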
Slide 15: ICFA SCIC Meetings [*] and Topics
- Focus on the Digital Divide this year: identification of problem areas; work on ways to improve
- Network status and upgrade plans in each country; performance (throughput) evolution in each country and across the Atlantic
- Performance monitoring: world overview (Les Cottrell, IEPM project)
- Specific technical topics (examples): bulk transfer, new protocols; collaborative systems, VoIP
- Preparation of reports to ICFA (lab directors' meetings):
  - Last report: World Network Status and Outlook, Feb. 2002
  - Next report: Digital Divide, plus Monitoring, Advanced Technologies, and Requirements Evolution, Feb. 2003
- [*] Seven meetings in 2002; at KEK on December 13

Slide 16: Network Progress in 2002 and Issues for Major Experiments
- Backbones and major links are advancing rapidly into the 10 Gbps range
  - Gbps end-to-end throughput data flows have been tested and will be in production soon (in 12 to 18 months)
  - Transition to multi-wavelengths in 1-3 years in the most favored regions
- Network advances are changing the view of the networks' roles, and are likely to have a profound impact on the experiments' Computing Models and bandwidth requirements: a more dynamic view, with GByte to TByte data transactions and dynamic path provisioning (see the transfer-time sketch at the end of this section)
- Network R&D is driven by advanced integrated applications, such as Data Grids, that rely on seamless LAN and WAN operation, with reliable, quantifiable (monitored) high performance
- All of the above will further open the Digital Divide chasm. We need to take action.

Slide 17: ICFA SCIC: R&E Backbone and International Link Progress
- GEANT pan-European backbone (http://www.dante.net/geant): now interconnects >31 countries; many trunks at 2.5 and 10 Gbps
- UK: SuperJANET core at 10 Gbps; 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene
- France (IN2P3): 2.5 Gbps RENATER backbone from October 2002; Lyon-CERN link upgraded to 1 Gbps Ethernet; proposal for dark fiber to CERN by end 2003
- SuperSINET (Japan): 10 Gbps IP and 10 Gbps wavelength core; Tokyo-NY links: 2 x 2.5 Gbps started; peering with ESnet by Feb.
- CA*net4 (Canada): interconnecting customer-owned dark fiber nets across Canada at 10 Gbps, started July 2002; "Lambda-Grids" by ~2004-5
- GWIN (Germany): 2.5 Gbps core; connection to the US at 2 x 2.5 Gbps; support for the SILK Project (satellite links to the FSU republics)
- Russia: 155 Mbps links to Moscow (typically 30-45 Mbps for science); Moscow-Starlight link to 155 Mbps (US NSF + Russia support); Moscow-GEANT and Moscow-Stockholm links at 155 Mbps

Slide 18: R&E Backbone and International Link Progress
- Abilene (Internet2): upgrade from 2.5 to 10 Gbps in 2002; encourage high-throughput use for targeted applications; FAST
- ESnet: upgrade to 10 Gbps as soon as possible
- US-CERN to 622 Mbps in August; Mo
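As a closing illustration of the stakes in these upgrades: Slide 16 anticipates "GByte to TByte data transactions", and Slides 17-18 quote the available link rates. The sketch below is illustrative only; the link labels are my shorthand for entries above, and it optimistically assumes the full nominal rate is usable end-to-end.

```python
# Time to move a 1 TB "data transaction" (Slide 16) at link rates quoted
# on Slides 17-18. Labels are shorthand; full nominal rate is assumed.

LINKS_MBPS = {
    "Moscow science share (typ.)": 45,
    "US-CERN (Aug 2002)":          622,
    "RENATER / Tier1 trunk":       2500,
    "Abilene / SuperJANET core":   10000,
}

TRANSACTION_BYTES = 1e12  # 1 TByte

for name, mbps in LINKS_MBPS.items():
    hours = TRANSACTION_BYTES * 8 / (mbps * 1e6) / 3600
    print(f"{name:28s} {mbps:6d} Mbps -> {hours:6.1f} h per TByte")
```

At 45 Mbps a single TByte takes about two days, while at 10 Gbps it takes under 15 minutes: the quantitative face of the Digital Divide the talk warns about.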