Networks and Grids for Global Science. Harvey B. Newman, California Institute of Technology. 3rd International Data Grid Workshop, Daegu, Korea, August 26, 2004.


  • Slide 1
  • Networks and Grids for Global Science. Harvey B. Newman, California Institute of Technology. 3rd International Data Grid Workshop, Daegu, Korea, August 26, 2004.
  • Slide 2
  • Challenges for Global HENP Experiments. Major challenges (shared with other fields): worldwide communication and collaboration, and managing globally distributed computing and data resources for cooperative software development and analysis. LHC example (2007): 5000+ physicists, 250+ institutes, 60+ countries. BaBar/D0 example (2004): 500+ physicists, 100+ institutes, 35+ countries.
  • Slide 3
  • Large Hadron Collider (LHC), CERN, Geneva: 2007 start. First beams: summer 2007; physics runs from fall 2007. pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1, in a 27 km tunnel spanning Switzerland and France. Experiments: ATLAS and CMS (Higgs, SUSY, quark-gluon plasma, CP violation, the unexpected), ALICE (heavy ions), LHCb (B-physics), TOTEM. 5000+ physicists, 250+ institutes, 60+ countries.
  • Slide 4
  • LHC Data Grid Hierarchy, developed at Caltech. The experiment delivers ~PByte/sec to the online system, which feeds the Tier 0+1 CERN Center (PBs of disk; tape robot) at ~100-1500 MBytes/sec; Tier 1 centers (FNAL, IN2P3, INFN, RAL) connect at 10-40 Gbps; Tier 2 centers at ~1-10 Gbps; Tier 3 and Tier 4 (institute workstations and physics data caches) sit below. Tens of petabytes by 2007-8; an exabyte ~5-7 years later. CERN/outside resource ratio ~1:2; Tier 0 : (sum of Tier 1) : (sum of Tier 2) ~ 1:1:1. Emerging vision: a richly structured, global dynamic system (see the sketch below).
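
A minimal, hypothetical sketch of the tiered model just described. The tier names and link rates are taken from the slide; the `Tier` class, the 50% efficiency assumption, and the transfer-time helper are purely illustrative, not part of any LHC software:

```python
# Hypothetical sketch of the LHC Data Grid tier hierarchy described above.
# Link rates are the slide's nominal figures; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    level: int
    uplink_gbps: float  # nominal bandwidth toward the next tier up

cern_t0    = Tier("CERN Tier 0+1",  0, uplink_gbps=0.0)   # source of the data
fnal_t1    = Tier("FNAL Tier 1",    1, uplink_gbps=10.0)  # 10-40 Gbps to CERN
example_t2 = Tier("Example Tier 2", 2, uplink_gbps=1.0)   # ~1-10 Gbps to its Tier 1

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.5) -> float:
    """Days to move dataset_tb terabytes over a link_gbps link at the given efficiency."""
    seconds = dataset_tb * 8e12 / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

# Replicating a 100 TB sample down the hierarchy at an assumed 50% link efficiency:
print(f"{transfer_days(100, fnal_t1.uplink_gbps):.1f} days over 10 Gbps")   # ~1.9 days
print(f"{transfer_days(100, example_t2.uplink_gbps):.1f} days over 1 Gbps") # ~18.5 days
```

Even this toy calculation shows why the lower tiers need multi-Gbps links if physicists in all regions are to work with comparable datasets.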
  • Slide 5
  • ICFA and Global Networks for Collaborative Science. National and international networks, with sufficient (rapidly increasing) capacity and seamless end-to-end capability, are essential for: the daily conduct of collaborative work in both experiment and theory; experiment development and construction on a global scale; Grid systems supporting analysis involving physicists in all world regions; and the conception, design and implementation of next-generation facilities as global networks. Collaborations on this scale would never have been attempted if they could not rely on excellent networks.
  • Slide 6
  • Challenges of Next Generation Science in the Information Age. Flagship applications: High Energy and Nuclear Physics, and astrophysics sky surveys (TByte to PByte block transfers at 1-10+ Gbps); fusion energy (time-critical burst-data distribution; distributed plasma simulations, visualization, analysis); eVLBI (many real-time data streams at 1-10 Gbps); bioinformatics and clinical imaging (GByte images on demand). These must provide results with rapid turnaround, over networks of varying capability in different world regions. Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance. Petabytes of complex data will be explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams.
  • Slide 7
  • International network bandwidth on major links for HENP: US-CERN example. Rate of progress >> Moore's Law. 9.6 kbps analog (1985); 64-256 kbps digital (1989-1994) [x 7-27]; 1.5 Mbps shared (1990-93; IBM) [x 160]; 2-4 Mbps (1996-1998) [x 200-400]; 12-20 Mbps (1999-2000) [x 1.2k-2k]; 155-310 Mbps (2001-02) [x 16k-32k]; 622 Mbps (2002-03) [x 65k]; 2.5 Gbps (2003-04) [x 250k]; 10 Gbps (2005) [x 1M]; 4x10 Gbps or 40 Gbps (2007-08) [x 4M]. A factor of ~1M bandwidth improvement over 1985-2005 (a factor of ~5k during 1995-2005), a prime enabler of major HENP programs. HENP has become a leading applications driver, and also a co-developer of global networks. (The quoted multipliers are checked in the short calculation below.)
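
A quick back-of-the-envelope check of the multipliers quoted above. The 9.6 kbps baseline and the milestone rates are from the slide; the script itself is just illustrative arithmetic:

```python
# Verify the US-CERN bandwidth growth factors, each milestone relative to
# the 9.6 kbps analog baseline of 1985.
BASELINE_BPS = 9.6e3  # 1985 analog link

milestones = {
    "64-256 kbps (1989-94)": 256e3,
    "155-310 Mbps (2001-02)": 310e6,
    "2.5 Gbps (2003-04)": 2.5e9,
    "10 Gbps (2005)": 10e9,
    "40 Gbps (2007-08)": 40e9,
}

for label, bps in milestones.items():
    print(f"{label}: x {bps / BASELINE_BPS:,.0f}")

# 10 Gbps / 9.6 kbps ~ 1.04 million -> the "factor of ~1M" over 1985-2005
# 10 Gbps / 2 Mbps (mid-1990s link) ~ 5,000 -> the "~5k" during 1995-2005
```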
  • Slide 8
  • History of Bandwidth Usage: one large network, one large research site. SLAC traffic is ~400 Mbps, growing in steps (at the ESnet limit) of ~10x per 4 years; projected ~2 Terabits/s by ~2014. ESnet accepted traffic, 1/90 to 1/04: exponential growth since '92, with the annual rate increasing from 1.7x to 2.0x per year in the last 5 years. (See the extrapolation sketch below.)
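
A minimal sketch of the compound-growth arithmetic behind these trends. The ~400 Mbps starting point, the ~2 Tbps target and the two growth rates are from the slide; the time-to-target calculation is illustrative only:

```python
# How long does ~400 Mbps take to reach ~2 Tbps under the two quoted growth rates?
import math

start_gbps, target_gbps = 0.4, 2000.0        # ~400 Mbps today, ~2 Tbps target

for label, annual_factor in [("~10x per 4 years", 10 ** (1 / 4)),
                             ("~2x per year",     2.0)]:
    years = math.log(target_gbps / start_gbps) / math.log(annual_factor)
    print(f"{label}: ~{years:.0f} years")     # roughly 12-15 years either way
```

Either rate puts multi-Terabit/s traffic within roughly the next decade, consistent with the order of magnitude of the slide's projection.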
  • Slide 9
  • Internet Growth in the World at Large: Amsterdam Internet Exchange Point example (traffic chart dated 11.08.04: ~20 Gbps average, ~30 Gbps 5-minute maximum). Some annual growth spurts, typically in summer-fall. The rate of HENP network usage growth (~100% per year) is similar to the world at large.
  • Slide 10
  • Internet2 Land Speed Record (LSR): judged on the product of transfer speed and distance end-to-end, using standard (TCP/IP) protocols, across the production network (Abilene). IPv6 record: 4.0 Gbps between Geneva and Phoenix (SC2003). IPv4 multi-stream record with Windows and Linux: 6.6 Gbps between Caltech and CERN (16 kkm; "Grand Tour d'Abilene"), June 2004, exceeding 100 Petabit-m/sec. Single stream: 7.5 Gbps x 16 kkm with Linux, achieved in July. The concentration now is on reliable Terabyte-scale file transfers; note the system issues: CPU, PCI-X bus, NIC, I/O controllers, drivers. LSR history in Petabit-meters (10^15 bit*meter), IPv4 single stream; monitoring of the Abilene traffic in LA. http://www.guinnessworldrecords.com/ (The record metric is worked out in the snippet below.)
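
To make the speed-times-distance metric concrete: the 6.6 Gbps, 7.5 Gbps and ~16,000 km figures are from the slide, while the helper function is just illustrative arithmetic:

```python
# The Internet2 Land Speed Record metric: throughput x end-to-end distance,
# expressed in Petabit-meters per second (10^15 bit*m/s).

def petabit_meters_per_sec(gbps: float, km: float) -> float:
    return gbps * 1e9 * km * 1e3 / 1e15

# June 2004 IPv4 multi-stream record: 6.6 Gbps over ~16,000 km Caltech-CERN
print(petabit_meters_per_sec(6.6, 16_000))   # ~105.6, i.e. >100 Petabit-m/s

# July 2004 single-stream result: 7.5 Gbps over the same ~16,000 km path
print(petabit_meters_per_sec(7.5, 16_000))   # ~120 Petabit-m/s
```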
  • Slide 11
  • HENP Bandwidth Roadmap for Major Links (in Gbps). Continuing trend: ~1000 times bandwidth growth per decade, keeping pace with network bandwidth usage (ESnet, SURFnet, etc.); see the one-line check below.
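
For reference, the "~1000 times per decade" trend is equivalent to roughly a doubling each year, matching the ESnet growth rate quoted two slides earlier (illustrative arithmetic only):

```python
# ~1000x per decade expressed as an equivalent annual growth factor.
annual = 1000 ** (1 / 10)
print(f"{annual:.2f}x per year")   # ~2.00x, i.e. roughly a doubling every year
```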
  • Slide 12
  • Evolving Quantitative Science Requirements for Networks (DOE High-Performance Network Workshop). End-to-end throughput today / in 5 years / in 5-10 years, with remarks:
    High Energy Physics: 0.5 Gb/s / 100 Gb/s / 1000 Gb/s (high bulk throughput)
    Climate (data & computation): 0.5 Gb/s / 160-200 Gb/s / N x 1000 Gb/s (high bulk throughput)
    SNS NanoScience: not yet started / 1 Gb/s / 1000 Gb/s + QoS for control channel (remote control and time-critical throughput)
    Fusion Energy: 0.066 Gb/s (500 MB/min burst) / 0.198 Gb/s (500 MB per 20 sec burst) / N x 1000 Gb/s (time-critical throughput)
    Astrophysics: 0.013 Gb/s (1 TByte/week) / N*N multicast / 1000 Gb/s (computational steering and collaborations)
    Genomics (data & computation): 0.091 Gb/s (1 TByte/day) / 100s of users / 1000 Gb/s + QoS for control channel (high throughput and steering)
    (The quoted rates and data volumes are cross-checked below.)
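
The per-row rates and the parenthetical data volumes in the table are consistent, as a quick unit conversion shows (illustrative arithmetic only):

```python
# Check the table's throughput figures against the quoted data volumes.

def gbps(bytes_, seconds):
    """Average rate in Gb/s for bytes_ transferred over seconds."""
    return bytes_ * 8 / seconds / 1e9

TB, MB = 1e12, 1e6
print(f"1 TByte/week  -> {gbps(1 * TB, 7 * 86400):.3f} Gb/s")  # ~0.013 (Astrophysics)
print(f"1 TByte/day   -> {gbps(1 * TB, 86400):.3f} Gb/s")      # ~0.093 (Genomics, quoted 0.091)
print(f"500 MB/minute -> {gbps(500 * MB, 60):.3f} Gb/s")       # ~0.067 (Fusion today, quoted 0.066)
print(f"500 MB/20 sec -> {gbps(500 * MB, 20):.3f} Gb/s")       # ~0.200 (Fusion, 5 years, quoted 0.198)
```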
  • Slide 13
  • National Lambda Rail (NLR): 15808 terminal, regen or OADM sites along the fiber routes (the map shows node cities across 18 US states, with international connections to nl, ca, pl, cz, uk, ko, jp). NLR is coming up now: initially 4 x 10G wavelengths, with the northern route LA-JAX by 4Q04, growing to 40 x 10G waves in the future; it hosts the Internet2 HOPI initiative (with HEP). The transition is beginning now to optical, multi-wavelength, community-owned or leased dark-fiber networks for R&E.
  • Slide 14
  • JGN2: Japan Gigabit Network (4/04 to 3/08). 20 Gbps backbone with 6 optical cross-connects; 20 Gbps, 10 Gbps and 1 Gbps links, optical testbeds, core network nodes and access points across Japan, plus a link to the USA. The map lists several dozen access points, from Sapporo and Sendai through Tokyo (Otemachi), Nagoya, Osaka, Okayama, Hiroshima and Fukuoka to Okinawa, at universities, prefectural centers and NICT facilities (including the NICT Koganei headquarters and the Keihanna, Kita Kyushu, Tsukuba, Kashima, Kansai and Iwate laboratories). Legend: IX = Internet eXchange, AP = Access Point.
  • Slide 15
  • GLIF: Global Lambda Integrated Facility. GLIF is a world-scale lambda-based lab for application and middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths, coexisting with more traditional packet-switched network traffic. 4th GLIF Workshop: Nottingham, UK, September 2004. 10 Gbps wavelengths for R&E network development.
