
Page 1: Summary of the Track on Computing Facilities and Networking

Sverre Jarp, CERN
Simon Lin, ASGC
Les Robertson, CERN

Page 2: Storage

Page 3: Storage for the LHC Experiments

• Report of a joint LCG/HEPiX task force set up to look from a Tier-1 perspective at the LHC experiment computing models, examining the requirements in terms of data volumes, access patterns and security for the various classes of data, and trying to map these on to suitable technologies.

• Recommendations include:
– Simple disk solutions fulfill the performance and reliability requirements and are the most cost-effective; there is no clear experience to support the reliability and management arguments for more expensive solutions.
– While disk archives should be actively investigated, there are many unresolved questions and it is too soon to plan the replacement of tape as the archive medium.
– The report also gives a technology summary, cost estimates (no surprises) and various pieces of advice on purchasing.
– A draft report is available via the LCG GDB page.

Page 4: Using Tivoli Storage Manager with dCache (Doris Ressmann, FZK)

• A tape storage manager matched to the scheduling strategy of dCache
• A concept of “storage agents” replicates the functions of the TSM server
• Improves throughput while optimising tape drive usage
• Goal: using Tivoli Storage Manager to create a high-performance tape connection at FZK

Page 5: dCache – the next Upgrade

Page 6: Chimera – a new, fast, extensible and Grid-enabled namespace

• Development of a filename service independent of the storage system:
– capable of handling a large number of different storage locations and systems
– separates the filesystem view from the metadata
– provides pluggable authentication
– supports an extendable set of front ends (illustrated in the sketch below)
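
Chimera itself is part of dCache and is written in Java; the following is only a minimal sketch, in Python, of the general idea of a storage-independent namespace with pluggable authentication and interchangeable front ends. All class and method names are invented for illustration and are not Chimera's API.

from abc import ABC, abstractmethod

class AuthPlugin(ABC):
    """Pluggable authentication/authorization hook."""
    @abstractmethod
    def authorize(self, subject: str, operation: str, path: str) -> bool: ...

class AllowAll(AuthPlugin):
    def authorize(self, subject, operation, path):
        return True

class Namespace:
    """Maps logical names to metadata and storage locations;
    knows nothing about the storage systems themselves."""
    def __init__(self, auth: AuthPlugin):
        self._auth = auth
        self._meta = {}       # path -> metadata dict
        self._locations = {}  # path -> list of storage locations

    def _check(self, subject, operation, path):
        if not self._auth.authorize(subject, operation, path):
            raise PermissionError(f"{subject} may not {operation} {path}")

    def create(self, subject, path):
        self._check(subject, "create", path)
        self._meta[path], self._locations[path] = {}, []

    def add_location(self, subject, path, location):
        self._check(subject, "update", path)
        self._locations[path].append(location)  # many locations/systems per entry

    def stat(self, subject, path):
        self._check(subject, "read", path)
        return self._meta[path], self._locations[path]

# A "front end" is just another interface onto the same core, e.g. an
# NFS-like view or an admin shell; here a trivial command-style one.
class CommandFrontEnd:
    def __init__(self, ns):
        self._ns = ns
    def run(self, subject, line):
        cmd, *args = line.split()
        return getattr(self._ns, cmd)(subject, *args)

if __name__ == "__main__":
    ns = Namespace(AllowAll())
    CommandFrontEnd(ns).run("alice", "create /pnfs/example/file1")
    ns.add_location("alice", "/pnfs/example/file1", "pool-A")
    print(ns.stat("alice", "/pnfs/example/file1"))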

Page 7: gPLAZMA: Introducing RBAC Security in dCache

• A scheme for mapping roles in a VO to privilege attributes in dCache (e.g. for the SE and CE); a minimal illustration follows below
• Status: in production
• Broad deployment before SC4
• Hope that other grid efforts (LCG/EGEE) will adopt GUMS
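
As a purely illustrative sketch of role-based mapping (not gPLAZMA's actual configuration or API), the fragment below maps a VOMS-style role to local privilege attributes; all FQANs, account names and attributes are invented.

# Hypothetical illustration of mapping VO roles to privilege attributes.
# Real gPLAZMA/GUMS configuration is not shown here.
ROLE_MAP = {
    # VOMS FQAN (VO + role)       -> local privilege attributes
    "/cms/Role=production": {"account": "cmsprd", "read": True, "write": True},
    "/cms/Role=NULL":       {"account": "cms001", "read": True, "write": False},
}

def authorize(fqan: str, operation: str) -> str:
    """Return the local account to use, or raise if the role lacks the privilege."""
    attrs = ROLE_MAP.get(fqan)
    if attrs is None:
        raise PermissionError(f"no mapping for {fqan}")
    if not attrs.get(operation, False):
        raise PermissionError(f"{fqan} is not allowed to {operation}")
    return attrs["account"]

if __name__ == "__main__":
    print(authorize("/cms/Role=production", "write"))  # -> cmsprd
    print(authorize("/cms/Role=NULL", "read"))         # -> cms001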

Page 8: Managing small files in Mass Storage systems using Virtual Volumes at PIC

• PIC has successfully deployed a combination of common O/S tools (mkisofs and amd) to handle large numbers of small files: the files are packed into large ISO 9660 “container” files, which are then handled through PIC's Castor MSS (see the sketch below)
• In production for the Parc Taulí Hospital and MAGIC
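
A minimal sketch of the container step, assuming mkisofs is installed; the paths are hypothetical and the Castor migration and amd loop-mount side are not shown.

# Pack a directory of many small files into one large ISO 9660 image,
# so that only the container file needs to be stored in the MSS.
import subprocess
from pathlib import Path

def pack_directory(src_dir: str, iso_path: str) -> None:
    """Create a single ISO 9660 container from all files under src_dir."""
    Path(iso_path).parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(["mkisofs", "-R", "-o", iso_path, src_dir], check=True)  # -R: Rock Ridge

if __name__ == "__main__":
    pack_directory("/data/magic/run_000123", "/castor-staging/run_000123.iso")
    # The ISO would then be migrated to Castor; on access, amd can loop-mount
    # the image so the small files reappear as a read-only directory tree.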

Page 9: Networking

Page 10: Networks for ATLAS Trigger and Data Acquisition

• Approximately 3000 end-nodes at CERN
• Based entirely on Ethernet technology
• Resilience scenarios were studied
• Different trade-offs apply to the control network and to the front-end and back-end data networks
• Interchangeable processing power was introduced

[Figure: ATLAS trigger/DAQ dataflow. ATLAS detector events are ~1.5 Mbyte; the rate falls from ~40 MHz (60 Tbyte/s) at the detector, to ~100 kHz (150 Gbyte/s) after the Level-1 hardware trigger, to ~3.5 kHz (5.25 Gbyte/s) after Level 2, to ~200 Hz (300 Mbyte/s) into mass storage. Components shown: ~1600 ROBs and ~150 ROS PCs behind the front-end network; ~550 L2PUs and supervisors (SVs); ~100 SFIs; ~1600 EFPs and the SFOs behind the back-end network.]
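
The bandwidth figures in the diagram are simply the event size multiplied by the event rate at each trigger level; a quick check in decimal (SI) units:

# Bandwidth = event size x event rate at each trigger level (SI units, as on the slide).
EVENT_SIZE_BYTES = 1.5e6  # ~1.5 Mbyte per ATLAS event
stages = {
    "detector (bunch crossings)": 40e6,   # ~40 MHz
    "after Level-1 trigger":      100e3,  # ~100 kHz
    "after Level-2 trigger":      3.5e3,  # ~3.5 kHz
    "to mass storage":            200.0,  # ~200 Hz
}
for name, rate_hz in stages.items():
    gbyte_per_s = EVENT_SIZE_BYTES * rate_hz / 1e9
    print(f"{name:28s} {gbyte_per_s:10.2f} Gbyte/s")
# -> 60000 (i.e. 60 Tbyte/s), 150, 5.25 and 0.30 Gbyte/s (300 Mbyte/s)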

Page 11: World throughput seen from the US

• Years behind Europe: Russia and Latin America ~6 years; the Middle East and South-East Asia ~7 years; South Asia ~10 years; Central Asia ~11 years; Africa ~12 years
• South Asia, Central Asia and Africa are in danger of falling even farther behind

Page 12: The UltraLight Project (two talks)

• UltraLight is:
– a four-year, $2M NSF ITR funded by MPS
– application-driven network R&D
• Two primary, synergistic activities:
– Network “backbone”: perform network R&D / engineering
– Applications “driver”: system services R&D / engineering
• Ultimate goal: enable physics analysis and discoveries which could not otherwise be achieved

Page 13: Lambda Station: Production Applications Exploiting Advanced Networks in Data Intensive High Energy Physics

• Function:
– schedule the use of one or more reservable network paths
– arrange for traffic to be forwarded onto such paths

Page 14: TeraPaths: A QoS-Enabled Collaborative Data Sharing Infrastructure for Peta-scale Computing Research

• How to provide predictable, reliable petascale data movement
• Demonstrates prioritized vs. best-effort traffic
• Integrated with web services for control
• Based at BNL

Page 15: Performance Analysis of Linux Networking

• Fermilab analysis of the packet receive process in the Linux 2.6 kernel
• Several potential bottlenecks identified, including the switching between kernel and user space
• A mathematical model was developed to aid the analysis (a generic toy version of such a model is sketched below)
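
The actual model from the talk is not reproduced here; the sketch below is only a generic illustration of the kind of analytic model one can use, treating the NIC receive ring as an M/M/1/K queue and computing the drop probability as the offered load approaches the rate at which the kernel can drain the ring. All rates and sizes are invented.

# Generic illustration (not the model from the talk): a finite receive ring
# modelled as an M/M/1/K queue; packets arriving when the ring is full are dropped.
def drop_probability(arrival_rate: float, service_rate: float, ring_size: int) -> float:
    """Blocking probability P(K) of an M/M/1/K queue."""
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (ring_size + 1)
    return (1 - rho) * rho**ring_size / (1 - rho**(ring_size + 1))

if __name__ == "__main__":
    SERVICE = 500_000   # packets/s the kernel can drain (illustrative)
    RING = 256          # descriptors in the receive ring (illustrative)
    for offered in (300_000, 450_000, 490_000, 510_000, 600_000):
        p = drop_probability(offered, SERVICE, RING)
        print(f"offered {offered:>7} pkt/s -> drop probability {p:.3e}")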

Page 16: SLAC: Using Netflow data for forecasting (patterns & profiling)

• Collect Netflow records for several weeks
• Filter on 40 major collaborator sites, big (> 100 KByte) flows, and bulk-transport applications/ports (bbcp, bbftp, iperf, thrulay, scp, ftp)
• Divide by remote site and aggregate parallel streams
• Fold the data onto one week, revealing bands at known capacities and RTTs (a minimal sketch of the folding step follows below)
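
Minimal sketch of the "fold onto one week" step: each flow record is reduced to its day-of-week and hour, and throughputs are aggregated per bin, so that several weeks of data overlay into a single weekly profile. The record format is invented for illustration; real Netflow processing, site filtering and parallel-stream aggregation are not shown.

from collections import defaultdict
from datetime import datetime

# (timestamp, remote_site, bytes, duration_s) -- hypothetical pre-filtered flow records
flows = [
    ("2006-01-09 14:03:00", "caltech", 2_400_000_000, 120.0),
    ("2006-01-16 14:10:00", "caltech", 2_100_000_000, 110.0),
    ("2006-01-11 02:30:00", "in2p3",   900_000_000,    60.0),
]

profile = defaultdict(list)  # (site, weekday, hour) -> list of Mbit/s samples
for ts, site, nbytes, duration in flows:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    mbps = nbytes * 8 / duration / 1e6
    profile[(site, t.weekday(), t.hour)].append(mbps)

for (site, weekday, hour), samples in sorted(profile.items()):
    avg = sum(samples) / len(samples)
    print(f"{site:8s} weekday={weekday} hour={hour:02d}  mean={avg:7.1f} Mbit/s  n={len(samples)}")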

Page 17: General Talks

Page 18: Development of the Tier-1 Facility at Fermilab

• Facility services for Grid Interfaces, Processing/Storage/Networking

Page 19: Real Time Monitor

• The Real Time Monitor has developed from a demo into a tool that shows real-time usage of the LCG by directly querying the Resource Brokers
• Queries ~30 Resource Brokers
• Used by the portal to determine job statuses
• Provides daily summary reports (including per VO)
• Further development will provide real-time triggers for problematic behaviour
• Real-time XML files are publicly available

Page 20: Benchmarking AMD64 and EM64T

• Conclusions:
– Today's 32-bit applications run well on 64-bit systems (allowing painless transitions)
– The 64-bit architecture promises a big increase in computing power
– Dual-core processors provide almost 2x the computing power of single-core processors
– Optimal move: 64-bit AND dual-core

Page 21: VINCI: Virtual Intelligent Networks for Computing Infrastructure

[Diagram: VINCI architecture. End-user agents and applications sit on top of services for topology discovery; scheduling and dynamic path allocation; control-path provisioning; failure detection; authentication, authorization and accounting; learning and prediction; and system evaluation and optimization, all supported by monitoring and the underlying network control interfaces (GMPLS, MPLS, OS, SNMP).]

Page 22: apeNEXT: Experiences from Initial Operation

• Custom-designed “System on Chip”
• Optimised for complex double-precision floating-point operations: 8 Flops/cycle
• Bi-directional interconnects
• Host system: master and slave PCs
• Special dedicated compiler
• Installed at DESY/Zeuthen, INFN and Bielefeld U.

Page 23: Other Talks

System Management & Operation
• DNS load balancing and failover mechanism at CERN
• Cluster architecture for Java web hosting at CERN
• Embedding Quattor into the Fabric Management Infrastructure at DESY
• The DESY-Registry: account management for many backend systems

Storage
• Experience with ENSTORE at Fermilab

Networking
• Network Information and Monitoring Infrastructure at Fermilab

Other topics
• Summary of the conclusions from Phase 1 of CERN's openlab, and plans for the next phase – platform competence centre, Grid interoperability, virtualization
• High End Visualization with BARC's Scalable Display System
• DESY: Introduction of a Content Management System in a HEP environment