
UKI-SouthGrid Overview
GridPP27

Pete Gronbech
SouthGrid Technical Coordinator

CERN, September 2011


UK Tier 2 reported CPU

– Historical View to present


SouthGrid Sites: Accounting as reported by APEL


Resources vs GridPP3 hardware-generated MoU for 2011 and 2012

Disk (TB):
Site      2011 TB   2012 TB
bham         95       124
bris         27        35
cam         135       174
ox          255       328
RALPPD      440       573
Total       952      1244

CPU (HS06):
Site      2011 HS06   2012 HS06
bham         2119        2724
bris         1173        1429
cam          1445        1738
ox           2483        2974
RALPPD      13109       16515

Current capacity by site:

Site         HEPSPEC06   Storage (TB)
EDFA-JET         1381        10.5
Birmingham       3345         195
Bristol          2187         110
Cambridge        2445         253
Oxford           7322         620
RALPPD          19655         980
Totals          36335        2168
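As a quick cross-check of the two tables above, a short script along these lines (purely illustrative; the numbers are simply transcribed from the tables, and EDFA-JET is omitted because it has no MoU entry) flags whether each site meets its 2011 MoU figures:

# Illustrative cross-check of delivered capacity against the 2011 MoU values.
# All numbers are copied from the tables above; EDFA-JET has no MoU entry.
mou_2011 = {            # site: (HS06, TB)
    "bham":   (2119, 95),
    "bris":   (1173, 27),
    "cam":    (1445, 135),
    "ox":     (2483, 255),
    "RALPPD": (13109, 440),
}
delivered = {           # site: (HS06, TB), from the capacity table
    "bham":   (3345, 195),
    "bris":   (2187, 110),
    "cam":    (2445, 253),
    "ox":     (7322, 620),
    "RALPPD": (19655, 980),
}
for site in sorted(mou_2011):
    mou_hs06, mou_tb = mou_2011[site]
    hs06, tb = delivered[site]
    print("%-7s HS06 %5d vs MoU %5d (%s)   disk %4d TB vs MoU %4d TB (%s)" % (
        site, hs06, mou_hs06, "ok" if hs06 >= mou_hs06 else "short",
        tb, mou_tb, "ok" if tb >= mou_tb else "short"))

On these figures every site is already above its 2011 MoU commitment.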


Middleware status by site:

CREAM CE: OX: gLite 3.2 CREAM CE. Bris: CREAM CE. Cam: CREAM CE. Bham: 2 CREAM CEs. RalPP: 1 gLite 3.2 CREAM CE + 2 UMD CREAM CEs.

LCG-CE: OX: gone. Bris: gone. Cam: one, driving Condor. Bham: still in production. RalPP: decommissioned two weeks ago; availability reduced due to the ANDing of the LCG-CE with CREAM while in draining mode.

glexec: OX: yes. Bris: no. Cam: no. Bham: installed, failing some tests. RalPP: yes.

ARGUS: OX: yes. Bris: no. Cam: EGI ARGUS. Bham: installed but some issues. RalPP: yes.

CVMFS: OX: installed, working for LHCb, waiting for ATLAS. Bris: no. Cam: no. Bham: not yet; new hardware here to help reorganise the service nodes. RalPP: deployed, not used yet.


JET

• Since the last meeting the site has been less well utilised, partly due to downtime associated with upgrades.


• Essentially a pure CPU site
  – 1772 HEPSPEC06
  – 10.5 TB of storage

• All service nodes have been upgraded to gLite 3.2, with CREAM CEs. The SE is now 10.5 TB.

• The aim is to enable the site for ATLAS production work, but the ATLAS software will be easier to manage if we set up CVMFS.

• Oxford will help JET do this.
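As a rough illustration of what the CVMFS part involves (a generic sketch, not JET's actual procedure; the squid proxy host and cache figures are placeholders for site-specific values), the client side mostly comes down to installing the cvmfs packages and writing a small configuration file:

# Illustrative only: write a minimal CVMFS client configuration for the
# ATLAS repositories.  CVMFS_REPOSITORIES, CVMFS_HTTP_PROXY, CVMFS_QUOTA_LIMIT
# and CVMFS_CACHE_BASE are standard cvmfs client settings; the proxy host
# and cache values below are placeholders, not JET's real ones.
config = """\
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
CVMFS_HTTP_PROXY="http://squid.example.ac.uk:3128"
CVMFS_QUOTA_LIMIT=20000
CVMFS_CACHE_BASE=/var/cache/cvmfs2
"""
with open("/etc/cvmfs/default.local", "w") as f:
    f.write(config)

The remaining steps (mounting /cvmfs/atlas.cern.ch on the worker nodes and pointing the ATLAS job environment at it) are the sort of thing the Oxford help would cover.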


Birmingham Tier 2 Site

Not much has changed since the last meeting! Our hardware is still:

● 24 8-core machines (192 job slots) @ 9.61 HEP-SPEC06 (local)
● 48 4-core machines (192 job slots) @ 7.93 HEP-SPEC06 (shared)
● 177.35 TB of DPM storage across 4 pool nodes
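As a rough sanity check (assuming the HEP-SPEC06 figures above are per core, as quoted at the other sites), the slot counts multiply out close to the 3345 HS06 reported for Birmingham in the capacity table:

# Rough cross-check of Birmingham's slot counts against the reported HS06,
# assuming the HEP-SPEC06 figures quoted above are per core.
local_hs06 = 24 * 8 * 9.61    # 24 eight-core machines
shared_hs06 = 48 * 4 * 7.93   # 48 four-core machines
print("local %.1f + shared %.1f = %.1f HS06" %
      (local_hs06, shared_hs06, local_hs06 + shared_hs06))
# local 1845.1 + shared 1522.6 = 3367.7 HS06, close to the 3345 in the table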

As for service nodes, we have:

● 4 CEs (2 CREAM and 2 LCG), serving the two clusters
● CREAM CE for the local cluster also runs Torque
● 2 ALICE VO boxes, 1 for each cluster
● An ARGUS server for the local cluster
● Usual BDII, APEL and DPM MySQL server nodes

All of these are running gLite 3.2 on SL5, with the exception of the LCG CEs.

The main change from last time is that we have deployed glexec on the local cluster; we are still waiting on a tarball install for the shared cluster.

We have just taken delivery of 2 new 8-core systems to replace the 4 quad-core service machines. Our future plans include:

● Decommission the LCG CEs
● Consolidate service nodes onto the new machines
● Split the Torque server and the CREAM CE
● Deploy CVMFS
● Turn the older service machines into workers (maybe!)

Hopefully most of this can be done in one go in the next month or so!


Bristol

Status
● StoRM SE with GPFS, 102 TB "almost completely" full of CMS data.
● Currently running StoRM 1.3 on SL4; the plan is to upgrade as soon as there is a stable new release (so far 1.6 and 1.7 have not been).
● Bristol has two clusters, both controlled by Physics. Neither of the university HPC clusters is currently being used.
● A new Dell VM hosting node has been bought to run service VMs on, with help from Oxford.

Recent changes
● New CREAM CEs front each cluster, one gLite 3.2 and one using the new UMD release (installed by Kashif).
● glexec and ARGUS have not yet been installed.


Cambridge

• Status
  – CPU: 246 job slots, 2445 HS06
  – Storage: 201 TB [SI] online, plus 38 TB exclusively used by Camont
• Most services are gLite 3.2; the exceptions are the DPM head node and the LCG-CE for the Condor cluster.
• DPM v1.8.0 on the DPM disk servers, SL5
• XFS file system for the storage
• Batch systems: Condor 7.4.4, Torque 2.3.13
• Supported VOs: mainly ATLAS, LHCb and Camont
• Recent changes
  – CREAM CE with PBS installed
  – Also working on CREAM-Condor in parallel
• APEL issues
  – Problems with the existing APEL implementation for Condor
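One possible stop-gap while the Condor accounting problem is worked on (a sketch only, not the eventual fix; it relies on condor_history -l and on standard ClassAd attributes such as RemoteWallClockTime and RemoteUserCpu, which should be verified against the local Condor 7.4.4) is to aggregate per-job usage directly from the Condor history:

# Sketch: pull per-job usage out of condor_history as raw material for
# APEL-style accounting.  Attribute names are standard Condor ClassAds but
# should be checked against the local installation before relying on this.
import subprocess
from collections import defaultdict

out = subprocess.Popen(["condor_history", "-l"],
                       stdout=subprocess.PIPE).communicate()[0].decode()

jobs, current = [], {}
for line in out.splitlines():
    line = line.strip()
    if not line:                      # blank line separates job ClassAds
        if current:
            jobs.append(current)
            current = {}
        continue
    key, sep, value = line.partition("=")
    if sep:
        current[key.strip()] = value.strip().strip('"')
if current:
    jobs.append(current)

wall = defaultdict(float)
cpu = defaultdict(float)
for job in jobs:
    user = job.get("Owner", "unknown")
    wall[user] += float(job.get("RemoteWallClockTime", 0))
    cpu[user] += float(job.get("RemoteUserCpu", 0))

for user in sorted(wall):
    print("%-16s wall %12.0f s   cpu %12.0f s" % (user, wall[user], cpu[user]))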


RALPP

• 2056 CPU cores, 19655 HS06
• 980 TB disk
• We now run purely CREAM CEs: 1 x gLite 3.2 on a VM (soon to be retired) and 2 x UMD (though at the time of writing one doesn't seem to be publishing properly).
• Lately there have been a lot of problems with CE stability, as per discussions on the various mailing lists.
• The batch system is still Torque from gLite 3.1, but we will soon bring up an EMI/UMD Torque to replace it (currently installed for test).
• The SE is dCache 1.9.5; planning to upgrade to 1.9.12 in the near future.
• The site has been very busy over recent months.


Oxford

• Oxford's workload is dominated by ATLAS analysis and production


• Installed kit
  – The Autumn 2010 upgrade added 256 cores based on dual 8-core AMD Opterons.
  – These have dual disks striped with software RAID to improve I/O.
  – Three new 36-bay disk servers took storage up to 290 TB to meet MoU requirements.

• Recent upgrades
  – Using departmental money: 14 Dell R510 disk servers, faster and in smaller chunks, with 10 Gbit networking.
  – Some Dell 6100 WNs installed.
  – Two 10G network switches and new gigabit switches for the cluster.
  – We are in talks with University networking with the aim of converting our link from the computer centre to 10 Gbit. The current plan is to use QoS to allow us to use idle bandwidth depending on usage. The dual 10 Gbit campus JANET link is currently running at ~3 Gbit in and ~1 Gbit out, so there is spare capacity available.


Other Oxford Work

• CMS Tier 3
  – Supported by RALPPD's PhEDEx server
  – Useful for CMS, and for us, keeping the site busy in quiet times
  – However, it can block ATLAS jobs, which is not so desirable during the accounting period

• ALICE support
  – There is a need to supplement the support given to ALICE by Birmingham.
  – It made sense to keep this in SouthGrid, so Oxford has deployed an ALICE VO box.
  – The site is being configured by Kashif in conjunction with ALICE support.

• UK regional monitoring
  – Kashif runs the Nagios-based WLCG monitoring on the servers at Oxford.
  – These include the Nagios server itself and its support nodes: SE, MyProxy and WMS/LB.
  – The WMS is an addition to help the UK NGS migrate their testing.
  – There are very regular software updates for the WLCG Nagios monitoring, ~6 so far this year.

• Early adopters
  – We take part in the testing of CREAM, ARGUS and torque_utils, and have accepted and provided a report for every new version of CREAM this year.

• SouthGrid support
  – Providing support for Bristol
  – Landslides support at Oxford and Bristol
  – Helping bring Sussex onto the Grid (though we have been too busy in recent months)


Sussex

• Sussex has a significant local ATLAS group; their system is designed for the high I/O bandwidth patterns that ATLAS analysis can generate.

• Up and running as a Tier 3, with the Feynman sub-cluster for Particle Physics and the Apollo sub-cluster used by the rest of the University.

• Feynman: 8 nodes, each with 2 Intel Xeon X5650 @ 2.67 GHz measured at ~15.67 HepSpec06 per core, for a total of 96 cores; 48 GB RAM per node. Apollo currently has 38 nodes totalling 464 cores. The plan is to merge the 2 sub-clusters in the next 6 months.

• 81 TB of Lustre storage shared by both sub-clusters. Everything is fully interconnected with InfiniBand. The cluster is Dell hardware, using three R510 disk servers, each with two external disk shelves (each with its own RAID controller).

• CVMFS is installed and working, and is being used by the ATLAS group at Sussex.

• In the process of installing and configuring grid services to become a Tier 2 site (UKI-SOUTHGRID-SUSX) for SouthGrid. We have registered the service nodes and obtained grid certificates for them. 4 machines are set up ready for BDII, CreamCE, APEL and SE.

• BDII and APEL are done; working on the CE and SE. Hoping to be fully up and running within 2 months.
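As an example of the kind of check that is useful while these services are being brought up (illustrative only: the hostname is a placeholder, and port 2170 with base "o=grid" are the usual gLite BDII defaults, to be confirmed against the local setup), an anonymous LDAP query shows what the BDII is publishing:

# Illustrative check of what a BDII publishes.  The host is a placeholder;
# port 2170 and the "o=grid" base are the usual gLite defaults and should be
# confirmed against the local configuration.
import subprocess

bdii_host = "bdii.example.susx.ac.uk"   # placeholder, not the real node name
cmd = ["ldapsearch", "-x", "-LLL",
       "-H", "ldap://%s:2170" % bdii_host,
       "-b", "o=grid",
       "(objectClass=GlueService)",
       "GlueServiceType", "GlueServiceEndpoint"]

out = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0].decode()
for line in out.splitlines():
    if line.startswith(("dn:", "GlueServiceType", "GlueServiceEndpoint")):
        print(line)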



Conclusions

• SouthGrid sites' utilisation is generally improving, but some sites are small compared with others.

• Birmingham is supporting ATLAS, ALICE and LHCb.

• Bristol: need to get the new version of StoRM working if they hope to be a CMS Tier 2 site.

• Cambridge: only partly using PBS, so APEL still reports low. The Condor part does not report correctly into APEL. Accounting metrics come direct from ATLAS, so this is less critical for that VO.

• JET could be enabled for ATLAS production as they now have enough disk, but ATLAS say they would prefer them to use CVMFS, so we have to help them do that.

• Oxford has been upgraded to be optimised for ATLAS analysis, and is involved in many other areas.

• RALPPD are at full strength, leading the way.

• Sussex: need some small effort/support to bring them online.