Middleware Status


  • Middleware Status
    Markus Schulz, Oliver Keeble, Flavia Donno
    Michael Grønager (ARC), Alain Roy (OSG)
    LCG-LHCC Mini Review, 1st July 2008; modified for the GDB


    Overview
    Discussion on the status of:
    gLite: status summary, CCRC and issues raised, future prospects
    OSG: Alain Roy
    ARC: Oxana Smirnova
    Storage and SRM: Flavia Donno


    gLite: Current Status
    Current middleware stack is gLite 3.1
    Available on SL4 32-bit; clients and selected services also available on 64-bit
    Represents ~15 services; only FTS is still awaited on SL4 (in certification)
    Updates are released every week; updates are sets of patches to components, so components can evolve independently
    Release process includes a full certification phase
    Includes DPM, dCache and FTS for storage management, LFC and AMGA for catalogues, WMS/LB and lcg-CE for workload, clients for WN and UI, BDII for the Information System, and various other services (e.g. VOMS)


    gLite patch statistics
    New service types are introduced as patches


    CCRC08 May Summary
    During CCRC we introduced new services, handled security issues, produced the regular stream of updates, and responded to CCRC-specific issues
    The software process operated as usual; no special treatment for CCRC
    Priorities were reviewed twice a week in the EMT
    4 updates to gLite 3.1 on 32-bit (18 patches)
    2 updates to gLite 3.1 on 64-bit (7 patches)
    1 update to gLite 3.0 on SL3 (1 patch)

    Normal Operation


    CCRC08 May Highlights
    CCRC demonstrated that the software process was functioning; it did not reveal any fundamental problems with the stack
    Various minor issues were promptly addressed
    Scalability is always a concern, but it didn't affect CCRC08
    The WMS/LB for gLite 3.1/SL4 was released
    An implementation of Job Priorities was released, but not picked up by many sites
    First gLite release of dCache 1.8 was made
    WN for x86_64 was released
    An improved lcg-CE had been released the week before, with significant performance/scalability improvements; it covers our resource access needs until the CREAM-CE reaches production

  • Middleware Baseline at start of CCRC
    gFAL/lcg_utils: released gLite 3.1 Update 20
    DPM 1.6.7-4: Patch #1752, Patch #1740, Patch #1738, Patch #1706, Patch #1671
    FTS (T1): released gLite 3.0 Update 41
    All other packages were assumed to be at the latest version
    This led to some confusion; an explicit list will have to be maintained from now on


    Outstanding middleware issues
    Scalability of lcg-CE: some improvements were made, but there are architectural limits; additional improvements are being tested at the moment; performance is acceptable for the next year
    Multiplatform support: work ongoing for full support of SL5 and Debian 4; SuSE 9 available with limited support
    Information system scalability: incremental improvements continue; with recent client improvements the system performs well; Glue 2 will allow major steps forward (see the query sketch below)
    Main storage issues: SRM 2.2 issues, consistency between SRM implementations, authorisation; covered in more detail in the storage part
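    To make the "information system" concrete: sites publish Glue records into a BDII, which is an OpenLDAP server that clients query. The sketch below shows such a query; the host name is a placeholder and the Glue 1.3 attribute names are quoted from memory, so treat the details as assumptions to verify against a real BDII.

```python
# Minimal sketch: query a BDII (Glue 1.3 over LDAP) for CE records.
# The host is a placeholder and the attribute names are assumptions
# based on the Glue 1.3 schema; verify against a real BDII.
from ldap3 import Server, Connection, SUBTREE

BDII_HOST = "bdii.example.org"  # placeholder, not a real endpoint

server = Server(BDII_HOST, port=2170)      # BDIIs listen on port 2170
conn = Connection(server, auto_bind=True)  # anonymous bind

conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    search_scope=SUBTREE,
    attributes=["GlueCEUniqueID", "GlueCEStateWaitingJobs"],
)

for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEStateWaitingJobs)
```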


    Forthcoming attractions
    CREAM CE: in the last steps of certification (weeks); addresses scalability problems; standards based, which eases interoperability; allows parameter passing; no GRAM interface; redundant deployment may be possible within a year
    WMS/ICE: scalability tests are starting now; ICE is required on the WMS to submit to CREAM
    glexec: entering PPS after some security fixes; to be deployed on WNs to allow secure multi-user pilot jobs; requires SCAS for use on large sites (see the sketch below)
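    As a rough illustration of the glexec model above: the pilot process points glexec at the payload owner's delegated proxy and asks it to run the payload under the locally mapped account. This is only a sketch; the glexec path and the GLEXEC_CLIENT_CERT variable reflect common gLite deployments but should be treated as assumptions, and the payload script name is invented.

```python
# Sketch of how a pilot job might hand a user payload to glexec.
# The glexec path, environment variable and payload script are
# assumptions for illustration; consult the site's glexec setup.
import os
import subprocess

GLEXEC = "/opt/glite/sbin/glexec"           # assumed install path
USER_PROXY = "/tmp/payload_user_proxy.pem"  # proxy delegated by the payload owner

env = dict(os.environ)
env["GLEXEC_CLIENT_CERT"] = USER_PROXY      # identity glexec should switch to

# glexec (with SCAS/LCAS behind it) maps the proxy to a local account
# and runs the payload under that account instead of the pilot's.
result = subprocess.run([GLEXEC, "/bin/sh", "-c", "./run_payload.sh"], env=env)
print("payload exit status:", result.returncode)
```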


    Forthcoming attractions
    SCAS: under development, stress tests started at NIKHEF; can be expected to reach production in a few months; together with glexec it ensures site-wide user ID synchronization; this is a critical new service and requires deep certification
    FTS: new version in certification, with SL4 support, handling of space tokens, and error classification
    Within 6 months: split of SRM negotiations and gridFTP for improved throughput; improved logging (syslog target, streamlined format for correlation with other logs); full VOMS support; short-term changes for the Addendum to the SRM v2.2 WLCG Usage Agreement; Python client libraries


    Forthcoming attractions
    Monitoring (Nagios / ActiveMQ): has an impact on middleware and operations, but is not middleware
    Distribution of middleware clients: cooperation with the Application Area; currently via tarballs within the AA repository; active distribution of concurrent versions; the technology for this is (almost) ready; large-scale tests have to be run and policies defined
    SL5/VDT 1.10 release: execution of the plan has started; WN (clients) expected to be ready by end of September
    Glue2: standardization within OGF is in the final stages; not backward compatible, but much improved; work on an implementation and phase-in plan has started


    OSG: slides provided by Alain Roy

    OSG Software

    OSG Facility

    CCRC and last 6 months

    Interoperability and Interoperation

    Issues and next steps

  • OSG for the LHCC Mini Review
    Created by Alain Roy
    Edited and reviewed by others (including Markus Schulz)


    OSG Software
    Goal: provide software needed by VOs within the OSG, and by other users of the VDT, including EGEE and LCG
    Work is mainly software integration and packaging; we do not do software development
    Work closely with external software providers
    OSG Software is a component of the OSG Facility, and works closely with the other facility groups


    OSG Facility
    Engagement: reach out to new users
    Integration: quality assurance of OSG releases
    Operations: day-to-day running of the OSG Facility
    Security: operational security
    Software: software integration and packaging
    Troubleshooting: debug the hard end-to-end problems



    During CCRC & Last Six Months
    Responded to ATLAS/CMS/LCG/EGEE update requests and fixes
    MyProxy fixed for LCG (threading problem), in certification now
    Globus proxy chain length problem fixed for LCG, in certification now
    Fixed security problem (rpath) reported by LCG
    OSG 1.0 released, a major new release
    Added lcg-utils to OSG software installations
    Now use the system version of OpenSSL instead of Globus's OpenSSL (more secure)
    Introduced and supported SRM v2 based storage software built on dCache, BeStMan, and xrootd



    OSG / LCG interoperation
    Big improvements in reporting
    New RSV software collects information about the functioning of sites and reports it to GridView; much improved software, better information reported
    Generic Information Provider (OSG-specific version): greatly improved to provide more accurate information about a site to the BDII


    Ongoing Issues & Work
    US OSG sites need more LCG client tools for data management (LFC tools, etc.)

    Working to improve interoperability via testing between OSG's ITB and EGEE's PPS

    gLite plans to move to a new version of the VDT; we'll help with the transition
    We have regular communication via Alain Roy's attendance at the EMT meetings



    ARC: slides provided by Michael Grønager

  • ARC middleware status for the WLCG GDB
    Michael Grønager, Project Director, NDGF
    July Grid Deployment Board, CERN, Geneva, July 9th 2008

  • Talk Outline
    Nordic Acronym Soup Cheat Sheet
    ARC Introduction
    Other Infrastructure components
    ARC and WLCG
    ARC Status
    ARC Challenges
    ARC Future

  • Nordic Acronym Soup Cheat Sheet
    Definition of a Grid: a loosely coupled collection of resources that are made available to a loosely defined set of users from several organizational entities. There is no single-resource uptime guarantee, but as a collective whole the grid middleware ensures that the user experience is a smooth and stable virtual resource.
    Definition of an Infrastructure: a managed set of resources made available to a managed set of users. An infrastructure operator might very well use grid middleware for this purpose.
    There is no support on a Grid: no security contact, no uptime guarantee. There is support on an Infrastructure: a security contact and guarantees for uptime, stability, etc.
    NorduGrid [aka the Nordu Grid]: the Grid defined by the resources registered in the four Nordic GIIS'es operated by the NorduGrid Consortium partners, i.e. a real Grid.
    NorduGrid [aka the NorduGrid Consortium]: a group that aims at ensuring continuous funding for developing ARC and at promoting ARC worldwide as the best grid middleware to use; also the formal owner of the ARC middleware.
    ARC [Advanced Resource Connector]: a grid middleware developed by several groups and individuals.
    KnowARC [Grid-enabled Know-how Sharing Technology Based on ARC Services]: an EU project with the aim of adding standards-compliant services to ARC.
    NDGF [Nordic DataGrid Facility]: a Nordic infrastructure project operating a Nordic production compute and storage grid.

  • NDGF with ARC and dCache
    Note: the coupling between NDGF and ARC is that NDGF uses ARC (as it uses dCache) for running a stable compute and storage infrastructure for the Nordic WLCG Tier-1 and other Nordic e-science projects.
    In the best open-source tradition, NDGF contributes to ARC (and dCache) as a payback to the community and as a way of getting what we need.

  • ARC Introduction
    General-purpose, open-source European grid middleware; one of the major production grid middlewares
    Features: supports all flavors of Linux and, in general, all Unix flavors for the resources; supports PBS, SGE, LL, SLURM, Condor, LSF and more batch systems; supports data handling by the CE, so no grid software is needed on the WNs; lightweight (the client is 4 MB!)
    Used by several NGIs and RGIs: SweGrid, Swiss grids, Finnish M-Grid, NDGF, Ukraine Grid, etc. (a minimal submission sketch follows below)
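    For context on what the lightweight ARC client does, here is a minimal sketch of submitting a trivial job with the ARC classic tools: an xRSL job description is written out and handed to ngsub. The xRSL attributes and the ngsub invocation follow common ARC classic examples as I recall them, so treat them as assumptions to verify against the ARC client documentation.

```python
# Sketch: submit a trivial job through the ARC classic client.
# The xRSL attributes and the "ngsub -f" usage are assumptions based
# on common ARC classic examples; verify against the client docs.
import subprocess
import tempfile

xrsl = (
    '&(executable="/bin/echo")'
    '(arguments="hello from ARC")'
    '(stdout="hello.out")'
    '(jobName="arc-hello")'
)

with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
    f.write(xrsl)
    xrsl_path = f.name

# ngsub broker-submits the job to a suitable CE found via the
# information system and prints the resulting job ID.
subprocess.run(["ngsub", "-f", xrsl_path], check=True)
```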

  • Other Infrastructure Components
    ARC alone does not build an infrastructure:
    ARC-CE SAM sensors running in production since 2007Q3
    SGAS accounting with export to APEL
    arc2bdii info-provider feeding the site BDII
    Support for ARC-CE submission in the gLite-WMS June release
    Furthermore, a strong collaboration with dCache for the storage part of the infrastructure

  • ARC and WLCG
    ARC supports all the WLCG experiments:
    Fully integrated with ATLAS (supported by NDGF), now using PanDA in a fat-pilot mode; the NorduGrid cloud has the highest job success rate
    Adapted to ALICE (supported by NDGF)
    Used by CMS for the CCRC (supported by NDGF), through gLite-WMS interoperability; there is also a direct ProdAgent plugin for ARC, developed for the Finnish CMS Tier-2
    Tested to work with LHCb as well

  • ARC Status
    Current stable release 0.6.3, with support for:
    LFC (LCG File Catalogue), deprecating RLS; this is now deployed at NDGF
    SRM 2.2: interacting with SRM 2.2 storage; the handy user client ngstage enables easy interaction with SRM-based SEs
    Priorities and shares: support for user/VO-role based priorities; support for shares using LRMS accounts

  • ARC Next Release
    Next stable release 0.6.4 (due for September) will support:
    the BDII as a replacement for the Globus MDS, rendering the ARC schema and, optionally, the GLUE 1.3 schema
    optional scalability improvement packages

  • ARC Challenges
    ARC has scalability issues on huge (5k+ core) clusters:
    data upload and download is limited by the CE
    session and cache access is limited on NFS-based clusters; this issue is absent on clusters running e.g. GPFS
    These issues will be solved in the 0.8 release (end 2008)
    Move towards a flexible, dCache-like setup where more nodes can be added based on need, e.g. adding more up- and downloaders, or more NFS servers

  • ARC Future
    The stable branch of ARC (sometimes called ARC classic) will continue to evolve; more and more features will be added from e.g. the KnowARC project
    The KnowARC development branch (sometimes called ARC1) adds a lot of services which adhere to standards like GLUE2, BES and JSDL; these will be incorporated into the stable ARC branch
    Work on dynamic Runtime Environments for e.g. bioinformatics is already being incorporated
    There will be no migration, but a gradual incorporation of the novel components into the stable branch


    ARC: slides provided by Oxana Smirnova

    ARC introduction and highlights

    ARC-classic status

    ARC-1 Status

  • Advanced Resource Connector in a nutshell
    General-purpose, open-source European grid middleware; one of the major production grid middlewares
    Developed and maintained by the NorduGrid Collaboration
    Deployment support, extensive documentation, available on most of the popular Linux distributions
    Lightweight architecture for a dynamic heterogeneous system, following Scandinavian design principles: start with something simple that works for users and add functionality gradually; non-intrusive on the server side; flexible and powerful on the client side
    User- and performance-driven development
    Production-quality software since May 2002; first middleware ever to contribute to a HEP data challenge
    Strong commitment to standards and interoperability: JSDL, GLUE, active OGF player
    Middleware of choice of many national grid infrastructures due to its technical merits: SweGrid, Swiss grid(s), Finnish M-Grid, NDGF, etc.
    The majority of ARC users are NOT from the HEP community
    Illustrations: "Scandinavian Design beyond the Myth", www.scandesign.org


  • ARC Classic: overview
    Provides a reliable implementation of fundamental grid services:
    The usual grid security: single sign-on, grid ACLs (GACL), VOs (VOMS)
    Job submission: direct or via matchmaking and brokering
    Job monitoring and management
    Information services: resource aggregation, representation, discovery and monitoring
    Implements core data management functionality: automated, seamless input/output data movement; interfacing to data indexing; client-side data movement; storage elements; logging service
    Builds upon standard open-source solutions and protocols: Globus Toolkit pre-WS API and libraries (no services!), OpenLDAP, OpenSSL, SASL, SOAP, GridFTP, GSI


  • ARC Classic: status and plans
    ARC is mature software which has proven its strengths in numerous areas
    Production release 0.6.3 is out: stability improvements, bug fixes, LFC (file catalogue) support
    ARC faces a scalability challenge posed by ten-thousand-core clusters: a file cache redesign is necessary (ongoing); uploader/downloader load on the frontends needs to be optimized; the local information system needs to be improved (BDII is being tested)
    Release plans based on ARC Classic: 0.6.x stable releases will be made periodically; preliminary planning for an 0.8 release incorporating major new features
    Future: ARC1, a web-service based solution (working prototypes exist); the main goal is to achieve interoperability via standards conformance (e.g. BES, JSDL, GLUE, SRM, GridFTP, X509, SAML); migration plan: gradually replace components with ARC1 modules, initially co-deploying both versions


  • ARC1 Design: service decomposition
    Design document: www.knowarc.eu/documents/Knowarc_D1.1-1_07.pdf



    SRM: slides provided by Flavia Donno

    Goal of the SRM v2.2 Addendum

    Short-term solution: CASTOR, dCache, StoRM, DPM

  • The Addendum to the WLCG SRM v2.2 Usage Agreement
    Flavia Donno, CERN/IT
    LCG-LHCC Mini Review, CERN, 1 July 2008

  • The requirements

    The goal of the SRM v2.2 Addendum is to answer the following requirements and priorities given by the experiments.
    Main requirements:
    Space protection (VOMS awareness)
    Space selection
    Supported space types: T1D0, T1D1, T0D1 (see the mapping sketch below)
    Pre-staging, pinning
    Space usage statistics
    Tape optimization (reducing the number of mount operations, etc.)
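    Since the TxDy storage-class shorthand recurs throughout this section, the sketch below spells out how it maps onto SRM v2.2 space properties (retention policy and access latency). The mapping follows the usual WLCG convention and is given here as background, not as part of the addendum text.

```python
# The WLCG TxDy storage classes expressed as SRM v2.2 space properties.
# "T1" means a custodial (tape) copy is kept; "D1" means a disk copy is
# guaranteed online. This is the conventional WLCG reading.
STORAGE_CLASSES = {
    "T1D0": {"retention_policy": "CUSTODIAL", "access_latency": "NEARLINE"},  # tape; disk copy not guaranteed
    "T1D1": {"retention_policy": "CUSTODIAL", "access_latency": "ONLINE"},    # tape plus guaranteed disk copy
    "T0D1": {"retention_policy": "REPLICA",   "access_latency": "ONLINE"},    # disk only
}

for name, props in STORAGE_CLASSES.items():
    print(f"{name}: retention={props['retention_policy']}, latency={props['access_latency']}")
```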


  • The document

    The most recent version is v1.4, available on the CCRC/SSWG twiki:
    https://twiki.cern.ch/twiki/pub/LCG/WLCGCommonComputingReadinessChallenges/WLCG_SRMv22_Memo-14.pdf

    Two main parts:
    An implementation-specific short-term solution with limited capabilities that can be made available by the end of 2008
    A detailed description of an implementation-independent full solution

    The document has been agreed by the storage developers, the client developers, and the experiments (ATLAS, CMS, LHCb)


  • The recommendations and priorities

    24 June 2008: WLCG Management Board approval with the following recommendations:

    Top priority is service functionality, stability, reliability and performance

    The implementation plan for the short-term solution can start: it introduces the minimal new features needed to guarantee protection from resource abuse and support for data reprocessing; it offers limited functionality and is implementation-specific, with the clients hiding the differences

    The long-term solution is a solid starting point: it is what is technically missing/needed; details and functionalities can be re-discussed with acquired experience

  • The short-term solution: CASTOR

    Space protection: based on UID/GID; administrative interface ready by the 3rd quarter of 2008
    Space selection: already available; srmPurgeFromSpace available in the 3rd quarter of 2008
    Space types: T1D1 provided; the pinLifeTime parameter is negotiated to always equal a system-defined default
    Space statistics and tape optimization: already addressed

    Implementation plan: available by the end of 2008

  • The short-term solution: dCache

    Space protection: protected creation and usage of write space tokens; access to the tape system controlled by DNs or FQANs
    Space selection: based on the IP number of the client, the requested transfer protocol, or the path of the file; use of SRM special structures for more refined selection of read buffers
    Space types: T1D0 + pinning provided; releasing pins will be possible for a specific DN or FQAN
    Space statistics and tape optimization: already addressed

    Implementation plan: available by the end of 2008

  • The short-term solution: DPM

    Space protection: support a list of VOMS FQANs for the space write-permission check, rather than just the current single FQAN
    Space selection: not available, not necessary at the moment
    Space types: only T0D1
    Space statistics and tape optimization: space statistics already available; tape optimization not needed

    Implementation plan: available by the end of 2008

  • The short-term solution: StoRM

    Space protection: spaces in StoRM will be protected via DN- or FQAN-based ACLs; StoRM is already VOMS-aware
    Space selection: not available
    Space types: T0D1 and T1D1 (no tape transitions allowed in WLCG)
    Space statistics and tape optimization: space statistics already available; tape optimization will be addressed

    Implementation plan: available by November 2008

  • The short-term solution: client tools (FTS, lcg-utils/gfal)

    Space selection: the client tools will pass both the SPACE token and the SRM special structures for refined selection of read/write pools
    Pinning: the client tools will internally extend the pin lifetime of newly created copies
    Space types: the same SPACE token might be implemented as T1D1 in CASTOR, or as T1D0 + pinning in dCache; the clients will perform the operations needed to release copies in both types of spaces transparently (a dispatch sketch follows below)

    Implementation plan: available by the end of 2008
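    To illustrate the "clients hide the differences" point: dropping a disk copy requires different SRM calls depending on how the space type is realised (T1D1 versus T1D0 plus pinning). The sketch below captures only that dispatch logic; the functions, SURL and token values are hypothetical placeholders, not real gfal/lcg-utils calls, and the actual clients may simply issue both commands and let one be ignored.

```python
# Sketch of the "clients hide the differences" logic for releasing a
# disk copy. srm_purge_from_space / srm_release_pin stand in for the
# corresponding SRM v2.2 operations (srmPurgeFromSpace, srmReleaseFiles);
# they are hypothetical placeholders, not real gfal/lcg-utils calls.

def srm_purge_from_space(surl: str, space_token: str) -> None:
    print(f"srmPurgeFromSpace({surl}, token={space_token})")  # placeholder

def srm_release_pin(surl: str) -> None:
    print(f"srmReleaseFiles({surl})")                         # placeholder

def release_disk_copy(surl: str, space_token: str, backend: str) -> None:
    """Release the online copy regardless of how the space type is realised."""
    if backend == "castor":
        # CASTOR implements the token as T1D1: purge the copy from the space.
        srm_purge_from_space(surl, space_token)
    elif backend == "dcache":
        # dCache implements it as T1D0 + pinning: releasing the pin is enough.
        srm_release_pin(surl)
    else:
        raise ValueError(f"unknown backend: {backend}")

# Example with placeholder SURL and space token:
release_disk_copy("srm://se.example.org/data/file1", "EXAMPLE_TOKEN", "castor")
release_disk_copy("srm://se.example.org/data/file1", "EXAMPLE_TOKEN", "dcache")
```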



    Summary
    gLite handled the CCRC load; the release process was not affected by CCRC
    The improved lcg-CE can bridge the gap until we move to CREAM
    Scaling of the core services is understood

    OSG
    Several minor, user-driven fixes and improvements
    Released OSG 1.0, with SRM v2 support and the lcg data management clients
    Improved interoperation and interoperability
    Middleware ready for show time


    Summary
    ARC: ARC 0.6.3 has been released (stability improvements, bug fixes, LFC file-catalogue support) and will be maintained; planning for the ARC 0.8 release has started, which will address several scaling issues for very large sites (>10k cores); an ARC-1 prototype exists
    SRM 2.2: agreement has been reached on the SRM v2.2 Addendum; implementation of the short-term plan can start; it affects CASTOR, dCache, StoRM, DPM and the clients, with the clients hiding the differences; expected to be in production by the end of the year; the long-term plan can still reflect new experience

    Speaker notes: status of service covered by T2/T1; tape optimization parameters can be passed from SRM to the tape backend; "not allowed" means policy, not a technical issue; the clients will hide this by sending two commands, one of which will be ignored.