Ann. Telecommun. (2010) 65:771–776. DOI 10.1007/s12243-010-0210-2

    The next-generation ARC middleware

O. Appleton · D. Cameron · J. Černák · P. Dóbé · M. Ellert · T. Frågåt · M. Grønager · D. Johansson · J. Jönemo · J. Kleist · M. Kočan · A. Konstantinov · B. Kónya · I. Márton · B. Mohn · S. Möller · H. Müller · Zs. Nagy · J. K. Nilsen · F. Ould Saada · K. Pajchel · W. Qiang · A. Read · P. Rosendahl · G. Rőczei · M. Savko · M. Skou Andersen · O. Smirnova · P. Stefán · F. Szalai · A. Taga · S. Z. Toor · A. Wäänänen · X. Zhou

Received: 15 September 2009 / Accepted: 15 September 2010 / Published online: 2 October 2010. © The Author(s) 2010. This article is published with open access at Springerlink.com

Abstract The Advanced Resource Connector (ARC) is a light-weight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources. ARC aims at providing general purpose, flexible, collaborative computing environments suitable for a range of uses, both in science and business. The server side offers the fundamental job execution management, information and data capabilities required for a Grid. Users are provided with an easy to install and use client which provides a basic toolbox for job and data management. The KnowARC project developed the next-generation ARC middleware, implemented as Web Services with the aim of standard-compliant interoperability.

Keywords Grid · Distributed computing · Middleware · Standardization · Interoperability · Web service

O. Appleton · D. Cameron · T. Frågåt · A. Konstantinov · J. K. Nilsen · F. Ould Saada (B) · K. Pajchel · W. Qiang · A. Read · A. Taga
University of Oslo, Oslo, Norway
e-mail: [email protected]

K. Pajchel
e-mail: [email protected]

J. Černák · M. Kočan · M. Savko
Pavol Jozef Šafárik University, Košice, Slovak Republic

P. Dóbé · I. Márton · Zs. Nagy · G. Rőczei · P. Stefán · F. Szalai
NIIF/HUNGARNET, Budapest, Hungary

M. Ellert · B. Mohn · P. Rosendahl · S. Z. Toor
Uppsala University, Uppsala, Sweden

M. Grønager
NDGF, Kastrup, Denmark

    1 Introduction

Many collaborative projects with common data and computing needs require a system to facilitate the sharing of resources and knowledge in a simple and secure way. Some 10 years ago, the high-energy physics community involved in the new detectors of the Large Hadron Collider (LHC) at CERN, the European high-energy physics laboratory outside Geneva, faced very challenging computing demands combined with the need for world-wide distribution of data and processing. The emerging vision of Grid computing [1] as an easily accessible, distributed, pervasive resource was embraced by the physicists and answered both technical and political requirements for data processing to support the LHC. In 2002, NorduGrid [2], a collaboration of leading Nordic academic institutions, introduced the

D. Johansson
Linköping University, Linköping, Sweden

J. Jönemo · B. Kónya · O. Smirnova
Lund University, Lund, Sweden

J. Kleist
Aalborg University, Aalborg, Denmark

S. Möller
University of Lübeck, Lübeck, Germany

H. Müller · X. Zhou
University of Geneva, Geneva, Switzerland

M. Skou Andersen · A. Wäänänen
University of Copenhagen, NBI, Copenhagen, Denmark


Advanced Resource Connector (ARC) middleware as a complete Grid solution for the Nordic region and beyond.

As NorduGrid represented stakeholders with highly heterogeneous hardware, it was necessary for its software to run natively on a range of different platforms. The collaboration developed a light-weight, non-intrusive solution that respects local policies and assures security, both for the resource providers and the users. ARC [3] was and still is developed following a bottom-up approach: to start with something simple that works for users and add functionality gradually. The decentralized services of ARC make the system stable and reliable, and have facilitated the creation of a highly efficient distributed computing resource accessed by numerous users via an easy to use client package.

Like many other middlewares, ARC started out by building on the Globus Toolkit [4]. This first generation of middlewares was quite diverse, as there were few standards in the field, making interaction between Grids difficult. In recent years, there has been a growing awareness of and need for interoperability, which has resulted in a number of initiatives working towards Grid standards that can drive Grid development and allow for interoperability.

The EU-funded KnowARC [5] project developed the next-generation ARC middleware, re-engineering the software into a modular, Web Services (WS)-based, standard-compliant basis for future Grid infrastructures. Although the Grid has been somewhat overshadowed by the emerging cloud paradigm, Grids remain a highly relevant solution for scientific and business users. ARC in particular is a more cloud-like solution than other Grid middlewares, partly due to the absence of a strict coupling between Grid storage and the compute element.

    2 The ARC middleware design

From the beginning of ARC, major effort has been put into making the ARC Grid system stable by design. This is achieved by avoiding centralized services which could represent single points of failure, and building the system around only three mandatory components:

The computing service, implemented as a GridFTP [6]/Grid Manager [7] (GM) pair of services. The GM, which runs on each computing resource's front-end, is the heart of the ARC Grid. It serves as a gateway to the computing resource by providing an interface between the outside world and a Local Resource Management System (LRMS). Its primary tasks are job manipulation and job-related data management, which includes download and upload of input and output data as well as caching of popular data. It may also handle accounting and other essential functions.

The information system serves as the nervous system of an ARC Grid. It is implemented as a distributed database, a setup which gives this important service considerable redundancy and stability. The information is stored locally for each service in a local information system, while the hierarchically connected indexing services maintain the list of known resources. ARC also provides a powerful and detailed monitoring web page [8] showing up-to-date status and workload related to resources and users.

The brokering client is the brain of the Grid. It has powerful resource discovery and brokering capabilities, and is able to distribute a workload across the Grid. This interaction between the distributed information system and the user clients makes centralized workload management unnecessary and thus avoids typical single points of failure. The client toolbox provides functionalities for all the Grid services, covering job management, data management as well as user and host credentials handling. In addition to the monitoring web page, users have access to detailed job information and real-time access to log files. In order to make the Grid resources easily accessible, the client has been kept very light-weight and is distributed as an easy to install stand-alone package available for most popular platforms.
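As a purely illustrative sketch (not taken from the paper), the following Python fragment shows how a user might drive the stand-alone client described above. It assumes the next-generation command-line tools (arcsub, arcstat, arcget) are installed and that a valid user proxy credential exists; the job name and file names are invented for the example.

    # Hedged sketch: submitting a job through the stand-alone ARC client.
    # Assumes the arcsub/arcstat/arcget tools are installed and a valid
    # user proxy credential is available; names below are illustrative.
    import subprocess

    # A minimal xRSL job description: run /bin/echo and collect stdout.
    xrsl = '''&(executable="/bin/echo")
     (arguments="hello grid")
     (stdout="hello.out")
     (jobName="arc-demo")'''

    with open("hello.xrsl", "w") as f:
        f.write(xrsl)

    # Submission: the client itself performs resource discovery and
    # brokering against the information system, so no central workload
    # manager is involved.
    subprocess.run(["arcsub", "hello.xrsl"], check=True)

    subprocess.run(["arcstat", "-a"])  # status of all known jobs
    subprocess.run(["arcget", "-a"])   # retrieve results when finished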

The ARC middleware is often used in environments where the resources are owned by different institutions and organizations. A key requirement in such an environment is that the system is non-intrusive and respects local policies, especially those related to the choice of platform and security. The Computing Service is deployed as a thin and firewall-friendly Grid front-end layer, so there is no middleware on the compute nodes, and it can coexist with other middlewares. The job management is handed over to the local batch system, and ARC supports a long list of the most popular LRMSs (PBS, LSF, Torque, LoadLeveler, Condor, Sun Grid Engine, SLURM). Data management is handled by the GM on the front-end, so no CPU time is wasted on staging files in or out. A job is allowed to start only if all input files are staged in, and in case of failure during stage-out, it is possible to resume the job from the stage-out step at a later point, for example when the problem causing the failure is fixed. These architecture and workflow principles lead to a highly scalable, stable and efficient system showing particularly good results for high-throughput and data-intensive jobs.
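This staging workflow can be pictured as a small state machine on the front-end. The sketch below is conceptual: the state names follow the Grid Manager's documented job states, but the transition logic is deliberately simplified.

    # Conceptual sketch of the front-end job life cycle described above.
    # State names follow the Grid Manager's job states; the resume
    # behaviour is simplified for illustration.
    STATES = ["ACCEPTED", "PREPARING", "SUBMIT", "INLRMS",
              "FINISHING", "FINISHED"]

    def next_state(state, stage_in_ok=True, stage_out_ok=True):
        """Advance one step: a job enters the LRMS only after stage-in
        succeeds, and a failed stage-out stays resumable in FINISHING."""
        if state == "PREPARING" and not stage_in_ok:
            return "PREPARING"   # no CPU is occupied while inputs are missing
        if state == "FINISHING" and not stage_out_ok:
            return "FINISHING"   # resume later, once the problem is fixed
        i = STATES.index(state)
        return STATES[min(i + 1, len(STATES) - 1)]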


As a general-purpose middleware, ARC strives to be portable and easy to install, configure and operate. One of the explicit goals for the upcoming releases is to reduce external dependencies, such as that on the Globus Toolkit, and to be included in the repositories of the most popular Linux distributions like Debian and Fedora. By porting both the Globus Toolkit and the ARC software to Windows, the developers hope to open ARC to a broader range of user groups, both on the client and the server side.

    3 Next-generation ARC

The EU-funded project KnowARC and the NorduGrid collaboration have developed the next generation of ARC. Building on the successful design and the well-established components, the software has been re-engineered, with the implementation of the Service Oriented Architecture (SOA) concept and the addition of standard-compliant Web Service (WS) interfaces to existing services as well as new components [9]. The next-generation ARC consists of flexible, pluggable modules that may be used to build a distributed system that delivers functionality through loosely coupled services. While faithful to the architecture described in Section 2, ARC will at the same time provide most of the capabilities identified by the Open Grid Services Architecture (OGSA) roadmap, of which the execution management, information and data capabilities are the most central. OGSA defines a core set of WS standards and describes how they are used together to form the basic building blocks of Grids. Some of the capabilities are provided by the novel ARC hosting environment, the Hosting Environment Daemon (HED), which will be described in the next section. Figure 1 gives an overview of the next-generation ARC architecture, showing the internal structure of both the client and the server side.

KnowARC had a strong focus on the Open Grid Forum [10] (OGF) standardization efforts and both implemented and participated in the development of a range of standards. The project navigated the fragmented landscape of standards [11], adhering to the usable and complete ones; in the case of incomplete or only partly matching standards, it still applied them and propagated feedback to the relevant Standards Development Organizations (SDOs). Where standards were missing, ARC developers provided proposals supported by implementations to appropriate SDOs and took an active role in the standards development process. As a result of this commitment, ARC developers are currently major contributors to the GLUE 2.0 [12] information schema specification and other OGF working groups.

Fig. 1 Overview of the ARC architecture showing the internal structure of both the client and the server side. The client, based on libarcclient, is available via a number of interfaces, and plugin adaptors for other target CEs can be easily added. The server side is structured around the HED container hosting all functional components. The communication is WS-based, but there are also mechanisms for pre-WS backwards compatibility.

    The goal of this effort is to obtain interoperability withother middleware solutions which follow the standards.

    3.1 Hosting environment daemon

The next-generation server-side ARC software is centered on the Hosting Environment Daemon (HED) web-service container, which is designed to provide a light-weight, flexible, modular and interoperable basis. HED differs from other WS hosting environments in that it is designed to provide a framework for gluing together functionalities, not to re-implement various standards. One of its main functions is to be the connection to the outside world and provide efficient inter-service communication. Via its crucial message chain component, HED supports different levels and forms of communication, from simple UNIX sockets to HTTPS/SOAP messages. This implementation separates the communication-related functionalities from the service logic itself. As HED handles the communication, it also implements the different security policies. In this area too, ARC has focused on using standards, applying the SSL, TLS, and GSI protocols and authentication mechanisms based on X.509, while VO management is supported using VOMS [13].
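To make the message chain idea concrete, here is a purely conceptual Python sketch; HED itself is a compiled container and its real plugin API differs. The point illustrated is that each component processes a message and passes it on, keeping transport, parsing and security handling out of the service logic.

    # Conceptual sketch of a message chain (not HED's actual API):
    # components process a message and hand it to the next one, so the
    # service logic never deals with transport or security directly.
    class Component:
        def __init__(self, nxt=None):
            self.nxt = nxt
        def process(self, msg):
            return self.nxt.process(msg) if self.nxt else msg

    class TLSHandler(Component):
        def process(self, msg):
            msg["authenticated_dn"] = "/O=Grid/CN=alice"  # placeholder identity
            return super().process(msg)

    class SOAPParser(Component):
        def process(self, msg):
            msg["operation"] = "CreateActivity"  # parsed from the envelope
            return super().process(msg)

    class Service(Component):
        def process(self, msg):
            return {"status": "accepted", "user": msg["authenticated_dn"]}

    # The chain is assembled from configuration: TLS -> SOAP -> service.
    chain = TLSHandler(SOAPParser(Service()))
    print(chain.process({"payload": "<soap:Envelope/>"}))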

    3.2 Execution capability

The ARC Resource-coupled EXecution service [14] (A-REX) provides the computing element functionalities, offers a standard-compliant WS interface and


implements the widely accepted Basic Execution Service [15]. In order to provide vital information about service states, capabilities and jobs, A-REX has implemented the GLUE 2.0 information schema (to which NorduGrid and KnowARC have been major contributors).

Although KnowARC had a strong focus on novel methodologies and standards, the core of the A-REX service is the powerful, well-tested and robust Grid Manager familiar from the pre-WS ARC. Thus, the new execution service implementation imposes the same non-intrusive policies of restricting the jobs to dedicated session directories and avoiding middleware installation on the compute nodes. A-REX supports a long list of batch systems, and offers logging capability and support for Runtime Environments. The efficient workflow, in which all input and output staging is managed by the front-end, is preserved. This distinction between the tasks done on the front-end and on the compute nodes has resulted in a highly efficient utilization of the computing resources.
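As an illustration of the standard interface, the fragment below builds the kind of minimal JSDL document a BES client could send to A-REX's CreateActivity operation. The element names follow the JSDL 1.0 and JSDL POSIX Application specifications; the SOAP transport and security layers are omitted, so this is a sketch rather than a working client.

    # Sketch: a minimal JSDL document for a BES CreateActivity request.
    # Element names follow the JSDL 1.0 / POSIX Application specs;
    # transport and security are left out.
    import xml.etree.ElementTree as ET

    JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
    POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = "/bin/echo"
    ET.SubElement(posix, f"{{{POSIX}}}Argument").text = "hello grid"
    ET.SubElement(posix, f"{{{POSIX}}}Output").text = "hello.out"

    print(ET.tostring(job, encoding="unicode"))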

    3.3 Information system

The functioning of an ARC-enabled Grid strongly relies on an efficient and stable information system that allows the distributed services to find each other and co-operate in a coherent way, providing what looks to a user like a uniform resource. This special role requires a high level of redundancy in order to avoid single points of failure. The basic building block is the Information System Indexing System (ISIS) container, implemented as a WS within HED, and in which every ARC service registers itself. The functionality of ISIS services is twofold: on the one hand, they work as ordinary Web Services; on the other hand, they maintain a peer-to-peer (P2P) self-replicating network, the ISIS cloud. Being implemented as a service in HED allows all WS-related communication to be delegated to the hosting environment and to profit from the flexible and uniform configuration system, the security framework and the built-in self-registration mechanism.

The user clients then query any nearby ISIS service in order to perform the resource and service discovery necessary for the matchmaking and brokering that is part of the job submission process.
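The following conceptual sketch, which does not reflect the real ISIS protocol or its WS interface, illustrates the self-replicating registry idea: every registration is gossiped to the peers, so a client may ask any nearby instance. The endpoint URL is hypothetical.

    # Conceptual sketch of a self-replicating service registry in the
    # spirit of the ISIS cloud; the real protocol and interface differ.
    class Isis:
        def __init__(self):
            self.registrations = {}  # service URL -> service type
            self.peers = []          # other instances in the P2P cloud

        def register(self, url, service_type, gossip=True):
            self.registrations[url] = service_type
            if gossip:               # replicate so any peer can answer
                for peer in self.peers:
                    peer.register(url, service_type, gossip=False)

        def query(self, service_type):
            return [u for u, t in self.registrations.items()
                    if t == service_type]

    # Two peered registries; a client may ask either one.
    a, b = Isis(), Isis()
    a.peers, b.peers = [b], [a]
    a.register("https://example.org/arex", "A-REX")  # hypothetical endpoint
    assert b.query("A-REX") == ["https://example.org/arex"]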

    3.4 Data management capability

One of the reasons why the high-energy physics community has embraced Grid technology is its ability to provide a world-wide distributed data storage system. Each of the four LHC experiments will produce several petabytes of useful data per year. No institution is capable of hosting locally all the LHC data needed by a typical research group; instead, one has to enforce a policy of sending jobs to the data, meaning that all analyses will eventually have to run, at least at some stage, on the Grid.

Impressive as the data volumes are, the advantages of Grid data storage are not limited to its size. It also offers easy, transparent and secure access to data, which is just as important. In many projects, common data is often the very core of the collaborative work and knowledge sharing.

The ARC middleware has traditionally aimed at high-throughput, data-intensive jobs and reliable data management. The next-generation ARC introduces the distributed, self-healing storage system Chelonia [16]. It consists of a set of SOAP-based services residing within HED. The Bartender service provides the high-level user interface and a possibility to access third-party storage systems. The Shepherd is the front-end of the physical storage, while the Librarian manages the entire storage namespace in the A-Hash, a distributed metadata database, thus avoiding often-problematic centralized services. Together, the services provide a scalable, consistent, fault-tolerant and self-healing data storage system. Files are grouped in a hierarchy of collections and sub-collections, which conceptually can be thought of as a UNIX-like file system that can be accessed through a root collection functioning as a global namespace. The security, user access and permissions within this hierarchical structure are imposed by well-controlled user- or VO-related authorization.
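The division of labour among these services can be sketched conceptually as follows; this is not Chelonia's SOAP API, and the host names and metadata layout are invented for the example.

    # Conceptual sketch of Chelonia's division of labour (not its SOAP
    # API): the Bartender exposes the high-level interface, the Librarian
    # resolves the namespace stored in the A-Hash, and Shepherds hold the
    # actual file replicas.
    a_hash = {"/": {"papers": {}},                       # namespace metadata
              "/papers": {"draft.pdf": "entry-42"}}

    shepherds = {"entry-42": ["shepherd1.example.org",   # hypothetical hosts
                              "shepherd2.example.org"]}  # >1 replica: self-healing

    def bartender_get(path):
        """Resolve a logical file name to the Shepherds holding replicas."""
        collection, _, name = path.rpartition("/")
        entry = a_hash.get(collection or "/", {}).get(name)
        if entry is None:
            raise FileNotFoundError(path)
        return shepherds[entry]

    print(bartender_get("/papers/draft.pdf"))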

The Chelonia storage system is an independent Grid-enabled system that can be used in three ways. It can be integrated in a Grid job specification as the location of input or output data and handled in an appropriate way by the Computing Element. It can also be viewed as an extended shared file system accessed via two types of client tools. One possibility is the command-line interface offering basic functions like copying, moving, listing or creating a collection; methods for modifying access and ownership are also available. The second interface is based on the high-level Filesystem in Userspace [17] (FUSE) module, which allows users to mount the storage namespace into the local file system, enabling the use of graphical browsers and simple drag-and-drop file management.

    3.5 Interoperable client

Much of the success of a middleware depends on the user interface. Therefore, ARC strives to implement the principles behind the term Grid and associated


analogies to the uniformity and simplicity of the power grid. The main features of the client have already been described in Section 2. In addition, the focus in the next-generation client is on user friendliness, flexibility and interoperability. The plugin-based libraries facilitate simple integration of support for new Grid job execution services and data access. In order to be standard-compliant, ARC has moved from the Extended Resource Specification Language (xRSL) [18] job description to the Job Submission Description Language (JSDL) [19]. The client has a built-in job description translator and is capable of submitting jobs to the native ARC job execution services as well as to gLite's CREAM [20] and to UNICORE [21] execution services; these are the two main interoperability targets for ARC.
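A toy version of such a translation is sketched below for a tiny xRSL subset; ARC's real translator covers far more of both languages, and the attribute mapping here is only an assumed minimal correspondence.

    # Illustrative translation of a tiny xRSL subset into JSDL-style
    # attributes, mimicking the client's built-in job description
    # translator (the real one covers far more of both languages).
    import re

    def parse_xrsl(text):
        """Turn &(attr="value")(attr="value") into a dict."""
        return dict(re.findall(r'\(\s*(\w+)\s*=\s*"([^"]*)"\s*\)', text))

    XRSL_TO_JSDL = {          # assumed minimal attribute correspondence
        "executable": "Executable",
        "arguments": "Argument",
        "stdout": "Output",
    }

    xrsl = '&(executable="/bin/echo")(arguments="hello")(stdout="out.txt")'
    jsdl = {XRSL_TO_JSDL[k]: v for k, v in parse_xrsl(xrsl).items()
            if k in XRSL_TO_JSDL}
    print(jsdl)  # {'Executable': '/bin/echo', 'Argument': 'hello', ...}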

In order to make the Grid resources accessible to a wide range of users, the client is made available for all popular Linux distributions, and a significant effort has been made to port it to Windows and Mac OS.

Developers of third-party applications or backends can easily build directly on the C++ libarcclient or on its Python or Java bindings. The light weight and stand-alone nature of the client makes it straightforward to include in a software package, e.g., the Ganga job management framework [22] used by CERN physicists in the quest for new physics at the LHC. The ARC job management functionalities are also available via a graphical user interface (GUI) and via the Lunarc Application Portal (LAP) web portal. Users may also choose between several optimized brokering algorithms, for example brokering based on data availability or on the fastest CPU.
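Such pluggable brokering can be pictured as ranking the discovered execution targets by an exchangeable policy; the sketch below is conceptual, and the endpoint names and attributes are invented rather than taken from ARC's information model.

    # Conceptual sketch of pluggable brokering: rank discovered targets
    # by an exchangeable policy before submission (names and attributes
    # below are illustrative).
    targets = [
        {"url": "https://a.example.org/arex", "cpu_mhz": 2600, "cached_inputs": 0},
        {"url": "https://b.example.org/arex", "cpu_mhz": 2000, "cached_inputs": 2},
    ]

    BROKERS = {
        "fastest-cpu": lambda t: -t["cpu_mhz"],
        "data": lambda t: -t["cached_inputs"],  # prefer sites holding the inputs
    }

    def rank(targets, policy):
        return sorted(targets, key=BROKERS[policy])

    print(rank(targets, "data")[0]["url"])  # -> https://b.example.org/arex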

    4 Applications

Since 2002, ARC has been in continuous use delivering production-quality Grid infrastructures. It has been deployed by national Grid infrastructures like the Swiss SwiNG, the Swedish SweGrid, the National Ukrainian GRID Infrastructure and the M-Grid in Finland, which also provides the Tier-2 center for the CMS experiment of the LHC.

Due to its excellent performance, ARC is the middleware chosen to power the Nordic Data Grid Facility [23] (NDGF). Using ARC, NDGF is capable of leveraging the existing national computational resources and Grid infrastructures, creating a Nordic large-scale scientific Grid computing facility. One of the major projects of NDGF is to operate the Nordic Tier-1 for the LHC experiments ATLAS and ALICE and thus be a partner in the world's largest scientific production Grid: the Worldwide LHC Computing Grid. The Nordic site

has a unique organization as it is a distributed Tier-1, and its high efficiency and successful performance demonstrate the strength of the ARC middleware.

The ATLAS analysis, covering for example searches for the Higgs particle and new physics phenomena, is a very demanding computing task. It includes heavy processing of data and Monte Carlo simulations, distribution and storage of large data volumes, as well as user analysis. In 2008, the NDGF-managed system delivered the highest efficiency among the ten ATLAS Tier-1 sites.

Much of modern medical diagnosis is based on the analysis of images, which are created in vast quantities at hospitals around the world. The KnowARC partner University Hospital in Geneva (HUG), Switzerland, alone creates some 80,000 images per day. Content-based visual information retrieval is the basis for several applications that allow medical doctors and researchers to search through large collections of images, compare them, and obtain information about particular cases. Image processing is a computing-intensive task which requires a significant amount of resources. In order to bridge the gap between the computing needs and the existing hardware, the multidisciplinary medGIFT team at HUG has developed an ARC-based Grid infrastructure using idle desktop machines, which employs virtualization techniques (VMware). Three medical imaging applications (general content-based image retrieval, lung image retrieval, and fracture image retrieval) have been gridified so far. This use case shows the capability of ARC to create volunteer or cycle-scavenging computing infrastructures in the security-sensitive and challenging network environment of a hospital [24].

The Taverna [25] workbench gives biological scientists the means to rapidly assemble data analysis pipelines. KnowARC has provided an ARC client plugin for Taverna that gives its users seamless access to Grid resources, which in the past have often been inaccessible due to the complexity of the system.

    5 Development outlook

The new WS-based ARC components are being gradually introduced into the production releases while maintaining backwards compatibility and assuring a smooth transition. At the end of 2009, the NOX release of ARC, containing only the WS-based components and clients, was made available. In 2010, the 0.8.2 production release of ARC included, for the first time, some of the WS components alongside the classic components, in order to ease deployment on


production facilities and pave the way for eventual migration. Further development and support is carried out by the NorduGrid collaboration and NDGF.

The ARC middleware has been selected by the European Grid Initiative [26] Design Study as one of the three sanctioned middleware solutions for future European-scale Grid computing, and is also a part of the emerging European Middleware Initiative, foreseen as the future common European middleware solution. The standards-based interoperability focus of ARC will play an important role in this context.

In order to prepare for ARC technology take-up, developers have successfully ported essential Grid tools like the Globus Toolkit, the LHC File Catalogue [27] and VOMS to Debian and Fedora. These components are also being integrated into Ubuntu through its uptake of Debian components, and are available to distributions such as RedHat Enterprise Linux, CentOS, Solaris, and Scientific Linux via EPEL (Extra Packages for Enterprise Linux), an add-on repository maintained by Fedora. These components are also planned to be ported to Windows, extending the Grid awareness of this important platform. In the future, ARC developers aim to make ARC itself an integral part of Linux distributions.

Building on the experience and expertise accumulated through a number of projects, both in Grid applications and in middleware development, ARC will continue to provide simple and reliable Grid solutions and to work towards standards-based interoperability.

Acknowledgements This work was supported in part by the Information Society and Technologies Activity of the European Commission through the work of the KnowARC project (Contract No.: 032691).

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

    References

1. Foster I, Kesselman C (1999) The Grid: blueprint for a new computing infrastructure. Morgan Kaufmann

    2. The NorduGrid Collaboration. URL http://www.nordugrid.org. Web site. Accessed September 2010

3. Ellert M et al (2007) Future Gener Comput Syst 23(2):219. doi:10.1016/j.future.2006.05.008

4. Foster I, Kesselman C (1997) Int J Supercomput Appl 11(2):115. Available at: http://www.globus.org. Accessed September 2010

    5. EU KnowARC project. URL http://www.knowarc.eu. Website. Accessed September 2010

6. Allcock W et al (2002) Parallel Comput 28(5):749

7. Konstantinov A. The NorduGrid Grid Manager and GridFTP server: description and administrator's manual. The NorduGrid Collaboration. URL http://www.nordugrid.org/documents/GM.pdf. NORDUGRID-TECH-2. Accessed September 2010

    8. The NorduGrid Monitor. URL http://www.nordugrid.org/monitor/. Monitor web site. Accessed September 2010

    9. KnowARC Design Document, KnowARC Deliverable D1.1-1, 2007

10. Open Grid Forum. URL http://www.ogf.org/. Web site. Accessed September 2010

    11. KnowARC Standards Conformance Roadmap, KnowARCDeliverable D3.31, 2006

12. Andreozzi S et al (2009) GLUE Specification v2.0. URL http://www.ogf.org/documents/GFD.147.pdf. GFD-R-P.147. Accessed September 2010

13. Alfieri R et al (2005) Future Gener Comput Syst 21(4):549

14. Konstantinov A. The ARC computational job management module: A-REX. URL http://www.nordugrid.org/documents/a-rex.pdf. NORDUGRID-TECH-14. Accessed September 2010

15. Foster I et al (2007) OGSA Basic Execution Service version 1.0. URL http://www.ogf.org/documents/GFD.108.pdf. GFD-R-P.108. Accessed September 2010

16. Nagy Z, Nilsen J, Toor SZ (2009) Chelonia: self-healing distributed storage system. URL http://www.nordugrid.org/documents/arc-storage-documentation.pdf. NORDUGRID-TECH-17

    17. Filesystem in Userspace. URL http://fuse.sourceforge.net/.Accessed September 2010

18. Smirnova O (2008) XRSL (Extended Resource Specification Language). URL http://www.nordugrid.org/documents/xrsl.pdf. NORDUGRID-MANUAL-4. Accessed September 2010

19. Anjomshoaa A et al (2008) Job Submission Description Language (JSDL) specification, version 1.0 (first errata update). URL http://www.gridforum.org/documents/GFD.136.pdf. GFD-R.136. Accessed September 2010

20. Aiftimiei C et al (2008) In: Sobie R, Tafirout R, Thomson J (eds) Proc. of CHEP 2007. J Phys Conf Ser 119:062004. URL http://dx.doi.org/10.1088/1742-6596/119/6/062004. Accessed September 2010

21. UNICORE, Uniform Interface to Computing Resources. URL http://www.unicore.eu. Web site. Accessed September 2010

22. Moscicki JT et al (2009) Comput Phys Commun 180(11):2303. URL http://arxiv.org/abs/0902.2685v1. Accessed September 2010

    23. Nordic DataGrid Facility. URL http://www.ndgf.org. Website. Accessed September 2010

24. Zhou X, Pitkanen MJ, Depeursinge A, Müller H (2009) A medical image retrieval application using grid technologies to speed up feature extraction in medical image retrieval. Philippine Journal of Information Technology. URL http://publications.hevs.ch/index.php/publications/show/861

25. Hull D, Wolstencroft K, Stevens R, Goble C, Pocock MR, Li P, Oinn T (2006) Nucleic Acids Res 34 (Web Server issue). doi:10.1093/nar/gkl320. Accessed September 2010

    26. European Grid Initiative. URL http://www.eu-egi.eu/. Website. Accessed September 2010

    27. LHC File Catalog. URL https://twiki.cern.ch/twiki/bin/view/EGEE/GliteLFC. Web site. Accessed September 2010




