Ann. Telecommun. (2010) 65:771–776. DOI 10.1007/s12243-010-0210-2
The next-generation ARC middleware
O. Appleton · D. Cameron · J. Cernak · P. Dóbé · M. Ellert · T. Frågåt · M. Grønager · D. Johansson · J. Jönemo · J. Kleist · M. Kocan · A. Konstantinov · B. Kónya · I. Márton · B. Mohn · S. Möller · H. Müller · Zs. Nagy · J. K. Nilsen · F. Ould Saada · Katarina Pajchel · W. Qiang · A. Read · P. Rosendahl · G. Roczei · M. Savko · M. Skou Andersen · O. Smirnova · P. Stefán · F. Szalai · A. Taga · S. Z. Toor · A. Wäänänen · X. Zhou
Received: 15 September 2009 / Accepted: 15 September 2010 / Published online: 2 October 2010. © The Author(s) 2010. This article is published with open access at Springerlink.com
Abstract The Advanced Resource Connector (ARC) is a light-weight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources. ARC aims at providing general purpose, flexible, collaborative computing environments suitable for a range of uses, both in science and business. The server side offers the fundamental job execution management, information and data capabilities required for a Grid. Users are provided with an easy to install and use client which provides a basic toolbox for job and data management. The KnowARC project developed the next-generation ARC middleware, implemented as Web Services with the aim of standard-compliant interoperability.
Keywords Grid · Distributed computing · Middleware · Standardization · Interoperability · Web service
O. Appleton · D. Cameron · T. Frågåt · A. Konstantinov · J. K. Nilsen · F. Ould Saada (B) · K. Pajchel · W. Qiang · A. Read · A. Taga
University of Oslo, Oslo, Norway
e-mail: email@example.com
K. Pajchel
e-mail: firstname.lastname@example.org
J. Cernak · M. Kocan · M. Savko
Pavol Jozef Šafárik University, Košice, Slovak Republic
P. Dóbé · I. Márton · Zs. Nagy · G. Roczei · P. Stefán · F. Szalai
NIIF/HUNGARNET, Budapest, Hungary
M. Ellert · B. Mohn · P. Rosendahl · S. Z. Toor
Uppsala University, Uppsala, Sweden
M. Grønager
NDGF, Kastrup, Denmark
1 Introduction

Many collaborative projects with common data and computing needs require a system to facilitate the sharing of resources and knowledge in a simple and secure way. Some 10 years ago, the high-energy physics community involved in the new detectors of the Large Hadron Collider (LHC) at CERN, the European high-energy physics laboratory outside Geneva, faced very challenging computing demands combined with the need for worldwide distribution of data and processing. The emerging vision of Grid computing as an easily accessible, distributed, pervasive resource was embraced by the physicists and answered both technical and political requirements for data processing to support the LHC. In 2002 NorduGrid, a collaboration of leading Nordic academic institutions, introduced the
D. Johansson
Linköping University, Linköping, Sweden
J. Jönemo · B. Kónya · O. Smirnova
Lund University, Lund, Sweden
J. Kleist
Aalborg University, Aalborg, Denmark
S. Möller
University of Lübeck, Lübeck, Germany
H. Müller · X. Zhou
University of Geneva, Geneva, Switzerland
M. Skou Andersen · A. Wäänänen
University of Copenhagen, NBI, Copenhagen, Denmark
Advanced Resource Connector (ARC) middleware as a complete Grid solution for the Nordic region and beyond.
As NorduGrid represented stakeholders with highly heterogeneous hardware, it was necessary for its software to run natively on a range of different platforms. They developed a light-weight, non-intrusive solution that respects local policies and assures security, both for the resource providers and the users. ARC was and still is developed following a bottom-up approach: start with something simple that works for users, and add functionality gradually. The decentralized services of ARC make the system stable and reliable, and have facilitated the creation of a highly efficient distributed computing resource accessed by numerous users via an easy to use client package.
Like many other middlewares, ARC development started by building on the Globus Toolkit. This first generation of middlewares was quite diverse, as there were few standards in the field, making interaction between Grids difficult. In recent years there has been a growing awareness of and need for interoperability, which has resulted in a number of initiatives working towards Grid standards that can drive Grid development and allow for interoperability.
The EU-funded KnowARC project developed the next-generation ARC middleware, re-engineering the software into a modular, Web Services (WS)-based, standard-compliant basis for future Grid infrastructures. Although somewhat overshadowed by the emerging cloud paradigm, Grids remain a highly relevant solution for scientific and business users. ARC in particular is a more cloud-like solution than other Grid middlewares, partly due to the absence of a strict coupling between Grid storage and the compute element.
2 The ARC middleware design
From the beginning of ARC, major effort has been put into making the ARC Grid system stable by design. This is achieved by avoiding centralized services which could represent single points of failure, and by building the system around only three mandatory components:
The computing service, implemented as a GridFTP/Grid Manager (GM) pair of services. The GM, which runs on each computing resource's front-end, is the heart of the ARC Grid. It serves as a gateway to the computing resource by providing an interface between the outside world and a Local Resource Management System (LRMS). Its primary tasks are job manipulation and job-related data management, which includes download and upload of input and output data as well as caching of popular data. It may also handle accounting and other essential functions.
The information system serves as the nervous system of an ARC Grid. It is implemented as a distributed database, a setup which gives this important service considerable redundancy and stability. The information is stored locally for each service in a local information system, while the hierarchically connected indexing services maintain the list of known resources. ARC also provides a powerful and detailed monitoring web page showing up-to-date status and workload related to resources and users.
The brokering client is the brain of the Grid. It has powerful resource discovery and brokering capabilities, and is able to distribute a workload across the Grid. This interaction between the distributed information system and the user clients makes centralized workload management unnecessary and thus avoids typical single points of failure. The client toolbox provides functionalities for all the Grid services, covering job management and data management as well as user and host credential handling. In addition to the monitoring web page, users have access to detailed job information and real-time access to log files. In order to make the Grid resources easily accessible, the client has been kept very light-weight and is distributed as an easy to install stand-alone package available for most popular platforms.
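The matchmaking and ranking performed by the brokering client can be illustrated with a simplified sketch. The resource attributes, the job requirements, and the "most free slots first" ranking rule below are hypothetical, chosen only to show the idea of client-side brokering over information-system data; they are not ARC's actual algorithm or schema.

```python
# Illustrative sketch of client-side matchmaking and brokering.
# Attribute names and the ranking rule are made up for illustration.

def broker(resources, job):
    """Pick the best-matching resource for a job description.

    resources: list of dicts as they might be published by an
    information system; job: dict of requirements.
    """
    # Matchmaking: keep only resources that satisfy the requirements.
    candidates = [
        r for r in resources
        if r["free_slots"] >= job["cpus"]
        and r["max_walltime"] >= job["walltime"]
        and job["runtime_env"] in r["runtime_envs"]
    ]
    if not candidates:
        return None
    # Brokering: rank the remaining resources, here simply by the
    # number of free slots (more free slots first).
    return max(candidates, key=lambda r: r["free_slots"])

resources = [
    {"name": "siteA", "free_slots": 4, "max_walltime": 3600,
     "runtime_envs": {"APPS/HEP/ATLAS"}},
    {"name": "siteB", "free_slots": 120, "max_walltime": 86400,
     "runtime_envs": {"APPS/HEP/ATLAS", "APPS/BIO/BLAST"}},
]
job = {"cpus": 8, "walltime": 7200, "runtime_env": "APPS/HEP/ATLAS"}

print(broker(resources, job)["name"])  # prints: siteB
```

Because each client carries this logic itself, no central workload manager is needed: the decision is made locally from whatever the information system reports at submission time.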
The ARC middleware is often used in environments where the resources are owned by different institutions and organizations. A key requirement in such an environment is that the system is non-intrusive and respects local policies, especially those related to the choice of platform and security. The computing service is deployed as a thin and firewall-friendly Grid front-end layer, so there is no middleware on the compute nodes, and it can coexist with other middlewares. The job management is handed over to the local batch system, and ARC supports a long list of the most popular LRMSs (PBS, LSF, Torque, LoadLeveler, Condor, Sun Grid Engine, SLURM). Data management is handled by the GM on the front-end, so no CPU time is wasted on staging files in or out. A job is allowed to start only if all input files are staged in, and in case of failure during stage-out, it is possible to resume the job from the stage-out step at a later point, for example when the problem causing the failure is fixed. This architecture and these workflow principles lead to a highly scalable, stable and efficient system showing particularly good results for high-throughput and data-intensive jobs.
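In a classic ARC deployment, this front-end behaviour is driven by a single configuration file, arc.conf. The fragment below is a sketch in the style of that file; the directory paths are invented, and option names should be checked against the arc.conf reference for the deployed release:

```
[common]
# Hand job execution over to the local batch system; PBS is one of
# the supported LRMS back-ends (LSF, Condor, SLURM, ... are others).
lrms="pbs"

[grid-manager]
# Session and cache directories managed by the Grid Manager on the
# front-end; no middleware is installed on the compute nodes.
sessiondir="/scratch/grid/sessions"
cachedir="/scratch/grid/cache"
```

The point of the sketch is the division of labour: the LRMS choice and all staging directories live on the front-end, so compute nodes need no Grid software at all.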
As a general purpose middleware, ARC strives to be portable and easy to install, configure and operate. One of the explicit goals for the upcoming releases is to reduce external dependencies, such as the one on the Globus Toolkit, and to be included in the repositories of the most popular Linux distributions such as Debian and Fedora. By porting both the Globus Toolkit and the ARC software to Windows, there is a hope to open ARC to a broader range of user groups, both on the client and the server side.
3 Next-generation ARC
The EU-funded project KnowARC and the NorduGrid collaboration have developed the next generation of ARC. Building on the successful design and the well-established components, the software has been re-engineered, with the implementation of the Service Oriented Architecture (SOA) concept and the addition of standard-compliant Web Service (WS) interfaces to existing services as well as new components. The next-generation ARC consists of flexible, pluggable modules that may be used to build a distributed system that delivers functionality through loosely coupled services. While faithful to the architecture described in Section 2, ARC will at the same time provide most of the capabilities identified by the Open Grid Service Architecture (OGSA) roadmap, of which the execution management, information and data capabilities are the most central. OGSA defines a core set of WS standards and describes how they are used together to form the basic building blocks of Grids. Some of the capabilities are provided by the novel ARC hosting environment, the Hosting Environment Daemon (HED), which will be described in the next section. Figure 1 is an overview of the next-generation ARC architecture which shows the internal structure of both the client and the server side.
KnowARC had a strong focus on the Open Grid Forum (OGF) standardization efforts and both implemented and participated in the development of a range of standards. The project navigated through the fragmented landscape of standards, adhering to the usable and complete ones; in the case of incomplete or partly matching standards, it still applied them and propagated feedback to the relevant Standard Development Organizations (SDOs). Where standards were missing, ARC developers provided proposals supported by implementations to the appropriate SDOs and took an active role in the standard development process. As a result of this commitment, ARC developers are currently major contributors to the GLUE2.0 information schema specification and other OGF working groups.
Fig. 1 Overview of the ARC architecture showing the internal structure of both the client and the server side. The client, based on libarcclient, is available via a number of interfaces. Plugin adaptors for other target CEs can be easily added. The server side is structured around the HED container hosting all functional components. The communication is WS based, but there are also mechanisms for pre-WS backwards compatibility
The goal of this effort is to obtain interoperability withother middleware solutions which follow the standards.
3.1 Hosting environment daemon
The next-generation server-side ARC software is centered on the Hosting Environment Daemon (HED) web-service container, which is designed to provide a light-weight, flexible, modular and interoperable basis. HED differs from other WS hosting environments in that it is designed to provide a framework for gluing together functionalities, not to re-implement various standards. One of its main functionalities is to be the connection to the outside world and to provide efficient inter-service communication. Through the crucial message chain component, HED supports different levels and forms of communication, from simple UNIX sockets to HTTPS/SOAP messages. This implementation separates the communication-related functionalities from the service logic itself. As HED handles the communication, it also implements the different security policies. In this area too, ARC has focused on using standards, applying the SSL, TLS, and GSI protocols and authentication mechanisms based on X.509, while VO management is supported using VOMS.
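The message chain idea, i.e. keeping transport and security handling out of the service logic, can be sketched generically. The following is a toy chain-of-responsibility illustration in Python; the class names and the trivial "decoding" and policy check are invented and do not reflect HED's actual interfaces:

```python
# Toy illustration of a message chain: each component processes a
# message and hands it to the next, so transport and security
# concerns stay out of the service logic. All names are hypothetical.

class Component:
    def __init__(self, next_component=None):
        self.next = next_component

    def process(self, message):
        raise NotImplementedError

class TransportDecoder(Component):
    def process(self, message):
        # Stand-in for stripping a transport envelope (HTTP/SOAP, ...).
        message["payload"] = message["raw"].strip()
        return self.next.process(message)

class SecurityHandler(Component):
    def process(self, message):
        # Stand-in for evaluating a security policy before the service.
        if not message.get("authenticated"):
            return {"status": "denied"}
        return self.next.process(message)

class EchoService(Component):
    def process(self, message):
        # The actual service logic sees only the decoded payload.
        return {"status": "ok", "reply": message["payload"].upper()}

chain = TransportDecoder(SecurityHandler(EchoService()))
print(chain.process({"raw": "  hello  ", "authenticated": True}))
```

Swapping the front of the chain (say, a UNIX-socket reader for an HTTPS/SOAP parser) leaves the service component untouched, which is the separation the HED design aims for.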
3.2 Execution capability
The ARC Resource-coupled EXecution service (A-REX) provides the computing element functionalities, offers a standard-compliant WS interface and
implements the widely accepted Basic Execution Service. In order to provide vital information about service states, capabilities and jobs, A-REX has implemented the GLUE2.0 information schema (to which NorduGrid and KnowARC have been major contributors).
Although KnowARC had a strong focus on novel methodologies and standards, the core of the A-REX service is the powerful, well-tested and robust Grid Manager familiar from the pre-WS ARC. Thus, the new execution service implementation imposes the same non-intrusive policies of restricting the jobs to dedicated session directories and avoiding middleware installation on the compute nodes. A-REX supports a long list of batch systems and offers logging capability and support for Runtime Environments. The efficient workflow, in which all input and output staging is managed by the front-end, is preserved. This distinction between the tasks done on the front-end and on the compute nodes has resulted in a highly efficient utilization of the computing resources.
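A job submitted to the execution service is typically described in a job description language such as ARC's xRSL (or standard JSDL). The fragment below is an illustrative xRSL sketch; the executable, file names and URL are invented, and it is meant only to show how input/output staging is declared so that the front-end can perform it on the job's behalf:

```
&(executable="analysis.sh")
 (arguments="run42")
 (inputFiles=("data.root" "gsiftp://example.org/data/run42.root"))
 (outputFiles=("result.root" ""))
 (stdout="out.log")
 (stderr="err.log")
 (runTimeEnvironment="APPS/HEP/ATLAS")
```

The job only starts once the front-end has staged in data.root, and the declared output file is kept for stage-out afterwards, matching the staging workflow described above.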
3.3 Information system
The functioning of an ARC-enabled Grid strongly relies on an efficient and stable information system that allows the distributed services to find each other and co-operate in a coherent way, providing what looks to a user like a uniform resource. This special role requires a high level of redundancy in order to avoid single points of failure. The basic building block is the Information System Indexing System (ISIS) container, implemented as a WS within HED, with which every ARC service registers itself. Its functionality is twofold: on the one hand, ISIS instances work as ordinary Web Services, and on the other hand, they maintain a peer-to-peer (P2P) self-replicating network, the ISIS cloud. Being implemented as a service in HED allows all WS-related communication to be delegated to the hosting environment and to profit from the flexible and uniform configuration system, security framework and built-in self-registration mechanism.
The user clients then query any nearby ISIS service in order to perform the resource and service discovery necessary for the matchmaking and brokering that is part of the job submission process.
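The interplay of self-registration, replication and discovery can be sketched with a toy model. The following Python snippet mimics the idea of a self-replicating index cloud, not ISIS's actual protocol; peer topology, service names and the one-hop replication rule are all invented for illustration:

```python
# Toy model of a self-replicating index cloud: a service registers
# with one index peer and becomes discoverable through any peer.
# Names, topology and the replication rule are hypothetical.

class IndexPeer:
    def __init__(self):
        self.registry = {}   # service id -> endpoint
        self.peers = []      # other index peers in the P2P network

    def register(self, service_id, endpoint):
        """A service registers here; the entry is copied to all peers."""
        self.registry[service_id] = endpoint
        for peer in self.peers:
            # Replicate directly, without triggering further propagation.
            peer.registry[service_id] = endpoint

    def query(self, service_id):
        return self.registry.get(service_id)

# Build a small cloud of three mutually connected index peers.
a, b, c = IndexPeer(), IndexPeer(), IndexPeer()
a.peers, b.peers, c.peers = [b, c], [a, b], [a, b]

# An execution service registers with one peer only...
a.register("a-rex@siteA", "https://siteA.example.org/arex")

# ...but a client can discover it through any nearby peer.
print(c.query("a-rex@siteA"))  # prints: https://siteA.example.org/arex
```

Because every peer ends up holding the full registry, the loss of any single index peer does not break discovery, which is the redundancy property the section describes.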
3.4 Data management capability
One of the reasons why the high-energy physics community has embraced Grid technology is its ability to provide a world-wide distributed data storage system. Each of the four LHC experiments will produce several petabytes of useful data per year. No institution is...