

CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE
Concurrency Computat.: Pract. Exper. 2011; 23:1179–1195
Published online 14 January 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/cpe.1695

An open and scalable architecture for delivering 3D shared visualization services to heterogeneous devices

Gianluca Paravati, Andrea Sanna∗,†, Fabrizio Lamberti and Luigi Ciminiera

Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy

SUMMARY

Shared visualization environments represent an effective means to enhance collaborative work in engineering and scientific design tasks. The availability of high-speed networks allows researchers to work together from geographically distributed locations, and mobile devices are able to carry out ubiquitous 3D visualization tasks through wireless network connections. This paper presents a scalable architecture for the delivery of shared 3D visualization services to heterogeneous terminals ranging from powerful workstations to mobile devices such as PDAs and smart-phones. The framework design allows both desktop and mobile clients to simultaneously visualize the same model by sharing a common view. Remote-rendering servers support effective visualization on thin clients, and a load balancing mechanism provides efficient resource usage. Copyright © 2011 John Wiley & Sons, Ltd.

Received 10 June 2009; Revised 9 November 2010; Accepted 11 November 2010

KEY WORDS: collaborative visualization; mobile devices; remote visualization

1. INTRODUCTION

The availability of increasingly powerful computers and high-speed communication networks has facilitated the development of a wide variety of multimedia services. Among these services, collaborative visualization [1] is currently of particular interest because it is able to play a significant role across a large spectrum of disciplines, including problem solving [2–7], multi-user and collaborative virtual reality environments (VREs) [8–16], and multiplayer online games (MPOGs) [17–20].

Despite the large body of work devoted to the design and implementation of collaborative visualization environments (CVEs), only a few solutions have attempted to provide efficient support for heterogeneous devices. In particular, mobile clients (MCs) such as tablet PCs, ultra-mobile PCs, PDAs, and smart-phones are often not supported by existing implementations; moreover, when partial support is available, it generally provides a restricted set of functionalities with poor performance. The lack of consolidated and efficient heterogeneous support is essentially due to the difficulty of bringing 3D visualization to resource-constrained devices. Although the visualization of 3D graphics is mostly consolidated on graphics workstations and desktop machines, 3D visualization on thin (and, in particular, mobile) devices remains a challenging task [21]. Technological advances allow geometries of 'moderate' complexity to be rendered locally; in this case, the object data are processed directly on the device through specialized hardware that supports the rendering process. However, this solution is not applicable to complex 3D scenes due to hardware limitations that affect thin client devices. A widely adopted approach that is able to overcome this problem makes use of specialized graphics resources residing on high-end machines [22] to generate a 3D rendering that is later streamed over the network to remote clients. Despite its flexibility, this approach raises critical issues related to network speed, bandwidth fluctuations, and device processing capabilities when stream decoding is needed.

∗Correspondence to: Andrea Sanna, Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy.

†E-mail: [email protected]

This paper presents the design of a scalable and hybrid architecture in which MCs can display complex 3D geometries by sharing a visualization session with other (local rendering) desktop and mobile users‡ by exploiting remote and distributed visualization resources. The proposed distributed framework allows heterogeneous clients to locate graphics resources (i.e. remote-rendering functionalities and shared visualization sessions). The workload is automatically balanced among the involved servers, and each (mobile) client (which displays a scene by means of a remote rendering server (RS)) is managed in a customized way. The server responsible for rendering a 3D scene for one or more clients generates multiple flows that encapsulate compressed still images. The RS is also able to support different collaborative visualization sessions concurrently. Each data flow is dynamically tailored to optimize network and device performance. Experimental tests on high-speed wireless connections demonstrate that the proposed solution achieves visualization at more than 20 frames per second (fps) on a real mobile device. Additionally, experiments with collaborative visualization scenarios show that the designed architecture easily scales to effectively support an increasing number of clients.

This paper is organized as follows: the relevant background on shared/collaborative visualization systems and their features is reviewed in Section 2, and the overall structure of the proposed architecture is presented in Section 3. Usage scenarios and the load-balancing algorithm are discussed in Sections 4 and 5, respectively. Finally, experimental tests and results are provided in Section 6.

2. BACKGROUND

This section reviews selected representative solutions for developing CVEs, specifically focusing on their main limitations and drawbacks. A more exhaustive review and classification of previous works can be found in [23].

OpenGL VizServer 3.1 [24] enables machines lacking specialized high-performance graphics hardware to share OpenGL applications remotely. VizServer renders the current frame and transmits the frame buffer to collaborators as a stream of images that can be either uncompressed or compressed. The former solution can be used over a local area network (where more bandwidth is generally available), whereas the latter is targeted at standard wide-area Internet connections. In this case, the user can choose among four types of preconfigured parameter settings for image encoding. However, the user who creates a collaborative session (the session master) specifies the initial compression settings for all collaborators; thus, service personalization is impossible. Moreover, from an architectural point of view, all users share the same visualization (specifically based on resource centralization), as collaboration among users is achieved by connecting to a centralized RS responsible for streaming the image-based flow to all clients. It is worth noting that VizServer does not take advantage of the processing capabilities of machines that might be able to render complex 3D scenes locally; only remote visualization is taken into account.

Similar considerations also apply to the Sun Shared Visualization system [25], which is based on the open source VirtualGL project [26]. Like VizServer, the Sun Shared Visualization software provides users with the ability to remotely access and share 3D-accelerated applications running on a central resource. The rendering phase is followed by the distribution of compressed images to the users, who can manually choose among three encoding settings: high, medium, and low quality. Like VizServer, Sun Shared Visualization exploits only centralized remote rendering, without using local graphics capabilities where possible.

‡In the remainder of the paper, the terms client, user, and device are used interchangeably.


HP Remote Graphics Software [27] is a commercial solution that offers a shared visualization service. As in the previous cases, it considers only remote rendering, and the entire desktop is streamed to the clients. The core of this technology consists of a proprietary digital image compression method and a proprietary image-processing algorithm. Because it does not require any modifications to existing applications, HP Remote Graphics Software may be used in conjunction with any software program. The receivers can be workstations, PCs, or notebooks, but mobile devices are not supported. Furthermore, because HP Remote Graphics Software consists of one sender and one or more receivers, the system does not scale as the number of collaborators grows.

An alternative approach is taken by COVISE [4], which represents an example of a modular visualization environment (MVE). In COVISE, one of the users plays the role of a master who can take control over the visualization, while the other users can only observe changes (e.g. translation and rotation of the given scene) from a unique local viewpoint. COVISE exploits the local processing capabilities of the clients by transmitting only changes affecting the data, and each machine renders the scene locally.

In general, shared visualization environments often make assumptions about the homogeneity of the client devices' graphics capabilities. For instance, in COVISE, each collaborator is assumed to have local rendering support, whereas in VizServer, the clients are expected to rely completely on remote resources. In contrast, RAVE [14] is a collaborative visualization system that is able to automatically discover the available resources (either local or remote) and, furthermore, to exploit them through workload distribution techniques by reacting to dynamic changes in usage and allocation. RAVE runs as a background process using Grid/Web services and enables the sharing of resources with other users without the need to commandeer an entire machine. RAVE provides a persistent collaborative environment where users can cooperate both synchronously and asynchronously by means of previously recorded sessions. Moreover, unlike other solutions, RAVE supports a wide range of clients, from hand-held PDAs to high-end servers, thus positioning itself between COVISE and VizServer. It is able to use local rendering hardware, if present; otherwise, it exploits remote rendering services by transmitting the entire frame buffer. However, in this case, the image stream is not compressed; therefore, performance is bounded by the available network bandwidth. RAVE is able to support clients ranging from PDAs to immersive VR systems, but unfortunately, this flexibility imposes a cost in terms of performance. In fact, thin clients experience very poor visualization frame rates (1–3 fps) due to architectural choices and the lack of a mechanism for adaptively customizing the interaction between the client and the RS.

A service-oriented approach is presented in [28], where a formal description of the visualization pipeline [29] (shared by all participants in a given session) is also introduced. According to the notation in [28], the visualization process is considered as a pipeline in which a number of steps are executed starting from the data set: load data (D), filter (F), map (M), and render (R)§. Each of these steps is equivalent to a module in the visualization pipeline; in [28], modules are implemented as services. A Pipeline Editor allows users to compose their own visualization pipelines, thus providing extremely flexible support to different classes of users. Moreover, users only need to consider the visualization process at a logical level, without being aware of the physical location of the services. Unfortunately, this approach may not be able to provide satisfactory performance in terms of frame rate and latency.
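The D/F/M/R decomposition above can be sketched as a chain of composable stages, where each stage could equally well be implemented as a local module or a remote service. The following is an illustrative sketch, not the implementation of [28]; the toy stage functions and data are invented for the example.

```python
# The visualization pipeline as composable stages:
# load data (D) -> filter (F) -> map (M) -> render (R).
from typing import Callable, List

Stage = Callable[[object], object]

def make_pipeline(stages: List[Stage]) -> Stage:
    """Compose stages left to right: render(map(filter(load(x))))."""
    def run(data: object) -> object:
        for stage in stages:
            data = stage(data)
        return data
    return run

# Hypothetical stage implementations for a toy scalar data set.
load_data = lambda path: [1.0, 2.0, 3.0, 4.0]              # D: load
filt      = lambda xs: [x for x in xs if x > 1.5]           # F: filter
map_geom  = lambda xs: [("vertex", x) for x in xs]          # M: map to geometry
render    = lambda geom: f"frame({len(geom)} primitives)"   # R: render

pipeline = make_pipeline([load_data, filt, map_geom, render])
```

In this view, a Pipeline Editor simply assembles the list passed to `make_pipeline`, and relocating a stage to a remote service changes only where the callable executes, not the composition.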

Issues related to multimedia distribution and sharing have been extensively addressed in the past. For instance, a media streaming architecture was proposed in [30], in which a streaming server is connected to a number of clients through a collaborative streaming service system. However, the system described in [30] is specifically tailored to multimedia content playback, whereas the solution proposed in this paper is oriented to interactive 3D content manipulation. The main differences lie in the type of interactivity required of the video stream. In fact, in [30], the issues related to collaborative session management center on control of the common session state through interactive pause/unpause actions of the shared streams. For instance, referring to the use case in [30], a user can pause a movie session both on the main TV screen and on a PDA that is receiving a different audio stream of the movie (e.g. to enable a French-speaking user to watch the movie). In contrast, this work concerns collaborative sessions where virtual scenes are interactively controlled and commonly shared by users. For instance, a user can rotate/translate a 3D object, and other remote users can observe the changes as soon as they occur. Similar considerations apply to the work presented in [31], which describes a synchronization architecture that allows for joint and synchronized access to multimedia data on different devices. In addition, unlike [30, 31], the proposed solution considers generic scenarios, where devices able to locally render complex geometries using local hardware resources may share visualization sessions with MCs that rely on remote rendering techniques.

§The notation in [28] will also be used to describe the visualization processes involved in the architecture considered in this paper.

A generic 'late-join' service for distributed interactive media has been proposed in [32]. The 'late-join' problem applies to all latecomers who wish to join an ongoing session after it has already been started: in this case, the current state of the medium must be communicated to the latecomers. The 'late-join' problem is of particular importance when an application includes annotations and modifications that have been made by participants over the course of a shared session (e.g. in a shared whiteboard application). Because the synchronization of 3D viewers in this work does not include the past history of the session (the current state of a 3D object simply depends on the last position and orientation given by the session master), the 'late-join' service proposed in [32] has not been implemented; only basic functionalities have been implemented to update the status of 3D objects.
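Because the shared state reduces to the last pose set by the session master, late-join handling collapses to a single state transfer. A minimal sketch of this idea follows; the class and method names are illustrative, not taken from the paper's code.

```python
# Simplified late-join: session state is only the current pose
# (position, orientation), so a latecomer needs no session history.
class SharedSession:
    def __init__(self):
        self.pose = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))  # position, orientation
        self.members = []

    def update_pose(self, position, orientation):
        """Called when the session master moves the shared viewpoint."""
        self.pose = (position, orientation)
        for member in self.members:
            member.apply(self.pose)

    def join(self, client):
        """A latecomer immediately receives only the current pose."""
        self.members.append(client)
        client.apply(self.pose)

class Viewer:
    def __init__(self):
        self.pose = None
    def apply(self, pose):
        self.pose = pose
```

A full late-join service in the sense of [32] would instead have to replay or summarize the accumulated annotations, which is exactly what this design avoids.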

The present work aims to overcome the limitations discussed above by providing an effective and flexible solution that enables interactive and concurrent collaborative 3D visualization on heterogeneous devices. In this context, the term 'interactive' does not necessarily mean 'real-time', as devices may be connected by high-latency wireless connections; rather, the term denotes the user's ability to dynamically control the visualization. Collaboration among a spatially distributed group of users is guaranteed by shared visualization sessions and synchronized interactions. In the designed architecture, remote servers are used to support thin clients in the management of complex 3D geometries, while a novel load-balancing strategy allows the workload to be evenly distributed among multiple RSs. Server performance is measured offline and represented as a lookup table of estimated frame rates (on the client side) as a function of the number of connected clients and the number of active sessions. This strategy allows the system to select RSs so as to maximize client performance and balance the workload. Moreover, the proposed architecture can cope with different types of clients by taking advantage of local hardware acceleration on desktop machines as well as by supporting thin devices through the remote rendering paradigm.
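The lookup-table selection just described can be sketched as follows. This is a hedged illustration of the idea, not the paper's algorithm: the table values, the clamping rule for loads beyond the measured range, and the data layout are all assumptions made for the example.

```python
# Server selection from offline-measured lookup tables of estimated
# client-side frame rates, indexed by (connected clients, active sessions).
def estimated_fps(table, clients, sessions):
    # Clamp to the largest measured configuration if load exceeds the table.
    c = min(clients, max(k[0] for k in table))
    s = min(sessions, max(k[1] for k in table))
    return table[(c, s)]

def select_server(servers):
    """servers: list of dicts with 'name', 'table', 'clients', 'sessions'.
    Pick the RS whose table predicts the best fps for one more client."""
    return max(
        servers,
        key=lambda rs: estimated_fps(rs["table"], rs["clients"] + 1, rs["sessions"]),
    )["name"]

# Invented measurements: RS-A degrades faster under load than RS-B.
table_a = {(1, 1): 30.0, (2, 1): 22.0, (3, 1): 15.0}
table_b = {(1, 1): 28.0, (2, 1): 25.0, (3, 1): 20.0}
servers = [
    {"name": "RS-A", "table": table_a, "clients": 1, "sessions": 1},
    {"name": "RS-B", "table": table_b, "clients": 1, "sessions": 1},
]
```

With one client already connected to each server, the table predicts 22 fps on RS-A and 25 fps on RS-B for a second client, so the newcomer is steered to RS-B even though RS-A is faster when idle.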

3. ARCHITECTURE OVERVIEW

The software components involved in the proposed architecture, and the relations among them, are depicted in Figure 1. Specifically, the visualization framework comprises the following components:

1. Graphics Resource Manager (GRM);
2. Collaboration Session Manager (CSM);
3. Desktop Viewer Client (DVC);
4. Mobile Viewer Client (MVC);
5. Rendering Server (RS).

Each of these components is a stand-alone application; components can run either on the same or on different machines (e.g. an RS and a CSM can run on the same workstation). The system is designed to manage an arbitrary number of instances of each of these applications in order to achieve scalability.


Figure 1. Software architecture overview.

3.1. Graphics Resource Manager

A GRM is an application that populates and manages a database in order to organize available graphics resources (i.e. the services provided by RSs and CSMs). In general, more than one GRM can be active concurrently, providing an efficient and fault-tolerant mechanism for resource identification. As shown in Figure 1, a Service Registration module is triggered when registration messages generated by RSs and CSMs are received. A GRM allows users to obtain information about available RSs (which MCs need in order to obtain remote rendering services) and collaborative visualization sessions (both types of clients need this information in order to publish and to join sessions).

The system automatically assigns an RS to MCs and allocates a given CSM to manage a collaborative session. The address of the CSM in control of a session is transparently forwarded to clients that want to join (see Section 3.2). The Service Delivery function delivers this information and also supplies the list of available RSs and CSMs.
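The two GRM roles described above, registration of advertisements from RSs and CSMs and delivery of resource lists to clients, can be sketched as a small in-memory registry. The class and method names are assumptions for illustration, not the paper's API.

```python
# A GRM sketch: Service Registration stores (kind, name, address) records;
# Service Delivery answers queries with the current resource list.
class GraphicsResourceManager:
    def __init__(self):
        self._services = {"RS": {}, "CSM": {}}

    def register(self, kind, name, address):
        """Handle a registration message from an RS or a CSM."""
        self._services[kind][name] = address

    def unregister(self, kind, name):
        """Drop a resource that has gone offline."""
        self._services[kind].pop(name, None)

    def deliver(self, kind):
        """Service Delivery: return the available resources of one kind."""
        return dict(self._services[kind])

# Example: one rendering server and one session manager announce themselves.
grm = GraphicsResourceManager()
grm.register("RS", "rs1", "10.0.0.5:7000")
grm.register("CSM", "csm1", "10.0.0.9:8000")
```

Running several such GRMs concurrently, each receiving the same registration messages, gives the fault tolerance mentioned above without any coordination beyond re-registration.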

3.2. Collaborative Session Manager

Figure 2. Communication among multiple rendering pipelines: load data (D), filter (F), map (M), and render (R).

A Collaborative Session Manager (CSM) directs the management of shared visualization sessions among local rendering applications (i.e. desktop clients (DCs) and RSs). Each RS acts as a sort of gateway for MCs that request to join established collaborative sessions with DCs and with other MCs that are exploiting different RSs (see Section 3.3). A CSM publishes its sessions to the GRMs through its Service Advertising module. A Session Manager module listens for incoming connections from different local rendering applications and maintains an updated list of connected users for each session. A CSM is automatically assigned by the system to the first user creating a new shared session, and that CSM gathers relevant information about the session (metadata). Users who want to join a selected session must subscribe to the CSM in charge of the session; this is done through a service request directed to the GRM. In a collaborative session, only one client at a time can interact with the scene, whereas the other clients can only view scene changes from the same shared point of view. The client in control of the scene is called the session master and is identified by a token. A protocol allows clients to ask the session master for the token, and concurrent requests are managed on a first-come/first-served basis. The session master can also refuse to release the token. The Synchronization Manager module receives a stream of status data from the session master, representing the position and orientation of the 3D virtual observer, and then forwards it to all other local rendering clients.
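The token protocol above can be sketched as follows: one session master holds the token, requests queue first-come/first-served, and the master may refuse to hand it over. This is an illustrative sketch under those assumptions; the class and method names are not from the paper's implementation.

```python
# Floor control for a collaborative session: a single token identifies
# the session master; pending requests are served oldest first.
from collections import deque

class TokenManager:
    def __init__(self, first_master):
        self.master = first_master
        self.pending = deque()  # FIFO queue of requesting clients

    def request(self, client):
        """Queue a token request; the master and duplicates are ignored."""
        if client != self.master and client not in self.pending:
            self.pending.append(client)

    def release(self, granted=True):
        """Master answers the oldest request; it may refuse (granted=False).
        Returns the current master after the decision."""
        if not self.pending:
            return self.master
        nxt = self.pending.popleft()
        if granted:
            self.master = nxt
        return self.master

# Example: alice creates the session, bob and carol ask for control.
tm = TokenManager("alice")
tm.request("bob")
tm.request("carol")
```

Serving requests from a FIFO queue is what makes the policy first-come/first-served; a refused request is simply dropped, so the client must ask again.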

3.3. Desktop and Mobile Viewer Clients

The visualization processes of local rendering clients and rendering services can be described using the same notation as in [28]. In the proposed solution (see Figure 2), the visualization pipeline is replicated for each local rendering client and shared by remote rendering clients. A Parameter Control Client (PCC) is a module that provides the mechanisms for interaction with an associated pipeline through the modification of parameters; more specifically, a PCCL is a PCC for local rendering devices (which create a replicated pipeline), whereas a PCCR is a PCC for remote rendering devices (which only share a pipeline provided by a rendering service). A Pipeline Controller Service (PCS) acts as a common workspace that shares the control parameters with subscribing clients and services. PCCRs interact with PCS|PCCL modules in charge of managing the common shared pipeline. Although only two rendering pipelines are shown in Figure 2, any number of pipelines can be linked together to create shared visualization sessions. Such a shared visualization system allows support for various classes of devices that may or may not be able to perform local rendering of complex 3D scenes. Thus, a single shared visualization session can be composed of both devices that locally replicate a visualization pipeline and thin clients that exploit a remote shared rendering pipeline (i.e. an RS). It is worth observing that although PCCRs share the same pipeline, the rendering service is customized for each client depending on device characteristics.

In a shared visualization environment, the replicated pipelines share the same graphical data representation of the scene by maintaining the synchronization of scene parameters through the PCSs. To this end, the connected viewers exchange a set of commands over the communication layer. Synchronization is guaranteed by a CSM, which can be seen as a PCS module. Updates are sent to all components that subscribe to a shared session service (i.e. PCCL for DCs and PCS|PCCL for RSs). In this scenario, an RS acts as a PCS|PCCL module (that is, a gateway) that manages and routes requests and commands coming from all connected PCCR clients and directed to other collaborators. The RS thus acts as a joint gateway between the local rendering and the remote rendering communities, where a local rendering community is defined as a set of desktop PCs (PCCLs) connected through the network that are able to render the visualization scene locally. In contrast, a remote rendering community is a set of thin clients connected through the network that are not able to render complex scenes locally, thus requiring a remote rendering service.
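The PCS-as-shared-workspace idea can be sketched as a simple publish/subscribe holder of control parameters (here just the camera pose). The class names follow the paper's terminology, but the code is an illustrative sketch, not the actual PCS/PCC implementation.

```python
# A PCS holds shared control parameters and pushes every change to all
# subscribed PCCs, whether they drive a local replicated pipeline (PCCL)
# or a shared remote one (PCCR).
class PipelineControllerService:
    def __init__(self):
        self.params = {}
        self.subscribers = []

    def subscribe(self, pcc):
        """New subscribers immediately receive the current parameters."""
        self.subscribers.append(pcc)
        pcc.on_update(dict(self.params))

    def set_param(self, key, value):
        """Update a shared control parameter and notify all subscribers."""
        self.params[key] = value
        for pcc in self.subscribers:
            pcc.on_update(dict(self.params))

class ParameterControlClient:
    def __init__(self):
        self.view = {}
    def on_update(self, params):
        self.view = params  # a real PCC would re-render here

# Example: one local and one remote client track the same camera pose.
pcs = PipelineControllerService()
local_pcc, remote_pcc = ParameterControlClient(), ParameterControlClient()
pcs.subscribe(local_pcc)
pcs.subscribe(remote_pcc)
pcs.set_param("camera", (10.0, 0.0, 0.0))
```

Pushing a copy of the full parameter set on each change keeps every pipeline's view consistent without requiring subscribers to track deltas.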

A DVC is an OpenSG-based application [33] that is able to render 3D objects locally. The Input GUI allows users to autonomously inspect a 3D model and to initialize/join a shared session with other users. When the user creates a new collaborative session, a CSM Selector module is triggered in order to choose an available CSM module according to a load balancing strategy (see Section 4). Conversely, when the user joins an already created session, the assigned CSM manages the existing session. A Local Rendering Session module exchanges messages with the assigned CSM to keep the views synchronized among the users.

The MVC was developed using the Java 2 Platform Micro Edition, a subset of the Java platform that provides a certified collection of Java APIs for the development of software applications tailored to tiny, small, and resource-constrained devices. The MVC is similar to the desktop version but takes advantage of the rendering capabilities of a remote service. As in the DC, the MVC Input GUI allows a user to autonomously inspect a 3D model and to initialize/join a shared session with other users. When a shared session must be initialized, a Remote Rendering Session module is initiated on the rendering provider side. Each instance of this module on a given RS is related to a single collaborative session. Different visualization sessions are managed by different instances on the same RS. Multiple instances running on different RSs and belonging to the same visualization session replicate the rendering pipeline and provide scalability to the system. When a user wants to share the visualization of a given scene with other collaborators, he/she can either connect to an existing RS instance or set up another instance of the same session on a different RS. In order to use the remote rendering service, the MVC triggers an RS Selector module, which is responsible for selecting the best graphics resource available at the moment according to the load balancing strategy (see Section 4). If the MC succeeds in connecting to the selected RS, the visualization phase begins. The stream of still images received from the RS (see Section 3.4) is decoded by the Image Decoder module and displayed on the local frame buffer through the Visualization module. The Event Generator module allows roto-translation commands to be sent to the RS, resulting in an interactive visualization that gives remote users the feel of local processing. The Flow Manager is part of a module distributed between the MC and the selected RS; this module adapts the image flow (in terms of resolution, compression ratio, and frame rate) according to the available bandwidth and the client device resolution.
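A flow-adaptation rule in the spirit of the Flow Manager can be sketched as follows: given an estimated bandwidth budget and the device resolution, pick a frame rate and image quality whose estimated stream cost fits the budget. The cost model, the quality ladder, and all constants are assumptions made for this illustration; this is not the algorithm of [34].

```python
# Pick (fps, quality) so that width * height * bits_per_pixel * quality * fps
# stays within the estimated bandwidth budget, preferring high fps/quality.
def adapt_flow(bandwidth_bps, device_res, target_fps=20):
    width, height = device_res
    bits_per_pixel = 2.0  # assumed average size of the compressed still images
    while target_fps >= 5:
        for q in (0.9, 0.7, 0.5, 0.3):  # assumed quality ladder, best first
            cost = width * height * bits_per_pixel * q * target_fps
            if cost <= bandwidth_bps:
                return {"fps": target_fps, "quality": q, "resolution": device_res}
        target_fps -= 5  # degrade frame rate before giving up
    # Fall back to the lowest settings if even those exceed the budget.
    return {"fps": 5, "quality": 0.3, "resolution": device_res}
```

On a fast wireless link a QVGA client keeps full frame rate and quality, while on a constrained link the same client first loses quality and only then frame rate, matching the adaptive behavior described above.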

3.4. Rendering Server

An RS is a graphics service provider. It renders a 3D scene on behalf of MCs and delivers compressed images taken from its frame buffer. An RS is registered to one or more GRMs through its Service Advertising module, which is also used for publishing sessions initiated by mobile users. The CSM Selector module retrieves a list of available CSMs and automatically selects the best resource according to the load balancing strategy (see Section 4). When a mobile user requests to initiate either a single-user rendering session or a shared session, a new rendering pipeline is allocated on the RS; all sessions are initiated by the first client. In contrast, when a user joins an already existing session supported by the RS itself, the rendering pipeline of the created visualization session is shared among the users involved. If a user requests to join an already existing session not yet supported by the RS itself (e.g. a shared visualization session created by a local rendering client or by a mobile user exploiting a different RS), a new frame buffer is allocated, and the Local Rendering Session module of the RS provides the mechanisms for synchronization with the other collaborators. A 3D Rendering Application module (based on OpenSG) computes the scene and renders it on a Multi-Resolution Frame Buffer. The most suitable viewport is dynamically selected depending on device characteristics; the viewport is then compressed through the Image Encoder module and finally sent over the communication link. The Flow Manager module takes both the mobile device and network characteristics into account and selects the 'best' encoding parameters (i.e. resolution, compression quality, and frame rate) using the algorithm proposed in [34]. The Event Scheduler module allows commands received from the MVC to be translated into suitable inputs for the 3D Rendering Application module.
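Viewport selection from the Multi-Resolution Frame Buffer can be sketched as picking, from a ladder of pre-rendered resolutions, the largest one that fits the requesting device's screen. The resolution ladder below is an assumption for the example, not a set of values from the paper.

```python
# The RS keeps several rendered resolutions of the same frame; each client
# is served the largest viewport not exceeding its screen size.
RESOLUTIONS = [(1280, 1024), (640, 512), (320, 256), (160, 128)]

def select_viewport(device_w, device_h, ladder=RESOLUTIONS):
    """Pick the largest buffered viewport fitting the device screen."""
    for w, h in ladder:  # ladder is ordered largest to smallest
        if w <= device_w and h <= device_h:
            return (w, h)
    return ladder[-1]  # smallest viewport as a fallback for tiny screens
```

Rendering once at each ladder step and selecting per client is what lets a single shared pipeline serve a workstation and a smart-phone from the same frame, with only the encoding stage customized per device.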

An ad hoc 3D rendering application has been developed to test the effectiveness of the designed architecture; nonetheless, the proposed remote visualization architecture could be used to share generic software applications by applying the methodology presented in [35].

4. USAGE SCENARIOS

The proposed system has been implemented as a support architecture for the visualization modules developed in the European project VICTORY [36]. The main objective of this project was the development of an innovative, distributed, visual objects digital repository that introduces novel search, retrieval, and visualization capabilities to desktop and mobile P2P networks with a particular focus on 3D objects. The VICTORY client application allows a user community to share 3D objects, to perform three types of object search (3D-model search, image search, and sketch search), and to retrieve the found objects. The 3D search engine is based on content, which is extracted by taking into account low-level geometric characteristics as well as context, and maps high-level features (semantic concepts) to low-level features. Once downloaded, a model can be inspected through an incorporated 3D viewer. The main challenge in visualization is to provide an effective solution that achieves delivery of visual content on low-power mobile terminals in order to enable integration on a standard PC network. A group of users can then share the visualization of a downloaded 3D object (e.g. during tele-cooperation among CAD engineers). The rendering is performed locally whenever the visualization takes place on desktop PCs; otherwise, specialized graphics providers supply the rendering functionality when mobile hand-held devices are used. In particular, two application cases were considered: the automotive and gameplay industries.

Based on project requirements, the architecture of the visualization module has been designed to cope with the following usage scenarios:

1. A single visualization session for a DVC;
2. Shared visualization sessions for multiple DVCs;
3. A single visualization session for an MVC;
4. Shared visualization sessions for multiple MVCs;
5. Shared visualization sessions for hybrid devices: desktop and MVCs.

This last scenario will be considered in detail, as it is the most general (i.e. it encompasses all other cases). Several real applications of the proposed solution exist in which a 3D model view must be shared: a scientist explaining a certain phenomenon through visualization, a physician showing the results of a virtual endoscopy, or a designer presenting a new car model, among others. Although the visualization may be integrated with an audio explanation, the implementation of a shared audio conference is beyond the scope of this paper.

A representative example of such a heterogeneous situation is illustrated in Figure 3: several DCs running an instance of the DVC, several MCs running an instance of the MVC, several RSs, a CSM, and a GRM are involved.

Figure 3. Shared visualization sessions in a heterogeneous scenario.

Conceptually, two kinds of collaboration networks (communities) exist: a collaboration network for local rendering clients and a collaboration network for remote rendering clients. A client that requests a remote rendering service is only allowed to connect directly to the collaboration network for remote rendering clients, and this network is centralized around an RS. On the other hand, a DC (which is able to locally render 3D objects) can directly connect to a collaboration network for local rendering clients; this network is managed by a CSM. The interconnections between the local and remote rendering networks are guaranteed by the RSs.

MCs connected to an RS share the visualization of a scene by exploiting a single rendering pipeline (e.g. MC#1 and MC#2 in Figure 3). When a DC requests to join this session, communication with the RS is started through the Synchronization Manager of the CSM, and the rendering pipeline is replicated on the local rendering device in order to exploit its own graphics capabilities and manage network load (e.g. EP#1 joins the session running on RS#1).

It should be noted that a CSM is able to support multiple visualization sessions; in particular, in Figure 3, the CSM supports two different sessions. Shared Visualization Session #1 involves three DCs (DC#1, DC#2, DC#3) and three MCs (MC#1, MC#2, MC#3); two of these (MC#1, MC#2) are managed by a given RS (specifically, RS#1), while the remaining one (MC#3) exploits a different RS (RS#3). All of the above clients share the same visualization of the 3D scene. The set composed of the DCs and of RS#1 and RS#3 constitutes a local rendering network belonging to Shared Visualization Session #1; the CSM manages the synchronization of status data. The sets composed of RS#1, MC#1, and MC#2 and of RS#3 and MC#3 constitute two remote rendering networks in which data synchronization is guaranteed by the CSM. Similar considerations also apply to Shared Visualization Session #2. Furthermore, a rendering provider can concurrently support multiple shared visualization sessions by allocating different rendering pipelines and by providing the respective content streams to different MCs (e.g. RS#3 streams a portion of the frame buffer related to Shared Visualization Session #2 to MC#4 and concurrently streams a portion of the frame buffer associated with Shared Visualization Session #1 to MC#3).
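The session topology described above can be captured by a minimal bookkeeping model. This is a hypothetical sketch with invented names; in the real system, this state is distributed across the CSM and the RSs rather than held in one object.

```python
from dataclasses import dataclass, field


@dataclass
class SharedSession:
    """Sketch of one shared visualization session: DCs and RSs form
    the CSM-synchronized local rendering network, while each RS
    serves its own set of MCs (the remote rendering networks)."""
    name: str
    desktop_clients: set = field(default_factory=set)
    rs_mobile_clients: dict = field(default_factory=dict)  # RS -> {MCs}

    def join_desktop(self, dc):
        # DCs connect directly to the CSM-managed local network
        self.desktop_clients.add(dc)

    def join_mobile(self, mc, rs):
        # MCs reach the session through an RS (remote network)
        self.rs_mobile_clients.setdefault(rs, set()).add(mc)

    def local_network(self):
        # the CSM synchronizes the DCs plus every participating RS
        return self.desktop_clients | set(self.rs_mobile_clients)
```

Reproducing the membership of Shared Visualization Session #1 from Figure 3 (DC#1–DC#3; MC#1 and MC#2 via RS#1; MC#3 via RS#3) yields a local network of the three DCs plus RS#1 and RS#3, matching the description in the text.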

5. LOAD BALANCING

System performance is highly dependent on the load balancing strategy adopted for the selection of the RSs and CSMs. DCs manage 3D geometries locally, and their visualization performance critically depends on their graphics adapters. Therefore, the procedure described in this section explains how an MC can select a graphics resource (i.e. an RS).

First, RS performance must be estimated before registering it to the GRMs. Performance characterization is an offline procedure that measures the frame rate (at the client side) that the RS under analysis is able to provide under variable and stressful conditions. In other words, the RS is loaded by creating a certain number of collaborative rendering sessions with an increasing number of clients. Figure 4 shows an example of RS characterization: the number of clients increases up to 50, and the number of concurrently active sessions goes from 1 to 6. Each point in the 3D space represents a value of the frame rate measured on the client side when a certain number of sessions are active on the server side (with a certain number of connected clients). Clients can be run on software emulators to evaluate server behavior when a large number of devices are connected. Figure 4 represents the behavior of the RSs used for the tests presented in Section 6, where a frame rate of about 10 fps is estimated for approximately 50 clients evenly distributed among six sessions.
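At run time, the characterization surface of Figure 4 can be queried to predict the frame rate for a given load, e.g. by bilinear interpolation between the sampled points. The sketch below assumes a regular sampling grid and uses invented sample values; the paper does not specify the interpolation scheme actually used by the RSs.

```python
def _bracket(axis, x):
    """Clamp x into the sampled range and return (index, fraction)."""
    x = min(max(x, axis[0]), axis[-1])
    for i in range(len(axis) - 1):
        if axis[i] <= x <= axis[i + 1]:
            return i, (x - axis[i]) / (axis[i + 1] - axis[i])
    return len(axis) - 2, 1.0


def estimate_fps(session_axis, client_axis, fps_grid, sessions, clients):
    """Bilinear interpolation over a characterization grid, where
    fps_grid[i][j] is the frame rate measured with session_axis[i]
    active sessions and client_axis[j] connected clients."""
    i, u = _bracket(session_axis, sessions)
    j, v = _bracket(client_axis, clients)
    return (fps_grid[i][j] * (1 - u) * (1 - v)
            + fps_grid[i][j + 1] * (1 - u) * v
            + fps_grid[i + 1][j] * u * (1 - v)
            + fps_grid[i + 1][j + 1] * u * v)
```

With a toy 2×2 grid spanning 1–6 sessions and 1–50 clients, querying the corner (6 sessions, 50 clients) returns the corner sample, and queries in between blend the four surrounding measurements.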

At this point, two pairs of variables must be set: {wfr, fr_min} and {wrtt, rtt_max}. Variables wfr and fr_min represent the desired frame rate and the minimum acceptable frame rate, respectively, on the client side. For instance, wfr could be set to 20 fps and fr_min to 10 fps. Similarly, wrtt represents the desired round-trip time (RTT) delay and rtt_max the maximum acceptable round-trip time; wrtt and rtt_max could be set to 30 and 300 ms, respectively. In this context, the term round-trip time refers to the time required for a message to travel from a specific client to a specific RS and back again.


Figure 4. Example of rendering server characterization.

When an MC needs to select an RS, two different events may occur (both cases are managed in a transparent way).

1. The mobile client must initiate a new session:

(a) The client receives the list of all the RSs currently available from a GRM;
(b) The client application contacts the RSs and gathers information concerning the estimated frame rate (efr). The servers use the information depicted in Figure 4 to predict the performance on the client side according to the number of connected clients, the number of active sessions on each server, and the measured connection round-trip time (crtt);
(c) Only RSs that satisfy the following conditions are considered: efr ≥ wfr and crtt ≤ wrtt;
(d) If a non-empty list of RSs is obtained, then the RS with the lowest number of active sessions and (out of those with the same number of sessions) with the lowest number of connected clients is selected; otherwise, wfr is decreased and wrtt is increased by a certain percentage (e.g. 5%);
(e) If the new wfr ≥ fr_min and the new wrtt ≤ rtt_max, the execution flow returns to step (c); otherwise, no RS can satisfy the client and the connection is refused.

2. The mobile client must connect to an existing session: steps (a)–(d) are the same as in the previous case (the only difference is that the GRM returns the list of the RSs involved in the specific session); in step (e), if no RS can satisfy the connection request, the client tries to initiate a new instance of the session on a different RS that does not belong to the list received from the GRM.
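Steps (c)–(e) above can be sketched as follows. The function and field names are hypothetical, and the 5% relaxation factor is the example value given in the text.

```python
def select_rs(candidates, wfr, fr_min, wrtt, rtt_max, relax=0.05):
    """Filter candidate RSs by the desired thresholds, pick the
    least-loaded one, and progressively relax wfr/wrtt until the
    hard limits fr_min/rtt_max are reached. Each candidate is a
    dict with keys efr, crtt, sessions, clients."""
    while wfr >= fr_min and wrtt <= rtt_max:
        eligible = [r for r in candidates
                    if r["efr"] >= wfr and r["crtt"] <= wrtt]
        if eligible:
            # fewest active sessions first, then fewest clients
            return min(eligible,
                       key=lambda r: (r["sessions"], r["clients"]))
        wfr *= 1 - relax   # e.g. lower the desired frame rate by 5%
        wrtt *= 1 + relax  # e.g. raise the tolerated RTT by 5%
    return None  # no RS can satisfy the client: connection refused
```

Sorting by (sessions, clients) encodes the preference stated in step (d): the number of active sessions dominates, and the client count breaks ties.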

The choice at step (d), i.e. the selection of the RS with the lowest number of active sessions, is derived from the behavior analysis on the server side. The frame rate decreases linearly with the number of sessions; therefore, a high number of sessions can strongly affect the whole system performance.

6. TESTS AND RESULTS

This section presents the results of experimental tests carried out during validation of the VICTORY project [36]. Experiments were intended to assess the effectiveness of the proposed architecture and to evaluate system performance and scalability while considering both desktop and MVCs. An in-depth performance analysis was carried out to evaluate the usability of the system under stressful conditions, particularly when the number of connected clients per server increases. This set of experiments was also intended to benchmark the capacity of the system when the number of graphics resources (i.e. RSs) varies and thus to determine the scalability of the system. The performance analysis was conducted by monitoring the parameters that most influence the visualization user experience: the frame rate (fps) and the round-trip time (i.e. the elapsed time between sending a command and observing its effect on the device screen).

Figure 5. A shared visualization session between a desktop client (the laptop) and a mobile client (the smart-phone) exploiting a remote rendering server.

The test bed for the experiments was composed of RSs, DVCs, and mobile viewer clients. RSs were deployed on public-IP workstations; specifically, Dual-Core AMD Opteron @ 2.60 GHz machines with 8 GB of RAM and NVIDIA Quadro FX 3500 graphics adapters were used. DVCs ran on an ASUS F3JC laptop with an Intel Core Duo T2250 1.73 GHz processor, 1 GB of RAM, and an NVIDIA GeForce Go 7300 graphics card. MVCs ran on an HTC TyTN II smart-phone with 802.11g and HSDPA connectivity. The RS and the DVCs consisted of OpenGL-based applications written in the C++ language; the mobile viewer was developed in J2ME (Java Micro Edition) and runs on every Java-enabled smart-phone. Figure 5 shows an example of a collaborative visualization session involving a desktop (the laptop) and an MVC (the smart-phone); the console and the viewports of the RS can be observed in the background. The RSs were configured to support a set of 16 spatial resolutions linearly increased in a range from 160×120 to 400×300 pixels. The different viewports allowed for customization of the flow of compressed still images delivered to mobile clients. The streaming system used flow control mechanisms to provide users with a controlled quality of experience (QoE): the encoding parameters (resolution and JPEG quality) were adaptively changed to best suit the device and network characteristics. Although the flow control mechanism is beyond the scope of this paper, more details can be found in [34, 37].
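The 16-step resolution ladder described above, together with a plausible viewport-selection rule, can be sketched as follows. Note that `pick_viewport` is an illustrative simplification invented here; the actual selection also weighs network conditions, per [34].

```python
def resolution_ladder(lo=(160, 120), hi=(400, 300), steps=16):
    """16 spatial resolutions linearly spaced between the endpoints,
    as in the RS configuration (the endpoints share a 4:3 ratio)."""
    dw = (hi[0] - lo[0]) / (steps - 1)
    dh = (hi[1] - lo[1]) / (steps - 1)
    return [(round(lo[0] + i * dw), round(lo[1] + i * dh))
            for i in range(steps)]


def pick_viewport(ladder, max_w, max_h):
    """Hypothetical rule: largest ladder entry fitting the screen."""
    fitting = [wh for wh in ladder if wh[0] <= max_w and wh[1] <= max_h]
    return fitting[-1] if fitting else ladder[0]
```

With these endpoints, the ladder advances in 16×12 pixel increments and contains the 240×180 and 320×240 resolutions used in the measurements reported below.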

The first test was designed to measure the performance of the DVC, in terms of frame rate, as a function of model complexity on different hardware devices. Table I shows the 3D viewer performance for a single DC. The first column lists the geometric complexity (in terms of number of polygons), the second column tabulates the results obtained on a PC, and the third column shows the results obtained when a laptop is used as a DVC. The frame rate values are clearly highly dependent on the characteristics of the graphics adapter available on the considered platform. In fact, the DVC application takes full advantage of OpenGL hardware acceleration; thus, extremely complex models can be rendered in a smooth and interactive manner when a high-performance graphics adapter is available. Because each DVC is only responsible for its local rendering pipeline, the number of DVCs participating in a shared visualization session did not influence the performance.


Table I. Performance of a single desktop viewer client.

Number of polygons    fps (PC: NVIDIA Quadro FX 3500)    fps (laptop: NVIDIA GeForce Go 7300)
12.554                60                                 58
56.017                60                                 58
104.656               60                                 58
244.576               60                                 30
499.288               60                                 12
748.932               32                                 9
979.557               32                                 8

Table II. Performance of a single mobile viewer client running on an HTC TyTN II smart-phone.

Number of polygons    fps (Wi-Fi 802.11g)    fps (HSDPA)
12.554                11–22                  6–15
56.017                11–22                  6–15
104.656               11–22                  6–15
244.576               11–22                  6–15
499.288               11–22                  6–15
748.932               11–22                  6–15
979.557               11–22                  6–15

For example, a PC and a laptop with the hardware specified in Table I were able to reach frame rates of 60 and 12 fps, respectively, during the same shared session involving the visualization of a model composed of half a million polygons.

Table II reports the results obtained from the use of a single MVC (specifically, an HTC TyTN II smart-phone). Unlike the previous case, the performance of MVCs is essentially invariant to the model complexity. Two different wireless connections were considered, an IEEE 802.11g Wi-Fi network and an HSDPA link (with a download rate up to 3.6 Mbps), because it was assumed that performance depends on both the network and computational capabilities. The same models used for testing the DVC performance were rendered on an RS, and a flow of JPEG images was sent to the mobile client device. The RS is capable of reaching a higher number of fps than the values indicated in Table II; hence, in this case, the type of network connection limited the frame rate. When the client was connected to the RS via the Wi-Fi network, the frame rate fell in the range of 11–22 fps. The frame rate has a wide range of values because it is also a function of the encoding parameters used to stream images to the client. Indeed, the maximum frame rate that can be achieved on a real device is strongly affected by decode-and-display times; this value takes into account the delay introduced in decoding a JPEG image, loading it into the frame buffer of the device, and displaying it on the screen. On the selected device, the above time can be estimated in a few tens of milliseconds for a resolution of 240×180 pixels (although the exact time depends on the 'content' of the frame to be visualized). The greater the resolution and the JPEG quality settings, the lower the frame rate. The minimum value in each cell of Table II corresponds to the maximum supported resolution (320×240 pixels) at the maximum image quality; the maximum value was obtained using the flow control methodology proposed in [34]. In the latter case, the resolution was smoothly scaled down to 160×120 pixels while the JPEG image quality was reduced to 86. The performance decreased when using an HSDPA connection; in particular, the frame rate obtained by transmitting at the maximum resolution and maximum JPEG quality was equal to 6 fps, while a frame rate of 15 fps was obtained with a resolution of 192×144 pixels and a JPEG image quality equal to 45. It is worth mentioning that with HSDPA, the performance strongly depends on the quality of service of the underlying network link, which is generally subject to sudden variations.
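The interplay between frame size, link bandwidth, and decode-and-display time can be captured by a back-of-the-envelope bound. This is a simplified model assuming strictly sequential transmit, decode, and display phases (no pipelining), and the 15 KB frame size used below is purely illustrative, not a figure from the experiments.

```python
def max_fps(jpeg_bytes, bandwidth_bps, decode_display_ms):
    """Upper bound on the client-side frame rate when each JPEG frame
    must be fully transmitted, then decoded and displayed, before the
    next one starts (simplified sequential model)."""
    transmit_ms = jpeg_bytes * 8 * 1000.0 / bandwidth_bps
    return 1000.0 / (transmit_ms + decode_display_ms)
```

For a hypothetical 15 KB frame over the 3.6 Mbps HSDPA link with a 30 ms decode-and-display time, the bound comes out between 15 and 16 fps, which is in the same order of magnitude as the measured 6–15 fps HSDPA range; a faster link raises the bound until the decode-and-display term dominates.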


The frame rate is only one parameter that can affect the performance of a visualization session; in particular, the round-trip time (i.e. the response time to a command sent from the client to the server, including the time for encoding a frame) provides a useful indication of the system interactivity. Interaction is fundamental to human experience. Unfortunately, the threshold of interactive response is not a well-known parameter [38] and is highly dependent on the activity to be performed. As an example, the telephone industry stipulates, as a rule of thumb, that the round-trip delay over a telephone call should be less than 100 ms [38]; a greater delay would dramatically affect a dual-channel conversation. Because the proposed framework is oriented toward activities related to collaborative professional design, however, a looser round-trip time can still be acceptable in the considered cases. Indeed, the proposed application requires that only one user at a time interact with the 3D model. The average round-trip time measured was 59 ms for the Wi-Fi network and 214 ms for the HSDPA connection.

Further experiments were performed to test the proposed solution's ability to efficiently support an increasing number of clients and sessions using MVCs (i.e. providing a frame rate that guarantees the illusion of movement in a rapid sequence of still images). According to [38], the motion fusion threshold is 8–12 fps. To validate the system scalability hypothesis, we set up a series of tests involving a growing number of MVCs participating in one or more shared sessions. Again, the measurements were collected in terms of the average number of fps on the client side. In this case, given the high number of clients needed for the experiments, the clients were executed on software emulators, and the supported resolution was set to 240×180 pixels. The graphs presented in Figures 6–8 show the performance scalability when one, two, and three shared sessions are managed by one, two, and three RSs, respectively. Whenever a client connects to the system, the average frame rate decreases. The number of clients, evenly distributed among sessions, is allowed to increase up to 50. The system manages an arbitrary number of RSs; as seen in Figures 6–8, the system scales correctly, and the visualization performance improves when more RSs are used. For example, when 25 clients are participating in the same shared visualization session (Figure 6), the system (managed by only one RS) guarantees each client a frame rate equal to 15 fps. In contrast, when two and three RSs are available, the guaranteed frame rates are 18 and 23 fps, respectively. A higher number of RSs can be used to further improve performance. Figures 7 and 8 show the performance when two and three sessions, respectively, are concurrently active on each server. The trend is the same as that of Figure 6, but performance was lower because, in these cases, the servers perform computations using a different rendering pipeline for each session. As an example, when up to seven clients participated in the same shared visualization session, the average frame rate fell in the range of 25–30 fps (Figure 6). When the same number of clients participated in two and three shared sessions, the average frame rate was measured in the range of 22–23 fps (Figure 7) and 17–18 fps (Figure 8), respectively.

Figure 6. Performance scalability in a collaborative visualization session managed by one, two, and three rendering servers.


Figure 7. Performance scalability of two collaborative visualization sessions managed by one, two, and three rendering servers. Two sessions are concurrently active on each server.

Figure 8. Performance scalability of three collaborative visualization sessions managed by one, two, and three rendering servers. Three sessions are concurrently active on each server.

Figure 9. Performance scalability with two RSs and up to eight independent sessions.

Moreover, Figures 9 and 10 show the performance scalability when the number of independent sessions is increased. The load balancing metric introduced in Section 5 guarantees workload splitting when more RSs are available. Figure 9 shows the performance when two RSs are involved,


Figure 10. Performance scalability with three RSs and up to nine independent sessions.

Table III. Performance of different types of clients involved in the same shared session.

Client                    Shared session   Network link    F (fps)   RTT (ms)
Mobile device             1                802.11g         20–21     43–78
Mobile device             2                802.11g         19–21     37–70
Mobile device             3                HSDPA           14–18     158–223
Mobile device             4                802.11g         19–21     35–73
Mobile device emulator    1                Fast Ethernet   21–23     33–68
Mobile device emulator    2                802.11g         20–23     31–78
Mobile device emulator    3                802.11g         21–23     31–69
Mobile device emulator    4                HSDPA           15–18     166–239
Desktop PC                1                Fast Ethernet   29–31     15–33
Desktop PC                2                802.11g         29–30     15–33
Desktop PC                3                802.11g         29–30     16–31
Desktop PC                4                802.11g         29–31     15–33

whereas Figure 10 shows the performance with three servers (the case with just one server has been shown in Figure 4). The system scales correctly when more RSs are used. As an example, when 12 clients are involved in six sessions (i.e. two clients per session, on average), the achieved average frame rate is only 8 fps with one RS (Figure 4). The frame rate that can be achieved with two and three RSs is 17 fps (Figure 9) and 23 fps (Figure 10), respectively, with the same number of clients and sessions. The tests clearly show that the proposed architecture is able to scale efficiently in order to support an increasing number of clients and to manage multiple sessions. A pair of RSs is more than sufficient to provide at least 10 fps to up to 50 mobile clients, almost independently of the number of managed sessions. It is worth noting that, in all cases (up to 50 clients), the measured performance was above the previously introduced motion fusion threshold [38].

The results of the previous system performance tests using shared visualization sessions were obtained using only MVCs. Experiments aimed at evaluating the performance of a shared visualization session for hybrid devices in a real-world scenario (with different types of Internet access) are introduced in the following. Table III shows the performance of different types of clients connected to the same shared visualization session in terms of the frame rate (F) and RTT. Tests were performed using a commercially available mobile device (an HTC TyTN II smart-phone) and a mobile device software emulator (Sun Java Wireless Toolkit for CLDC) running on a laptop and a desktop PC. The mobile device and the emulator exploit the remote rendering technique; they are served by RSs, which are synchronized through a CSM (for example, RS#1 and RS#3 in Figure 3). The desktop PC performs local rendering and exchanges status messages for synchronization of the view. During the tests, the three clients share the visualization of the same 3D environment; only one client at a time is allowed to navigate the 3D scene, while the others can only observe the changes. The client that owns the right to navigate the 3D visualization space is identified by a token. In this context, the term 'round-trip time' relates to the delay between sending a command from the client graphics user interface (GUI) and observing the effects on the local screen. The round-trip time is evaluated by assigning the token to the client being measured. The resolution of the visualization module running on the mobile devices was set to 240×180 pixels and the JPEG quality was set to 80; the resolution of the desktop viewer was set to 1024×768 pixels. The experiments were carried out using different types of connections: Fast Ethernet, Wi-Fi 802.11g, and HSDPA (3.6 Mbps).

Table III shows that, for a desktop PC, the frame rate and round-trip time are independent of the connection used for the experiments (Fast Ethernet or 802.11g Wi-Fi). Indeed, only status messages are exchanged (a few bytes per second) to synchronize the visualization with the RSs, which assist the two mobile devices. On the other hand, tests performed using an HSDPA connection led to an increase in the monitored parameters, especially the RTT delay. The RTT was always smaller than 78 ms for the 802.11g connection, which is still within the limits of interactivity mentioned in [38], whereas this result was tripled for the HSDPA connection. Advances in telecommunication networks will likely lead to improvements in RTT delays.

7. CONCLUSIONS

This paper presents a framework for supporting shared visualization on heterogeneous devices. Desktop and mobile clients can be managed concurrently in the designed architecture, allowing for the implementation of efficient hybrid collaboration scenarios. Several applications can take advantage of the proposed architecture in order to enable smooth and interactive visualization on thin client devices and to employ remote rendering resources. The system provides users with the best graphics resources currently available in a transparent manner. Performance is maximized and the workload is evenly split among the servers; the allocation of RSs takes into account both the computational capability of the server itself (weighted according to the number of connected clients and active sessions) and the current network status. Additionally, experimental tests demonstrated the scalability of the proposed solution. Mobile clients receive custom flows of compressed still images that are properly tuned to match both network conditions and device capabilities. This solution allowed the measured performance to reach more than 20 fps on real devices.

Future work will focus on managing changes in network conditions more effectively. Sudden changes in latency can strongly affect the performance of mobile clients, although this could be mitigated by implementing task migration mechanisms able to move an instance of a visualization session from one RS to another. Moreover, future work will be aimed at developing a system in which clients can potentially share different points of view of the same visualization.

REFERENCES

1. Wood J, Wright H, Brodlie K. Collaborative visualization. Proceedings of IEEE Visualization, Phoenix, AZ, U.S.A., October 1997; 253–259. DOI: 10.1109/VISUAL.1997.663890.

2. Johnson G. Collaborative visualization 101. ACM SIGGRAPH Computer Graphics 1998; 32(2):8–11.
3. Walkley M, Wood J, Brodlie K. A distributed cooperative problem solving environment. Proceedings of ICCS 2002, International Conference (Lecture Notes in Computer Science, vol. 2329), Amsterdam, The Netherlands, Part I. Springer: Berlin, 2002; 853–861.

4. Stuttgart HPCC. COVISE features. Available at: http://www.hlrs.de/organization/avis/covise/ [9 December 2010].
5. Kohl JA, Papadopoulos PM, Geist GA. CUMULVS: Collaborative infrastructure for developing distributed simulations. Proceedings of the Eighth SIAM Conference on Parallel Processing for Scientific Computing, Minneapolis, MN, U.S.A., March 1997.

6. Furmento N, Lee W, Mayer A, Newhouse S, Darlington J. ICENI: An open grid service architecture implemented with Jini. Proceedings of SuperComputing 2002, Baltimore, MD, U.S.A., November 2002.

7. Duce DA, Gallop JR, Johnson IJ, Robinson K, Seelig CD, Cooper CS. Distributed cooperative visualization—The MANICORAL approach. Proceedings of the Eurographics U.K. Conference, Leeds, U.K., March 1998; 69–85.

8. Tramberend H. Avango: A distributed virtual reality framework. Proceedings of AFRIGRAPH 2001, First International Conference on Computer Graphics, Virtual Reality and Visualization in Africa. ACM: New York, November 2001.


9. Pettifer S, Cook J, Marsh J, West A. Deva3: Architecture for a large-scale virtual reality system. Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM: New York, October 2000; 33–40.

10. Frécon E, Stenius M. DIVE: A scaleable network architecture for distributed virtual environments. Distributed Systems Engineering Journal (DSEJ) 1998; 5:91–100. Special Issue on Distributed Virtual Environments.

11. Pang A, Wittenbrink C. Collaborative 3D visualization with CSpray. IEEE Computer Graphics and Applications 1997; 17(2):32–41.

12. Wilson S, Sayers HM, Myles W, McNeill MDJ. Nomad: An architecture to support the development of collaborative virtual environment applications. Proceedings of Eurographics, Swansea, U.K., 2000; 151–158.

13. Hartling P, Just C, Cruz-Neira C. Distributed virtual reality using Octopus. Proceedings of the IEEE International Symposium on Virtual Reality (VR'01). IEEE Computer Society: Silver Spring, MD, March 2001; 53–62.

14. Grimstead IJ, Avis NJ, Walker DW. Automatic distribution of rendering workloads in a grid enabled collaborative visualization environment. Proceedings of the 2004 ACM/IEEE Conference on Supercomputing, Washington, DC, U.S.A. IEEE Computer Society: Silver Spring, MD, November 2004.

15. Schmalstieg D, Hesina G. Distributed applications for collaborative augmented reality. Proceedings of the IEEE Virtual Reality Conference 2002 (VR '02), Orlando, FL, U.S.A. IEEE Computer Society: Silver Spring, MD, March 2002; 59–66.

16. Second Life. Available at: http://secondlife.com/ [9 December 2010].
17. Bettner P, Terrano M. 1500 archers on a 28.8: Network programming in Age of Empires and beyond. Proceedings of the Game Developers Conference, San Jose, CA, U.S.A., 20–24 March 2001.
18. Bernier YW. Latency compensating methods in client/server in-game protocol design and optimization. Proceedings of the 15th Game Developers Conference, San Jose, CA, U.S.A., March 2001.
19. Dionne C, Lavoie M, Trueman K. NetZ—Multiplayer architecture for online games. Proceedings of the Game Developers Conference, San Jose, CA, U.S.A., 6–12 March 2000.
20. Frohnmayer M, Gift T. The TRIBES engine networking model. Proceedings of the Game Developers Conference, San Jose, CA, U.S.A., 6–12 March 2000.
21. Sanna A, Lamberti F. 3D visualization on mobile devices. Handbook of Research on User Interface Design and Evaluation for Mobile Technology. Information Science Reference, IGI Global, 2008; 556–573. ISBN: 978-1-59904-871-0.

22. Martin IM. Adaptive rendering of 3D models over networks using multiple modalities. Technical Report, IBM T.J. Watson Research Center, 2000.

23. Grimstead IJ, Walker DW, Avis NJ. Collaborative visualization: A review and taxonomy. Proceedings of the Ninth IEEE International Symposium on Distributed Simulation and Real-time Applications, Montreal, Canada, October 2005; 61–69. DOI: 10.1109/DISTRA.2005.12.

24. Ohazama C. OpenGL Vizserver. White Paper, Silicon Graphics Inc., 1999. Available at: http://www.sgi.com/pdfs/3533.pdf [9 December 2010].

25. Sun Shared Visualization System. Available at: http://sun.systemnews.com/articles/122/4/sw/19745 [9 December 2010].

26. VirtualGL Project. Available at: http://www.virtualgl.org/ [9 December 2010].

27. Advantages and Implementation of HP Remote Graphics Software. HP Remote Graphics Software enables 2D and 3D real-time interactive graphics and collaboration from a distance, HP White Paper, 2004. Available at: http://www.hp.com/workstations/white_papers/docs/hp_remotegraphics.pdf [9 December 2010].

28. Wang H, Brodlie KW, Handley JW, Wood JD. Service-oriented approach to collaborative visualization. Concurrency and Computation: Practice and Experience 2008; 20:1289–1301.

29. Haber RB, McNabb DA. Visualization idioms: A conceptual model for scientific visualization systems. In Visualization in Scientific Computing, Shriver B, Neilson GM, Rosenblum LJ (eds.). IEEE Computer Society: Silver Spring, MD, 1990; 74–93.

30. Kahmann V, Brandt J, Wolf L. Collaborative streaming in heterogeneous and dynamic scenarios. Communications of the ACM 2006; 49(11):58–63. Special Section on Entertainment Networking—Recreational Use of IP Networks.

31. Lohse M, Repplinger M, Slusallek P. Dynamic distributed multimedia: Seamless sharing and reconfiguration of multimedia flow graphs. Proceedings of the Second International Conference on Mobile and Ubiquitous Multimedia, Sweden, 10–12 December. ACM Press: New York, 2003; 89–95.

32. Vogel J, Mauve M, Geyer V, Hilt V, Kuhmunch C. A generic late join service for distributed interactive media. Proceedings of the Eighth ACM Multimedia Conference, Los Angeles, 30 October–3 November. ACM Press: New York, 2000; 259–267.

33. OpenSG. Available at: www.opensg.org/ [9 December 2010].

34. Paravati G, Sanna A, Lamberti F, Ciminiera L. On quality of experience in remote visualization on mobile devices. International Journal of Mobile Human Computer Interaction 2010; 2(1):1–20.

35. Lamberti F, Sanna A. Extensible GUIs for remote application control on mobile devices. IEEE Computer Graphics and Applications 2008; 28(4):50–57. ISSN: 0272-1716, DOI: 10.1109/MCG.2008.70.

36. European Project VICTORY. Available at: http://www.victory-eu.org [9 December 2010].

37. Paravati G, Celozzi C, Sanna A, Lamberti F. A feedback-based control technique for interactive live streaming systems to mobile devices. IEEE Transactions on Consumer Electronics 2010; 56(1):190–197.

38. Cheshire S. Latency and the quest for interactivity. White paper commissioned by Volpe Welty Asset Management, L.L.C., for the Synchronous Person-to-Person Interactive Computing Environments Meeting, San Francisco, November 1996.