Intelligent virtual environments for virtual reality art


Computers & Graphics 29 (2005) 852–861

Marc Cavazza a,*, Jean-Luc Lugrin a, Simon Hartley a, Marc Le Renard b, Alok Nandi c, Jeffrey Jacobson d, Sean Crooks a

a School of Computing, University of Teesside, Middlesbrough, TS1 3BA, UK
b CLARTE, 4 Rue de l'Ec…ge, 53000 Laval, France
c Commediastra, 182, av. W. Churchill, 1180 Brussels, Belgium
d Department of Information Sciences, University of Pittsburgh, 135 North Bellefield, PA 15260, USA

* Corresponding author. Tel.: +44 1642 342 657; fax: +44 1642 230 527. E-mail address: (M. Cavazza).

Abstract

The development of virtual reality (VR) art installations is faced with considerable difficulties, especially when one wishes to explore complex notions related to user interaction. We describe the development of a VR platform, which supports the development of such installations, from an art+science perspective. The system is based on a CAVE™-like immersive display using a game engine to support visualisation and interaction, which has been adapted for stereoscopic visualisation and real-time tracking. In addition, some architectural elements of game engines, such as their reliance on event-based systems, have been used to support the principled definition of alternative laws of Physics. We illustrate this research through the development of a fully implemented artistic brief that explores the notion of causality in a virtual environment. After describing the hardware architecture supporting immersive visualisation, we show how causality can be redefined using artificial intelligence technologies inspired from action representation in planning, and how this symbolic definition of behaviour can support new forms of user experience in VR.

© 2005 Elsevier Ltd. All rights reserved.

Keywords: Virtual reality art; Artificial intelligence; Causal perception; Immersive displays

0097-8493/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.

1. Introduction and objectives

Virtual reality (VR) art has emerged in the last decade as an unexpected application for high-end VR systems as well as a new direction for digital arts [1,2]. However, the development of VR art installations is an extremely complex process. Leading VR artists have often benefited from a supportive technical environment for the development of their major installations. Some of them were able to hire teams of systems developers, while others were affiliated to academic institutions, which brought together artists and scientists or engineers. The level of complexity and cost of such development is certainly a limitation to the development of VR art. As such, there is a rationale for new tools that would facilitate the development of VR art installations. However, the strategy for creating such tools has to be carefully considered, as one can only feel bemused at how diverse the relation to technology is among various artists. Some advocate a strong technical involvement and even participation in programming tasks, while others tend to follow a production model in which technical developments are subordinated to the artistic objectives. This makes the prospect of generic tools rather unrealistic. Another approach consists in observing that artistic concepts often revisit fundamental aspects of interactivity, or question essential concepts such as reality, physical experience or even the perceived nature of life. In other words, as these interrogations also happen to be scientific ones, they open the way to what has been recently described as the art+science approach, in which VR artists have otherwise played a prominent role. In this paper, we describe such research, whose aim is to facilitate the development of VR art installations in an art+science context [3].

This is why, rather than simply developing a toolkit to lower the accessibility threshold of VR art technology, we propose a system where artistic and scientific simulation can meet at the level of conceptual representations, while still generating technical output in the form of implemented VR installations.

2. Intelligent virtual environment: knowledge layer and programming principles

The notion of behaviour of a virtual environment normally encompasses all reactions of the environment to the user's physical intervention. This in turn corresponds to the physical processes triggered by the user, when for instance s/he grasps, then drops an object. More often, it will consist of all devices' behaviour that is ultimately not derived from physical simulations (for obvious reasons related to optimal levels of description), but scripted within the system's implementation. In both cases, such behaviour is encoded procedurally, and the concepts underlying behaviours (e.g., patterns of motion, physical concepts, etc.) are not explicitly represented other than through variables embedded in equations or scripts. VR art is often concerned with the creation of virtual worlds that exhibit idiosyncratic behaviours, which might violate the traditional laws of Physics, such behaviours often being described in the installation briefs in abstract or metaphorical terms only. This makes it rather tedious to implement non-standard behaviours directly in terms of the low-level primitives (physical or procedural) that animate the world objects. This process could be facilitated if behaviours could be described at a more abstract, conceptual level, in the VR system itself. The creation of alternative behaviours could take place directly in this representation layer, which would also support iterative explorations of initial ideas. The use of an AI layer to define the behaviour of a virtual environment implements the notion of an intelligent virtual environment [4]. This experimental technology should bring numerous benefits to the development of VR art installations: it supports the redefinition of non-realistic and alternative behaviour from first principles, it allows rapid prototyping and experimentation and, finally, it is well adapted to an art+science approach as it explicitly represents those concepts that are the object of artistic or scientific experimentation.

3. The illustrative briefs

To illustrate the technical presentation we will use examples from a fully implemented artistic installation, Ego.geo.Graphies by Alok Nandi. This brief is situated in an imaginary world governed by alternative laws of Physics [5]. The Ego.geo.Graphies brief is exploring interaction and navigation in a non-anthropomorphic world, blurring the boundaries between organic and inorganic. Its installation involves an immersive VR world with which the user can interact. The virtual world comprises a landscape in which the user can navigate, populated by autonomous entities (floating spheres), which are actually all part of the same organism. In this world, two sorts of interaction take place: those involving elements of the world (spheres and landscape) and those involving the user. The first type of interaction is essentially mediated by collisions and will be perceived in terms of causality. The second is based on navigation and position and will be sensed by the world in terms of empathy, as a high-level, emotional translation of the user's exploration.

Through the staging of the Ego.geo.Graphies installation, we are interested in exploring aspects related to predictability/non-predictability, and hence some kind of narrative accessibility, from the perspective of user interaction. On one hand, this brief is an exploration of the notion of context through the variable behaviour of the environment, which itself responds to the user's involvement. But on the other hand, it constitutes an exploration of causality. As such, it requires mechanisms varying the physical effects of collisions (bouncing, merging, bursting, exploding, altering neighbouring objects, etc.), taking into account the semantics of the environment.

This also implies that we explore how the user can be affected by causality. The spontaneous movements of the spheres focus the user's attention, within the constraints of his/her visual and physical exploration of the landscape. The user will perceive consequences of spheres colliding with each other, which are equivalent to an emotional state of the world (as these multiple spheres still constitute one single organism) responding to perceived user empathy.

As a consequence, a dialogue should emerge from this situation: user exploration will affect world behaviour through levels of perceived empathy, and in return the kind of observed causality will influence user exploration and navigation.

4. System overview

The system presents itself as an immersive installation supporting alternative worlds with which the user can interact and, through this interaction, experience the nature of the fantasy worlds created by the artistic brief.

The choice of an immersive hardware platform was

dictated by the necessity to match state-of-the-art VR installations. The vast majority of them are based on CAVE™-like systems [6], which are multi-screen immersive projection displays. The advantages of CAVE™-like systems for VR art are well established: they constitute an optimal compromise between user immersion in visual content and the ability for physical navigation (although in a limited space) and interaction. In addition, CAVE™-based installations can be explored by a small audience of up to four spectators (Fig. 1).

Fig. 1. Immersive visualisation in the SAS Cube™.

The software architecture implements the notion of an intelligent virtual environment, in which alternative reality can be defined through a symbolic description of the virtual world's behaviour. This software architecture is based on an integration layer, which consists in an event-based system relating the visualisation engine to the behavioural layer. We use a state-of-the-art game engine, Unreal Tournament 2003™ (UT), as a visualisation engine. Game engines provide sophisticated visualisation features and, most importantly, constitute a software development environment in which further components can be integrated. This aspect explains why game engines are increasingly used in VR research [7].

The behavioural layer is in turn composed of two modules, one for alternative Physics (using qualitative Physics) and another for artificial causality. In this paper we shall concentrate on the latter component. Through this event-based system, real-time interaction with the visualisation engine can trigger alternative behaviours calculated by the intelligent virtual environment.
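The coupling just described can be caricatured in a few lines of code. The sketch below is purely illustrative (it is not the authors' implementation, and all names, including the "empathy" variable and the effect labels, are hypothetical): native engine events are intercepted before their default effect applies, and a behavioural module substitutes an alternative outcome.

```python
# Minimal sketch of an event interception layer: native engine events are
# caught before their default effect applies, and a behavioural module may
# substitute an alternative effect. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str    # e.g. "collision"
    source: str  # object that triggered the event
    target: str  # object affected
    effect: str  # default effect the engine would apply

# A behavioural rule maps an intercepted event to an alternative effect,
# possibly depending on world state (here, a crude "empathy" level).
def alternative_causality(event: Event, empathy: float) -> Event:
    if event.kind == "collision":
        new_effect = "merge" if empathy > 0.5 else "burst"
        return Event(event.kind, event.source, event.target, new_effect)
    return event  # unrecognised events keep their default effect

class InterceptionLayer:
    def __init__(self, behaviour: Callable[[Event, float], Event]):
        self.behaviour = behaviour
        self.empathy = 0.0

    def dispatch(self, event: Event) -> Event:
        # The engine's default effect is suspended; the behavioural
        # layer decides what actually happens in the world.
        return self.behaviour(event, self.empathy)

layer = InterceptionLayer(alternative_causality)
layer.empathy = 0.8
out = layer.dispatch(Event("collision", "sphere_1", "sphere_2", "bounce"))
print(out.effect)  # -> merge
```

The point of the sketch is the separation of concerns: the engine only reports that a collision occurred, while the meaning of that collision is decided symbolically, which is what allows causality to be redefined without touching low-level physics code.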

5. The VR architecture: stereoscopic visualisation in the SAS Cube™

The immersive display we have used for this research is known as the SAS Cube™ (Fig. 2) and is a four-sided CAVE™-like projection system in which the front, left, right and floor sides (each 3 m wide) are used as projection screens, receiving a back-projected image produced by four Barco™ projectors.

This immersive display supports the use of a game engine as a visualisation engine through specific software known as CaveUT™ [8]. A multi-screen display based on CaveUT™ requires a server computer connected by a standard LAN to a number of client computers, at least one for each screen in the display.

Stereo visualisation is an essential feature of immersive displays, and CaveUT™ supports stereographic display by using two computers per screen, one to render the left-eye view and one to render the right-eye view, with an average frame rate of 60 frames/s per eye in most experiments reported here. The camera view can be offset from the viewer's default configuration by a set value equal to half the inter-pupillary distance. Active stereo requires a single stereographic projector that alternates between the left- and right-eye views at 120 frames per second. The user wears shutter glasses, in which each lens alternates between black and clear, also at 120 frames per second. The glasses switch in time with the display, and the result is that each eye gets the view it is supposed to at 60 fps: the left view for the left eye and the right view for the right eye. All the screens in the composite display must also switch view at exactly the same time, a desirable state called genlock.
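The per-eye camera placement described above reduces to a small computation: each eye's camera is the tracked head position shifted sideways by half the inter-pupillary distance along the head's right vector. A minimal sketch (a hypothetical helper, not CaveUT™ code; the 64 mm default is a commonly quoted average IPD):

```python
# Sketch of the per-eye camera offset used for stereo rendering: each eye's
# camera is displaced from the tracked head position by half the
# inter-pupillary distance (IPD) along the viewer's right vector.
# Illustrative only; not taken from the CaveUT source.

def eye_positions(head_pos, right_dir, ipd=0.064):
    """Return (left_eye, right_eye) positions for a given head pose.

    head_pos  -- (x, y, z) tracked head position, in metres
    right_dir -- unit vector pointing to the viewer's right
    ipd       -- inter-pupillary distance in metres (~64 mm on average)
    """
    half = ipd / 2.0
    left = tuple(p - half * d for p, d in zip(head_pos, right_dir))
    right = tuple(p + half * d for p, d in zip(head_pos, right_dir))
    return left, right

# Head 1.7 m up, facing along -z, so "right" is +x:
left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
print(left, right)
```

Rendering each of these two viewpoints on its own machine, then multiplexing the two streams at 120 Hz, yields the 60 fps-per-eye active stereo described above.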

The CaveUT™ installation in the SAS Cube™ platform uses two computers for each screen, one for each eye view, and uses the DVG (video) cards in their ORAD™ (PC) cluster to mix the two video signals and send the combined signal to a single stereographic projector. The DVG cards also handle the genlock synchronisation across all screens of the composite display. The overall hardware/software architecture supporting CaveUT™ in the SAS Cube™ is depicted in Fig. 3.

Fig. 2. The SAS Cube™ installation.

CaveUT™ supports real-time tracking in physical space, using the Intersense™ IS900 system or any similar device. Tracking the player's head allows CaveUT™ to generate a stable view of the virtual world, while the player is free to move around inside the display (which has the size of a traditional CAVE™).

Fig. 3. Stereoscopic visualisation in the SAS Cube™ with CaveUT™.

From a system integration perspective, CaveUT™ uses another freeware package, Virtual Reality Peripheral Network (VRPN)1, to handle input from all control peripherals such as joysticks, buttons, gamepads and the tracking system itself. All controllers are physically attached to the server machine, and data from the peripherals are collected by the VRPN server, which runs in parallel to the UT game server. The VRPN server converts data from the control peripherals into a generic normalised form and sends it to the CaveUT™ code in the UT game server, via a UDP port. The modified UT game server uses this information to update the user's location in the virtual world from the head tracker and to process commands from the other control peripherals. The VRPN server also broadcasts the user's new location to each one of the UT clients, and the information is received by a VRPN client. Then, the VRPN client sends the tracking information via another UDP port to the VRGL code attached to the UT client. VRGL uses this information to adjust the perspective correction, in real time, to preserve the perspective depth illusion. The overall result is that the user's view into the virtual world looks stable to him, and the correspondence between the virtual world and the real one is maintained.

1 Released by the Department of Computer Science at the University of North Carolina at Chapel Hill.
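The data path just described (peripheral sample → normalised record → UDP to the game-server code) can be sketched generically. The record format, device name and port number below are invented for illustration; this is not the actual VRPN wire protocol or API:

```python
# Toy version of the relay described above: a device sample is converted to a
# generic, device-independent record and forwarded over UDP to the game-server
# side. Packet format and port are invented; this is NOT the real VRPN protocol.

import json
import socket

def normalise(device: str, values) -> bytes:
    """Encode a raw peripheral sample as a generic, device-independent record."""
    record = {"device": device, "values": [float(v) for v in values]}
    return json.dumps(record).encode("utf-8")

def forward(sample: bytes, host: str = "127.0.0.1", port: int = 7777) -> None:
    """Send one normalised sample to the listening game-server code via UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(sample, (host, port))

# A head-tracker pose (position + quaternion) normalised for transmission:
packet = normalise("head_tracker", (0.1, 1.7, -0.3, 0.0, 0.0, 0.0, 1.0))
print(json.loads(packet)["device"])  # -> head_tracker
```

Normalising at the server is what lets the receiving code treat a joystick, a gamepad or the head tracker uniformly, as the VRPN layer does in the architecture above.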

6. Software architecture: the event interception system

The choice made for the software architecture also reflects our philosophy of relating...
