
6-wall CAVE Immersive 3D Visualization Laboratory Demonstrator


Student’s name and surname: Jan Klimczak

ID: 112006

Second cycle studies

Mode of study: Part-time

Field of study: Informatics

Specialization: Systems and Mobile Technologies

MASTER'S THESIS

Title of thesis:

Immersive 3D Visualization Laboratory Demonstrator

Title of thesis (in Polish):

Demonstrator możliwości Laboratorium Zanurzonej Wizualizacji Przestrzennej

Supervisor

signature

Head of Department

signature

PhD MEng Jacek Lebiedź

PhD MEng, Professor with habilitation Bogdan Wiszniewski

Gdańsk, 2014


STATEMENT

First name and surname: Jan Klimczak
Date and place of birth: 12.04.1982, Gdańsk
ID: 112006
Faculty: Faculty of Electronics, Telecommunications and Informatics
Field of study: Informatics
Cycle of studies: postgraduate studies
Mode of studies: Part-time studies

I, the undersigned, agree/do not agree* that my diploma thesis entitled: Immersive 3D

Visualization Laboratory Demonstrator may be used for scientific or didactic purposes.1

Gdańsk, ................................. ................................................ signature of the student

Aware of criminal liability for violations of the Act of 4th February 1994 on Copyright and Related Rights (Journal of Laws 2006, No. 90, item 631) and disciplinary actions set out in the Law on

Higher Education (Journal of Laws 2012, item 572 with later amendments),2 as well as civil liability, I declare that the submitted diploma thesis is my own work. This diploma thesis has never before been the basis of an official procedure associated with the awarding of a professional title. All the information contained in the above diploma thesis which is derived from written and electronic sources is documented in a list of relevant literature in accordance with art. 34 of the Copyright and Related Rights Act. I confirm that this diploma thesis is identical to the attached electronic version.

Gdańsk, ................................. ................................................ signature of the student

I authorise the Gdańsk University of Technology to include an electronic version of the above diploma thesis in the open, institutional, digital repository of the Gdańsk University of Technology and for it to be submitted to the processes of verification and protection against misappropriation of authorship.

Gdańsk, ................................. ................................................ signature of the student

*) delete where appropriate

1 Decree of Rector of Gdańsk University of Technology No. 34/2009 of 9th November 2009, TUG archive instruction addendum No. 8.

2 Act of 27th July 2005, Law on Higher Education:

Art. 214, section 4. Should a student be suspected of committing an act which involves the appropriation of the authorship of a major part or other elements of another person’s work, the rector shall forthwith order an enquiry.

Art. 214 section 6. If the evidence collected during an enquiry confirms that the act referred to in section 4 has been committed, the rector shall suspend the procedure for the awarding of a professional title pending a judgement of the disciplinary committee and submit formal notice of the committed offence.


STRESZCZENIE

This research and development work presents the possibilities of creating applications to be run in CAVE-type installations. It begins with a review of existing solutions, describing how and where they are used, and then focuses on presenting what a CAVE is and how it is built. Subsequent chapters describe the Immersive 3D Visualization Laboratory (Laboratorium Zanurzonej Wizualizacji Przestrzennej, LZWP) at Gdańsk University of Technology and other similar installations. Next, the methodology of creating applications for a CAVE is presented, including a review and comparison of code libraries, frameworks and editors with a graphical user interface (GUI), which speed up and simplify the application development process. The final part of the work describes the demonstration application that was created, which can be run in the CAVE at the LZWP, and the conclusion presents further R&D plans. The first appendix describes the demonstration applications compiled and run while this work was being prepared, and the second appendix is documentation showing how to start working with the ViSTA Virtual Reality Toolkit framework.

The result of this work is the confirmation that creating applications from scratch in code to be run in a CAVE is a complicated process. Several good frameworks are available on which an application can be based. A simpler way of creating CAVE applications is to use an existing editor with a graphical user interface, which allows such an application to be created visually. This considerably simplifies and speeds up the design of CAVE applications, but to some extent limits what can be created.

Fields of science and technology: Virtual Reality, CAVE, 3D Graphics, OpenGL, Distributed Processing, Game and Simulation Engines, Simulators, Scene Management Engines, 3D Engines.


ABSTRACT

This research and development (R&D) work presents the possibilities of creating applications to run in cave automatic virtual environment (CAVE) installations. It begins with a short review of existing solutions, describing how and where they are used, and then shows what a CAVE is and how it is built. The next chapter describes the Immersive 3D Visualization Laboratory (I3DVL, Polish: Laboratorium Zanurzonej Wizualizacji Przestrzennej, LZWP) at Gdansk University of Technology. The work then focuses on the methodology of developing CAVE applications, with a review and comparison of code libraries, frameworks and editors with a graphical user interface (GUI) that speed up and simplify the development process. Finally, it describes an example application developed to run in the CAVE at I3DVL and outlines possibilities for future R&D. The first appendix presents the demonstration applications compiled and run while preparing this work; the second appendix documents how to start working with the ViSTA Virtual Reality Toolkit.

The conclusion of this work is that developing CAVE applications from scratch in code is difficult. A few good frameworks exist on which an application can be based. The easier way of creating CAVE applications is to use a dedicated tool with a GUI, in which the application is built visually. This simplifies and speeds up development for the CAVE, but also has its own limitations, which are discussed further in this work.

Keywords: Virtual Reality, CAVE, 3D Computer Graphics, OpenGL, Distributed Rendering,

Game and Simulation Engine, Simulators, Scene Graphs, 3D Engine.


TABLE OF CONTENTS

STRESZCZENIE
ABSTRACT
LIST OF MAJOR SIGNS AND ABBREVIATIONS
INTRODUCTION AND PURPOSE OF WORK
1. CAPABILITIES OF VIRTUAL REALITY
   1.1. Image
   1.2. Sound
   1.3. Other channels – touch and smell
   1.4. Interaction
2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY
   2.1. CAVE
   2.2. Edge blending
   2.3. Colour mapping
   2.4. 3D image
   2.5. Eye tracking
   2.6. Surround 8.1 sound
   2.7. VirtuSphere - locomotion platform
3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS
   3.1. I3DVL - Gdansk University of Technology
   3.2. Silesian University of Technology, Poland
   3.3. aixCAVE - Aachen University, Germany
   3.4. Possible applications of CAVEs
      3.4.1. Flooding crisis simulation
      3.4.2. Molekül Visualisierung (MCE)
      3.4.3. Neurochirurgieplanung in immersiven Umgebungen
      3.4.4. Virtual Gallery
      3.4.5. Example student projects
4. PROPOSED USES OF I3DVL
   4.1. Simulations
   4.2. Medicine
   4.3. Prototyping
   4.4. Games
   4.5. Fun
   4.6. Marketing
   4.7. Trainers
5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL
   5.1. I3DVL as a complete platform
   5.2. Creating virtual reality applications for CAVE
      5.2.1. Existing libraries and frameworks
         5.2.1.1. Graphics APIs
            5.2.1.1.1. DirectX
            5.2.1.1.2. OpenGL
         5.2.1.2. Scene graph engines
            5.2.1.2.1. OpenGL Performer
            5.2.1.2.2. OpenSG
            5.2.1.2.3. OpenSceneGraph
            5.2.1.2.4. NVIDIA SceniX - NVSG
            5.2.1.2.5. Summary
         5.2.1.3. Frameworks for CAVE solutions
            5.2.1.3.1. ViSTA
            5.2.1.3.2. VR Juggler
            5.2.1.3.3. Equalizer
            5.2.1.3.4. Summary
         5.2.1.4. Support libraries
            5.2.1.4.1. Cg Toolkit
            5.2.1.4.2. NVIDIA OptiX
      5.2.2. Graphical editors
         5.2.2.1. Creating your own editor with a GUI
         5.2.2.2. GUI libraries
         5.2.2.3. Existing graphical editors
            5.2.2.3.1. Simulators
            5.2.2.3.2. CAVE-supported and dedicated tools
               5.2.2.3.2.1. VBS - Virtual Battlespace
               5.2.2.3.2.2. Quazar3D
               5.2.2.3.2.3. EON Studio
               5.2.2.3.2.4. Vizard
               5.2.2.3.2.5. Summary
            5.2.2.3.3. Dedicated game engines
               5.2.2.3.3.1. UNIGINE
               5.2.2.3.3.2. UDK
               5.2.2.3.3.3. CryEngine
               5.2.2.3.3.4. UNITY
6. DEMONSTRATIVE PROJECT FOR I3DVL
   6.1. System design
   6.2. Implementation notes
   6.3. Quality tests
   6.4. Performance tests
   6.5. System presentation
   6.6. User manual
7. FUTURE R&D WORK FOR I3DVL
8. SUMMARY
THE STUDY BENEFITED FROM THE FOLLOWING REFERENCES
LIST OF FIGURES
LIST OF TABLES
Attachment A - Example applications
   1. ViSTA
   2. OpenSG 1.8
   3. OpenSG 2.0
   4. OpenSceneGraph 3
      4.1. Sample applications based on the books
   5. NVIDIA SceniX 7
Attachment B - ViSTA
   1. Downloading the framework
   2. Preparing for compilation
   3. Setting up environment variables
   4. Preparing the project for Visual Studio 2012
   5. Libraries required by the sample applications
   6. Compilation of the sample applications
   7. Configuring the sample application
   8. Manually creating a project in Visual Studio 2012
   9. 3D object import test


LIST OF MAJOR SIGNS AND ABBREVIATIONS

2D – Two-dimensional space

3D – Three-dimensional space

CAVE – Cave Automatic Virtual Environment

CryVE – Cryengine automatic Virtual Environment

FPS – Frames Per Second

GUI – Graphical User Interface

HDD – Hard Disk Drive

I3DVL – Immersive 3D Visualization Laboratory at Gdansk University of Technology

ODE – Open Dynamics Engine

R&D – Research and Development

SSD – Solid-State Drive

UDK – Unreal Development Kit

VBS – Virtual Battlespace

VR – Virtual Reality


INTRODUCTION AND PURPOSE OF WORK

This Master's thesis is an R&D work about the possibilities of using CAVEs and developing applications for them. It begins with an explanation of virtual reality (VR). It then describes the Immersive 3D Visualization Laboratory (I3DVL) at Gdansk University of Technology in Poland and the many important components that make up the laboratory. Its main part is a six-wall cave automatic virtual environment (CAVE) with multiple projectors per wall to increase the quality of the displayed image. The system also includes image blending, a tracking system, surround sound and the VirtuSphere locomotion platform. The locomotion platform is a large sphere which the user can enter and then freely walk or run in to move through the virtual world. The combination of the VirtuSphere locomotion platform with a CAVE is quite interesting, and may be the first such configuration in the world.
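The basic mapping from sphere rotation to virtual movement can be sketched as follows. This is an illustrative calculation only, not the VirtuSphere vendor's actual algorithm; the structure and parameter names (`sphereRadius`, `dPitch`, `dRoll`) are assumptions.

```cpp
#include <cmath>

// Sketch: convert a measured rotation of the walking sphere into a
// displacement in the virtual world. Assumes the rotation deltas (radians)
// about the two horizontal axes are sampled once per frame.
struct Displacement { double dx, dz; };

// sphereRadius is in metres. Walking forward spins the sphere backwards
// under the feet, hence the negative sign: arc length = radius * angle.
Displacement sphereToWorld(double sphereRadius, double dPitch, double dRoll) {
    return { -sphereRadius * dRoll,    // sideways motion
             -sphereRadius * dPitch }; // forward/backward motion
}
```

Integrating these per-frame displacements (rotated by the user's current heading) yields the virtual walking path.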

Next, I review existing CAVE installations. I visited the i3D company together with my supervisor, PhD Jacek Lebiedź, and PhD Adam Mazikowski. I then describe a few CAVE configurations from neighbouring countries, such as the aixCAVE at Aachen University in Germany, the largest CAVE in Europe, and continue with a few impressive CAVE installations around the world. I also include the Survey Simulator from DNV, whose VR Training Centre at the DNV Academy Poland in Gdynia trains and certifies employees from all over the world. It is worth noting that this system shortens training time from five years to about one year, which is a great result. The centre features an interesting and very comfortable rear-projection system that improves the sense of immersion. You will therefore also find here an account of my visit to the DNV VR Training Centre.

After this introduction, I describe possible uses of the Immersive 3D Visualization Laboratory in the domains of simulation, medicine, prototyping, games, entertainment, marketing and training, showing how a CAVE laboratory can serve such different kinds of projects.

Next, I move on to the methodology of creating solutions for I3DVL and identify common problems and requirements in developing CAVE applications. Applications can be developed from scratch, in which case the important functionality comes from the OpenGL and DirectX APIs. I then cover the scene graph engines in common use, such as OpenSceneGraph, OpenSG and NVIDIA SceniX, as well as some that are now obsolete but were once widespread and important, like OpenGL Performer and CAVELib. These are powerful graphics libraries that make the development of 3D applications and VR simulations easier and faster. After that come frameworks such as Equalizer, VR Juggler and ViSTA, which enable CAVE applications to be built on top of the previously described scene graphs. Their main added functionality is distributed rendering and display, easily configurable setups that let an application be developed once and then run on different computer configurations and CAVE installations, and support for many input and output devices such as manipulators and trackers. This part focuses on coding applications for a CAVE.
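The core idea behind the scene graph engines mentioned above can be illustrated with a deliberately tiny sketch: group nodes form a tree, and transforms accumulate from the root down during traversal. The types and function below (`Node`, `collectWorldX`) are invented for illustration and do not correspond to any real library's API; real engines such as OpenSceneGraph add culling, state sorting and the actual draw calls on top of this traversal.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// A minimal scene-graph node: a name, a one-axis "transform" (kept to a
// single translation for brevity) and a list of children.
struct Node {
    std::string name;
    double translateX = 0.0;
    std::vector<std::shared_ptr<Node>> children;
};

// Depth-first traversal that accumulates the parent transform: the core
// operation a scene-graph engine performs each frame before rendering.
void collectWorldX(const Node& n, double parentX,
                   std::vector<std::pair<std::string, double>>& out) {
    double worldX = parentX + n.translateX;
    out.emplace_back(n.name, worldX);
    for (const auto& c : n.children) collectWorldX(*c, worldX, out);
}
```

Moving a parent node automatically moves the whole subtree, which is exactly why scene graphs simplify building complex 3D worlds.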

To speed up the development of CAVE applications, the use of GUI editors is recommended; I describe the Qt framework, which is well suited to building them. To go one step further, you can use existing simulators or GUI-based engines. VBS from Bohemia Interactive Simulations is a training simulator commonly used around the world by the military, police, fire brigades and ambulance services, for example in the US Army and NATO, and currently in Poland as well. Gdansk University of Technology has recently signed a contract with Bohemia under which they will create a Crisis Management Center


based on Virtual Battlespace (VBS), a configurable simulator system. If these are not enough, you can use CAVE-dedicated editors such as Quazar3D, EON Studio or Vizard, which are complete environments specialized in the fast and easy creation of CAVE applications. There are also a few very good game engines on the market, such as UNIGINE, the Unreal Development Kit (UDK), CryEngine and Unity, which are powerful tools for creating AAA-level games.

For almost all of the scene graphs, graphical editors and game engines, I went through compiling, configuring, running and analysing the frameworks as well as their examples and some of their tutorials, to learn how they really work. The result of this analysis is a large appendix describing about three hundred small applications; their screenshots in HD resolution are on the attached disc. There is also an additional appendix devoted to the ViSTA framework, which is distributed without any documentation, so it should help you start working with it if you decide to use it.


1. CAPABILITIES OF VIRTUAL REALITY

Virtual reality has many names and meanings and is interpreted differently by different people and institutions. However, a few common elements glue these interpretations together: a virtual world, immersion, sensory feedback (responding to user input) and interactivity [1].

The virtual world is the environment where the action takes place: an imaginary space, often manifested through a medium, with a description of the objects in that space.

Immersion means that the user must be physically immersed, with "a sense of presence" within the alternate reality or point of view. The alternate world may be a representation of an actual space that exists somewhere, or a purely imaginary environment.

Sensory feedback allows participants to select their vantage point by positioning their body and to affect events in the virtual world. The VR system provides direct sensory feedback to the participants based on their physical position, usually through a tracking system.

Interactivity, the fourth element of VR, is the system's response to user actions. It gives the user the opportunity to interact with virtual worlds and simulations.

1.1. Image

Do we need a photo-realistic image to create virtual reality? No, we don't. There are VR systems for blind people without any graphics, in which users can still interact and act in virtual worlds. We can also look back to the first virtual worlds, created in games that ran in text mode: the first computer games can be seen as text-based VR. By improving graphic quality we simply deepen our immersion. How we see matters: in colour or not, on a big screen, or through glasses such as an HMD; resolution is likewise important for the quality of virtual worlds. Can we believe that what we see could be real? Virtual reality is a medium, and a virtual world is a representation of some world that may or may not exist in the physical world. To visualize it, we use an image.

1.2. Sound

In real life we hear sound everywhere; only in a vacuum is there no sound. That is why we know that a place with no sound cannot be real1, and the same is true in VR: sound improves our immersion, and without it we lose a great deal of immersion and feel that something is wrong. The quality of sound is very important, as is a good spectrum of sound; we know many sounds and how each should be heard. The position of a sound in 3D is also very important, just as in real life. We can use background sounds, sound effects and voices, which the system can also recognize for interaction. All of these raise our level of immersion.

1 This does not apply to deaf people.


1.3. Other channels – touch and smell

Virtual reality systems can be enhanced by other channels, such as touch and smell, which improve immersion. These are optional, not required, but thanks to them the experience feels much more like the real world.

Touch is when you handle a real device whose manipulation is carried into the virtual world. These can be realistic devices or platforms, e.g. a car cabin, plane cockpit or submarine control room, similar or even identical to their real counterparts. They give you more accurate control over the system and let you train on real-like devices and situations in different configurations; with such techniques you can work with VR the way you are used to. There are also many devices created specifically for virtual reality that help you navigate in space. You can use motion-driven manipulators that react to translation and rotation along different axes, even simultaneously, and you can choose between analogue, digital or mixed devices. Analogue thumbsticks give you easy, almost visual control when sliding a value up or down; their drawback is lower precision compared with digital controls, which provide state buttons, keypads or touch input.

On the other side, there are devices that react to the simulation. Haptic devices are one example: they usually contain mechanisms providing force feedback that you feel while working with them. For example, when painting a 3D model with a haptic pen, you feel the touch at the virtual contact between the pen and the geometry. Other devices include installations that produce water bubbles, fog or other substances under the control of the simulation.
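Force feedback of this kind is often rendered with a simple penalty method: when the virtual pen tip penetrates a surface, a spring force proportional to the penetration depth (Hooke's law) pushes back, capped at the device's force limit. A minimal sketch, with the function name and constants chosen for illustration:

```cpp
#include <algorithm>

// Penalty-based haptic force: no contact means no force; otherwise a
// spring force k * depth, limited to what the device can physically exert.
double feedbackForce(double penetrationDepth /* m */,
                     double stiffness /* N/m */,
                     double maxForce /* device limit, N */) {
    if (penetrationDepth <= 0.0) return 0.0;  // pen is above the surface
    return std::min(stiffness * penetrationDepth, maxForce);
}
```

Haptic devices update this loop at around 1 kHz, far faster than the graphics frame rate, so that the surface feels rigid rather than spongy.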

On modern platforms you can also smell many different scents, which completes the experience of immersion.

1.4. Interaction

Interaction is very important in VR: without the possibility to interact, you would feel as if you were merely watching a movie. To be immersive, a virtual world needs to react in real time, and there are many ways a simulation can interact with you. You can control it through devices such as manipulators, keyboards, mice and trackballs, as well as mobile, realistic or touch devices, simulation platforms, cockpits, etc. You can use motion capture and interact by moving, use sensors such as gloves or body tracking, or issue spoken commands through voice control. Head tracking greatly improves immersion: as you change the position and orientation of your viewpoint, the scene is redisplayed from the corresponding position and angle in real time. You can combine a head tracking system with other manipulators to get the best results and better interaction with the simulated VR [2].


2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY

I3DVL is an advanced CAVE laboratory built at the end of 2014 at Gdansk University of Technology. Specifying how the laboratory should look took a few years of research. The main idea was to design a high-end CAVE with something unique that would improve its usability. The decision was to install a locomotion platform inside a 6-wall CAVE as the distinguishing feature. The locomotion platform is a big sphere which the user can enter in order to walk freely around the virtual world. This solution, unique in the world, opens the way to research and development of new kinds of applications.

2.1. CAVE

A cave automatic virtual environment (better known by the acronym CAVE) is an immersive virtual reality environment where projectors are directed at the walls of a room-sized cube (see fig. 2.1) [3].

Fig. 2.1. Typical CAVE installation [3]

The CAVE at the University contains 6 walls which form a cubic room. Each wall is 3.4 m wide and high. The room is about 3 m above the floor of the building containing the CAVE. The walls are made of acrylic glass. The floor is strengthened and divided into two parts with a very thin gap between them, invisible from above. The glass floor withstands a load of about 500 kg. The image is projected by a rear-projection system consisting of 12 DLP Full HD 3D 120 Hz projectors with a laser calibration system. A metal construction positions the CAVE room at the second level of the building. At this level there is also a light floor which eases entrance into the room. The first level contains 2 projectors with mirrors which project the image onto the CAVE floor. There are also 10 additional projectors located around the room which project images onto the surrounding CAVE walls.


The displayed image is of high quality, with a resolution of 1920x1920 px. Such a display system needs huge computing power, which is provided by 14 servers, each with 32 GB RAM, an NVIDIA Quadro K5000 4GB, an SSD and a full-duplex 40 Gb/s InfiniBand fibre network, which guarantees high quality of the displayed image.

2.2. Edge blending

A high quality image is created by displaying two images from 2 projectors on one wall. The first problem is that the CAVE walls are square and the images from the two sources do not exactly fill the surface of the wall. The second problem is how to join the two images into one so that no visible gap or artefacts remain between them.

Fig. 2.2. Edge blending and Color Mapping [4]

The solution is to set up the two images so that they overlap, and to use edge blending. Edge blending blends the two images in the region where they overlap. It creates a seamless image by adjusting the brightness at the adjoining edges when multiple projectors are used side by side to reproduce a single widescreen image [4].

Fig. 2.3. Edge blending function [4]

The blending, simply put, is a process which varies the transparency from zero to one hundred percent across the overlapped part of each image, making the join between the two images invisible [5].
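The cross-fade described above can be sketched in a few lines. This is an illustrative example, not code from the I3DVL installation; a real projector setup would typically apply a gamma-corrected blend curve rather than a plain linear ramp:

```python
def blend_weights(overlap_px):
    """Linear alpha ramps for two projectors across an overlap region.

    Returns (left, right) weight lists; at every pixel the two weights
    sum to 1, so the combined brightness stays constant across the seam.
    """
    left, right = [], []
    for i in range(overlap_px):
        t = (i + 0.5) / overlap_px   # 0..1 position across the overlap
        left.append(1.0 - t)         # projector A fades out
        right.append(t)              # projector B fades in
    return left, right

# I3DVL walls use a 480 px overlap between the two projected images.
left, right = blend_weights(480)
```

Because the weights always sum to one, the viewer sees uniform brightness where the two projections meet.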


2.3. Colour Mapping

The use of multiple projectors to create a larger image can result in colour variations due to

slight differences in projector image processing. Each projector is adjusted so that the same

colours are reproduced when multiple projectors are used simultaneously.

2.4. 3D Image

Humans have two eyes situated close together, side by side. This positioning means that each eye views the same area from a slightly different angle. Both views are merged in the brain to form a single image. To provide a realistic image we need to display a different image for each eye; otherwise the image will be flat and will not look real [6].

Fig. 2.4. Perception of human viewing [6]

To see a real 3D image we need two slightly different images, each displayed in such a way that it is visible to only one eye. This technique is called stereoscopic 3D. A few techniques are available for displaying a stereo 3D image. Generally there are passive and active systems, both of which require special glasses. Passive systems use polarisation filters or spectrum selection in the glasses. Active systems alternately open and close the shutter in front of each eye, displaying the image for one eye and then for the other in turns. The University decided to use both a passive Infitec system with spectrum selection and an active solution based on the NVIDIA 3D Vision Pro system, which guarantees high quality 3D immersion and is dedicated to NVIDIA Quadro graphics cards.
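To illustrate how the two per-eye images arise, the renderer offsets the camera for each eye by half the interpupillary distance along the head's right vector. This is a hypothetical sketch; the function names and the 0.065 m eye separation are illustrative assumptions, not parameters of the University's system:

```python
def eye_positions(head, right_dir, ipd=0.065):
    """Offset a tracked head position by half the interpupillary
    distance (ipd, metres) along the head's right vector per eye."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head, right_dir))
    right = tuple(h + half * r for h, r in zip(head, right_dir))
    return left, right

# Head 1.7 m above the floor, facing forward (right vector = +X).
l, r = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Rendering the scene twice, once from each of these positions, produces the stereo pair that the glasses then separate.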


Fig. 2.5. NVIDIA Quadro Sync [7]

Displaying a stereo 3D image is a little more complicated in CAVE environments. The displays must be synchronised so that you see the images at the same moment in time on every screen, even though they are driven by different projectors. Special hardware is used to synchronise the 3D signal for each graphics card. These NVIDIA Quadro Sync cards are connected to each other through a separate network. Quadro Sync connects to NVIDIA Quadro GPUs, synchronising them with the displays or projectors attached to them. This guarantees correct display of the stereo 3D image on every screen in the CAVE [7].

2.5. Eye Tracking

The tracking system detects your motion and reacts to it. We can track full body motion or individual parts such as the hand or head. The most important system in a CAVE is eye tracking, because in a CAVE you can walk around, so you need a different 3D perspective from each point of view. This is done in real time by the eye tracking system, which requires a special eye-tracking positioning system.

Fig. 2.6. Eye tracking glasses with positioning system [8]


The tracking system consists of cameras and IR emitters which accurately locate the glasses and their transformation in space. This information is then used in the simulation to transform the displayed image, enhancing the virtual reality experience [8].

2.6. Surround 8.1 sound

8.1 sound is the common name for an eight-channel + subwoofer surround audio system

commonly used in home theatre configurations.

Fig. 2.7. Surround system [9]

The CAVE is a closed room in which sound should come from different directions. This is achieved by an 8-channel surround sound system. Each channel is independent. This system produces real 3D sound which gives you the chance to feel immersed in a scene as part of the action [9].

2.7. VirtuSphere - locomotion platform

The main concept in creating the I3DVL laboratory was to add something unique and useful to the CAVE installation. The VirtuSphere is a platform for immersion into cyberspace. It is a big semi-transparent sphere which the user can enter, controlling movement in the virtual world just by walking [10].


Fig. 2.8. VirtuSphere in action [10]

The platform rotates freely in any direction according to the user's steps. A user is able to walk and run inside the sphere. Sensors collect and send data to the computer in real time, and the user's movement is replicated within the virtual environment. This gives you full-body immersion in virtual reality.
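A minimal sketch of how such sensor data can drive movement, under the assumption that walking distance equals the arc rolled by the sphere (displacement = radius × rotation angle); the function and names are illustrative, not part of the VirtuSphere API:

```python
import math

SPHERE_DIAMETER_M = 3.05            # VirtuSphere diameter

def walk_displacement(pitch_rad, roll_rad, diameter=SPHERE_DIAMETER_M):
    """Convert measured sphere rotation (radians about the two
    horizontal axes) into forward/sideways displacement in metres,
    using the rolling relation s = r * angle."""
    radius = diameter / 2.0
    forward = radius * pitch_rad    # sphere rolling forward/backward
    sideways = radius * roll_rad    # sphere rolling left/right
    return forward, sideways

# A quarter turn of the sphere forward moves the walker about 2.4 m.
fwd, side = walk_displacement(math.pi / 2, 0.0)
```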


3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS

I would like to introduce the CAVE at Gdansk University of Technology in Poland. This is one of the few 6-wall CAVE systems available in the world. Inside there is a big 3.05-metre sphere, a locomotion platform which the user can enter and move around in freely, thereby moving through the virtual world. This is an impressive configuration and one of the most advanced in the world.

Another impressive solution is the CAVE at Aachen University in Germany, which is one of the biggest in Europe. Its walls are 5.25 x 3.30 m in size. This system uses 24 projectors; the image on each wall is composed from 4 projectors, which improves display quality.

3.1. I3DVL - Gdansk University of Technology

It took Gdansk University of Technology a few years to create the Immersive 3D Visualization Laboratory (I3DVL) with its 6-wall CAVE. User immersion is enhanced by projection on all 6 walls. The CAVE is one of the most advanced in Europe and among the top solutions in the world. It is unique because it is optionally supplemented by a mobile locomotion platform which can be installed inside the CAVE. The locomotion platform is a big sphere named VirtuSphere which the user can enter to walk naturally around the virtual world [11].

The whole solution is based on high-end technologies which provide quality and realism of simulation at the highest level. A 13-metre-high building with a glass room inside was built for this system. Images are rear-projected onto each wall from the outside. Two projectors per wall are used to double the image resolution. The computer system is based on 14 computers with 32 GB RAM, an NVIDIA Quadro K5000 4GB, a fast SSD and a fibre network. Each computer is connected to a high quality Barco DLP HD 3D 120 Hz projector with laser calibration.

Fig. 3.1. Proposed room schema of I3DVL (since slightly modified) [11]


Technical specification:

CAVE with walls 3.4 x 3.4 m, placed 3 m above the floor,

Spherical locomotion platform with a diameter of 3.05 m (VirtuSphere),

All 6 screens made of acrylic glass; floor load capacity of at least 500 kg,

12 DLP HD 3D 120 Hz projectors with laser image calibration (< 0.5 mm),

14 computers, 32 GB RAM, NVIDIA Quadro K5000 4GB, SSD,

InfiniBand fibre network, 40 Gb/s full duplex,

Surround sound 8.1,

Tracking system.

3.2. Silesian University of Technology, Poland

Silesian University of Technology probably built the first CAVE in Poland. I visited Silesian University with PhD Jacek Lebiedź and PhD Adam Mazikowski to see it in action. It was my first contact with a CAVE. It is a simple system consisting of 3 walls and a floor. The image is displayed at 1024x768 px resolution, which is mid-range; when you come close you can see individual pixels. There are just 4 projectors, one for each wall. No mirrors are used with the projectors. The screens are made from an elastic material. The floor is made of wood.

Fig. 3.2. Author in CAVE at Silesian University of Technology

This really simple installation uses the powerful Quazar3D application to display simulations. When I put on the glasses and entered the CAVE, the impression was simply amazing. I had never seen anything like it before. Quazar3D provides a high level of visualisation in which I felt fully immersed; what I saw felt like the real world. So even with a simple CAVE installation it was an amazing experience for me. The only minus was the lack of a ceiling and back wall, which forced me to focus on the front wall and prevented me from looking up. Even so, this amazing feeling of immersion is indescribable.

3.3. aixCAVE - Aachen University, Germany

The solution created by Aachen University in 2012 is a 5-wall CAVE which gives you full freedom of movement through 360 degrees. With a floor larger than 5 x 5 m and rear projection, this is the biggest such solution in Europe. The system provides a high quality image. The image is bright and uniform and provides active stereo 3D vision, which guarantees an excellent experience for the user [12].

Fig. 3.3. CAVE installation at Aachen University [12]

The stereoscopic 3D projection is created by 24 DLP full-HD projectors: four projectors for each wall and eight for the floor (which is divided into 2 screens). The rendering system consists of 24 computers with NVIDIA Quadro 6000 graphics cards (an older model; 2 per slave and 1 in the master), 2x Intel Xeon with 6 cores at 2.7 GHz, 24 GB RAM and a fast InfiniBand QDR (4x) fibre network.

Technical specification:

Five screens with rear projection (4 walls and floor),

24 HD projectors with active stereo 3D (NVIDIA 3D Vision Pro, 120 Hz),

Walls 5.25 m x 3.30 m,

4 projectors per wall with edge-blended images,

Floor 5.25 m x 5.25 m,

8 projectors for the glass floor, 6.5 cm thick,

8-camera optical tracking system,

Power consumption of about 67 kW,

Automatic closing door.

3.4. Possible applications of CAVES

Possible applications of CAVEs are shown using examples from the Virtual Reality Center (VRC) of Johannes Kepler Universität, Austria.

The VRC at Johannes Kepler University was created in 2005. The attached DVD contains additional movies and photos in the directories “documentation\movies\Virtual Reality Center - Johannes Kepler Universitat” and “documentation\photos\Virtual Reality Center - Johannes Kepler Universitat”.

3.4.1. Flooding Crisis Simulation

The application is a simulation of flooding based on the Grid platform (CrossGrid EU Project) [13]. It provides the ability to simulate different floods with different parameters. By using the CAVE, experts may better estimate the ravages of a flood and better counteract them. It is based on OpenSG [14].

Fig. 3.4. Flooding system in action [14]

3.4.2. Molekül Visualisierung (MCE)

MCE is a collection of research programs for visualizing the electron density distribution. The application was created to visualize the results of calculations on X-ray diffraction data. Versions are available for Windows, Linux, IRIX and CAVE [15].


Fig. 3.5. Molecules and particle system visualization [15]

3.4.3. Neurochirurgieplanung in immersiven Umgebungen

The project was created in cooperation with the Medical Department of the University of Innsbruck and the Institute of Fluid Mechanics. The application teaches medical students or may help to plan neurosurgical procedures [16].

Fig. 3.6. Anatomical structure in medicine [16]

3.4.4. Virtual Gallery

Virtual Gallery provides virtual travel and the study of scenes in virtual worlds.


Fig. 3.7. Travel in virtual world

3.4.5. Example students projects

There are a few student projects for the CAVE installation.

3D Kunstwerk

The application shows interaction with 3D art. It is based on CAVElib [17].

Fig. 3.8. Interactive 3D art [17]


Multi User Maze

The application is a maze in which a few users may participate at once. It is based on OpenGL Performer [18].

Fig. 3.9. Multi user maze [18]

CAVE Skiing

The application attempts to move skiing into the CAVE. It is based on OpenSG [19].


Fig. 3.10. Ski simulator [19]


4. PROPOSAL OF USE I3DVL

The CAVE provides many possibilities for use. The VirtuSphere is movable, so it is possible to use the CAVE standalone or together with the locomotion platform. This increases the range of uses. You can use it in simulations, medicine, prototyping, games, fun, marketing, training and other disciplines. Only imagination limits the applications you can run in the CAVE. You can create new ones optimised for the CAVE or just run existing ones with a few modifications. You can use the full 6-wall environment or just a few of its walls.

4.1. Simulations

The first group of possible applications is simulations. During a simulation you can train, learn or see how something works. Gdansk University of Technology, in cooperation with Bohemia Interactive Simulations from the Czech Republic, will create a "Crisis Management Centre" based on their VBS 3 engine. This kind of simulation helps people prepare for incidents before they happen: you can not only imagine what an incident will look like, but actually see it and prepare for it.

4.2. Medicine

Conventional medicine needs models or organs to work with. Sometimes they are very small, and it can be difficult to see on a real model how some parts are built or how they work. Phobias can also be treated here: when somebody fears something, you can slowly accustom them to it. No one wants to be treated or operated on by a poorly trained, inexperienced person. CAVE-based medical training provides adequate learning paths, showing exactly how an organism works, and provides exercises. You can learn how to perform operations, how organs are built and how they work, without needing real models. This improves medical training.

4.3. Prototyping

Prototyping is a costly and lengthy process. Creating a real prototype is usually an expensive, one-off operation. Sometimes it is even impossible to create prototypes at an intermediate stage because of cost or time limits. The CAVE is an ideal solution for this. You can prototype and verify any product at real scale. Additionally, you can change a prototype in real time and see the changes immediately. This provides great possibilities for prototyping.

4.4. Games

Games get better every day, aiming to provide the feeling that it is not a game but reality. The CAVE increases this immersion and provides more natural and free navigation in virtual worlds. In the CAVE you feel that you are inside the virtual world. Every game will look different. A game which does not immerse you on a PC may fully immerse you here. You can cooperate with somebody in multiplayer mode; the players may use the CAVE or other platforms. This gives rich possibilities for playing games in the CAVE.


4.5. Fun

Some applications are just for fun. The CAVE gives new ways to feel immersion. In the CAVE even simple animations or a movie can provide more immersion and fun than you have ever had before. You will discover it from the beginning; perhaps you will discover a new type of fun and love it. You can travel, play with toys, relax with animals and nature and do many more amazing things. Thanks to the CAVE it will all look real.

4.6. Marketing

Marketing is another group of applications. You can plan what an advertisement will look like and where it should be placed to get the best result, changing the configuration in real time. You can provide a virtual walk through a new estate or present an apartment in different styles. Maybe a hotel, or the view through a window? That is also possible, and it will help a lot when you want to sell or build PR.

4.7. Trainers

You can simulate vehicles, devices and other things, supported by real models such as a cockpit or control panel. This can teach you what you should do, what you should not do, and why. In large environments this can add some randomness to training paths. In contrast to real training, virtual training may lower costs and can sometimes train in ways that are not possible in reality. This is a big advantage over conventional training.


5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL

Creating applications for a CAVE is in many cases different from creating a typical 3D application. Of course you can use existing editors which support CAVEs; then it looks just the same, or needs only small modifications in code. But when you want to create an application from scratch, using only frameworks, it requires more advanced work and there are a few things you should keep in mind.

The main point is that CAVE applications must work in a distributed environment, synchronised every frame, to produce a proper stereo 3D image. You should keep this in mind. Some objects must look the same on all distributed nodes and therefore simply need to be synchronised; this may be, for example, the transformation and animation of some objects. The first problem arises if you use an algorithm based on randomness: you then need to synchronise every step of the algorithm across all nodes each frame, which can be difficult. The second problem is maintaining state that should differ on each node, such as the camera vector, which is different on each node because of the cubic room projection. The third problem is local node computation: there is no need to perform all computation on the server and just send the results to all nodes, since that only increases network usage. Remember that there are at least 12 computers connected to each other. A typical CAVE application has one server which controls and synchronises the state of objects between nodes. There is not much time per frame, so if the network bandwidth is exceeded, the application will stutter.
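The first problem above (randomness-based algorithms) is commonly avoided by sharing a seed instead of synchronising every step. A minimal sketch, assuming a server that broadcasts one seed per frame; the names are illustrative, not from any CAVE framework:

```python
import random

def particle_offsets(frame_seed, n):
    """Every node calling this with the same seed replays the same
    pseudo-random sequence locally, so the 'random' particle jitter
    is identical on all render nodes without per-step network sync."""
    rng = random.Random(frame_seed)   # node-local generator, shared seed
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]

# Two nodes receiving the same per-frame seed compute identical data.
node_a = particle_offsets(frame_seed=42, n=100)
node_b = particle_offsets(frame_seed=42, n=100)
```

Only one small integer per frame crosses the network, instead of the full particle state.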

So the first thing to note is that a CAVE application is a client-server application. The server controls the whole application, shares the state of objects and synchronises every frame. The clients are renderers: they render frames, perform local calculations and display the image through the projectors. The server handles input and output devices such as manipulators and the tracking system, maintains network connections, and sets up the main camera system based on external sensors. In our CAVE we have 12 camera positions, 2 cameras per wall. First we set up these cameras, and then we feed them transformations from the eye tracking system, so that the cameras react to the movement of our head. This transformation is multiplied by the data coming from the manipulator device and the VirtuSphere locomotion platform, so that we can move around the virtual world. Frameworks usually have configuration files for displays and control devices, which shortens configuration time on different platforms.
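The camera setup described above boils down to multiplying homogeneous transformation matrices. The sketch below is illustrative only; the matrix layout, multiplication order and all values are assumptions for demonstration, not the I3DVL configuration:

```python
def matmul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

wall_camera = translation(0.0, 1.7, 1.7)   # base camera pose for a wall
head = translation(0.2, 0.1, 0.0)          # offset from head tracking
walk = translation(0.0, 0.0, -5.0)         # movement from VirtuSphere

# Compose: locomotion applied to the head-tracked per-wall camera.
camera = matmul(walk, matmul(head, wall_camera))
```

Rotations from the tracker would enter the chain the same way, as further 4x4 factors.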

You should bear in mind, when you want to use a framework function or library, whether it will work in a distributed environment. Is there a way to share the state of its objects? This is a requirement for developing CAVE applications. You will often need to write some functionality on your own, because most libraries are not designed for use in distributed environments.

5.1. I3DVL as complete platform

I3DVL consists of a 6-wall CAVE and a spherical walking simulator named VirtuSphere. Each wall, 3.4 m square, displays an image from 2 projectors; each wall carries 2 images with 480 px of edge blending in the middle, split horizontally. Inside the CAVE the VirtuSphere locomotion platform can be installed and removed. The VirtuSphere is a 3.05 m semi-transparent plastic grid sphere which the user enters to walk through the virtual world. Technically, the VirtuSphere works like a mouse. Active stereo 3D images are provided by the NVIDIA 3D Vision Pro or Infitec Barco system; each requires different glasses and drivers. Additionally, the glasses carry markers for the eye tracking system, and cameras with IR sensors detect head movements in real time. There is also an 8-channel sound system based on eight speakers plus one subwoofer. Applications run on 12 computers, plus 2 additional ones in the control room. The computers are connected by fibre and copper networks, with an additional independent copper network for 3D synchronisation. Together these form the current I3DVL configuration.

5.2. Creating Virtual Reality applications for CAVE

Virtual reality applications are most often created with 3D technology. These applications typically consist of many elements, such as the 3D scene, the rendering and display system, user interaction, physics or other laws of nature, movement and animation of elements, audio and surround sound, and special effects such as fog, rain or post-effects like motion blur [20].

There are also important components at a lower level, such as increasing the efficiency of the system through multiple threads and optimal algorithms, distribution and synchronisation of data between cluster nodes, generating and displaying stereo 3D images, GPU utilisation and extended instruction sets for computation, and use of the advanced capabilities of the latest graphics cards through mechanisms such as shaders [21].

Solutions for CAVEs involve features such as blending the edges of images projected from multiple projectors onto a single plane (edge blending); technology for generating and synchronising the stereo image using multiple cluster nodes with multiple graphics cards and projectors; detecting the observer's head position (head tracking) and generating the 3D image viewpoint on that basis; and support for additional peripherals such as gloves or other 3D manipulators, like the VirtuSphere locomotion platform used at Gdansk University of Technology, mostly achieved by allowing the user to write and attach their own driver.

Not every system or application for a CAVE must meet all of these requirements, but advanced ones may. There are systems dedicated to just one operating system, and multi-platform ones, which in turn extends the field of application. Some libraries offer the full or partial functionality described above, which can then be used in newly created applications; alternatively, we can use editors with a user interface, which help a lot in creating advanced applications covering all aspects of CAVE development. Such editors offer a WYSIWYG interface and scripting languages which allow changes to be made in the running application, without recompiling the script or the whole application, and show the result in real time, which significantly speeds up application development.

Finally, we can write a complete framework, editor or application for use in the CAVE from scratch. A key element of the final visual effect is the way graphics are rendered. Such low-level graphics may be generated on the CPU or the GPU. Currently most graphics cards have very powerful GPU computing units designed for efficient graphics generation, able to display far more complex graphics in real time than a CPU. Virtual reality applications require real-time interaction, and the same requirement applies to displaying the image. For this reason the graphics are generated not on the CPU but on the GPU. Therefore virtual reality applications for the CAVE are mostly created with APIs like OpenGL or DirectX. These two APIs form the backbone of all existing libraries, frameworks and engines for creating applications that use 3D graphics, including CAVE solutions [22].

5.2.1. Existing libraries and frameworks

CAVE solutions are expensive investments, often costing hundreds of thousands or even millions of dollars. Because of the high cost there are not many such platforms in the world; most often they are found at universities and in military facilities. There is open-source software developed mainly by universities, and a few rather expensive commercial solutions available on the market.

5.2.1.1. API Graphic

At the lowest level of graphics rendering there are interfaces like OpenGL and DirectX² [23]. An API at this level is a very thin layer, specialised only in generating and processing computer graphics on the GPU. This layer talks directly to the graphics card via the graphics driver. Such a layer is sometimes described as a state machine, meaning that at this level the whole scene is not available, only basic elements such as triangles, from which the scene is built up and displayed without any knowledge of their past or future. Shaders are also available here; they make it possible to run operations on the GPU in streams on many cores simultaneously.

Because of these limitations in knowledge of the scene, a higher-level layer is needed to take care of creating the scene, lighting, handling input and output devices, and interaction in the virtual world. Knowledge of the whole scene makes it possible to optimise application performance. We can choose solutions dedicated to specific applications, e.g. games, or general-purpose solutions, e.g. scene-graph engines. For CAVE solutions the general-purpose frameworks are better suited.

At the next stage we can use or create an editor with a user interface. This shortens and simplifies application development. In editors we can build our scene and manage it graphically, often through a WYSIWYG editor. Editors also simplify the configuration of displays, network, tracking and devices needed to run the application in the CAVE.

5.2.1.1.1. DirectX

Microsoft DirectX is used mainly in games. It is a stable standard whose new versions appear rarely. This guarantees that applications will work for a long time on many computers.

² At the moment Mantle, an even lower-level graphics API by AMD, is under development. Microsoft is also working to add low-level instructions to DirectX in the new version 12, and OpenGL plans to add such possibilities as well. These APIs are not yet available, which is why they are not described here.


The minus is that it is not an open standard, and you may wait a long time for new functions or improvements. DirectX works only on Windows and Xbox. It is designed to work mainly in one window, although it supports more than one. The main advantage is that NVIDIA 3D Vision works on GeForce GPUs heuristically; this enables stereo 3D at the cost of slowing down the application on low-end graphics cards. DirectX does not support hardware stereo 3D and there are not many scientific libraries for it. This is the main reason it is rather not used in professional 3D applications such as those used in a CAVE [24].

5.2.1.1.2. OpenGL

OpenGL, developed by Khronos, is an open standard for 3D graphics. Because it is open, has many additional libraries, supports hardware stereo 3D, can work with multiple displays and is available on different systems (Windows, Linux, Mac and UNIX), it is most often chosen for advanced 3D applications. The drawback is that not every graphics card supports all extensions of the library, as is the case with DirectX, so applications which use certain extensions may not work on all computers. These compatibility issues contributed to its frequent replacement by DirectX in games. The situation is different on mobile devices, where OpenGL ES is the standard: only the newest Windows Phone supports DirectX, while most mobile devices, based on Android and iOS, support OpenGL ES. Almost all the frameworks described below are based on OpenGL.

5.2.1.2. Scene graph engines

Scene graph engines provide the means to create and manage the whole scene displayed in a 3D virtual simulation. They are usually general-purpose and easy to integrate into any application. Using them, we can manage the virtual world: add and remove objects, transform them, generate the scene in many threads in a cluster environment and display it on many devices such as monitors, HMDs or projectors.

A scene graph represents the logical connections between elements of the scene and is used for performance management and rendering. Most often the scene is represented by a hierarchical graph consisting of one main root node and child nodes, where each node may contain further nodes. In advanced systems a node may have several parents, which turns the structure into a directed acyclic graph (DAG). By default, each operation performed on a parent is also performed on all of its children.

Scene graph systems are often described as retained or deferred rendering systems. This means that they do not just submit content for rendering but keep it in a buffer, which allows additional transformations and optimizations, e.g. multi-threading, to be applied just before rendering. These systems are usually object-oriented, which makes it possible to extend their functionality by implementing modules and plug-ins and provides an easy way to scale the system.

OpenSG and OpenSceneGraph are open-source solutions that are often used for creating VR and CAVE systems. NVIDIA has its own scene-graph framework named SceniX, which is very powerful and provides a real-time ray tracer. SceniX is optimized for NVIDIA graphics cards and its source code is not available. The problem with SceniX is that it is not prepared for use in a CAVE out of the box and there are currently no libraries that integrate SceniX with CAVE solutions, so the only way to use SceniX in a CAVE is to write one's own module for such environments.

5.2.1.2.1. OpenGL Performer

OpenGL Performer is one of the first systems for scene-graph management. It was created by SGI and was available only for SGI graphics stations with the IRIX operating system. SGI's main focus was hardware, not software, and OpenGL Performer did not share its source code, so in the meantime other open-source systems arose, e.g. OpenSG, where everyone may add their own modules. For these reasons OpenGL Performer disappeared from the market and is currently outdated [25].

5.2.1.2.2. OpenSG

OpenSG is an open-source scene-graph management system for creating 3D real-time virtual reality applications. It is available for Windows, Linux, Solaris and MacOS [26] and extends OpenGL. The system has been developed over many years: the first version was published in 2001 and work on the second version began in 2007. At sourceforge.net we can observe that the last packaged version was published in March 2013 and has been downloaded only once since then, while in the git3 repository changes are added almost every day.

Among its top advantages are cluster and multi-thread support provided in a rather easy way at the framework level. The ability to render graphics over several computers and graphics cards also undoubtedly belongs to the advantages of this solution. Thanks to the open and available code it is still being extended. OpenSG is not an application; it is just a library that we can use in our own applications. The framework may be used with VR Juggler and OpenTracker, which makes it easier to prepare applications for running in CAVE installations.

The biggest improvement in OpenSG 2 compared to 1.8 is the new architecture, which now relies on shaders. Additionally, programming is simplified because some thread synchronization happens automatically in the new version. The handling of pointers has been improved by introducing new pointer types, the properties of geometry have been changed, and many internal implementations have been improved, rebuilt or rewritten. The new version adds support for NVIDIA CUDA, Cg, EXR, NURBS, VTK and COLLADA. All these changes make it worth using the newer version of OpenSG; most importantly, OpenSG 2 is faster than the previous one.

The documentation for version 1.8 contains the roughly 200-page OpenSG Starter Guide, which describes all the important topics related to the library. In addition, the API of all classes and the division of the framework into modules are documented, and some books about OpenSG are available on the market. Unfortunately, the documentation for version 2 is somewhat neglected and much of it is simply copied from the documentation of the first version.

3Address: git://git.code.sf.net/p/opensg/code.


Most of the sample applications for OpenSG 2 were simply carried over from the previous version; no more advanced examples are provided with OpenSG 2. Therefore I attached presentations of example programs for both OpenSG 1.8 and 2. Originally OpenSG 1.8 provides example applications for Visual Studio 2005: 22 examples are available to download and seven additional ones come with the OpenSG source code. I converted each example to Visual Studio 2012 and included them on the attached DVD. OpenSG 2, on the other hand, provides libraries compiled for Visual Studio 2010, for both the framework and the supporting libraries. The first full compilation on my computer took about 6 hours. The OpenSG project is managed through CMake. For OpenSG 1.8 the compiled library size is 25 MB of lib and 15 MB of dll files; for the second version it is respectively 20 MB and 120 MB (plus some extensions that take an extra few megabytes). The dependent libraries for 1.8 take about 30 MB of lib and 5 MB of dll files; in contrast, for the second version they weigh 600 MB for lib and 30 MB for dll files.

5.2.1.2.3. OpenSceneGraph

OpenSceneGraph is one of the most frequently used scene management systems in the world. It is used among others by Boeing in flight simulators, by NASA in the Earth Simulator, by the FlightGear flight simulator, and by others such as Sony or ESA in their projects. In spite of its advanced features it is fairly simple to use. The first version of OpenSceneGraph was created in 1998 by Don Burns, who had previously worked for SGI on their scene graph OpenGL Performer. In the meantime he created a scene-graph solution named SG, which was the prototype for OSG. In 1999 the project was officially named OpenSceneGraph [27].

The entire framework is based on several primary and optional libraries. In addition, dynamic plug-ins in the form of dll files are loaded on demand when necessary, which makes writing applications simpler.

The framework has a modular structure. The basic modules include scene operation management, graph building, a math library containing implementations of vectors and matrices, object-oriented multi-threading management, mechanisms for managing and streaming 2D and 3D files, components for dynamically loading the graph to handle large scenes, and mechanisms to traverse the graph, modify its elements and issue OpenGL instructions.

Additional modules allow creating animations (including skeletal animation and morphing based on key frames and channels), special 3D effects, a multi-platform GUI system with device support, mechanisms for manipulating objects in space (rotation, scale and translation), a particle system for rendering explosions, fire, smoke, etc., libraries that add shadows, a terrain generation system based on height maps, vector text rendering in 2D and 3D based on FreeType fonts, integration with the windowing systems of Win32, X11, MacOS and others, volume generation, and integration with the Qt library, which allows for example rendering Qt components in space (such as a web browser).

For tests I used the latest version, OpenSceneGraph 3.3.1, a developer release published on 29 January 2014. A new version is released every few months; the previous stable version, 3.2.0, was released about half a year earlier. On this basis it is easy to conclude that the framework is still being developed. Supporting libraries are provided for Visual Studio versions from VS 2005 to VS 2013, as well as for Linux, Mac OS X and Android. At the time of writing this work, compiled binaries were not available.

Compared to OpenSG, this framework is managed better in terms of releasing versions to users: updates are frequent, the website is much better designed, and all the code is hosted on the project's own servers. Preparing and setting up the library using CMake, in contrast to OpenSG, went smoothly, and the compilation itself did not encounter additional problems either. Altogether it looks like a more solid release than OpenSG.

All the provided documentation is based on several books. On the OSG website we will not find enough information to learn how to use the library; for this reason we can say that we are forced to buy the books. We can choose from a few titles, e.g. the “Beginner's Guide” and the later “Cookbook”. They are prepared for learning the framework from the beginning and are therefore written in a clear and well-arranged manner; they fully compensate for the lack of documentation on the website. The books also describe how to configure and build the library and how to prepare projects in CMake for Visual Studio. You should start by reading them, then analyze the accompanying examples and only then start creating new solutions. For the purposes of this work I created all of the sample applications that are described in the books. A quite large number of sample applications is also provided together with the library; they show a wide range of the available functionality and are much more advanced than the samples provided with OpenSG.

By pressing the ‘s’ key we can turn statistics on and off and switch between their various modes. They show the number of frames per second, the load of the threads involved in rendering the scene, and information about the complexity of the scene, including the number of its elements, nodes, vertices or even object instances.

Before compilation we should add at least the following environment variables:
OSG_ROOT - pointing to the root directory of OSG,
OSG_NOTIFY_LEVEL=NOTICE - setting the level of debug messages for OSG,
OSG_FILE_PATH - pointing to the attached files containing resources for the sample applications.
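On a Unix-like shell the setup could look as follows (the paths are placeholders for a local installation, not prescribed values; on Windows the same variables would be set via the system properties dialog or setx):

```shell
# Placeholder paths for a local OSG checkout -- adjust to your machine.
export OSG_ROOT="$HOME/OpenSceneGraph"                 # OSG root directory
export OSG_NOTIFY_LEVEL="NOTICE"                       # debug message level
export OSG_FILE_PATH="$OSG_ROOT/OpenSceneGraph-Data"   # sample resources
echo "OSG notify level: $OSG_NOTIFY_LEVEL"
```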

5.2.1.2.4. NVIDIA SceniX - NVSG

The scene management implementation from NVIDIA is largely dedicated to their own solutions, and it is at its strongest when it squeezes the last bit of power out of NVIDIA graphics cards, using the advanced capabilities of NVIDIA Quadro cards [28]. A strong element of the framework is its cooperation with a range of advanced NVIDIA libraries for scene rendering, the ray-tracing module, batch processing and shader-level scripting of the graphics card. The strength of this framework is indicated by the fact that it is used in systems such as Autodesk Showcase [29], which allows photo-realistic visualization of and interaction with scenes prepared in AutoCAD or Autodesk Inventor, or in DeltaGen by Realtime Technology AG (RTT), which is used for visualization of the highest quality, mainly of cars.

Unlike competing solutions, this framework has an enhanced shader layer which is characterized by remarkable speed of operation and quality of the generated image. Shaders are built on the basis of the CgFX language [30]. It can also use an interactive ray tracer based on OptiX, together with RTFx (the Ray Tracing Effect interchange Format).

The framework is available only for Windows and Linux, in 32- and 64-bit versions, without source code. Pre-compiled libraries are available for SceniX 7.3 from August 2012, for use with Visual Studio 2008 and 2010. Based on the history of updates we can see that the framework is updated about once every 1.5 years (and the last available version comes from two years ago). We should prepare about 2 GB of free disk space. To use the library in VS 2010 you need to install the additional packages "Visual Studio 2010 redistributables"4 and "Service Pack 1"5 (otherwise you will not be able to properly set up the CMake project for VS 2010). There are some known issues under Linux, which manifest themselves in that some operations may result in errors. On 64-bit Windows, in turn, textures in TIFF format cannot be loaded (which should not be a problem, because we can load textures in other formats). To compile the examples using CMake, the Qt and wxWidgets frameworks must be prepared.

To compile wxWidgets 2.8.12 locally it is necessary to comment out the following include in the windows.cpp file:

#if !defined __WXWINCE__ && !defined NEED_PBT_H
// #include <pbt.h>
#endif

and add the value "_ALLOW_KEYWORD_MACROS" to the preprocessor definitions.

Fig. 5.1. NVIDIA SceniX viewer

4You can download it from: http://www.microsoft.com/download/en/details.aspx?id=5555.

5You can download it from: http://www.microsoft.com/en-us/download/confirmation.aspx?id=23691.


The Viewer is a complete application based on the Qt framework, with source code available. It allows you to view scenes, 3D graphic objects and components.

5.2.1.2.5. Summary

Scene graph engines help in developing CAVE applications. Over the years there have been significant changes in the architecture of graphics cards, which forced serious changes in such frameworks. Because of that, the most important are the modern frameworks which can exploit the full power of existing graphics cards. At the moment the most valuable are OpenSG and OpenSceneGraph, which are open source, and NVIDIA SceniX. Below is a comparison table in which I also included the ViSTA framework, because it contains a scene graph engine and is described further in this work.

Table 5.1. Scene graphs comparison

| Feature | OpenSG 1.8 | OpenSG 2 | OpenSceneGraph | ViSTA | SceniX |
|---|---|---|---|---|---|
| Scene graph | x | x | x | x6 | x |
| Realtime graphics | x | x | x | x | x |
| Open Source | x | x | x | x | - |
| Licence | LGPL | LGPL | OSGPL | LGPL | Own7 |
| Based on | OpenGL | OpenGL | OpenGL/OpenGL ES | OpenSG | OpenGL/DirectX |
| Supported platforms | Windows, Linux, MacOS X, Solaris | Windows, Linux, MacOS X, Solaris | Windows, Linux, Mac OSX, FreeBSD, Solaris, Android | Windows, Linux, MacOS X | Windows, Linux |
| Extensibility | x | x | x | x | x |
| Multithreading | x | x | x | x | x |
| Clustering | x | x | x | x | x |
| Creating simple geometry | x | x | x | x | x |
| Mouse and keyboard events | x | x | x | x | x |
| Sample applications and tutorials | x | x | x | x | x |
| Documentation and books | x | x | x | - | x |
| API documentation | x | x | x | x | x |
| Direct OpenGL drawing - glBegin() | x | x | x | x | x |
| Materials | x | x | x | x | x |
| Load scene files8 | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | VRML97, OBJ, dxf, raw, stl, 3ds, dae, OFF, BIN, COLLADA | .3dc, .3ds, .obj, .ac3d, .bsp, .dae, .sw, .dxf, .fbx, .geo, Inventor, .ive, .logo, .lwo, .lws, .md2, .ogr, OpenFlight, .osg, .pfb, .shp, .stl, .dds, VRML, .x | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | COLLADA, COLLADA FX, VRML2.0/WRL, OpenFlight, OBJ, 3DS, PLY |
| Picking objects | x | x | x | x | x |
| Lights | x | x | x | x | x |
| Cameras | x | x | x | x | x |
| GLSL Shader | x9 | x | x | x10 | - |
| Stereo 3D | x | x | x | x | x |
| OpenGL extensions | x | x | x | x | x |
| Scene statistics | x | x | x | x | x |
| Shadows | x | x | x | x | x |
| NURBS | - | x11 | x | - | x |
| OpenEXR12 | - | x | x | - | x |
| Cg | - | x | x | - | x |
| CgFX | - | x | ? | - | x |
| NVIDIA CUDA | - | x | x | - | x |
| LOD | x | x | x | x | x |
| Viewports | x | x | x | x | x |
| Cube map | x | x | x | x | x |
| Graph traverse | x | x | x | x | x |
| VTK | - | x | x | x | - |
| COLLADA | - | x | x | - | x |
| CMake | x | x | x | x | - |
| VS libraries | to compile | to compile | to compile | to compile | 2008 or 2010 |
| GUI Toolkit | GLUT, Qt, wxWidget, Win32 | GLUT, Qt, wxWidget, Win32 | GLUT, Qt, wxWidget, Win32 | GLUT | GLUT, Qt, wxWidget, Win32 |
| NVIDIA OptiX | - | - | - | - | x |
| RTFx | - | - | - | - | x |
| RT raytracer | - | - | - | - | x |
| Ambient Occlusion | - | - | - | - | x |
| Mobile | - | - | Android/OpenGL ES | - | - |
| Lib size | 25 MB | 20 MB | 8 MB | 4,5 MB | 12 MB |
| Dll size | 15 MB | 120 MB | 44 MB + 780 MB | 26 MB + 5,5 MB | 16 MB |
| Support lib size | 30 MB | 600 MB | 1,6 GB | - | |
| Support dll size | 5 MB | 30 MB | 64 MB | 32 MB | |

6ViSTA is based on OpenSG 1.8 (there is ongoing work on an implementation of OpenSceneGraph).

7You can read the license during installation.

8In each framework further file formats are supported through custom plug-ins.

9GLSL is available through the ShaderChunk object, which is experimental.

10Shader used as material (an extension of OpenSG) or for particle system generation.

11Through the OpenNURBS library.

12OpenEXR is a high-dynamic-range (HDR) image file format.

As we can see in the comparison table, the functionality of the selected scene graph engines is very similar; the base functionality is almost the same for all of them. The main difference is between NVIDIA SceniX and the others: SceniX provides no source code but is very powerful, it is specialized for NVIDIA graphics cards, it is the only one that works with both DirectX and OpenGL, and it has a real-time ray-tracing engine, which makes it the most advanced scene graph engine. OpenSceneGraph (OSG) is the only one which supports mobile platforms; it contains the largest number of additional modules and natively supports shaders, which also makes it a good choice. Then we have OpenSG, which looks like a slightly forgotten framework and at the moment is not as functional as OpenSceneGraph. Finally, there is the ViSTA framework, which is built on old foundations (OpenSG 1.8), which makes it somewhat dated at the moment.

5.2.1.3. Frameworks for CAVE solutions

In this chapter I will describe frameworks that extend the capabilities of scene management engines. This extension concerns above all the ability to render the image on multiple computers and multiple GPUs in several instances. In addition, these systems synchronize the user's camera with the head tracking system, using mechanisms that detect the position of the head in order to render the image properly. This also concerns the management of various manipulators, so that each server receives consistent information about their properties.

Using the presented solutions, we can write an application with distributed rendering across rendering units (separate computers forming a cluster) in which the output image, the input devices and the events are all synchronized.

These frameworks provide advanced mechanisms for network connections and the serialization of objects. Sometimes a given object needs to be readable by all rendering units (e.g. one containing initialization data), and sometimes each rendering unit should keep its own state, not shared with other machines (e.g. storing information about the configuration of its camera).

5.2.1.3.1. ViSTA

VIRTUAL REALITY for SCIENTIFIC TECHNICAL APPLICATIONS (ViSTA) is a framework created by the Virtual Reality Group at RWTH Aachen University in Germany, which has deployed several applications in its CAVE. The framework is available as an open-source project. The solution has been developed for about 15 years; during this time several generations of graphics card architectures have passed and the framework itself has changed strongly. Initially it was available on supercomputers running SGI IRIX, HP-UX and Sun Solaris, and now it is available for Windows, Linux and Mac. At the moment, at least when it comes to the latest version of the framework, hardly anyone outside RWTH Aachen University has used this solution [31].

The biggest advantages of the framework are its integration with various existing libraries, which broadens its area of application, and the fact that it fully supports CAVE systems: displaying stereoscopic 3D images combined from multiple projectors (image blending), tracking and adjusting to the position of the user's head (head tracking), and supporting computations on multiple cluster nodes and multiple input-output devices. All these features give us the basis for creating a dedicated application for the CAVE.

Main features of the ViSTA framework:
- scene management,
- support for input and output devices (e.g. manipulators, tracking cameras and haptic devices),
- based on OpenSG 1.8 (in the future OpenSceneGraph will be supported as well),
- support for cluster computing (VistaDataFlow),
- support for multiple screens (including video monitors and stereo 3D),
- tools for managing threads, links, files, network, etc.,
- the ability to write and add one's own drivers for input and output devices,
- integration with many available open-source libraries,
- its own mechanisms for handling the keyboard (mainly via events),
- creation of basic 3D geometric solids,
- import of 3D objects and scenes created in other applications,
- coloring and texturing of objects,
- support for lighting and its management,
- text display both in 3D space and on the GUI layer,
- interactivity of created objects (e.g. you can select an object and move it to another location),
- creation and management of cameras (setting their parameters, location, etc.),
- overlay layers containing other scenes, both 2D and 3D, rendered in real time,
- events raised on phenomena in the application (for example, an event is generated after obtaining the position of a given object),
- communication with other applications in C/C++,
- debugging tools that display information both on the console and on the scene.

It integrates with the following systems:
- OpenSG - allows you to manage and display a 3D scene in real time,
- OpenSG Ext - extensions of OpenSG (e.g. particle system or fog),
- VTK (The Visualization Toolkit) - adds a lot of functions for working with graphics,
- OpenGL - enables native OpenGL command execution within a node,
- Python - allows you to write dynamic scripts.

Initially, the biggest problem in starting with the framework is the total lack of any documentation. There are only comments in the source code, API documentation generated from the classes, and several very simple sample applications that show the basic capabilities of the framework. Knowledge of OpenSG 1.8 helps a lot, because much of the framework's functionality expands and uses its mechanisms.

Configuration is based on text files, and a running application can detect changes in them. This configuration allows you to easily move the application between different environments, e.g. between a developer station consisting of two monitors and a CAVE-like system; you only need to specify how many walls the projection system is composed of. Here you can also configure the network addresses for communication between the computers in the cluster and the input-output devices. This separates the application itself from its configuration, which depends on where the application is to be launched. A configuration can consist of multiple files, so it is possible to prepare so-called configuration modules and plug them in to streamline the configuration.
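The idea of such modular, environment-specific configuration files can be sketched as follows (the section and key names are invented for illustration and do not reproduce ViSTA's actual configuration schema):

```ini
; Which display setup to use -- switched without touching the application.
[SYSTEM]
DISPLAYSYSTEM = CAVE_6WALL

[CAVE_6WALL]
WALLS   = front, back, left, right, floor, ceiling
STEREO  = true
NODES   = node1, node2, node3, node4, node5, node6   ; cluster addresses

[DEV_2MONITORS]
WALLS   = monitor_left, monitor_right
STEREO  = false
```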

A key element of the framework is the scene management system, which is based on OpenSG 1.8. The OpenSG system is described in the chapter devoted to it; this is the main mechanism responsible for displaying the scene in real time. OpenSG directly sends data via OpenGL to the GPU, which then renders the data into an image. OpenSG 1.8 was completed in 2007, which greatly limits the possibilities of the internals of the ViSTA framework. Hope lies in the ongoing work on replacing the old OpenSG 1.8 with the competing solution OpenSceneGraph. At this point I just want to point out that ViSTA is currently not able to fully exploit the potential of the latest computers.

On the official website there is information that ViSTA has additional libraries (VistaAddonLibs) that add further functionality, offering among others physics and collision detection, soft-body simulation and sound support. However, these libraries are not available for download. Without them, such functionality has to be implemented on one's own, or other existing libraries can be used through their independent integration.

For the purposes of this document I described how to build the ViSTA framework, the supporting libraries and the sample applications. I attached the workspace containing all the projects and source codes as well as compiled versions of the applications. I also created a mini-framework called "FirstTry" on top of ViSTA to make it easier to create new applications in this technology (located on the accompanying CD in the catalog "workspace\myvista\CAVE_PG_VS2012\FirstTry"). The mini-framework consists of several modules: communication for interfacing with external applications, keyboard controller support, and rotation and transformation of objects, allowing for interaction with them. The scene stage manager allows you to add further objects to the scene, and the text module allows you to add text both in 2D and 3D. The main file of the framework is Application.cpp, which sets up and initializes the initial stage of the application. In this way I prepared a solution that divides ViSTA into functional modules, so you can quickly begin to create a new scene with it.

5.2.1.3.2. VR Juggler

VR Juggler is one of the first libraries specialized for implementing CAVE applications. It is a scalable system which supports complex multi-screen systems running on clusters. The flexibility of VR Juggler allows applications to run in many VR system configurations, including desktop VR, HMDs, and CAVE-like and powerwall-like devices. VR Juggler supports IRIX, Linux, Windows, FreeBSD, Solaris and Mac OS X. The library contains Gadgeteer, a plug-in system supporting local and remote devices. Configuration is based on .xml files. It can work standalone as a scene graph based on OpenGL, or it can cooperate with existing scene graph engines like OpenGL Performer, OpenSG and OpenSceneGraph. This sounds good, but unfortunately it does not work with the newest versions of these engines and it cannot be compiled in 64-bit mode. The solution is simple to implement and to configure for work in a CAVE, but it is outdated [32].

5.2.1.3.3. Equalizer

Equalizer is a framework that allows parallelization of OpenGL-based applications [33]. Thanks to it we can benefit from multiple graphics cards, processors and even computers to improve the efficiency and quality of a running application. Applications based on this framework can be run without modification both on a single computer and on virtual reality systems consisting of a number of computers. It is a proven solution: many open-source applications and commercial products are based on it, including well-known applications such as RTT DeltaGen or the 3D video player Bino. It is available for Windows, Linux and Mac. The solution is based on GLUT. At the moment the creators are working on adding an administrative library which will allow adding and configuring new windows and changing their templates from a separate application.

There is also the Sequel project, which simplifies the process of creating applications with Equalizer by introducing a mechanism of modules. Sequel can reduce the amount of written code by a ratio of 1 to 10. It is recommended to start with Sequel and then move on to Equalizer for more advanced projects.

The main capabilities of the framework include distributed rendering based on clusters, stereo 3D support, head tracking, support for virtual HMD helmets, synchronized display on multiple screens, software edge blending, automatic configuration as well as configuration based on ASCII files, compression of the image sent over the network, a load-balancing mechanism for the rendering units and, importantly for the I3DVL project, support for InfiniBand networks and G-Sync hardware image synchronization (using the "NV group" and "NV barrier" barriers).

Supported modes of parallel image rendering:


- 2D (SFR - Sort-First Compounds) - each rendering unit renders a portion of the target image, and the portions are displayed in a single window. This mode is used, for example, when 4 computers each render a quarter of the screen and the whole image is then joined side by side, which gives us a full-screen display,
- DB (SLC - Sort-Last Compounds) - each unit renders part of the scene in parallel, and the parts are then assembled into the whole image. In this mode there may be problems with anti-aliasing, transparency and shadows,
- Stereo Compounds - the image for each eye is assigned to an independent rendering unit, and the resulting images are copied into the stereo buffer. This mode supports virtually every available stereo 3D mode, among others active stereo (quad-buffer) and anaglyph stereo displays,
- DPlex Compounds (AFR or Time-Multiplex) - different frames are assigned to different rendering units, and the displayed image stream is reproduced from them. This method increases the number of frames displayed per second,
- Tile Compounds - similar to the previously described 2D mode, with the difference that each rendering unit renders a few tiles from which the complete picture is created. Queued rendering of the tiles provides load balancing,
- Pixel Compounds - the image is split so that each unit renders a different part of the pixels,
- Subpixel Compounds - separate samples are assigned to the rendering units to create effects such as anti-aliasing or depth of field, in order to speed up rendering of the desired effect.

For the 2D and DB Compounds modes we can take advantage of the "Load Equalizer", which, based on the actual resource utilization of the rendering units, adjusts the distribution of the image data among them to enhance the rendering performance of the whole image. The "View Equalizer", in turn, uses "Cross-Segment Load-Balancing" to adjust the division of rendering work at the GPU level to achieve high performance; this option is recommended for CAVE-like systems, in order to pass free GPU resources to where they are missing. An interesting option is the "DFT Equalizer" (Dynamic Frame Transform), which, in the case of an overload and too few FPS, renders the image at a lower resolution and then rescales it to the actual display resolution, which helps to improve performance at the cost of picture quality. When the system is idle, or when the computing resources suffice, the image is generated at full resolution. The "Monitor Equalizer" allows us to scale and display the picture of a multi-screen display system on the monitor of an ordinary computer.

The architecture of the solution is based on a client-server model and uses the Collage project to build distributed applications. Each client is controlled by the server. Both the client and the server can be the same application (file). The server can be responsible only for the application logic (the so-called "master"), or it can participate in rendering the 3D image just as a client does.


For several years no binary libraries have been supplied for Windows, so the framework has to be compiled from source code. To compile the Equalizer source code you can either use the Buildyard package, which contains the entire framework with all dependencies, or do it manually, project by project, starting with: vmmlib (a set of mathematical operations on vectors and matrices), Lunchbox (an abstraction layer over operating-system level functionality, such as the processor clock), and Collage (a library managing connections at the network level); then you can compile Equalizer itself.

Additional modules include Hardware Service Discovery (hwsd), which allows automatic detection and configuration of both the network and the GPUs of the rendering machines.

A convenient feature when working with the framework is that multiple clients (individual renderers) can be run on a single computer for development tests. For performance reasons, however, it is recommended that each client run on a separate computer. Applications can be started centrally from the server using the ssh protocol (in that case the application must reside in exactly the same folder on each client and on the server), or they can be started manually on the clients and then invoked from the server.

The library is divided into logical modules: "eq::Node" represents a physical computer, "eq::Pipe" represents a GPU, and "eq::Window" is a window in which the image from a single computer is displayed; a window can be divided into separate parts, the channels ("eq::Channel"), which can split one image and send it to multiple projectors. The class "eq::Canvas" configures the displayed image on an arbitrary surface, including a CAVE. For flat display surfaces such as a powerwall, one frustum must be configured for all screens, while in systems whose screens do not form a single frustum it should be set up for each screen separately. A properly configured frustum must be the same one the application uses when calculating the transformation matrix for the head-tracking system. Each canvas is composed of segments, each of which represents the image projected onto one screen; a segment should be assigned to each screen or projector. Segments can overlap, for projectors with an image-blending option (edge blend), and may have gaps, for use in so-called display walls. The frustum of a segment is configured through its viewport.
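In the configuration file this resource hierarchy maps onto nested sections. The fragment below is an illustrative sketch of a single CAVE front wall; the names, dimensions and exact nesting are assumptions to be checked against the Equalizer documentation.

```
config
{
    appNode                       # eq::Node - a physical computer
    {
        pipe                      # eq::Pipe - one GPU
        {
            window                # eq::Window - the on-screen drawable
            {
                viewport [ 0 0 1280 800 ]
                channel { name "channel-front" }   # eq::Channel
            }
        }
    }
    observer {}                   # tracked viewer (head tracking)
    layout { name "cave" view { observer 0 } }
    canvas                        # eq::Canvas - the projection surface
    {
        layout "cave"
        segment                   # eq::Segment - one screen/projector
        {
            channel "channel-front"
            wall                  # physical screen corners in metres
            {
                bottom_left  [ -1.5 -1.5 -1.5 ]
                bottom_right [  1.5 -1.5 -1.5 ]
                top_left     [ -1.5  1.5 -1.5 ]
            }
        }
    }
}
```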

For a passive stereo 3D installation we must configure a segment ("eq::Segment") for each eye; the two channels (left and right) should be assigned to the same viewport. For active stereo 3D display a framelock mechanism is used, implemented in software or with hardware barriers. Only hardware barriers (e.g. those in G-Sync) guarantee proper and correct synchronization of the image at the right time.
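The off-axis frustum and the per-eye offsets described above can be derived from the tracked eye position and the physical screen corners. The following sketch, in plain Python, follows the well-known generalized perspective projection formulation (Kooima); all names are illustrative, and the stereo helper assumes the viewer faces the screen.

```python
import math

def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))
def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return [c / n for c in v]

def frustum_bounds(pa, pb, pc, pe, near):
    """Off-axis frustum (left, right, bottom, top) at the near plane for a
    screen with corners pa (lower-left), pb (lower-right), pc (upper-left),
    seen from the eye position pe."""
    vr = _normalize(_sub(pb, pa))     # screen right axis
    vu = _normalize(_sub(pc, pa))     # screen up axis
    vn = _normalize(_cross(vr, vu))   # screen normal, towards the eye
    va, vb, vc = _sub(pa, pe), _sub(pb, pe), _sub(pc, pe)
    d = -_dot(va, vn)                 # eye-to-screen distance
    return (_dot(vr, va) * near / d, _dot(vr, vb) * near / d,
            _dot(vu, va) * near / d, _dot(vu, vc) * near / d)

def eye_positions(head, vr, ipd=0.065):
    """Left/right eye positions for stereo: offset the tracked head position
    by half the inter-pupillary distance along the right axis (this sketch
    assumes the head faces the screen)."""
    h = ipd / 2.0
    left  = [head[i] - h * vr[i] for i in range(3)]
    right = [head[i] + h * vr[i] for i in range(3)]
    return left, right
```

For an eye centred in front of the wall the bounds come out symmetric; as the tracked head moves sideways they become asymmetric, which is exactly the skewed frustum a head-tracked CAVE wall segment needs.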


Fig. 5.2. Application osgScaleViewer integrates Equalizer with OpenSceneGraph

From my point of view, a very important element is the integration with OpenSceneGraph. Until 2010 a sample application, osgScaleViewer, was prepared and provided with Equalizer to demonstrate this integration. The project renders an OSG node by loading a 3D object, such as the cow shown in Fig. 5.2. The example extends the eqPly demo application and adds advanced management of distributed graphics rendering across multiple clusters. Moreover, part of the OSG functionality has been replaced on the Equalizer side, so one needs to learn the proper way of developing applications based on OSG and Equalizer together.

The source code shows that the framework is still being developed. The latest version of the code at the moment of writing this thesis was released in late 2013, and the latest version of the accompanying documentation, "Equalizer - Programming and User Guide", is dated July 2013.

5.2.1.3.4. Summary

As can be seen, there are not many frameworks dedicated to CAVE development. In the past, VR Juggler was well suited for it, but unfortunately it has not been developed for a long time and does not support modern scene graph engines. The situation is similar with the ViSTA framework, which was actively developed in the past but is currently outdated.

That leaves Equalizer, which is very advanced but difficult to use. It works with OpenSceneGraph but does not work with OpenSG. The integration with OSG was done by a group of students, and the resulting module is overgrown and very difficult to use, leaving much room for mistakes. Still, there are not many alternatives to choose from. We can also use just the cluster modules of OpenSG or OpenSceneGraph and implement our own CAVE support functionality.


Table 5.2. Comparison of CAVE frameworks integrations

Description                  | ViSTA  | VRJuggler | Equalizer
Distributed computing        | x      | x         | x
Static Distributed Object    | x      | x         | x
Versioned Distributed Object | x      | x         | x
Head Tracking                | x      | x         | x
CAVE support                 | x      | x         | x
File based configuration     | x      | x         | x
CAVE simulation mode         | -      | -         | x
OpenGL Performer support     | -      | x         | -
OpenSG support               | v. 1.8 | v. 1.8    | -
OpenSceneGraph support       | - (13) | v. 2      | v. 2 and 3
Advanced scalability         | -      | -         | x

Table 5.2 shows that the base functionality needed for CAVE solutions is provided by each library. The main differences lie in the support for modern scene graph engines and in the advanced functionality. Based on supported scene graph engines there is one winner, Equalizer. Equalizer contains a CAVE simulation mode which displays five windows on your desktop, giving an impression of how the resulting application will look. Equalizer also offers advanced scalability, which makes it possible to scale the application across different nodes by splitting the image.

5.2.1.4. Support libraries

As support libraries you can use physics, animation, scientific, graphics and other libraries. Here I want to focus on only two of them, which are used in some of the scene graphs: Cg is used in OpenSceneGraph and NVIDIA SceniX, while NVIDIA OptiX is used only in NVIDIA SceniX. Cg is well known and currently marked as deprecated, so I will write only a few sentences about it. OptiX, however, looks great and is still being developed. It is not yet a famous library, but it provides photo-realistic results almost in real time; that is the reason why it is noticed here.

5.2.1.4.1. Cg Toolkit

The Cg toolkit is an obsolete framework for writing applications that run on the GPU, for OpenGL and DirectX, on Windows, Mac OS X and Linux. It is no longer developed or supported by NVIDIA. The last version comes from April 2012; the toolkit had been developed since 2005. In its place NVIDIA recommends using GLSL shaders directly, HLSL, or the more recently developed nvFX [34], lua [35] or glfx [36].

5.2.1.4.2. NVIDIA OptiX

13 A future integration of ViSTA with OpenSceneGraph is planned.


NVIDIA OptiX is a dedicated solution for interactive programmable ray tracing, aimed at professional Quadro graphics cards and Tesla computing cards. It uses the GPU through the CUDA framework. This solution accelerates the rendering of such scenes from minutes, for software solutions, to milliseconds on hardware. It allows interactive verification and playing with light, reflections and shadows almost in real time. The use of ray tracing improves the quality of the generated image, bringing it even closer to realism, and enables simulations that operate with followed rays (ray tracing), such as the design of optical and acoustic radiation detection and collision analysis [37].

By using this library we can implement a very fast, professional-quality image rendering system in our own engine, or take advantage of NVIDIA SceniX, which already uses it.

The solution is available for Windows, Linux and Mac OS X, in both 32- and 64-bit versions. OptiX version 3.5 works with CUDA 5.5 and VS 2012. The included examples are based on Freeglut. CUDA is used to generate new rays, to calculate the intersections with surfaces, and to react to the intersections of rays. OptiX supports both OpenGL and DirectX.

OptiX is distributed without licensing fees; however, the source code is not provided. Release 3.0.1 was published in August 2013. Access to version 3.5 can now be obtained after registering on the site and a positive consideration of the request by NVIDIA; it seems that this version was made available in early March 2014. From version 3.5 on, OptiX can be used free of charge, but commercial applications require a commercial license to redistribute the library. All this shows that the library is constantly evolving.

OptiX has been used, among others, by the film studio Pixar Animation Studios (PIXAR) [38], owned by The Walt Disney Company, for advanced lighting management and for creating light effects in KATANA [39], originally created by Sony Pictures Imageworks [40]. KATANA is a tool used, among others, to produce the mascot in the movie Ted [41] and many scenes of such films as The Amazing Spider-Man [42], Oz the Great and Powerful [43] or After Earth [44], as well as many other AAA theatrical productions. KATANA is used by many film studios such as Sony Pictures Imageworks and PIXAR, and also by studios dealing with special effects for films, such as Digital Domain [45] and Industrial Light and Magic [46]. These examples confirm the quality and high class of the NVIDIA OptiX ray tracer.

5.2.2. Graphical editors

The easiest way of creating 3D applications is to use a graphical editor. This is especially true for developing CAVE applications. An editor helps with positioning objects in the scene and makes it possible to create whole applications through a GUI. You can also add and create your own scripts for an application, and you can use a node editor, a graphical programming language which makes the work even easier. When you finish working on an application, you can usually save it into a single file, which makes the whole development and distribution process much easier. The drawback is that when you need some advanced specialized tool in the editor, or you need to make some changes at a low level, you are not able to, or you have to use the provided SDK.


5.2.2.1. Create own editor with GUI

If you like to have control over everything, and have good knowledge and enough time, you can create your own editor with a GUI. It takes time, but it lets you do whatever you want. You can write everything from scratch or use existing scene graph engines. This gives a lot of freedom in creating your own visualizations.

5.2.2.2. GUI libraries

There are many GUI frameworks, but I want to focus on just one of them. The Qt framework is multi-platform and lets you design the GUI independently of the application logic. It is very rich in available controls and contains many multi-platform system libraries which simplify the programming process. Finally, it is natively integrated with OpenGL, which makes it a good choice for designing the GUI of your own editor.

Qt

Qt is a cross-platform library and UI framework for developers using C++ or QML, a CSS- and JavaScript-like language. Qt has been developed over 20 years and is used by over 500,000 developers worldwide. It comes with Qt Creator, an integrated development environment (IDE) that provides tools to design and develop applications with the Qt application framework. Qt is designed for developing applications and user interfaces once and deploying them to several desktop and mobile operating systems, such as Android and iOS. It is available for Linux, Mac OS X and Windows. Qt Creator provides tools for the whole application development life-cycle, from creating a project to deploying the application to the target platforms. Qt is open source and may be used under a few free and commercial licence types [47].

The framework provides QtQuick 2 widgets, based on an OpenGL ES scene graph implementation, which support the creation of graphic effects and efficient user interfaces. QML and JavaScript are mainly used for UI creation; the back-end is driven by C++.

Qt is not just for UI. It contains several libraries for creating console applications and operating on strings; it has containers that can be used in a Java-like style (e.g. with iterators), file I/O operations including parsing of XML and CSV, network libraries, smart pointers, threading, and other multi-platform libraries which can be used to develop new applications.

5.2.2.3. Existing graphic editors

Existing graphic editors can be split into a few categories: configurable simulators, where we can set up objects and missions; game editors, which specialize in creating games; and studios, which specialize in creating virtual worlds and simulations. There are solutions available for desktop computers and solutions specialized for CAVE installations. Sometimes tricks or plug-ins are available for an editor which enable running the created application in a CAVE.


5.2.2.3.1. Simulators

Simulators, as the name suggests, provide configurable training missions. Usually you do not have options to create or import new objects, but you get a database of objects which you can choose from and configure. This helps to create and prepare a mission quickly and train on it. New kinds of missions, objects or functions are usually provided by the supplier of the application.

DNV

Fig. 5.3. Training room for a bigger training group

DNV offers a wide spectrum of trainings for professionals from the maritime, mining and energy branches, as well as in management systems. The company provides a wide range of software solutions helping to manage risk, supporting transparency and sustainability, and enabling compliance with regulatory requirements and industry standards [48].


Fig. 5.4. Training room for individual sessions for trainers

In 2010, after a few years of hard work, research and development, DNV opened DNV Academy Gdynia, a virtual-reality training centre for the maritime branch. The training centre is equipped with professional devices which make it possible to train in stereo 3D and is supported with rear projection, which enhances immersion and gives full comfort of interaction in virtual reality.

Fig. 5.5. Training in action

DNV designed and developed SurveySimulator, SuSi for short, the first interactive 3D inspection simulator, intended for advanced and accelerated training of DNV inspectors and their customers. The software is meant for everyday use by professionals who should learn everything about ships. Inspections and technical overviews require a long and deep training process which may take even a few years. Practical training on ships depends on their type or lifetime phase. In addition, experienced practitioners lack time for teaching others. Thanks to the simulator it is possible to shorten the training time from five years to even one year, which is a great result.

Fig. 5.6. Small training room

Fig. 5.7. One of the onboard visualisations used in trainings


Fig. 5.8. SurveySimulator offers high, true-to-life graphics quality

The simulator contains four detailed 3D models of vessels, available for realistic survey simulations. Every vessel in SurveySimulator is a reflection of a real ship or structure.

Fig. 5.9. Training scene


Fig. 5.10. Learn marker

SurveySimulator offers four different training modes:

Ship knowledge mode - covers maritime naming conventions, part naming and certificate requirements,

Areas of attention mode - highlights areas where hull structural deficiencies are likely to occur,

Survey requirements mode - visualization of class and statutory survey requirements (based on DNV NPS),

Findings mode - display of built-in deficiencies and their descriptions.

Fig. 5.11. SurveySimulator provides visualizations of real models


Fig. 5.12. Using a PDA to learn about areas of attention

SurveySimulator is designed for tutor-guided training as well as for self-learning; users can benefit from the knowledge included in the software anywhere, anytime. A strong feature of SurveySimulator is its interactivity: realistic, virtual tools and machinery. It is constantly developed, upgraded and extended with new functionalities.

Fig. 5.13. The very first version of SurveySimulator


Fig. 5.14. Another image of the very first simulator

The first version of the simulator was created in OpenGL. The current one is based on the Unity engine, which speeds up the development process. The application contains high-quality, photo-realistic 3D graphics; in some places you feel just as if you were on the ship. SurveySimulator is very interactive: there are a lot of things which you can observe, get information about, change their attributes, photograph and document, mark with spray, touch and switch their states. It provides a realistic environment which is your sandbox to learn and train for the specific tasks you will perform in real life on a real ship.

Fig. 5.15. Using a PDA to learn about the environment


The training centre can train multiple persons at a time, each of whom can work independently. There are three training rooms with different specifications, providing different training cases for different audiences. One room is equipped with a big 3D stereo display with a professional Barco rear-projection system; for this installation the projector has an additional room with two mirrors to project the image. The second room contains a big display, a few metres wide, and seven computers, one of which is the control station for the trainer; the trainer can switch between all stations and use remote preview to help the trained persons. The third room is a presentation room. The software is also available as a standalone application, so you can use it anywhere to train by yourself.

5.2.2.3.2. CAVE supported and dedicated

CAVE-dedicated software is used for developing CAVE applications without the need to use any other framework or library to run such applications.

5.2.2.3.2.1. VBS - Virtual Battlespace

Virtual Battlespace (VBS) is a CAVE-supported, commercial, game-based simulation platform, specialized mainly for the military but also for police, fire-fighters and medical services, created by Bohemia Interactive Simulations [49]. It is used by many real armies in the world, e.g. the US Army with PEO STRI (US Army's Program Executive Office for Simulation, Training and Instrumentation) [50], the JMSC (Joint Multinational Simulation Center) at NATO in Rome [51], and OBRUM in Poland (Research and Development Centre for Mechanical Appliances OBRUM). It is used for tactical training and mission rehearsal, as was done by the Australian Defence Force in 2005 for the mission in Iraq [52]. VBS can be used with many realistic devices and platforms for training in the sense of teaching how to use military weapons and vehicles. Gdansk University of Technology received USB keys to use VBS 2 and 3.

The main advantage is the realistic simulation of many factors relevant to the military. There is a mission editor based on 3D terrain, which can also be displayed as a 2D map; real terrain textures can be used as the map background, and the coordinates are realistic. We can use a compass and a GPS coordinate system with a few different metrics to place our units. At any time we can switch between 2D, 3D and preview modes. To the map we can add units such as soldiers, civilians and animals, groups of units, waypoints which define the actions during the mission, and many more elements: hundreds of military and civilian vehicles, many different markers such as flags, objectives and targets which can be pinned to the map, useful triggers, and a lot of different objects such as trucks, cables, etc.

For each mission you can define a briefing and objectives. You can set up the weather with overcast, fog, rain, wind and snow, and you can define the date and time of the mission. You can add airstrikes, artillery strikes, mines, gun line fires, etc.

The editor is supplied with hundreds of military and civilian characters. There are soldiers from Afghanistan, the United States, Great Britain, Australia, New Zealand, Canada and the Czech Republic, as well as some Taliban and independent military groups, in many configurations. The civilians include men, women and children (including pregnant women), suicide bombers, dock workers, policemen, press agents, businessmen, etc. A few animals can be used as well. For each person we can define rank, speciality, combat behaviour and stance, and we can adjust health, ammunition, presence probability and many psychological and physical settings.

If we do not find an interesting unit, we can create a new one with the unit template editor. We can use hundreds of weapons and military objects: magazines with ammo, flares, ground markers, grenades, non-lethal shots and sponge rounds, many different cartridge rockets, hand-held missile systems, assault rifles and grenades, image intensifier sets, thermal imagers, night vision goggles, anti-structure weapons, different sniper rifles, batons, batteries, many pistols, laser markers, mines, binoculars, med kits, search mirrors, radios, alarm clocks, bombs, cell phones and much more military equipment. This gives impressive possibilities to simulate different kinds of wars, battles and operations.

There are hundreds of vehicles - military and civilian machines: bikes, motorcycles, quads, cars, jeeps, trucks, buses, tanks, helicopters, motorboats, ships, oil tankers, ferries, yachts and submarines, air planes, engineering machines such as bulldozers or forklifts, and stand-alone mortars, machine guns, radar stations, balloons, robots and beepers. A pack of vehicles from the Swedish Army is included as well. Each vehicle is playable and almost all of them have their own cockpit. Some of the vehicles have additional options, e.g. switching lights, opening ramps and doors, turning the engine on and off, etc.

In VBS, individual or collective trainings may be performed, conducted to improve trainees' performance and to attain a required level of knowledge or skill. Trainees are prepared for real-life challenges by being immersed in life-like virtual environments.

The solution includes a runtime environment, scenario editors and after-action review, which are key to desktop training. To further enhance the training experience, administrators may also use the development suite to import relevant terrain, or integrate VBS2 with other simulations via HLA or DIS. Additional modules can be attached to VBS, such as VBS2Fires, which provides high-fidelity call-for-fire simulation from a range of weapon platforms including mortars, fixed and towed artillery and MLRS. VBS2Strike provides realistic close air support for immersive air controller training. VBSWorlds is another module which provides an easy-to-use development environment for creating completely new training games using VBS content.

Generally, VBS is a multi-purpose simulator for training soldiers and emergency services, with map and mission editors. The second version has rather good graphics quality. It is easy to use, and the main advanced functions are provided for setting up units and missions which can simulate real-world actions.

5.2.2.3.2.2. Quazar3D

Quazar3D, created by the i3D company, is equipped with a graphical editor in which the entire scene can be set up in just a few mouse clicks. The main parts of the system are available in the editing window. Here you can see the entire scene, immediately navigate through it, and apply transformations to the existing objects. There is a tree with the entire contents of the scene, whose construction is consistent with the scene management described for the engine. We can see a list of all objects in the scene and their types, and select objects for further editing in other panels. There is a window allowing graphical configuration of the nodes of the scene ("Node Properties") and for changing materials and textures ("Materials Window"). The editor has an integrated file manager for all elements of the scene, a console with log filtering, and a script editor window ("Script Editor") [53].

Fig. 5.16. Quazar3D

Quazar3D is available as a time-unlimited demo version. In addition, a free viewer is provided which allows scenes to be viewed on a computer with the Quazar3D Viewer browser installed (a prepared scene can be exported to a single file, which can then be opened in the Quazar3D Viewer). Along with the application we get a fairly well-developed manual in the form of help within the application itself, the extensive "Master Tutorial", and a few tutorials that help to get started with the application.


Fig. 5.17. Quazar 3D Viewer

Graphical scene settings allow global configuration of the entire scene through the Simulation Settings window. Here you can configure the quality of the displayed scene and the manner of its rendering (OpenGL or DirectX), as well as enable and disable many graphical effects such as: DOF, HDR, occlusion, auxiliary buffer, refraction, motion blur, light scattering, glow, etc. After applying, these effects are immediately visible in the real-time editor.

Fig. 5.18. Flowgraph editor


Fig. 5.19. Example visual script for movement with the WASD keys

All the power of the editor lies in its own scripting language, similar to C++, and in Flowgraph, an interface for so-called visual programming. With their help we program the entire logic of the application. Flowgraph is linked with all elements of the scene, which can be used in the implementation of the logic. There is no need to recompile code, so different configurations can be tested quickly and the results of the work are immediately visible. There are a lot of elements and objects from which the logic can be built, and all of them are extendable and configurable. This is a very strong point of the editor: it allows people not involved with programming to create VR applications.

We can use the editor to prepare a complete VR application containing a 3D scene, elements of physics (with its own implementation and also based on NVIDIA PhysX), a GUI created using widgets, interaction with objects (e.g. opening/closing doors, controlling a vehicle, moving objects, etc.), music and sounds, and plug-ins expanding the functionality of the editor. The prepared scene is rendered using high-quality graphics in both OpenGL and DirectX.

The application has a complex and well-documented tutorial named "Quazar3D Master Tutorial". It covers the elements most frequently used during the creation of a VR scene. It is well prepared, contains many images that accurately show the configuration of the scene and, most importantly, shows how to create a complete scene from the beginning. All work with the editor is fairly intuitive; however, to take full advantage of the tools you should start with this tutorial and then study the accompanying documentation.

5.2.2.3.2.3. EON Studio

EON Studio was created by EON Reality and was a leading software package for the production of interactive 3D applications. It is available for Windows in 32- and 64-bit versions. In principle, it is an application that allows you to easily produce interactive 3D simulations without knowing a programming language [54]. It supports a large number of file formats. More than 100 template nodes are available, such as models, sensors, operations, demonstration rooms (showrooms), special effects, etc. The scene is based on a tree. Scripting is done in VBScript and JScript via the Script node. As the only application in its segment, it allows exporting and viewing 3D scenes in web browsers such as Internet Explorer or Mozilla Firefox, as well as in documents and presentations in Microsoft PowerPoint, Microsoft Word, Macromedia Director, Shockwave, Flash or Visual Basic. The solution contains a simple scene browser, EON Viewer.

Fig. 5.20. EON Studio

We can order the book "Interactive 3D Application Development: Using EON Professional for creating 3D visualizations". This book covers the basics of creating applications in EON Studio and the manner of its use, ending with the publication and release of the finished work. In addition, we obtain the EON User Guide, which is essentially a copy of the help from the earlier EON Studio 6, yet is still a voluminous document of about 390 pages. We also receive the somewhat outdated EON Reference Guide. All of these were added to the product at the end of 2012.

The largest part of the editor is occupied by a list of over 100 nodes and a Property Bar panel with the various properties of the nodes. It looks very technical, and for people without the basics of scene management it may be somewhat difficult to understand. The interface itself resembles those that reigned about 10 years ago. Hence my first contact with the application, and my first assessment of it, is generally low.


Fig. 5.21. Shoreline engine example application

Regarding the use of EON Studio in a CAVE environment, it is hard to find any specific information. EON Reality creates EON iCube solutions that are available in EON Studio. The documentation included with the application contains a topic describing how to configure an application for the EON iCube, and of course a stereo 3D mode is available. On the other hand, the described configuration of multiple clusters, and thus multiple monitors, requires installing additional modules and quite a lot of additional network configuration (such a description is available only for Windows 95/98/ME). Perhaps in current versions such as Windows 7 this complex configuration is not necessary, but I have not found any information about that. It all gives the impression of an outdated and quite clumsy application.

The EON SDK allows you to expand the capabilities of EON Studio. It allows creating new nodes for accessing databases, device drivers, etc. The information on the web page indicates that the module works under Visual Studio 2010. Also, it is not available for download, so it may not be available for the latest release of EON Studio.

Among the available extensions there is the free EON Raptor, a plug-in for 3ds Max that allows you to view and interact with the scene in real time. We may also use the EON Dynamic Load plug-in, a server service for sharing interactive content via the Internet that also serves as a repository for a large number of scenes. This plug-in additionally allows you to track user interaction, the results of which may be written to a database.

5.2.2.3.2.4. Vizard

Vizard was created by WorldViz. It is available for Windows in both 32-bit and 64-bit versions. With the application you get a PDF book (about 130 pages), sample applications, an SDK and plug-ins for 3ds Max [55].


Fig. 5.22. Vizard application

The main element of the application is a graphical editor, an integrated IDE for application development in Python. Python is an interpreted language, so changes made in the code do not require recompilation; however, to see a change take effect, the application must be restarted after the script is modified. In addition, an SDK is available that allows you to write your own plug-ins to extend the functionality of the editor.

At first glance the application is very similar to Microsoft Visual Studio. Opened without a loaded scene, it presents itself as quite simple and does not give the impression of a large, sophisticated system. The scene is saved in a .py or .pyw file, which is a Python script shown in the main part of the application. When we launch the application, a new window opens, which we can dock in the editor. To prepare an application in Vizard, we write it in Python.

Dozens of sample applications are included with Vizard. These samples are actually scripts that take up very little space on the computer.

A definite distinguishing feature of this package is its 3D avatars: characters that to some extent live their own lives. There is a library of 100 human characters (civilian, military, medical and others) plus more than 20 different animals, which we can add to the scene. Each character has a skeletal (bone) system and is available in three levels of geometric complexity (600, 2500 and 5000 vertices). If such quality is not enough, WorldViz offers an additional so-called HD package with much more detailed 3D geometry and higher-resolution 2048x2048 textures. Characters can be modified using external software such as 3ds Max Character Studio. They are beautifully made and animated.

The SDK allows you to write DLL plug-ins in C++, which can then be used from the scripting language at the level of the application engine. Vizard is based on the OpenSceneGraph 3 scene management engine. Plug-ins allow, among other things, access to scene nodes, creation of new nodes, and access to device drivers such as manipulators.

Fig. 5.23. Inspector displaying tree of the scene

In the editor a number of GLSL shaders are available, with the possibility of attaching our own. Lights and shadows are of course supported; as a curiosity, light baking is also possible, although it requires the use of 3ds Max. Physics is based on ODE (Open Dynamics Engine), which handles simulation of rigid body dynamics, collision detection and friction [56].

Practically any multimedia file can be imported into an application, because the list of supported formats is really long: 2D and 3D graphics files as well as audio and video material.

Regarding use in a CAVE, the application supports up to 64 cluster nodes (separate computers on which the application may run) as well as several GPUs. Vizard can also be used to create HMD or AR applications. It supports multi-screen projection and multi-imaging (when one screen is composed of several images), together with techniques for joining and correcting such images, and it supports active stereo 3D projection. Motion detection is included for the head, hands and entire body, and haptic devices that provide feedback may also be used. It seems to me that in terms of available configurations for various devices Vizard 5 has no equal competitors: by default it supports multiple motion detection systems, HMD, CAVE, Powerwall, 3D screens, haptic devices, eye tracking, AR, controllers, manipulators and data acquisition systems.

A new version 5 of Vizard is currently being prepared (available as a beta at the time of writing). It allows direct use of 16 material types added in 3ds Max (supported materials: ambient, diffuse, specular, detail, specular level, glossiness, self-illumination, rim, rim radial, opacity, bump, reflection, reflection falloff, refraction, and displacement). It also adds a graphical application, the Vizconnect Configuration Interface, for configuring applications for various hardware configurations (number of screens, HMD, input/output devices, etc.). An important addition is the ability to view and edit the nodes of the scene tree, including adding and deleting nodes. Version 5 also introduces full 64-bit support for the entire graphics engine, so a large amount of GPU-side memory can be used.

In conclusion, it is worth mentioning that a prepared scene can be exported to a single .exe file, which can then be played back without a separate player.

5.2.2.3.2.5. Summary

As you can see, there are a few very good candidates. EON Studio is famous because of its past; it is still powerful, but it is outdated and has an unfriendly user interface, which makes it uncomfortable to use. Vizard, on the other hand, supports a great number of devices and platforms; to use it you need to learn its Python framework, which is rather easy. Quazar3D is the most powerful and functional solution of all: it has a modern, friendly user interface and its own scripting and graph languages, which make it comfortable to work with.

Table 5.3. Comparison of CAVE dedicated editors

Feature               | Vizard                       | Quazar3D                     | EON Studio
----------------------|------------------------------|------------------------------|------------------------------
WYSIWYG               | x                            | x                            | x
IDE                   | x                            | x                            | x
OS                    | Windows                      | Windows                      | Windows
CAVE                  | x                            | x                            | x
HMD                   | x                            | x                            | x
Scripting             | Python                       | Own (similar to C++)         | VBScript or JScript
Documentation         | x                            | x                            | x
Examples              | x                            | x                            | x
Interactive console   | -                            | x                            | -
Core                  | C++                          | C++                          | C++
Clustering            | x                            | x                            | x
AR                    | x                            | -                            | -
Physics               | ODE                          | x                            | x
SDK                   | C++                          | C++                          | C++
Flowgraph             | -                            | x                            | x
Kinect support        | x                            | x                            | x
Mobile control        | x                            | -                            | x (16)
Shaders               | x                            | x                            | x
Separate 3D image     | x                            | x                            | -
iZ3D (17)             | x                            | x                            | -
Anaglyph              | x                            | x                            | x
Quad-buffer           | x                            | x                            | x
3D above/below        | x                            | x                            | x
3D interleaved        | x                            | x                            | x
Multi-display support | x                            | x                            | x
Edge blending         | x                            | x                            | x
Color correction      | x                            | x                            | x
Hands capture         | x                            | x                            | x
Full body capture     | x                            | x                            | x
Haptics               | x                            | x                            | x
Head capture          | x                            | x                            | x
Network support       | x                            | x                            | x
DirectX support       | -                            | x                            | -
Oculus Rift           | x                            | x                            | x
Supported 3D files    | 3ds, ac, bsp, dae, dw, dxf, fbx, gem, geo, iv, wrl, ive, logo, lwo, lw, geo, lws, md2, obj, ogr, flt, osg, shp, stl, sta, wrl, x | fbx | 3ds, dwf, 3d, dwg, dxf, IGES, Solidworks, STL, obj and VRML 2.0
Supported 2D files    | bmp, dds, gif, hdr, jps, jpc, jpeg, jpg, attr, pic, png, pnm, ppm, pgm, pbm, rgb, sgi, rgba, int, inta, bw, tga, tiff, tif | dds, jpg, png, psd, tga | jpg, png or dds
Trial                 | 3 months, full functionality | unlimited time, trial version | 1 month, full functionality

16 Needs the EON Mobile module.
17 Support for the iZ3D stereoscopic display.

Table 5.3 shows that CAVE support is similar across the frameworks. The differences lie mostly in the functionality provided by each editor. Quazar3D is definitely the most advanced and also the most user-friendly.

5.2.2.3.3. Game dedicated engines

A huge number of game editors are available on the market. Here I focus only on the most advanced of them that support CAVE natively or through plug-ins. Each of the presented game editors is extremely powerful and functional; there is no space to describe everything they can do, because each of them is high-end and provides a vast amount of functionality.

5.2.2.3.3.1. UNIGINE

UNIGINE is the one game editor which natively supports CAVE, input devices and tracking systems with an unlimited number of displays. This is its biggest advantage over other systems, but it is also the most expensive one. It is also very well documented. This engine is often used in games and professional simulations; for example OBRUM uses it for training Polish Army soldiers. You can turn on stereo 3D mode during development on a desktop PC, which is a big advantage. It is also available with source-code licensing [57].


UNIGINE uses the Syncker module, a multi-node rendering system that makes it easy to create an immersive CAVE system or a large-scale visualization wall on a computer cluster synchronized over the network in real time. Syncker works across platforms and can connect nodes running Windows, Linux and Mac OS X at the same time. It allows you to render a world at a fixed frame rate (by default 60 FPS) across different computers synchronized over a LAN. The application is of the master-slave type: the master controls the application logic and synchronizes frames, while the slaves render the world on their GPUs. Syncker can be controlled via command-line or script options. If you want to use projectors instead of monitors, you should use the AppProjection plug-in.

It is recommended to use at least a 1 Gbit LAN; otherwise you may experience network lag. The application on each node needs access to the project files, otherwise the world will not be rendered. You can copy all project files onto every computer, or store them on one server and give the nodes network access to them with the FileClient plug-in.

Finally, UNIGINE provides sample AppWall and AppSurround applications showing the configuration of a multi-monitor rendering system. Both applications enable a separate camera configuration for each monitor and support asymmetric viewing frustums. They also feature flexible, on-the-fly adjustment of the display position to achieve an optimal viewing angle.
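The asymmetric viewing frustum mentioned above can be illustrated with a short sketch (my own illustration of the standard off-axis projection, not UNIGINE code). For an eye offset from the centre of a flat screen, the near-plane extents of the projection become unequal:

```python
def off_axis_frustum(half_w, half_h, eye_x, eye_y, eye_dist, near):
    """Near-plane frustum extents for an eye offset from the screen centre.

    half_w, half_h -- half-size of the physical screen
    eye_x, eye_y   -- eye offset from the screen centre, in the screen plane
    eye_dist       -- perpendicular distance from the eye to the screen
    near           -- near clipping plane distance
    Returns (left, right, bottom, top) for a glFrustum-style projection.
    """
    scale = near / eye_dist  # similar triangles: project screen edges onto the near plane
    left = (-half_w - eye_x) * scale
    right = (half_w - eye_x) * scale
    bottom = (-half_h - eye_y) * scale
    top = (half_h - eye_y) * scale
    return left, right, bottom, top

# A centred eye gives the usual symmetric frustum...
print(off_axis_frustum(1.0, 1.0, 0.0, 0.0, 2.0, 0.1))
# ...while an eye shifted sideways makes the frustum asymmetric.
print(off_axis_frustum(1.0, 1.0, 0.5, 0.0, 2.0, 0.1))
```

In a CAVE this is computed per wall and per frame from the tracked head position, which is why per-monitor camera configuration and asymmetric frustums matter.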

5.2.2.3.3.2. UDK

The Unreal Development Kit (UDK) is today the most popular game engine editor. Many AAA-class games are created with this engine, and it is also used for 3D movies, simulations and trainers. At the moment UDK costs $19 monthly and provides the source code for the whole engine.

Unfortunately, UDK does not support CAVE-like installations natively. There are some tricks and attempts to run a UDK application in a CAVE: you can use multiplayer mode (or at least its network connection between clients) to synchronize objects between the clients and transform the cameras so that the image is displayed across multiple displays [58].

CaveUT, now outdated and no longer supported, was the first such extension, created for Unreal Tournament 2004, the predecessor of UDK. It adds the functionality needed for CAVE installations with one computer per screen connected over a LAN [59]. CaveUT was originally developed at the University of Pittsburgh and later extended to include stereoscopy. For better results it can be used with VRGL [60], a library which introduces off-axis and spherical projection effects into the display.

Its successor, CaveUDK, is a VR game engine middleware extension for UDK 3 supporting CAVE-like installations. Its development started in 2011 at Teesside University and is now carried on at Würzburg University under the supervision of the HCI Group. CaveUDK does not affect game engine performance, even with complex real-time applications such as fast-paced multiplayer first-person shooter (FPS) games or high-resolution graphical environments with 2M+ polygons. It provides an advanced, generic UnrealScript VR class framework and a set of software tools for multi-screen visualisation, interaction, conversion, calibration and deployment. It is a DLL plug-in, so the game source code does not need to be recompiled. Input devices and tracking systems are supported as well, which makes it comfortable to use. At the moment, however, I have not found the solution available for download, so I could not verify it [61].

At the time of writing, it looks as if running UDK in a CAVE could be rather difficult. The source code is available, so it is possible to write your own CAVE support into the engine, but that would take a long time to develop.

5.2.2.3.3.3. CryEngine

CryEngine is a relatively new game engine, extremely powerful and dedicated to creating AAA games. It is available for EUR 9.90 per month, or you can buy a full licence with access to the source code. CryEngine does not natively support CAVE or distributed rendering [62].

Some research has resulted in the CryEngine automatic Virtual Environment (CryVE) software, based on the CryEngine2 game engine. The CryVE implementation consists of three components: a game modification (mod), a game flowgraph and a modified multiplayer map. The software architecture consists of multiplayer instances of a CryEngine2 game started on all computers in the system. The computers are connected through a network, where one of them acts as a server (master) while the rest are game clients (slaves). The server controls the in-game action while the clients provide extra "cameras" that complete the peripheral view required by the CAVE and synchronize to the pose and motion of the master. Finally, each computer renders its own piece of the virtual world to the corresponding projection screen. CryVE is a software extension that sits on top of CryEngine2 without modifying its internal workings; therefore it can be redistributed independently of the game being used in the system [63].

The reported disadvantages of CryVE are average frame rates below 20 FPS and some lag between slaves, which may not be sufficient for a comfortable viewing and interaction experience. I also did not find any mention of 3D stereo image support, so I suppose it is not supported. Finally, CryVE does not support the newest version, CryEngine 3.

5.2.2.3.3.4. UNITY

UNITY is a hugely popular game engine editor with modules available for supporting CAVE installations. CAVE support is not natively implemented in UNITY; it comes from independent companies. At the moment UNITY's visual capabilities do not reach those of more advanced game engines like CryEngine or UDK, but it gets closer every day and one day the differences may be gone [64].

The ICT VR CAVE Multiscreen configuration project for UNITY 4 is available on the Unity Asset Store as a package used with the VR CAVE (EON iCube4). It currently does not include stereo support. The second version, currently under development, will support NVIDIA 3D Vision stereo, head and controller tracking, and many more features [65].

If you do not want to wait for the ICT VR CAVE module to be finished, you can reach for MiddleVR for Unity. MiddleVR exists in Free, Academic and Pro versions. It supports everything CAVE applications need: tracking, input devices, 3D stereo modes, a distributed rendering system and a 64-bit player. It is an all-in-one module for creating CAVE applications in UNITY [66].

Integra AV, the company installing I3DVL, will provide a sample application based on UNITY that will work in the CAVE. UNITY has the most complete plug-in for running a standard simulation in a CAVE, which makes it easy to use, and it is not so expensive.


6. DEMONSTRATIVE PROJECT FOR I3DVL

The demonstrative project is a maze game with distributed rendering, synchronized between nodes running on 12 computers. The maze is generated from a graphic image, so new mazes can be produced simply by substituting new images. The application is of the master-slave type: one master controls the slave nodes, which simply render the scene for their assigned walls. The game can be run in a distributed environment on several computers or on a single computer.

Fig. 6.1. Maze in action at 3 nodes (left, front, right, bottom, back, top cameras)

I have chosen OpenSceneGraph 3 (OSG) as the simulation framework. I have also developed basic distributed maze applications on OpenSG 2 and with Equalizer on top of OSG, which shows the differences between the architectures of these frameworks.

6.1 System project

The system is divided into two parts: master and slave. The master controls the cameras, user movement and interaction, and the synchronisation of object states between the master and the slaves. The slaves simply render the scene based on the master's data and time synchronisation. In my application the master also acts as a slave: it presents the front camera view while the slaves correspond to the other views.
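The idea of deriving every wall's view from the single master camera can be sketched as follows (an illustrative 2D reduction of my own; in the actual code CameraPacket::getModelView applies an angle_offset to the full view matrix):

```python
import math

# Yaw offset of each side-wall camera relative to the master (front) view.
# Top and bottom walls would need a pitch offset instead; the exact sign
# convention depends on the coordinate system, so this is illustrative only.
WALL_YAW = {"front": 0.0, "left": -90.0, "right": 90.0, "back": 180.0}

def wall_view_dir(master_dir, wall):
    """Rotate the master's 2D view direction by the wall's yaw offset."""
    a = math.radians(WALL_YAW[wall])
    x, y = master_dir
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# The front slave sees exactly the master's view; the back wall sees the
# opposite direction. Side walls are the same view rotated by 90 degrees.
print(wall_view_dir((0.0, 1.0), "front"))
print(wall_view_dir((0.0, 1.0), "back"))
```

Because every slave applies only a fixed rotation to the broadcast master camera, the six images join seamlessly at the wall edges.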


Fig. 6.2. Class diagram of sample application

The project consists of a few modules: Scene, SkyBox, Message Queue, Camera, Keyboard Handler and Application. The Scene module is responsible for scene creation: it generates the maze from the provided bitmap, builds the scene graph with all scene objects (geometry, cameras, lights, background, water rendered by a GLSL shader, a flying plane) and manages it. SkyBox generates the cube background and places it in the scene; such a background is created from six bitmaps which together form one continuous image of the surroundings. Message Queue is responsible for managing events and messages between nodes; it contains the Broadcaster, Receiver, DataConverter and CameraPacket modules. CameraPacket stores information about the cameras in a format transportable by the Broadcaster module. Broadcaster sends object states to the connected nodes. Receiver receives object states as packets and provides them to the application. DataConverter converts data to and from a socket-readable form. The Camera module manages and manipulates the cameras in the scene. The Keyboard Handler module reacts to user keyboard input. The Application module builds and manages the whole system and also synchronises messages between the nodes. All modules are implemented by corresponding classes (Application, Broadcaster, CameraPacket, DataConverter, ViewerMode, MyCameraManipulator, MyKeyboardEventHandler, Receiver, Scene and SkyBox), whose fields and methods you can see in the figure 6.2.
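The roles of Broadcaster, Receiver and DataConverter can be sketched in Python (an illustrative stand-in for the C++ classes; the packet layout used here is my own assumption, not the thesis wire format):

```python
import socket
import struct

# Assumed packet layout for illustration: frame time + 4x4 view matrix.
# '!' forces network byte order, which is what DataConverter's byte
# swapping achieves between little- and big-endian nodes.
PACKET_FMT = "!d16d"

def pack_camera(frame_time, matrix16):
    """DataConverter role: serialize camera state into a socket-ready buffer."""
    return struct.pack(PACKET_FMT, frame_time, *matrix16)

def unpack_camera(data):
    """DataConverter role in reverse: rebuild the camera state on a slave."""
    values = struct.unpack(PACKET_FMT, data)
    return values[0], list(values[1:])

# Broadcaster/Receiver roles demonstrated on the loopback interface.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # slave listens on an ephemeral port
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

identity = [1.0 if i % 5 == 0 else 0.0 for i in range(16)]  # 4x4 identity
send.sendto(pack_camera(0.016, identity), recv.getsockname())

t, m = unpack_camera(recv.recv(1024))
print(t, m[0])  # the frame time and matrix arrive intact on the slave
send.close()
recv.close()
```

UDP fits this use well: the master sends fresh camera state every frame, so a lost packet is simply superseded by the next one.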

Fig. 6.3. Created object used as wall in maze

Figure 6.3 shows an object created in 3D Studio Max 2015 and used as a piece of wall in the maze. I exported this object to .obj and .mtl files, which store its geometry (built of faces), materials, normals and texture coordinates. The export configuration can be seen in the figure 6.4. The application loads such an object once and instances it based on bitmap coordinates: each black pixel on the map places one such object in the scene.
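The bitmap-driven instancing rule can be sketched as follows (my own minimal illustration, not the Scene::generateMaze code):

```python
def wall_positions(bitmap_rows, cell_size=1.0):
    """Return one (x, y) placement per black pixel, one wall instance each.

    bitmap_rows is a list of strings standing in for scanlines of the map:
    'B' marks a black pixel (wall), anything else is open space.
    """
    positions = []
    for row, line in enumerate(bitmap_rows):
        for col, pixel in enumerate(line):
            if pixel == "B":  # black pixel -> instance the wall object here
                positions.append((col * cell_size, row * cell_size))
    return positions

# A tiny 4x3 map: a ring of walls around two open cells.
maze_map = [
    "BBBB",
    "B..B",
    "BBBB",
]
print(wall_positions(maze_map))  # ten wall instances
```

Because the geometry is loaded once and only placed repeatedly, swapping in a new bitmap changes the maze without touching any 3D assets.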

Fig. 6.4. Configuration for exporting objects in 3D Studio Max


Each node can be run independently. Command-line arguments specify whether the started node will be a master or a slave and which wall it will represent. You can also provide a bitmap file at start-up to generate the maze from it.

The use cases of the application:

- run a few instances of the application on one computer,
- run a few instances of the application on a few computers (nodes) in the network,
- run the application as master,
- run the application as slave,
- run the application with a different camera view (for use on a different wall),
- generate a maze.osg file,
- generate a different 3D maze from a provided bitmap,
- navigate the maze,
- display scene and performance statistics during the simulation,
- display the scene in solid, wireframe or vertex mode,
- move the camera along the X, Y and Z axes on each node.

6.2 Implementation notices

The system is designed in an object-oriented way. The file main.cpp contains the application's main entry function, which starts the simulation. Logs are displayed in a console window. The whole application logic is stored in the Application class; the main function step by step initialises, configures and starts the application, as you can see:

Application *app = new Application(&argc, argv);
Scene *scene = new Scene();
if (!app->parseArguments())
{
    return 1;
}
app->setupCurrentWall();
scene->readMap("map1.bmp");
scene->generateLights();
scene->generateWater();
scene->generateSkybox();
scene->generateMaze();
scene->generatePlane();
app->setSceneData(scene->getRoot());
app->setupCamera();
app->initEventsHandlers();
app->run();
//scene->saveToFile("lab.osg");
app->distribute();


Here you can specify which parts of the scene should be generated using the scene->generate...() functions. The app->run() function shows the windows and starts the threads; app->distribute() starts the master and slaves, which send or receive events and messages.

Command-line arguments are parsed by the ArgumentParser class:

osg::ArgumentParser m_Arguments(argc,argv);

while (m_Arguments.read("-m")) m_ViewerMode = MASTER;

Each window is managed by the Viewer class. To manipulate the camera with the arrow keys, I have implemented a CameraManipulator class. To set the initial camera position, you need to set its parameters in the setupCamera function of the Application class:

pManipulator->setHomePosition(
    osg::Vec3d(15, 1, 0.15),  // eye position
    osg::Vec3d(15, 1, 0),     // look-at centre
    osg::Vec3d(0, 0, 1));     // up vector

One very important thing is that 3D Studio Max 2015 exports objects to .obj files with a faulty .mtl file: the Tr parameter of the material is set to 0.0000, which makes the object invisible after import into an OpenSceneGraph scene. To make such an object visible, you have to change this value to 1.0000 in the corresponding .mtl file, e.g.:

newmtl Material__26

Ns 10.0000

Ni 1.5000

d 1.0000

Tr 1.0000

Tf 1.0000 1.0000 1.0000

illum 2

Ka 0.0000 0.0000 0.0000

Kd 0.5880 0.5880 0.5880

Ks 0.0000 0.0000 0.0000

Ke 0.0000 0.0000 0.0000

map_Ka D:\brick.jpg

map_Kd D:\brick.jpg

6.3 Quality tests

The quality tests are based on walking through the maze and visual observation. They reveal a few problems. First, objects generated by the GLSL shader do not appear to be synchronised between nodes, as you can see in the figure 6.4.


Fig. 6.4. Water generated by GLSL shader is not synchronised between nodes
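A plausible explanation (my assumption; not verified against the shader code) is that each node drives the water animation from its own local clock, so any clock skew shifts the wave phase between walls. Driving the shader's time input from the master's broadcast frame time would keep the nodes in step, as this toy sketch shows:

```python
import math

def wave_height(x, t):
    """Toy stand-in for the GLSL water displacement: phase driven by time."""
    return 0.1 * math.sin(2.0 * math.pi * (x - 0.5 * t))

# Two walls sampling the same point with drifted local clocks disagree...
local_front, local_left = 1.000, 1.047   # 47 ms of clock skew
skewed = wave_height(0.3, local_front) - wave_height(0.3, local_left)

# ...but with the master's broadcast frame time both walls agree exactly.
master_t = 1.000
synced = wave_height(0.3, master_t) - wave_height(0.3, master_t)
print(abs(skewed) > 0.0, synced == 0.0)
```

The fix would amount to updating the shader's time uniform from the synchronisation packet instead of the node's local timer.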

The second problem concerns keyboard manipulation. The first time the left or right arrow is pressed, the camera keeps rotating to the right until the left arrow key is pressed. The cause may be an initial value of a rotation variable that is not zero when it is first read.
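A common way to avoid such stuck-rotation behaviour (a generic sketch, not the thesis manipulator code) is to keep an explicit pressed flag per key, set on key-down and cleared on key-up, with every state variable initialised to zero:

```python
class TurnState:
    """Track turn keys explicitly so rotation stops when keys are released."""

    def __init__(self):
        # Initialise everything to a known zero state up front.
        self.turn_left = False
        self.turn_right = False

    def on_key(self, key, pressed):
        """Call with pressed=True on key-down and pressed=False on key-up."""
        if key == "LEFT":
            self.turn_left = pressed
        elif key == "RIGHT":
            self.turn_right = pressed

    def rotation_delta(self, speed, dt):
        # Zero when no key is held: the camera cannot keep spinning.
        direction = int(self.turn_right) - int(self.turn_left)
        return direction * speed * dt

state = TurnState()
state.on_key("RIGHT", True)
print(state.rotation_delta(90.0, 0.016))  # rotating while the key is held
state.on_key("RIGHT", False)
print(state.rotation_delta(90.0, 0.016))  # 0.0 once the key is released
```

Computing the per-frame rotation only from currently held flags makes an uninitialised or stale rotation value impossible.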

The third problem is that you can walk through walls. In addition, the inner sides of walls sometimes show graphical artefacts, vertical lines of overlapping textures, as you can see in the figure 6.5. This is probably related to the normals of the geometry, whose vectors point towards the outside of the geometry. Very rarely another artefact can be seen on the outer sides of walls; it may be because my graphics card cannot load the whole geometry with 1024x1024 px textures. On newer computer configurations I did not see such problems.

Fig. 6.5. Artefacts at internal side of wall

6.4 Performance tests

For the tests I used a computer with a Pentium quad-core 2.4 GHz CPU, 6 GB RAM and a 128 GB SSD. The graphics card was an NVIDIA GeForce 550 Ti with 1 GB of GDDR5 memory. I ran the tests with one and with three instances of the application. With 3 nodes I got an average frame rate of about 60-70 FPS per instance with 400x400 px windows; the minimum was about 45 FPS and the maximum about 110 FPS. In full-screen mode the average FPS settled at about 35-60. The time


for Event, Update, Cull, Draw and GPU was stable in every part of the maze. The results can be seen in the figure 6.6.

Fig. 6.6. Maze performance statistics

The scene is not very complicated. It contains about 100,000 vertices, of which about 25,000 are rendered by one camera, as you can see in the figure 6.7.

Fig. 6.7. Scene statistics

6.5 System presentation

The maze is generated on the basis of a bitmap; an example can be seen in the figure 6.8. A different map can be loaded into the application. Black in the bitmap generates a wall, white is open space with water, green is the start platform and red is the target platform to reach.


Fig. 6.8. Maze diagram

The system generates animated water, shown in the figure 6.9. The water is generated by a GLSL shader, so it does not slow down the application.

Fig. 6.9. Generated water

The application also uses a cube map as the scene background, which you can see in the figure 6.10.


Fig. 6.10. Cube-map background

6.6 User manual

The application is configured by passing arguments on the command line. You can run it on several computers or on one computer as several processes. The instances communicate with each other via network sockets: the master broadcasts messages to all slave nodes, and each slave receives these messages and responds to them. A 25x25-sector maze is generated from the "maze.bmp" file.

Command-line arguments:

-m - master mode (only one node may be the master),
-s - slave mode,
-front - front camera view,
-left - left camera view,
-right - right camera view,
-save - saves the generated 3D maze to a maze.osg file, which can then be launched in the OpenSceneGraph viewer or in Equalizer (with some incompatibilities, e.g. the SkyBox will not work).

For example, to run the maze on 3 nodes you can use the following commands:

Maze -m -front

Maze -s -left

Maze -s -right

Movement in the maze is controlled with the arrow keys, very much as in FPS games. Pressing UP moves you forward and DOWN backward, while LEFT and RIGHT rotate your direction. The '<' key moves you to the left and the '>' key moves you to the right.


The camera of each node can be translated along the three axes. Pressing the 's' key toggles statistics, and the 'w' key switches the scene rendering mode between solid, wireframe and vertices; these modes can be seen in the figure 6.11.

Fig. 6.11. Solid, wireframe and vertex mode


7. FUTURE R&D WORK FOR I3DVL

The I3DVL project is at the installation and configuration stage, which means it is not ready at the moment; it will be finished and opened in the coming months. When it is ready, the main task will be to run and test example applications in different configurations. I expect many difficulties and problems in configuring the laboratory to work with different frameworks, applications and engines in the full 6-wall CAVE installation, which will include the VirtuSphere inside. This is because the laboratory is a unique installation, containing high-end hardware and devices which have never worked together in such a configuration. The first step should therefore be tests and trials to find the best configuration of the CAVE installation.

The VirtuSphere deserves separate attention. How does it work? How should it be programmed to obtain a natural human walking interface? Further tests should cover the 3D stereo modes that will work inside the VirtuSphere. The research should answer which 3D mode provides the best results, or whether none of them is sufficient, and if so, why.

Then we can focus on applications. What kinds of applications work better with the VirtuSphere? How can the VirtuSphere help us navigate virtual worlds? These questions should be answered in practice, by creating applications that let us test different scenarios and evaluate our theories. Based on this future experience we can give recommendations on how to implement CAVE applications for other departments of the university.

Because of the emerging Crisis Management Centre, we will cooperate with Bohemia Interactive Simulations on the development and setup of VBS. The project also assumes cooperation with emergency services, which will specify their requirements and define how the Crisis Management Centre should work. I expect this cooperation with Bohemia to extend to researching and developing new functionality and improving the VBS product.


8. SUMMARY

The process of creating this Master's thesis was busy and time-consuming. The subject evolved over time to provide the best results. The thesis forms a solid research base for developing applications for I3DVL at Gdansk University of Technology and for other CAVE-like systems.

What is the result of this work? The main finding is that developing applications for a CAVE using existing frameworks and libraries is difficult. Additional tests are sometimes needed to verify whether a given feature of a framework or library works in a distributed system, or whether the components work with each other. Even once something works, you must stay mindful of the distributed architecture, local computations and synchronisation, and remember that the application will run in a CAVE installation. This makes the development process more difficult and error-prone. To ease the creation of CAVE applications, you can use GUI tools that let you build applications visually. Modern tools contain visual programming languages that allow even non-programmers to create simple applications. This usually comes at the cost of reduced possibilities for creating advanced applications, although in most cases such tools still allow you to create almost any kind of application.

In any case, each CAVE installation is usually unique, and every framework or tool needs tuning to work best with a specific installation. This also holds for I3DVL, where different setups need to be configured and compared to get the best results from this high-end laboratory at Gdansk University of Technology.


THE STUDY BENEFITED FROM THE FOLLOWING REFERENCES

1. Sherman, William R., Craig, Alan B.: Understanding Virtual Reality: Interface, Application, and Design, Morgan Kaufmann Publishers 2003, San Francisco, CA, USA.
2. Burdea, Grigore C., Coiffet, P.: Virtual Reality Technology, Piscataway, New Jersey / Versailles, France 2003.
3. UNIGINE, http://sim.unigine.com/platform/built_for_simulation/ (access date 10.10.2013).
4. Panasonic for Business, http://www.panasonicforbusiness.com/2011/03/think-outside-of-the-box-edge-blending-technology-allows-av-professionals-to-create-more-unique-video-presentations/ (access date 10.10.2013).
5. Mitsubishi Electric, http://www.mitsubishielectric.com/bu/projectors/products/data/installation/xd8000u_lu_features.html (access date 10.10.2013).
6. How 3-D PC Glasses Work, http://computer.howstuffworks.com/3d-pc-glasses1.htm (access date 11.10.2013).
7. NVIDIA, http://www.nvidia.com/object/quadro-sync.html (access date 09.02.2014).
8. Road to VR, http://www.roadtovr.com/smi-3d-eye-tracking-glasses/ (access date 10.10.2013).
9. TerraTec, http://www.terratec.net/de/produkte/Aureon_7.1_PCI_2251.html (access date 10.10.2013).
10. VirtuSphere, http://www.virtusphere.com/view.html (access date 10.10.2013).
11. Gdansk University of Technology, http://www.dzp.pg.gda.pl/data/post/02307/specyfikacja/1368515549.pdf (access date 10.10.2013).
12. RWTH Aachen University, http://www.rz.rwth-aachen.de/aw/cms/rz/Themen/Virtuelle_Realitaet/infrastructure/~tos/aixCAVE_at_RWTH_Aachen_University/?lang=de (access date 10.10.2013).
13. CrossGrid, http://www.eu-crossgrid.org/ (access date 22.01.2014).
14. Flooding Crisis Simulation, http://vrc.zid.jku.at/projekte/visualisierungen/flooding/?lang=en (access date 22.01.2014).
15. Molecule visualization, http://vrc.zid.jku.at/projekte/visualisierungen/molecule/ (access date 22.01.2014).
16. Anatomical structures in medicine, http://vrc.zid.jku.at/projekte/visualisierungen/neuro/ (access date 22.01.2014).
17. Interactive 3D art, http://vrc.zid.jku.at/projekte/studenten_ws01/kunstwerk/ (access date 22.01.2014).
18. Multi user maze, http://vrc.zid.jku.at/projekte/studenten_ws01/maze/ (access date 22.01.2014).
19. Ski simulator, http://vrc.zid.jku.at/projekte/studenten_ws06/cave_skiing/ (access date 22.01.2014).
20. Finney, Kenneth C.: 3D Game Programming All in One, Premier Press 2003, Boston, USA.
21. Czech, Zbigniew J.: Wprowadzenie do obliczeń równoległych, Wydawnictwo Naukowe PWN 2013, Warsaw, Poland.
22. Movania, Muhammad M.: OpenGL Development Cookbook, Packt Publishing 2013, Birmingham, UK.
23. Wright, Richard S., Haemel, N.: OpenGL SuperBible: Comprehensive Tutorial and Reference, sixth edition, Addison-Wesley / Pearson Education 2014, USA.
24. Walsh, P.: Advanced 3D Game Programming with DirectX 10.0, Wordware Publishing 2008, Texas, USA.
25. OpenGL Performer, http://oss.sgi.com/projects/performer/ (access date 02.06.2014).
26. OpenSG, http://www.opensg.org (access date 10.12.2014).
27. OpenSceneGraph, http://www.openscenegraph.org/ (access date 11.11.2013).
28. NVIDIA SceniX, https://developer.nvidia.com/scenix (access date 12.03.2014).
29. Autodesk Showcase, http://www.autodesk.com/products/showcase/overview (access date 12.03.2014).
30. CgFX, http://developer.download.nvidia.com/shaderlibrary/webpages/cgfx_shaders.html (access date 12.03.2014).
31. ViSTA Virtual Reality Toolkit, https://www.itc.rwth-aachen.de/cms/IT-Center/Forschung-Projekte/Virtuelle-Realitaet/Infrastruktur/~fgmo/ViSTA-Virtual-Reality-Toolkit/lidx/1/ (access date 29.08.2014).
32. VR Juggler, https://code.google.com/p/vrjuggler/ (access date 01.05.2014).
33. Equalizer, http://www.equalizergraphics.com/index.html (access date 01.12.2013).
34. nvFX, https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/nvFX%20A%20New%20Shader-Effect%20Framework.pdf (access date 01.03.2014).
35. Lua, http://prideout.net/blog/?p=1 (access date 01.03.2014).
36. glfx, https://code.google.com/p/glfx/ (access date 01.03.2014).
37. OptiX, http://www.nvidia.com/object/optix.html (access date 03.04.2014).
38. PIXAR, http://www.pixar.com/about (access date 03.04.2014).
39. KATANA, http://www.thefoundry.co.uk/products/katana/ (access date 03.04.2014).
40. Sony Pictures Imageworks, http://www.imageworks.com (access date 03.04.2014).
41. Ted movie, http://www.tedisreal.com (access date 03.04.2014).
42. The Amazing Spider-Man movie, http://www.theamazingspiderman.com/site/ (access date 03.04.2014).
43. Oz the Great and Powerful movie, http://movies.disney.com/oz-the-great-and-powerful (access date 03.04.2014).
44. After Earth movie, http://www.sonypictures.com/movies/afterearth/discanddigital/ (access date 03.04.2014).
45. Digital Domain, http://digitaldomain.com and http://www.imdb.com/company/co0011313/ (access date 03.04.2014).
46. Industrial Light and Magic, http://www.ilm.com (access date 03.04.2014).
47. Qt, http://qt-project.org/ (access date 04.05.2014).
48. DNV, http://www.dnvgl.com/about-dnvgl/default.aspx (access date 04.02.2014).
49. VBS, http://bisimulations.com/products/vbs2/overview (access date 05.05.2014).
50. PEO STRI, http://www.peostri.army.mil/ (access date 05.05.2014).
51. JMSC, http://www.eur.army.mil/jmtc/JMSC.html (access date 05.05.2014).
52. OBRUM, http://www.obrum.gliwice.pl/ (access date 05.05.2014).
53. Quazar 3D, http://www.quazar3d.com/ (access date 05.02.2014).
54. EON Studio, http://www.eonreality.com/eon-studio (access date 06.02.2014).
55. Vizard, http://www.worldviz.com/products/vizard/preview (access date 02.03.2014).
56. ODE, http://www.ode.org/ (access date 02.03.2014).
57. UNIGINE, http://unigine.com/ (access date 03.01.2014).
58. UDK, https://www.unrealengine.com/products/udk/ (access date 23.01.2014).
59. CaveUT, http://publicvr.org/html/pro_caveut.html (access date 23.01.2014).
60. VRGL, http://publicvr.org/html/pro_vrgl.html (access date 23.01.2014).
61. CaveUDK, http://hci.uni-wuerzburg.de/projects/caveudk.html (access date 23.01.2014).
62. CryEngine, http://cryengine.com (access date 21.04.2014).
63. CryVE, http://cryve.id.tue.nl (access date 21.04.2014).
64. UNITY, http://unity3d.com (access date 28.04.2014).
65. ICT VR CAVE, http://theictlab.org/2014/05/ict-vr-cave-multiscreen-configuration-project-on-unity-asset-store-package (access date 28.04.2014).
66. MiddleVR, http://www.imin-vr.com/middlevr (access date 28.04.2014).


LIST OF FIGURES

Fig. 3.3. CAVE installation at Aachen University [12] ................................................................. 21

Fig. 3.4. Flooding system in action [14]....................................................................................... 22

Fig. 3.5. Molecules and particle system visualization [15] .......................................................... 23

Fig. 3.6. Anatomical structure in medicine [16] ........................................................................... 23

Fig. 3.7. Travel in virtual world .................................................................................................... 24

Fig. 3.8. Interactive 3D art [17] .................................................................................................... 24

Fig. 3.9. Multi user maze [18] ...................................................................................................... 25

Fig. 3.10. Ski simulator [19] ......................................................................................................... 26

Fig. 5.1. NVIDIA SceniX viewer .................................................................................................. 36

Fig. 5.20. EON Studio ................................................................................................................. 61

Fig. 5.21. Shoreline engine example application ........................................................................ 62

Fig. 5.22. Vizard application ........................................................................................................ 63

Fig. 5.23. Inspector displaying tree of the scene ........................................................................ 64

Fig. 6.1. Maze in action at 3 nodes (left, front, right, bottom, back, top cameras) ...................... 70

Fig. 6.2. Class diagram of sample application ............................................................................ 71

Fig. 6.3. Created object used as wall in maze ............................................................................ 72

Fig. 6.4. Configuration for export objects in 3D Studio Max........................................................ 72

Fig. 6.4. Water generated by GLSL shader is not synchronised between nodes ....................... 75

Fig. 6.5. Artefacts at internal side of wall .................................................................................... 75

Fig. 6.6. Maze performance statistics ......................................................................................... 76

Fig. 6.7. Scene statistics ............................................................................................................. 76

Fig. 6.8. Maze diagram................................................................................................................ 77

Fig. 6.9. Generated water ........................................................................................................... 77

Fig. 6.10. Cube-map background ................................................................................................ 78

Fig. 6.11. Solid, wireframe and vertex mode .............................................................................. 79


LIST OF TABLES

Table 5.1. Scene graphs comparison ......................................................................................... 37

Table 5.2. Comparison of CAVE frameworks integrations ......................................................... 46

Table 5.3. Comparison of CAVE dedicated editors .................................................................... 65


Attachment A - Example Applications

1. ViSTA

01KeyboardDemo

The program presents registering and handling keyboard events. Pressing the "l", "n" or "z" key displays different text on the screen, while after pressing the ESC key the pressed letters are displayed one after another.

02GeometryDemo

The program shows how to create primitive 3D objects procedurally. After launch we see two solids: a textured sphere and a cube. Using the mouse, we can manipulate the camera and view the objects from different angles.

03TextDemo

The program shows how to create and display text, both as a 2D overlay on the image and as 3D text.

04LoadDemo

The program shows how to load 3D objects into the scene and how to configure the camera.

05InteractionDemo

This program shows communication between the application and an external program. It consists of two projects: 05InteractionDemo and 05InteractionDemoSender. First run 05InteractionDemo, then run 05InteractionDemoSender. At startup the 05InteractionDemo console shows a message about the creation of a message port:

----- [VistaSystem]: Creating External TCP / IP Msg Harbor -
MessagePort created at IP [127.0.0.1] - Port [6666]

after which 05InteractionDemoSender connects to the 05InteractionDemo application. When you press SHIFT + B in the running 05InteractionDemo, notice the callback function assigned to that key, which in this example prints the message "[SomeButtonCallback] 1". After you type letters in 05InteractionDemoSender, it sends their values as numbers to 05InteractionDemo, whose console displays them in the form "TOKEN: 49 ...".

06CameraControlDemo

This program shows the implementation of camera control using the keyboard (arrow keys and WASD). Control is limited to camera translation and rotation.

07OverlayDemo

The program shows how to impose an additional image layer on the 3D scene. The additional layer of arrows shows animations in 3D. It can be turned on or off by pressing the 'i' key.

09EventDemo

The program presents registering events and listener functions that are executed when an event occurs. Events are triggered by pressing the keys:

SPACE - logs a NULL event,
'd' - sends an event of type DEMO_EVENT,
't' - registers a time observer that listens to events every 2 seconds.

After an event is created, the log shows the result of handling it by the listener function.

10DisplayDemo

The program shows the initialization of multiple screens and projections displaying several view directions.


11OGLDemo

The program presents the generation of 3D graphics, texturing and lighting natively in OpenGL.

12IntentionSelectDemo

The program shows how to select 3D objects (cubes) by hovering the mouse over them, and how to generate a bounding box (BBox) in their place.

13KernelOpenSGExtDemo

This program consists of two projects, 13KernelOpenSGExtParticlesDemo and 13KernelOpenSGExtShadowDemo, which present the generation of particle effects and shadows, such as smoke, sprite-textured particles, and flares or glows. These programs also present the use of a SkyBox.

In the 13KernelOpenSGExtParticlesDemo project the animation is controlled with the keyboard:

1 - normal billboarding (created manually),
2 - special particles (created by ParticleManager),
3 - increase sigma,
4 - reduce sigma,
5 - increase m,
6 - reduce m.

In the 13KernelOpenSGExtShadowDemo project the animation is controlled with the keyboard:

s - toggle shadows on/off,
'+' - next shadow,
'-' - previous shadow,
'*' - double the size of the shadow maps,
'/' - halve the size of the shadow maps.

Available shadow types:

- standard (Standard Shadow Mapping),
- perspective (Perspective Shadow Mapping),
- dithered (Dithered Shadow Mapping),
- PCF (Percentage Closer Filtered Shadow Mapping),
- PCSS (Percentage Closer Soft Shadow Mapping),
- variance (Variance Shadow Mapping),
- none (no shadow).

14DataFlowNetDemo

This program presents the features of DataFlowNet (DFN). The graphs are defined in .xml files, which allows changes to be made to a running application without restarting it. The configuration is based primarily on interfaces such as the mouse. The application uses the defined .xml files to randomly change the colour and position of a sphere that follows the mouse pointer.

15VtkDemo

The program presents the integration of the VTK framework with ViSTA. It contains procedural creation of 3D objects (a cone and a pivot) by the VTK library.

16PhantomDemo

The program presents the integration of ViSTA with haptic devices. Note that the required OpenHaptics libraries are no longer available as open source.

17MsgPortDemo

The program consists of two projects, 17MsgPortAlice and 17MsgPortBob, that communicate with each other through the message port. First run 17MsgPortBob, which will wait for communication from 17MsgPortAlice. 17MsgPortBob receives events from 17MsgPortAlice and responds to them appropriately. Use the WASD or arrow keys in 17MsgPortAlice to move the camera.

18DebuggingToolsDemo

The program shows the use of the debugging tools in ViSTA. They provide coloured logs and can take a snapshot of the stack trace for selected functions.

20PythonDFNDemo

The program presents the integration of ViSTA with Python.

2. OpenSG 1.8

Sample programs illustrating the capabilities and use of the framework are provided with the libraries. The compiled and configured OpenSG 1.8 samples are located in the workspace\OpenSG\1_8\tutorials directory on the attached disc.

To run the sample applications you have to prepare a suitable configuration in Visual Studio:

• Include Directories: frameworks\opensg1_8_vc10\include;C:\pg\frameworks\opensg1_8_vc10\supportlibs\include;$(IncludePath)
• Library Directories: frameworks\opensg1_8_vc10\lib;C:\pg\frameworks\opensg1_8_vc10\supportlibs\lib;C:\pg\frameworks\supportlibs\lib;$(LibraryPath)
• Additional Dependencies: MSVCPRTD.lib;MSVCRTD.lib;winmm.lib;wsock32.lib;OSGBaseD.lib;OSGSystemD.lib;OSGWindowGLUTD.lib;glut32.lib;glu32.lib;OPENGL32.LIB;tif32.lib;libjpeg.lib;freeglut.lib;%(AdditionalDependencies)

01hello

The application shows the basic way to initialize and configure a scene using GLUT, how to create and add a simple object, and how to support camera movement with the mouse and keyboard input.

02move

The sample application shows the use of a transform node to animate displacement and rotation in the 3D scene. The program also uses the multithreading and thread synchronization built into the library.

03share

This application shows how to copy an object natively using instancing in order to save memory. It is based on creating and reusing a single node representing an object in space.

(The samples were prepared for Visual Studio 2012 x64.)


04hiertransform

The program shows the use of hierarchical transformations in the graph. As a result, objects can move jointly and interact with one another.

05geometry

This example shows how to create and manipulate simple geometries using nodes. In OpenSG, geometry data are stored in separate vectors that can be manipulated individually. Basic shapes are drawn directly in OpenGL and can be shaped freely, much as when calling glBegin().

06indexgeometry

The program presents the use of indices to save memory and reuse geometry. It also uses colouring based on data stored at the vertices.

07multiindexgeometry

Presented here is a method for mapping indices to different attributes by defining several indices per vertex.

08materials

The application shows the process of creating materials and textures loaded from a file and assigning them to objects. The SimpleMaterial node is simply an add-on over the OpenGL material layer.

09traverse

The sample application shows how to traverse the graph and modify its nodes. A function can be called just before entering a node or immediately after leaving it.

10loading

The program shows how to load scene files (VRML97, OBJ, OFF and RAW).

11picking

This application shows how to pick an object. We can use the space key to select the object under the pointer.

12ClusterServer

A cluster server running in multicast or SockPipeline mode.

13ClusterClient

A sample client for the cluster server.

14switch

Presents the use of a switch node to choose which nodes are rendered in the scene.


15attachments

This example illustrates how to attach data to any node. Several classes with additional information can be attached to each node. Such data can be of any type, from simple text to arbitrary classes.

16lights

The example shows the creation and animation of several light sources (point, directional and spot).

17deepClone

The application presents deep copying of scene cores with changed parameters in order to save memory and speed up rendering.

18opengl_slave

An example showing the use of direct OpenGL calls. However, this is not a very efficient solution, and larger objects can significantly slow down the entire application.

19LocalLights

The program demonstrates the creation of local light sources.

20MaterialSort

This application shows how to manually change the sorting order of materials in the scene.


21Shadows

The presentation shows real-time shadow rendering using shadow maps. Keys 1 and 2 switch between light sources.

22Shader

The program presents the use of a GLSL shader.

23videotexturebackground

The program shows the possibility of using an animated texture as the scene background, using the GL_ARB_texture_non_power_of_two extension. In this way we learn how to load an OpenGL extension.

24Shadows

Another presentation of real-time shadow generation, this time using the newer ShadowViewport class; the shadows are again based on shadow maps. The following shadow types are available: standard shadow mapping, perspective shadow mapping, dithered shadow mapping (soft shadows), PCF shadow mapping (soft shadows), PCSS shadow mapping (soft shadows) and variance shadow mapping (soft shadows). The example also implements a simple frames-per-second counter. Keys 8 to 0 toggle the light sources, keys 1 - 7 change the shadow type, and 'y', 'x' and 'c' change the shadow map size (512, 1024 and 2048).

25OcclusionCulling

The program implements several occlusion culling algorithms: stop-and-wait, hierarchical multi-frame and multi-frame. The stop-and-wait algorithm renders the scene in front-to-back order; for each object in the scene (except the nearest one) a bounding box is generated, and the object is rendered only if its bounding box is visible. Overall this is a relatively slow algorithm. The multi-frame algorithm renders the entire scene while keeping the depth buffer; the difference is that an object is rendered in the next frame only if its bounding box was visible. It is a really fast algorithm, but relatively rapid movements may cause some errors. The hierarchical multi-frame algorithm is an optimized version that reduces the number of occlusion tests by performing them hierarchically. The 'c' key turns occlusion culling on and off, while keys 1 to 3 change the occlusion mode: stop-and-wait, hierarchical multi-frame and multi-frame, respectively.

26RationalSurface

This program demonstrates the use of NURBS surfaces. In this type of geometry all surfaces are practically smooth, and the control points have their own weights. Keys 1 - 3 load different objects (teapot, torus and cylinder); these objects are tessellated, and loading takes a relatively long time. The 'z', 'x' and 'c' keys change the polygon mode to point, line and fill respectively, displaying only vertices, lines, or filled surfaces.

3. OpenSG 2.0

Sample applications are included on the disc. They are located in the directories workspace\OpenSG\2_0\tutorials, workspace\OpenSG\2_0\simple and workspace\OpenSG\2_0\advanced. I configured and compiled them under Visual Studio 2012.

01hello

The program shows the simplest scene in OpenSG. It includes camera manipulation with the mouse and access to statistics by pressing the 's' key.


02move

An extended example using a transform node to animate the object's rotation.

03share

The application shows how to create instances

of objects and display them in order to conserve

memory usage.

04hiertransform

The presentation performs transformations on the object graph.

05geometry

This application shows how to build and

transform the geometry of objects using geometry

nodes.

06indexgeometry

The program shows the use of indices to reuse geometry, and colours objects based on colour information stored at the vertices.


07multiindexgeometry

Uses index mapping to store several attribute indices for various purposes.

08materials

The application shows how to create materials

and textures from files.

09traverse

The application shows how to traverse all objects in the graph and make changes to their nodes.

10loading

The application presents loading of scene files; VRML97, OBJ, OFF and RAW are supported.

11picking

The program presents object picking. To do this, move the mouse over an object and press the spacebar.


attachments

Presented here is how to attach additional information to nodes.

clipplanecaps and clipplanecaps2

The applications show the use of clipping planes. Keys 1 and 2 toggle the planes, keys 3 and 4 the geometry, while 'w', 'a', 's' and 'd' control the position of the cube.

deepclone

The application illustrates a way of cloning

objects.

fbotexture

This application shows how to render a frame to a texture via an FBO (Frame Buffer Object) and then assign it to an object, resulting in an animated texture.


hiresimage

The demonstration presents several options for taking high-quality screenshots. One of them uses the GrabForeground object, while another uses an FBO texture. Captured images are not kept in memory; they are written to disk. Pressing key 1 takes a snapshot via GrabForeground, while key 2 takes it through the FBO. Images are saved as .ppm files with a resolution of about 4800x2400 (each one takes about 30 MB).

idbuffer

The application shows object identification by colour. The space key selects the object.

lights

The application demonstrates how to create and manipulate lights. The 'a' key turns on all the lights, while the 's', 'd' and 'p' keys switch between the spot, directional and point lights.

locallights

The application presents the use of a few local lights that illuminate only selected objects. There are 4 lights: red, green, blue and white.

materialsort

This application shows how to manually change the sorting order of materials rendered on objects.


occlusionculling

The program implements several occlusion culling algorithms: stop-and-wait, hierarchical multi-frame and multi-frame. The stop-and-wait algorithm renders the scene in front-to-back order; for each object in the scene (except the nearest one) a bounding box is generated, and the object is rendered only if its bounding box is visible. Overall this is a relatively slow algorithm. The multi-frame algorithm renders the entire scene while keeping the depth buffer; the difference is that an object is rendered in the next frame only if its bounding box was visible. It is a really fast algorithm, but relatively rapid movements may cause some errors. The hierarchical multi-frame algorithm is an optimized version that reduces the number of occlusion tests by performing them hierarchically. The 'c' key turns occlusion culling on and off.

openglcallback

The application illustrates a callback invoked during graph traversal, which allows native OpenGL code to be used. Unfortunately, this functionality does not work in a distributed environment.

shadows

The application shows dynamically generated shadows using shadow maps. The '1' and '2' keys toggle the lights.

sortlastclient A simple client for a cluster with applications

running on a distributed infrastructure.


switch

The application shows how to use a switch node to choose which items are displayed (keys '1' to '0' select individual items, 'a' renders all objects and 'n' hides them all).
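The behaviour of such a switch node can be sketched in plain C++ as follows. The class and method names below are illustrative, not the real OpenSG interface; a real switch node holds child scene graph nodes rather than strings.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Minimal sketch of a switch node: a group that exposes only the
// children whose visibility flag is enabled.
class SwitchNode {
    std::vector<std::pair<std::string, bool>> children_;
public:
    void addChild(const std::string& name, bool visible = true) {
        children_.push_back({name, visible});
    }
    void setSingleChildOn(std::size_t index) {   // like pressing '1'..'0'
        for (std::size_t i = 0; i < children_.size(); ++i)
            children_[i].second = (i == index);
    }
    void setAllChildrenOn()  { for (auto& c : children_) c.second = true;  } // 'a'
    void setAllChildrenOff() { for (auto& c : children_) c.second = false; } // 'n'
    std::vector<std::string> visibleChildren() const {
        std::vector<std::string> out;
        for (const auto& c : children_)
            if (c.second) out.push_back(c.first);
        return out;
    }
};
```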

videotexturebackground

This application shows how to set the

background of the scene as an animated texture.

16text_pixmap (tutorial)

The application shows how to create text labels in 3D space using a bitmap-based font; in this case OpenSG converts the text to a bitmap.

16text_vector (tutorial)

The application presents a way of generating 3D text.

deferredshading (advanced) This application shows how to implement your own shadows using GLSL shaders.


4. OpenSceneGraph 3

osganalysis

This application allows you to load a 3D scene and view basic information about it, such as the number of nodes and geometric objects. The keys '1' to '4' change the way the scene is manipulated: '1' selects trackball mode, where the left mouse button rotates the scene gently, the right button moves it closer or further away, and dragging with the scroll wheel pressed pans; '2' selects flight mode, where the scene moves constantly and the mouse determines the direction of flight; '3' selects drive mode, which differs from flight mode in that movement can be sped up or slowed down with the left and right mouse buttons; and '4' selects terrain mode, where the left mouse button rotates the scene, the right button moves it closer or further away, and dragging with the scroll wheel pressed pans. The 'f' key switches between full-screen and windowed modes, and 's' cycles through different statistics displays.

osgviewer

It is a simple file viewer. The 'w' key switches between display modes (faces, edges and vertices), 't' toggles textures, 'l' toggles additional lighting, 'f' toggles full-screen mode, and 's' shows additional statistics about the displayed scene. Use the space bar to return the camera to its default view.


osganimate This application shows how to create a path

animation of objects.


osganimationeasemotion

The application shows the use of different easing effects for animation along a path, allowing us to compare smooth, bouncing and other acceleration profiles. The easings used include: InOutQuadMotion, InOutCubicMotion, InOutQuartMotion, InOutBounceMotion, InOutElasticMotion, InOutSineMotion, InOutBackMotion, InOutCircMotion, InOutExpoMotion and Linear.

osganimationhardware

The application shows hardware-accelerated scaling of objects. It can also be run in software mode to compare the performance of the hardware and software paths.

osganimationmakepath

The application "draws" the path on which an animation is based.

Osganimationmorph

The application presents use of morphing

animation - the transformation of one object into

another.

Osganimationnode The application shows how to animate node

along the path. In addition, the material of the object

is animated as well.


Osganimationskinning

The application shows how to perform skinned animation.

Osganimationsolid

The application shows how to animate objects

based on key frames.

Osganimationtimeline

The application shows how to create skeletal

animation based on a time line.

Osganimationviewer

The application is a skeletal animation player.

You should run it with the filename as a parameter.

The interface has been made using osgWidget.

Osgatomiccounter This application shows how to render pixels in a specific order to create a 3D object.


Osgautocapture

This application shows how to take a snapshot

of the screen directly to the .jpg file.

Osgautotransform

The application demonstrates automatic transformations that keep objects oriented relative to the camera or screen.

Osgbillboard

This application shows how to create a billboard: a sprite that always faces the camera.

Osgblendequation

This application shows how rendered geometry is blended into the frame buffer. Use the arrow keys to switch between the available blend equations: FUNC_ADD, FUNC_SUBTRACT, FUNC_REVERSE_SUBTRACT, RGBA_MIN, RGBA_MAX, ALPHA_MIN, ALPHA_MAX and LOGIC_OP.

Osgcallback This application shows how to attach callback functions to scene graph nodes, including cull callbacks that exclude obscured elements from rendering.


Osgcamera

This application shows how to configure multiple cameras in a scene. Launched with the parameter '-1', it opens the scene in one window with several cameras; with '-2', it displays the same scene in many windows. In addition, the application supports several threading models: '-s' SingleThreaded, '-g' CullDrawThreadPerContext, '-d' DrawThreadPerContext and '-c' CullThreadPerCameraDrawThreadPerContext.

Osgcatch

This application is a complete game in which you catch letters falling from the sky. It demonstrates, within a complete game, how common scene objects are implemented and handled.

Osgclip

This application shows how to add a clip node, which "slices" the model's geometry and renders the cut as edges.

Osgcluster

The application shows a cluster implementation, with both a master and servers rendering the scene.


Osgcompositeviewer

This application shows how to create a scene consisting of several completely independent views, each of which can be manipulated separately.

Osgcomputeshaders

The application shows use of a compute shader

to render the scene. Requires at least OpenGL 4.3.

Osgcopy

The application creates a copy of the scene.

Osgcubemap

The application demonstrates how to create and

use cube-maps.

Osgdelaunay

The application demonstrates triangulation, which can be used to generate surfaces.

Osgdepthpeeling The application shows the use of depth peeling to render the scene as strokes. We can change the parameters and observe the changes in the scene. Rendering is, of course, performed by user-defined vertex and pixel shaders.

Osgdistortion

The application shows a so-called curved-mirror effect implemented using multiple cameras: only the middle part of the image is undistorted, while at the edges the image is stretched.

Osgdrawinstanced

This application shows how to render 1,024 separate instances of an object, which together form a scene showing the OSG logo. It is based on a vertex shader.

Osgfadetext

The application shows a text effect in which text fades depending on its distance from the viewer.

Osgfont

The application allows you to load a .ttf font and displays a list of its characters. When invoking it, specify the name of the .ttf font file and the size of the characters to render in the scene.

Osgforest The application presents several mechanisms for creating a forest. The methods used to generate the forest include billboards, double quads, MatrixTransform and Vertex/FragmentProgram.

Osgfpdepth

The application shows the use of a floating-point depth buffer to display the scene.

Osgfxbrowser

The application is a browser of osgFX effects such as Anisotropic Lighting, Bump Mapping, Cartoon, Outline, Scribe and Specular Highlight.

Osggameoflife

The application presents a ping-pong-style animation based on Conway's "Game of Life". It loads the image whose file name is given as a command-line parameter and generates the game board from that graphic.
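The rule the example animates can be sketched in plain C++ as a single generation step on a small grid. This is a CPU illustration of the cellular automaton only; the sample itself runs the logic against an imported image and renders it as a texture.

```cpp
#include <cassert>
#include <vector>

// One generation of Conway's "Game of Life": a live cell survives
// with 2 or 3 live neighbours, a dead cell becomes live with exactly 3.
using Grid = std::vector<std::vector<int>>;

Grid step(const Grid& g) {
    int rows = static_cast<int>(g.size());
    int cols = static_cast<int>(g[0].size());
    Grid next(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            int n = 0;  // count the eight neighbours inside the grid
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc)
                    if ((dr || dc) && r + dr >= 0 && r + dr < rows &&
                        c + dc >= 0 && c + dc < cols)
                        n += g[r + dr][c + dc];
            next[r][c] = g[r][c] ? (n == 2 || n == 3) : (n == 3);
        }
    return next;
}
```

For example, a horizontal "blinker" of three live cells becomes a vertical one after a single step.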

Osggeometry

The application shows how to create geometry based on the osg::Geometry class. Each geometry type is translated to the appropriate OpenGL primitives to obtain the best performance.

Osggeometryshaders This application shows how to create geometry

using the geometry shader.


Osggraphiccost

The application calculates the estimated cost of building and displaying the scene.

Osghangglide

The application presents the implementation of a custom motion manipulator for controlling the camera.

Osghud

The application shows the creation of a HUD

user interface.

Osgimagesequence

The application demonstrates how to create and use a sequence of images that are mapped onto an object and displayed in turn as an animated texture.

Osgimpostor

This application shows how to create a small

town scene.


Osgkeyboard

The application shows how to implement the keyboard as the program's input interface.

Osgkeyboardmouse

The application presents a mechanism for selecting objects using the mouse and keyboard.

Osgluncher

This application shows how to run external applications through interaction with scene elements (click an object in the scene to launch the selected application).

Osglight

This application shows how to add lights to the

scene.

Osglightpoint

This application shows how to use the osgSim::LightPointNode light node.

Osglogicop The application illustrates the use of boolean (logic) operations on 3D objects.


Osglogo

It is a scene presenting the OSG logo.

Osgmanipulator

The application shows the use of translation, rotation and scale manipulators on selected objects. To use a manipulator, hold down the Ctrl key and drag with the mouse.

Osgmotionblur

The application shows the addition of a motion blur effect to the camera, which blurs moving objects.

Osgmovie This application shows how to use a QuickTime video as a texture.

Osgmultiplerendertarget

The application presents the use of MRT (Multiple Render Targets) on a camera, which renders to up to 4 textures processed by one shader; the result is a single texture that is assigned to the displayed object.

Osgmultitexture The application shows the use of multi-texturing.


Osgmultitouch

This application shows how to implement a touch interface that allows you to manipulate the scene and interact with objects.

Osgmultiviewpagging

The application shows how to arrange the

objects in a complex view.

Osgoccluder

This application shows how to use occluders in the scene.

Osgocclusionquery

This application shows how to use occluders in a scene by using OcclusionQueryNode.

Osgoit The application shows the use of order-independent transparency (the transparent objects may also be shader-based) in a scene that also contains opaque objects.

Osgoutline

This application shows how to add an outline to a 3D object.

Osgpackdepthstencil

This application shows how to use a packed depth-stencil buffer.

Osgparametric

This application shows how to create an object based on a parametric curve using shaders. The object's shape parameters are animated.

Osgparticle

This application shows how to implement a variety of particle systems.

Osgparticleeffects This application shows how to implement effects such as fire, explosions or smoke based on a particle system.


Osgparticleshader

The application shows the implementation of a fountain based on a particle system using shaders.

Osgpick

This application shows how to pick objects, both on the HUD layer and in the 3D scene.

Osgplanets

The application shows an animated solar system. Use the 'e', 'm' and 's' keys to point the camera at the Earth, Moon and Sun respectively.

Osgpointsprite

The application shows the creation of a scene from points rendered as sprites.

Osgposter This application shows how to save high-resolution screenshots (e.g. 6400x6400), so-called "posters", to a file. Press the 'p' key to take a screenshot.

Osgprecipitation

The application visualizes precipitation as a point cloud; in this example, rain is added to the scene.

Osgprerender

The application presents the ability to render the scene to a texture and then assign that texture to an object in the scene. The rendered texture can be animated.

Osgprerendercubemap

The application presents the use of the OpenGL extension GL_ARB_shadow to pre-render a texture for the object.

Osgpresentation

Loads and displays a model.

Osgreflect

The application illustrates an implementation of reflections for 3D objects.

Osgrobot The application shows the use of a hierarchical scene structure. Use the 'w', 'a', 's', 'd' and 't', 'd', 'f', 'g' keys to rotate the individual robot arms.
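The hierarchical idea behind this example can be sketched with 2D forward kinematics: each joint's rotation is applied relative to its parent, so rotating a shoulder moves every segment further down the chain. The types below are illustrative stand-ins for the scene graph's transform nodes.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Each segment is (length, rotation relative to its parent joint).
// The accumulated angle models how child transforms inherit the
// rotations of every ancestor in the hierarchy.
Point endEffector(const std::vector<std::pair<double, double>>& segments) {
    double angle = 0.0;   // accumulated (hierarchical) rotation
    Point p{0.0, 0.0};
    for (const auto& s : segments) {
        angle += s.second;             // child inherits parent rotation
        p.x += s.first * std::cos(angle);
        p.y += s.first * std::sin(angle);
    }
    return p;
}
```

Two unit-length segments with the elbow bent 90 degrees place the end effector at (1, 1), exactly as a scene graph's matrix stack would.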

Osgscalarbar

This application shows how to create scale bars, both in 3D space and on the HUD. The generated bars are scaled accordingly.

Osgscribe

The application shows the use of the scribe effect, which renders the object's wireframe mesh.

Osgsequence

This application shows how to use the

sequences to display various scenes.

Osgshadercomposition

The application shows a method of composing shaders dynamically.

Osgshadergen This application shows how to generate shaders for rendering an object.


Osgshaders

The application shows a way to generate objects in shaders. Additionally, the example animates a texture.

Osgshaderterrain

The application demonstrates how to render the

terrain using shaders.

Osgshadow

This application shows how to create shadows whose position changes in real time.

Osgshape

The application demonstrates how to create

basic geometric solids available in OSG.

Osgshaderarray This application shows how to use array classes.


Osgsidebyside

The application shows the creation of an

additional camera in the scene, which shows the

same picture as the main camera.

Osgsimplegl3

This application shows how to use shaders to render a scene in OpenGL 3.

Osgsimpleshaders

The application presents a simple way to use

shaders to render the scene.

Osgsimulation

The application shows the aircraft flight around

the Earth.

Osgspacewarp The application shows the generation of points that fill a cube volume.


Osgspotlight

This application shows how to create a spot

light.

Osgteapot

This application shows how to create geometry

in the native OpenGL.

Osgtesselate

The application shows the use of different techniques for tessellating objects.

Osgtesselationshaders

The application shows how to tessellate an object in a tessellation shader.

Osgtext The application shows the use of the 2D text in

the scene.


Osgtext3d

The application shows the use of 3D text in the

scene.

Osgtexture1d

This application shows how to add a one-

dimensional texture to the object.

Osgtexture2d

This application shows how to add a two-dimensional texture to the object.

Osgtexture3d

This application shows how to add a three-

dimensional texture to the object.

Osgtexturerectangle The application shows the use of texture

TextureRectangle type.


Osgthirdpersonview

The application shows how to configure the camera to behave like a third-person view.

Osguniformbuffer

This application shows how to use a uniform buffer when rendering the scene with a shader.

Osgunittests

These are unit tests for selected parts of the

library.

Osgvertexattributes

This application shows how to pass vertex attributes from the application to a shader program.

Osgvertexprogram The application shows the use of the vertex

program using the OpenGL extension

"ARB_vertex_program".


Osgviewerglut

This application is an OSG scene viewer based on the GLUT library.

Osgvirtualprogram

The application shows the use of several shaders.

Osgvolume

The application shows volume rendering in space.

Osgwidgetaddremove

The application shows a dynamic menu in which new submenu entries can be added and removed.

Osgwidgetbox The application shows the addition of floating boxes (you can move them around the scene with the mouse), which are interactive user-interface elements.

Osgwidgetcanvas

The application shows the creation of a

separate part of the screen, which is designed for

the user interface.

Osgwidgetframe

The application shows how to create frames to

the user interface.

Osgwidgetinput

This application shows how to add an editable text field to the scene.

Osgwidgetlabel

The application shows how to add labels with

the description to the scene for the user interface.

Osgwidgetmenu This application shows how to add a menu to

the user interface.


Osgwidgetmessagebox

This application shows how to add a popup for

the user interface.

Osgwidgetnotebook

This application shows how to add windows

with tabs that display different information for the

user interface.

Osgwidgetscrolled

This application shows how to add a group element whose content can be scrolled with the mouse wheel in the user interface.

Osgwidgetshader

This application shows how to add an element

that is based on the shader for the user interface.

Osgwidgetstyled This application shows how to create a user interface element with its own graphic theme.


Osgwidgettable

This application shows how to add a table as

part of the user interface.

Osgwindows

This application shows how to configure

multiple windows for a single scene.

4.1. Sample applications based on the book

All of the following examples are based on the book OpenSceneGraph Beginner's Guide.

Hello World

The shortest possible Hello World application. It shows how easy working with this framework can be.

Monitoring counted objects

OpenSceneGraph includes its own reference-counted memory management. Unused objects are removed automatically, which greatly simplifies application development. This example shows the use of osg::ref_ptr smart pointers with objects that inherit from the osg::Referenced class.
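The intrusive reference counting that osg::ref_ptr and osg::Referenced provide can be sketched as follows. This is a deliberately simplified stand-in, not the real OSG classes: there is no thread safety and no observer-pointer support.

```cpp
#include <cassert>

// Base class carrying the reference count; objects delete themselves
// when the last reference is released, so they must live on the heap.
class Referenced {
    mutable int refCount_ = 0;
public:
    void ref() const   { ++refCount_; }
    void unref() const { if (--refCount_ == 0) delete this; }
    int referenceCount() const { return refCount_; }
protected:
    virtual ~Referenced() = default;   // destroy only via unref()
};

// Smart pointer that refs on acquire and unrefs on release.
template <typename T>
class ref_ptr {
    T* ptr_ = nullptr;
public:
    ref_ptr() = default;
    explicit ref_ptr(T* p) : ptr_(p) { if (ptr_) ptr_->ref(); }
    ref_ptr(const ref_ptr& o) : ptr_(o.ptr_) { if (ptr_) ptr_->ref(); }
    ~ref_ptr() { if (ptr_) ptr_->unref(); }
    ref_ptr& operator=(const ref_ptr& o) {
        if (o.ptr_) o.ptr_->ref();     // ref first: safe for self-assignment
        if (ptr_) ptr_->unref();
        ptr_ = o.ptr_;
        return *this;
    }
    T* get() const { return ptr_; }
};
```

When the last ref_ptr to an object goes out of scope, the object deletes itself, which is exactly the behaviour the example monitors.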


Parsing command line arguments

A sample application containing a mechanism for parsing command-line arguments. OSG is equipped with a parser that keeps default parameters encoded in the application and allows you to override them from the command line.

Log file

This example presents a custom implementation of a simple logger that saves the log to a file.

Creating simple objects

The application demonstrates how to create simple objects using the available classes. It additionally shows how to position and colour the created objects. This is an inefficient method, recommended mainly for creating temporary objects.

Colored quad

This example shows the creation of custom objects by specifying vertex, normal and colour values. This is an efficient way to create objects.

Octahedron

The application shows how to create geometry based on vertices and their indexes. Indexing vertices this way saves memory and thus increases the efficiency of the whole application.
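The saving can be made concrete with back-of-the-envelope arithmetic for an octahedron (6 unique vertices, 8 triangles). The sizes below assume 3 floats per vertex position and 16-bit indices, which are typical but not the only possible choices.

```cpp
#include <cassert>
#include <cstddef>

// Memory for the non-indexed case: every triangle corner stores a
// full vertex (3 floats).
std::size_t nonIndexedBytes(std::size_t triangles) {
    return triangles * 3 * 3 * sizeof(float);
}

// Memory for the indexed case: unique vertices are stored once and
// triangles reference them through a 16-bit index list.
std::size_t indexedBytes(std::size_t uniqueVertices, std::size_t triangles) {
    return uniqueVertices * 3 * sizeof(float)
         + triangles * 3 * sizeof(unsigned short);
}
```

For the octahedron this gives 288 bytes without indexing versus 120 bytes with it, and the gap widens as vertices are shared by more triangles.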

TesselatingPolygon

The application shows how to tessellate

geometry on the example of a simple polygon.


CollectingTriangleFaces

This application shows how to perform operations on the individual vertices of a geometry; in this example it only prints them to the console. These operations are performed once, before the image is rendered.

OpenGLTeapot

The application shows how to call OpenGL instructions directly. This example uses the OpenGL Utility Toolkit (GLUT)19 to generate a solid teapot.

TransformationsOfChildrenNodes

This application shows how to apply a translation matrix to move objects, and how to add an additional instance of an object.

SwitchingNodes

This application shows how to use the Switch node to swap the loaded objects.

LODCessna This application shows how to use LOD (level of detail). The geometry is simplified by calling the

19 More information can be found at: http://www.opengl.org/resources/libraries/glut/.


class osg::Simplifier. This happens just before the simulation starts, which considerably increases the start-up time: in the sample application, creating the two additional objects took about 3 minutes on my computer (and this happens every time the application is run).
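The selection rule behind a LOD node can be sketched in plain C++: each child carries a distance range, and the child whose range contains the eye distance is the one rendered. The types below are illustrative, not the real scene graph interface.

```cpp
#include <cassert>
#include <vector>

// A LOD child is visible when minRange <= eyeDistance < maxRange.
struct LodChild {
    float minRange, maxRange;
    int detailLevel;   // 0 = most detailed version
};

// Return the detail level to render for a given eye distance,
// or -1 when no child covers that distance.
int selectLod(const std::vector<LodChild>& children, float eyeDistance) {
    for (const auto& c : children)
        if (eyeDistance >= c.minRange && eyeDistance < c.maxRange)
            return c.detailLevel;
    return -1;
}
```

With ranges 0-50, 50-200 and 200-1000 the most detailed model is used close up and the simplified copies take over as the viewer moves away.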

LoadingModelAtRuntime

The application shows how to load a model into the scene asynchronously.

AnimatingSwitchNode

This application shows how to create a custom node whose traverse() method is executed every frame. The example uses this feature to animate a Switch node, toggling it every second to alternate between two object models.

CessnaStructure

This application shows how to perform an operation on all nodes of the scene graph. The sample application prints the graph structure (the list of nodes and their hierarchy) of the current scene to the console.
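The traversal the example performs can be sketched as a depth-first walk that emits one indented line per node. The `Node` struct below is a stand-in for the real scene graph node type.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy scene graph node: a name plus child nodes.
struct Node {
    std::string name;
    std::vector<Node> children;
};

// Depth-first walk collecting one line per node, indented by depth,
// the way the example prints the hierarchy to the console.
void collect(const Node& n, int depth, std::vector<std::string>& out) {
    out.push_back(std::string(2 * depth, ' ') + n.name);
    for (const auto& c : n.children)
        collect(c, depth + 1, out);
}
```

A group with a transform-geode branch and a second geode produces four lines whose indentation mirrors the hierarchy.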

PolygonModes

The application demonstrates how to activate a display mode that renders geometry as edges.

LightingTheGlider This application shows how to control lighting for selected objects. The object on the left is lit; the one on the right is not.


SimpleFog

This application shows how to add fog to the

scene.

LightSources

This application shows how to add directional

lights to the scene.

2DTextures

This application shows how to load a texture from a file, define its mapping to the vertices and assign it to the geometry.

TranslucentEffect

The application demonstrates blending materials based on the transparency (alpha) channel assigned to the object. In this example the texture is mixed with white at 50% transparency; in addition, the geometry itself is marked as transparent.

CartoonCow The application shows the use of GLSL vertex and fragment shaders to render the scene as a drawing consisting of only four colours.


BezierCurve

The application shows the use of vertex and geometry shaders to draw a Bezier curve. Unfortunately, on my computer a straight line was drawn instead; I suppose the required OpenGL extension is not supported by my graphics card.
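What the geometry shader evaluates per emitted vertex can be sketched on the CPU with de Casteljau's algorithm: a point on the Bezier curve for a parameter t. This is an illustration of the mathematics only, not the sample's shader code.

```cpp
#include <cassert>
#include <vector>

struct P { double x, y; };

// de Casteljau: repeatedly interpolate between neighbouring control
// points until a single point, the curve point at t, remains.
P bezier(std::vector<P> ctrl, double t) {
    for (std::size_t n = ctrl.size(); n > 1; --n)
        for (std::size_t i = 0; i + 1 < n; ++i)
            ctrl[i] = {ctrl[i].x + t * (ctrl[i + 1].x - ctrl[i].x),
                       ctrl[i].y + t * (ctrl[i + 1].y - ctrl[i].y)};
    return ctrl[0];
}
```

The GPU version emits many such points along t in [0, 1] to form the curve; if the geometry-shader stage is unsupported, only the raw control segment geometry appears, which matches the straight line observed above.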

HUDCamera

This application shows how to generate elements permanently fixed to the screen, a HUD, in the form of 3D geometry.

SimulationLoop

This application shows how to implement your own frame() loop (the equivalent of draw()) in order to implement application logic.

MoreScenesAtOneTime

The application illustrates how to create a scene displayed in several windows. Each window generates different objects, and all windows are independent of each other.

GlobalMultisampling This application shows how to set global settings for the scene; in the example, multisampling is set.


AnaglyphStereoScenes

This application shows how to set a stereo 3D mode for the scene. In this example, anaglyph mode is activated.

DrawingAircraft

This application shows how to render 3D

geometry as an animated texture assigned to

another object on the basis of the existing nodes.

SwitchingNodesAnimation

The application demonstrates implementing animation in a callback function associated with a node (in this case a Switch node).

DrawingGeometrydynamically

The application shows how to create a 3D model generated by code. In addition, an animation is applied that displaces the position of one of the geometry's vertices.

AnimationPath The application demonstrates how to animate along a path. In the sample application the path has the shape of a circle, and a plane flies around it in a loop.
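Building such a circular path amounts to generating evenly spaced keyframes on a circle, each pairing a time with a position. The sketch below shows just that arithmetic; a real animation path would also store a heading or rotation per keyframe.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

struct Key { double time, x, y; };

// Sample a closed circular loop of the given radius: `samples`
// intervals over one `period`, with the last keyframe returning to
// the start so the animation loops seamlessly.
std::vector<Key> circlePath(double radius, double period, int samples) {
    std::vector<Key> keys;
    for (int i = 0; i <= samples; ++i) {
        double u = static_cast<double>(i) / samples;   // 0..1 around the circle
        keys.push_back({u * period,
                        radius * std::cos(u * 2.0 * kPi),
                        radius * std::sin(u * 2.0 * kPi)});
    }
    return keys;
}
```

Eight segments over a 4-second period give nine keyframes whose first and last positions coincide, which is what makes the plane fly in a loop.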

FadingIn

This application shows how to apply an effect over the course of an animation. The example uses a fade-in that animates the transparency of the object's material from fully transparent to opaque.

FlashingSpotlight

This application shows how to create an animated texture in code.

AnimationChannels

The application demonstrates a powerful way of creating animations from keyed position, scale and rotation values. The sample application animates an aircraft flying in a circle.

CharacterSystem

This application shows how to handle skeletal animations (created in an external editor). It lists the available animations and plays them; the application is controlled from the command line. Skeletal animations can be loaded from .fbx or Collada DAE files (there is also a Cal3D plugin, osgCal2).

To view the list of available animations, run the application as:

CharacterSystem.exe -listall

To play an animation:

CharacterSystem.exe -animation Idle_Head_Scratch.01

DrivingCessna

This application shows how to handle keyboard input. In the example, the 'W', 'A', 'S' and 'D' keys rotate the aircraft model.

UserTimer

The application demonstrates how to add your own event to a node, handled at an interval of 100 ms. The handler prints to the console and toggles the displayed model using a Switch node.

SelectingGeometries

The application implements a mechanism for selecting items in the scene. In this example the selected object is outlined with a rectangle, which itself cannot be selected. To select an object, hold 'Ctrl' and click the object with the mouse.

TraitRenderingWindow

The application shows how to create a graphics context.

NewFileformat This is an example of a simple plug-in implementation for loading a new file type; the example shows how to load a text file.

Serializer

This application shows how to serialize and de-

serialize scene to/from file (read and write to the

native format of OSG).


BannersFacingYou

This application shows how to create banners: 2D graphics that are always turned toward the camera.

TextExample

This application shows how to add a text string as a regular part of a 2D HUD. The example uses a vector font, which gives high quality.

Text3DExample

This application shows how to generate the 3D

spatial texts based on .ttf font.

Fountain

This application shows how to create a fountain

based on the particle system.

Shadows

The application shows how to create shadows. It uses the technique of shadow casters and receivers for objects in the scene.

Outline This application shows how to add special effects to the scene. In this example a contour is added to the displayed object; it is based on the stencil buffer and uses multi-pass rendering (the contoured model is rendered several times).

Thread

This application shows how to use a separate thread to handle keyboard events. In the example, text entered in the console is generated as text in the scene.

ThreadingModels

The application implements various thread management models. The scene consists of 20 thousand objects. The sample program should be run from the console:

• ThreadingModels -Single: uses only one thread,

• ThreadingModels -useContext: culling and drawing for each view run in separate threads,

• ThreadingModels -useCamera: uses separate threads for each window and camera.

Occluders

The application presents the use of occluders, which restrict rendering of the scene, in this case consisting of one hundred thousand elements. Use the 's' key to turn on and cycle through statistics about the performance of the rendered scene.

SharingTextures

This application shows a mechanism for optimizing texture usage: objects in the scene that use the same textures are made to share single texture instances through multiple references. In this example, a scene of 1000 objects sharing 2 textures is optimized.

QuadTree The application generates terrain saved to the file quadtree.osg (in fact 86 files are saved, generated for the 4 levels of detail, LOD), divided into sectors containing parts of varying detail. The mechanism used to display the scene is a quad tree, which divides the terrain into rectangles and, depending on the distance from the viewer, generates each area at the right level of detail. To view the generated terrain (with areas marked in 4 different colours), run the application with a parameter, e.g. ParsingCommandLineArgument.exe -model quadtree.osg.
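The quad-tree refinement rule can be sketched as follows: a square tile is split into four children only while the viewer is close relative to the tile size, so nearby terrain receives more detail. The `Tile` struct and the distance threshold are illustrative assumptions, not the sample's actual parameters.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A square terrain tile: centre, edge length and quad-tree depth.
struct Tile { double cx, cy, size; int level; };

// Recursively split tiles that are large compared to their distance
// from the viewer; everything else becomes a leaf rendered at its
// current level of detail.
void refine(const Tile& t, double viewerX, double viewerY,
            int maxLevel, std::vector<Tile>& out) {
    double dist = std::hypot(t.cx - viewerX, t.cy - viewerY);
    if (t.level < maxLevel && dist < 2.0 * t.size) {
        double h = t.size / 2.0, q = t.size / 4.0;
        for (int dx = -1; dx <= 1; dx += 2)
            for (int dy = -1; dy <= 1; dy += 2)
                refine({t.cx + dx * q, t.cy + dy * q, h, t.level + 1},
                       viewerX, viewerY, maxLevel, out);
    } else {
        out.push_back(t);  // leaf: drawn at this detail level
    }
}
```

A viewer standing on the terrain causes full subdivision nearby, while a distant viewer sees the whole terrain as one coarse tile.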

5. Nvidia SceniX 7

GLUTMinimal

This application shows the simplest way to use the NVIDIA SceniX library. It contains the basic configuration of the scene, lighting and camera, and adds a few simple geometric objects. Its window is created using FreeGLUT.

GLUTMinimal_HUD

This application differs from the previous one only by the text added in the lower left corner of the screen. The text was added by creating a custom renderer inherited from nvgl::SceneRendererGL2; the whole scene is rendered with an additional layer (see the implementation of the overloaded doRender method).

FBOMinimal

This example shows how to use a Frame Buffer Object (FBO) and save its contents to an image file, as seen above. The example also shows the ability to render a stereo 3D image.

MFCMinimal This is a demonstration of connecting the library with the Windows MFC window framework.


WinMinimal

The application shows the use of a native window and the Windows message queue. Such a solution can integrate SceniX with engines already written natively for the operating system, and it seems to me that this may be the fastest implementation on this platform.

wxMinimal2
An example showing integration of SceniX with wxWidgets. It includes a top menu bar and a status bar, and supports basic buttons and manipulating the scene with the mouse.

QtMinimal
An example showing integration of a SceniX application with Qt. Although the name suggests the simplest configuration, this is not so. The example uses control of the scene camera with the keyboard and mouse, key handling, stereo 3D (you must run the application with the -stereo attribute) and ray tracing (ambient occlusion) activated with the -ray-tracing attribute. The 's' key saves screenshots to the files mono.png and stereo.pns, 'o' optimizes the scene, and 'x' switches between the mono and stereo 3D views.

QtManipulators
This application presents ways of manipulating the camera and object positions in the scene. Pressing the right mouse button gives access to a choice of operations: "Manipulate Red Object" lets you rotate the selected object with the mouse; "Use TrackballCameraManipulator" is the standard way of navigating the scene, where the left mouse button rotates the view, the wheel zooms the scene in or out, and pressing and holding the wheel moves the camera position; "Use CylindricalCameraManipulator" rotates the camera around the scene along the axis of a cylinder; "Use FlightCameraManipulator" switches to flight mode, where the left mouse button moves the camera through the scene; and "Use WalkCameraManipulator" switches to walking mode, which differs from the previous one in that camera rotation is permanently tied to mouse movement and you can additionally navigate the scene with the WASD keys.

QtProfileCommonShadow
The application shows the use of shadows in the scene. In this scene the shadows are coloured by the light that casts them, and the colours mix with each other.

QtForwardShadowMapping
The program shows the use of a simple CgFX shader to generate multi-pass shadows.

QtAmbientOcclusion
This program demonstrates the use of ambient occlusion practically in real time. On my computer, after changing the camera position, regenerating the image in Full HD resolution (1920x1080) took about 4 seconds, but the picture was generated progressively, so the effect was visible practically immediately, although initially at a much lower resolution; this allows fluid navigation of the 3D scene. This is achieved using NVIDIA OptiX: both the material and the ray tracing were prepared in RTFX (OptiX) and executed through CUDA.

QtIconTracer
The application demonstrates the capabilities of ray tracing. Rendering the scene shown in the attached picture, at Full HD 1920x1080 resolution, took about 10 seconds on my computer, but rendering proceeds progressively, so the effect was evident earlier and it was possible to interact with the object and the scene during that time. Compared to the previous demo application, the textures here are generated by code.

Viewer
This is a full application based on the Qt framework, with source code available, which allows you to view 3D graphic objects as well as whole scenes and their components. It has been described in the section on NVIDIA SceniX.


Attachment B - ViSTA

1. Downloading the framework

You can download the framework from the website http://sourceforge.net/projects/vistavrtoolkit.

In order to build the ViSTA framework you must download:

Vista-1.14.0-win32-x64.vc10_binaries.zip,

Vista-1.14.0-src.zip,

VistaCMakeCommon.zip,

supportlibs_OpenSG-freeglut_win32-x64.vc11.zip,

Vista-1.14-documentation.zip (optional).

For the purpose of compiling an application using the ViSTA framework in its pure form (i.e. without the additional modules, whose use is described in the section on compiling the sample programs provided in the ViSTA package), the following libraries and tools were collected:

cmake-2.8.12.1-win32-x86.exe,

Vista-1.14.0-src.zip,

supportlibs_OpenSG-freeglut_win32-x64.vc11.zip,

VistaCMakeCommon.zip,

Vista-1.14-documentation.zip.
Install CMake into the folder $WORKSPACE\tools\cmake_2_8.

2. Preparing the compilation

Target configuration: Windows 8.1 x64, Visual Studio 2012

To compile ViSTA you will need the following tools and libraries:

CMake 2.8 (http://www.cmake.org/cmake/resources/software.html),

OpenSG 1.8 (http://sourceforge.net/projects/vistavrtoolkit/ or http://sourceforge.net/projects/opensg/files/OldReleases/1.8.0/),

Freeglut 2.6 (http://sourceforge.net/projects/vistavrtoolkit/).

$WORKSPACE specifies the root directory of the entire project, which includes the libraries (in my case the folder "c:\pg"), tools, the relevant projects and the work files used to create this thesis.

To the $WORKSPACE\libraries\ folder you have to copy the libraries from the following directories:

$WORKSPACE\frameworks\vista1_14\lib,

$WORKSPACE\frameworks\vista1_14\lib\DriverPlugins,

$WORKSPACE\frameworks\freeglut2_6\lib\,

$WORKSPACE\frameworks\opensg1_8\lib\.

You can use the following keys while running an application:
$ - frustum culling on/off (OpenSG)
% - occlusion culling on/off (OpenSG)
& - BBox display on/off (OpenSG)
* - display additional information about ClusterMode
? - display help
C - cursor on/off
D - display additional information about DisplayManager
E - display additional information about EventManager
F - display framerate
G - display additional information about GraphicManager
I - display additional information about application profiling
V - v-sync on/off
q - exit the application
CTRL+UP - increase the eye distance in x by 0.001
CTRL+DOWN - decrease the eye distance in x by 0.001
CTRL+F5 - Eyetester on/off
ALT+ENTER - toggle full-screen mode

3. Setting up environment variables

Add the following paths to the PATH environment variable:

$WORKSPACE\tools\cmake_2_8\bin,

$WORKSPACE\libraries.

Add a new environment variable:

VISTA_CMAKE_COMMON: $WORKSPACE\frameworks\vista1_14_cmake_common\.

4. Preparing the project for Visual Studio 2012

The following describes how you can recompile ViSTA.

Run the CMake GUI (cmake-gui.exe) and set up the compilation as follows:
Where is the source code: $WORKSPACE/frameworks/vista1_14_src/VistaCoreLibs/
Where to build the binaries: $WORKSPACE/frameworks/vista1_14

Image. 0.1. Setting the project location

Press the "Configure" button. In the window that appears, specify the generator for this project: Visual Studio 11 Win64 - version for Visual Studio 2012 x64 (Visual Studio 12 Win64 - version for Visual Studio 2013 x64). Choose the option "Use default native compilers" and confirm by pressing the "Finish" button.


Image. 0.2. Choosing the right compiler in CMake

For compilation I used the MSVC 17.0.60610.1 compiler.

CMake configuration:
OPENSG_ROOT_DIR: $WORKSPACE/frameworks/opensg1_8
GLUT_LIBRARIES: $WORKSPACE/frameworks/freeglut2_6/lib/freeglut.lib
GLUT_ROOT_DIR: $WORKSPACE/frameworks/freeglut2_6

Image. 0.3. FreeGLUT setup

You should also select the Kernel and GLUT projects, which will be used later:

VISTACORELIBS_BUILD_KERNEL,

VISTACORELIBS_BUILD_KERNELOPENSGEXT,

VISTACORELIBS_BUILD_WINDOWIMP_GLUT.

By default, the kernel is not included in the compilation because it requires additional preparation and configuration.

This is the default configuration; if necessary, you can configure other options here. For example, I additionally chose the VISTADRIVERS_BUILD_3DSPACENAVIGATOR option to generate a controller for the 3D Navigator (Vista3DSpaceNavigator).

Press the "Configure" button once again. A successful configuration is confirmed by the message "Configuring done".

In the event of an error you will get an error message in the console, and on that basis you should correct the CMake configuration.


Then press the "Generate" button. If everything is properly configured, a solution for Visual Studio should be generated in the $WORKSPACE/frameworks/vista1_14 folder.

Warning: this project references the source files located in $WORKSPACE/frameworks/vista1_14_src/VistaCoreLibs/.

Correct generation of the project ends with the message "Generating done".

At this point you can close CMake. Then open the newly prepared solution in Visual Studio 2012 through the file $WORKSPACE\frameworks\vista1_14\VistaCoreLibs.sln. Now you can build the solution: BUILD/Build Solution (F7) - this will run the compilation configured by CMake.

Warning: if at this point you receive the error message "fatal error C1128: number of sections exceeded object file format limit : compile with /bigobj", you should set an additional option in the project in which the problem occurred (in my case it was the VistaDataFlowNet project) by adding "/bigobj" to the compiler options:

Image. 0.4. Adding the /bigobj option to the compiler options

Correct compilation should complete without errors.

To facilitate further work with the configuration of the prepared projects, copy the generated libraries from the $WORKSPACE\frameworks\vista1_14\lib\ directory to $WORKSPACE\libraries\.


This step is not mandatory; however, the rest of this documentation will be based on $WORKSPACE\libraries\. If you decide to skip this step, you will need to remember to use the actual paths to the libraries.

The prepared ViSTA solution contains the following projects:

Doxygen,

VistaAspects,

VistaBase,

VistaDataFlowNet,

VistaDeviceDriversBase,

VistaInterProcComm,

VistaKernel,

VistaKernelOpenSGExt,

VistaMath,

VistaTools,

ALL_BUILD,

CMakePredefinedTargets,

and projects for drivers:

Vista3DSpaceNavigatorDriver,

Vista3DSpaceNavigatorPlugin,

Vista3DSpaceNavigatorTranscoder,

VistaCyberGloveDriver,

VistaCyberGlovePlugin,

VistaCyberGloveTranscoder,

VistaDTrackDriver,

VistaDTrackPlugin,

VistaFastrakTranscoder,

VistaGlutJoystickDriver,

VistaGlutJoystickPlugin,

VistaGlutJoystickTranscoder,

VistaGlutKeyboardDriver,

VistaGlutKeyboardPlugin,

VistaGlutKeyboardTranscoder,

VistaGlutMouseDriver,

VistaGlutMousePlugin,

VistaGlutMouseTranscoder,

VistaHapticDeviceEmulatorDriver,

VistaHapticDeviceEmulatorPlugin,

VistaHapticDeviceEmulatorTranscoder,

VistaHIDDriver,

VistaHIDPlugin,

VistaHIDTranscoder,

VistaRManDriver,

VistaRManTranscoder,

VistaMIDIDriver,

VistaMIDIPlugin,

VistaMIDITranscoder,

VistaPhantomServerDriver,

VistaPhantomServerPlugin,

VistaPhantomserverTranscoder,

VistaSpaceMouseDriver,

VistaSpaceMousePlugin,

VistaSpaceMouseTranscoder.

After compiling we get the following libraries (they are in the directory: $WORKSPACE\frameworks\vista1_14\lib\):

VistaAspectsD.lib,

VistaBaseD.lib,


VistaDataFlowNetD.lib,

VistaDeviceDriversBaseD.lib,

VistaInterProcCommD.lib,

VistaKernelD.lib,

VistaKernelOpenSGExtD.lib,

VistaMathD.lib,

VistaToolsD.lib.

and libraries and device drivers supported by ViSTA:

Vista3DCSpaceNavigatorDriverD.lib,

Vista3DCSpaceNavigatorPluginD.lib,

Vista3DCSpaceNavigatorTranscoderD.lib,

VistaCyberGloveDriverD.lib,

VistaCyberGlovePluginD.lib,

VistaCyberGloveTranscoderD.lib,

VistaDTrackDriverD.lib,

VistaDTrackPluginD.lib,

VistaDTrackTranscoderD.lib,

VistaFastrakDriverD.lib,

VistaFastrakPluginD.lib,

VistaFastrakTranscoderD.lib,

VistaGlutJoystickDriverD.lib,

VistaGlutJoystickPluginD.lib,

VistaGlutJoystickTranscoderD.lib,

VistaGlutKeyboardDriverD.lib,

VistaGlutKeyboardPluginD.lib,

VistaGlutKeyboardTranscoderD.lib,

VistaGlutMouseDriverD.lib,

VistaGlutMousePluginD.lib,

VistaGlutMouseTranscoderD.lib,

VistaHapticDeviceEmulatorDriverD.lib,

VistaHapticDeviceEmulatorPluginD.lib,

VistaHapticDeviceEmulatorTranscoderD.lib,

VistaHIDDriverD.lib,

VistaHIDPluginD.lib,

VistaHIDTranscoderD.lib,

VistaIRManDriverD.lib,

VistaIRManPluginD.lib,

VistaIRManTranscoderD.lib,

VistaMIDIDriverD.lib,

VistaMIDIPluginD.lib,

VistaMIDITranscoderD.lib,

VistaPhantomServerDriverD.lib,

VistaPhantomServerPluginD.lib,

VistaPhantomServerTranscoderD.lib,

VistaSpaceMouseDriverD.lib,

VistaSpaceMousePluginD.lib,

VistaSpaceMouseTranscoderD.lib.

All the libraries generated this way are also available as dynamic link libraries.

Verification of the compilation of all the libraries included in the ViSTA package: compiling all the projects and libraries prepared this way should succeed - the Output window in Visual Studio should show a message that the compilation was successful: "========== Build All: 50 succeeded, 0 failed, 2 skipped ==========" (the number of projects may differ depending on the selected configuration).

Further verification can be carried out, for example, by compiling and running the demonstration programs included with ViSTA, as described later in this work.


5. Libraries required by the sample application

Target configuration: Windows 8.1 x64, Visual Studio 2012.

To compile the ViSTA sample programs you need the following tools and libraries:

ZeroMQ 2.2.0 (http://zeromq.org/area:download - bin msi),

VTK 5.6.1 (http://www.vtk.org/VTK/resources/software.html),

Boost 1.50 (http://sourceforge.net/projects/boost/files/boost/1.50.0/),

Python 3.4 (http://www.python.org/download/releases/3.4.0/).

6. Compilation of the sample applications

Download all the necessary tools and libraries.

For the purpose of compiling the sample programs shipped with ViSTA, add the following libraries to those required by the ViSTA framework itself:

ZeroMQ-2.2.0~miru1.0-win64.exe,

vtk-5.6.1.zip,

boost_1_50_0.zip,

python-3.4.0b2.amd64.msi.

Set the required environment variables. Add new environment variables:
VISTA_CMAKE_COMMON: $WORKSPACE\frameworks\vista1_14_cmake_common\
VISTACORELIBS_DRIVER_PLUGIN_DIRS: $WORKSPACE\libraries
The latter variable points to the libraries copied from $WORKSPACE\frameworks\vista1_14\lib\DriverPlugins.

The projects generated in the $WORKSPACE\workspace\vista\* folders are based on the source code located in the $WORKSPACE\frameworks\vista1_14_src\VistaDemo folder.

Preparation of the solution and projects for Visual Studio 2012 using CMake: below I define a template procedure for compiling all the sample programs included with ViSTA (they are in $WORKSPACE\frameworks\vista1_14_src\VistaDemo\). For projects that require additional steps, I give that information in this manual.

For each project set the following parameters in CMake:
Where is the source code: $WORKSPACE/frameworks/vista1_14_src/VistaDemo/XXX
Where to build the binaries: $WORKSPACE/workspace/vista/XXX
where XXX is the name of a sample application provided by ViSTA (possible values: 01KeyboardDemo, 02GeometryDemo, 03TextDemo, 04LoadDemo, 05InteractionDemo, 06CameraControlDemo, 07OverlayDemo, 09EventDemo, 10DisplayDemo, 11OGLDemo, 12IntentionSelectDemo, 13KernelOpenSGExtDemo, 14DataFlowNetDemo, 15VtkDemo, 16PhantomDemo, 17MsgPortDemo, 18DebuggingToolsDemo, 20PythonDFNDemo).


Image. 0.1. Setting the source and destination of the project

3. Press the "Configure" button.
4. Choose the compiler in the window that appears. Specify the generator for this project: Visual Studio 11 Win64 - version for Visual Studio 2012 x64 (Visual Studio 12 Win64 - version for Visual Studio 2013 x64). Choose the option "Use default native compilers" and confirm by pressing the "Finish" button.

Image. 0.2. Selecting the right compiler

5. Set up CMake as:
VistaCoreLibs_DIR: $WORKSPACE/frameworks/vista1_14/cmake
OPENSG_ROOT_DIR: $WORKSPACE/frameworks/opensg1_8
GLUT_LIBRARIES: $WORKSPACE/frameworks/freeglut2_6/lib/freeglut.lib
GLUT_ROOT_DIR: $WORKSPACE/frameworks/freeglut2_6

15VtkDemo:
VTK_DIR: $WORKSPACE/frameworks/vtk5_61
VTK_ROOT_DIR: $WORKSPACE/frameworks/vtk5_61_src

20PythonDFNDemo:
BOOST_ROOT_DIR: $WORKSPACE/frameworks/boost_1_50_0_src
Boost_INCLUDE_DIR: $WORKSPACE/frameworks/boost_1_50_0_src
Boost_LIBRARY_DIR: $WORKSPACE/libraries
Boost_PYTHON_LIBRARY_DEBUG: $WORKSPACE/libraries/libboost_python-vc110-mt-gd-1_50.lib
Boost_SYSTEM_LIBRARY_DEBUG: $WORKSPACE/libraries/libboost_system-vc110-mt-gd-1_50.lib
Boost_THREAD_LIBRARY_DEBUG: $WORKSPACE/libraries/libboost_thread-vc110-mt-gd-1_50.lib
PYTHON_INCLUDE_DIR: $WORKSPACE/frameworks/python34/include
PYTHON_LIBRARY: $WORKSPACE/libraries/python34.lib

This is the default configuration. If necessary, you can configure other parameters.

6. Press the "Configure" button once again. A successful configuration ends with the "Configuring done" message in the console. In the event of an error you will get an error message in the console, and on that basis you should correct the CMake configuration.
7. Press the "Generate" button. If everything is properly configured, a solution for the example program should be generated in the directory $WORKSPACE/workspace/vista/XXX.
Warning: the projects generated in the $WORKSPACE\workspace\vista\* folders are based on the source code located in the $WORKSPACE\frameworks\vista1_14_src\VistaDemo folder; "XXX" is the name of the sample application being configured.
Successful generation of the solution is confirmed by the "Generating done" message in the console log.
8. Close CMake.
9. Open the generated solution in Visual Studio 2012 through the file $WORKSPACE\workspace\vista\XXX\XXX.sln, where "XXX" is the name of the sample application being configured.
10. Build the solution: BUILD/Build Solution (F7). This will run the compilation configured by CMake.

7. Configuring the sample applications

To run the sample programs supplied with ViSTA, you must copy the two folders configfiles and data from the directory $WORKSPACE\frameworks\vista1_14_src\VistaDemo\ to the directory $WORKSPACE\workspace\vista.

Additional configuration required to run particular sample programs:


05InteractionDemo
You should configure the network connection by appending the following parameters to the [SYSTEM] section of the file $WORKSPACE\workspace\vista\configfiles\vista.ini (this is the default configuration for the local computer):
MSGPORT = TRUE
MSGPORTPORT = 6666
MSGPORTIP = 127.0.0.1

10DisplayDemo
This example shows the use of multiple monitors, hence it has a different set of configuration files. A sample configuration is in the directory $WORKSPACE\frameworks\vista1_14_src\VistaDemo\10DisplayDemo\configfiles\*, which should be copied to the directory $WORKSPACE\workspace\vista\10DisplayDemo\configfiles\*.
In addition, to remove numerous warnings in the console, remove the initialization of the device by removing the MOUSEX_MOVABLE value from the INTERACTIONCONTEXTS field in the file $WORKSPACE\workspace\vista\10DisplayDemo\configfiles\vista.ini.

12IntentionSelectDemo and 14DataFlowNetDemo
These examples use custom configuration files. Sample configurations are located in the directories $WORKSPACE\frameworks\vista1_14_src\VistaDemo\[12IntentionSelectDemo|14DataFlowNetDemo]\configfiles\*, which you should copy into the directories $WORKSPACE\workspace\vista\[12IntentionSelectDemo|14DataFlowNetDemo]\configfiles\*.

15VtkDemo
The sample program 15VtkDemo additionally requires the VTK library. After downloading the source files, a solution must be prepared using CMake:
Run CMake ("cmake-gui.exe"). Set the locations:
Where is the source code: $WORKSPACE/frameworks/vtk5_61_src
Where to build the binaries: $WORKSPACE/frameworks/vtk5_61
Press the "Configure" button.
Set the following values:
VTK_DIR: $WORKSPACE/frameworks/vtk5_61
VTK_ROOT_DIR: $WORKSPACE/frameworks/vtk5_61_src


In the window that appears, specify the generator for this project: Visual Studio 11 Win64 - version for Visual Studio 2012 x64 (Visual Studio 12 Win64 - version for Visual Studio 2013 x64). Choose the option "Use default native compilers" and confirm by pressing the "Finish" button.

A successful configuration finishes with the "Configuring done" message. In the event of an error you will get an error message in the console, and on that basis you should correct the CMake configuration.

Press the "Generate" button. If everything is properly configured, a solution for the example program should be generated in the directory $WORKSPACE/frameworks/vtk5_61.

Warning: these projects reference the source in $WORKSPACE/frameworks/vtk5_61_src.

Successful generation of the solution is confirmed by the "Generating done" message.

Close CMake.

Then open the newly prepared solution in Visual Studio 2012 through the file $WORKSPACE\frameworks\vtk5_61\VTK.sln.

Preparing VTK to compile under Visual Studio 2012:
Warning: VS2012 introduced changes to the compiler for some standard features, such as std::make_pair, so that they are compatible with C++11. The easiest way is to use the VS10 compiler, which is compatible with VTK version 5.6.1. However, to compile VTK in VS2012 you should make changes in the following files:
- $WORKSPACE/frameworks/vtk5_61_src/Rendering/vtkMapArrayValues.cxx: replace vtkstd::make_pair< vtkVariant, vtkVariant >(from, to) with vtkstd::make_pair(from, to);
- $WORKSPACE/frameworks/vtk5_61_src/Infovis/vtkAdjacencyMatrixToEdgeTable.cxx: add the "functional" header by writing the line "#include <functional>".

Build the solution: BUILD/Build Solution (F7). This will run the compilation configured by CMake. The libraries generated this way, located in the directory $WORKSPACE\frameworks\vtk5_61\bin\Debug, are:

MapReduceMPI.lib,

mpistubs.lib,

vtkalglib.lib,

vtkCharts.lib,

vtkCommon.lib,

vtkDICOMParser.lib,

vtkexoIIc.lib,

vtkexpat.lib,

vtkFiltering.lib,

vtkfreetype.lib,

vtkftgl.lib,

vtkGenericFiltering.lib,

vtkGeovis.lib,


vtkGraphics.lib,

vtkHybrid.lib,

vtkImaging.lib,

vtkInfovis.lib,

vtkIO.lib,

vtkjpeg.lib,

vtklibxml2.lib,

vtkmetaio.lib,

vtkNetCDF.lib,

vtkNetCDF_cxx.lib,

vtkpng.lib,

vtkproj4.lib,

vtkRendering.lib,

vtksqlite.lib,

vtksys.lib,

vtktiff.lib,

vtkverdict.lib,

vtkViews.lib,

vtkVolumeRendering.lib,

vtkWidgets.lib,

vtkzlib.lib.

I suggest copying them into the directory $WORKSPACE\libraries.

17MsgPortDemo
This example uses custom configuration files. Sample configurations are located in the directories:
$WORKSPACE\frameworks\vista1_14_src\VistaDemo\17MsgPortDemo\Alice_shellDemo\configfiles\*
$WORKSPACE\frameworks\vista1_14_src\VistaDemo\17MsgPortDemo\Bob_ApplicationDemo\configfiles\*
which should be copied to the directories:
$WORKSPACE\workspace\vista\17MsgPortDemo\Alice_shellDemo\configfiles\*
$WORKSPACE\workspace\vista\17MsgPortDemo\Bob_ApplicationDemo\configfiles\*
Additionally, in the Bob_ApplicationDemo project, in the file bob.cpp, you should change "IVistaNode *pNode = pSG->LoadNode( "hippo.ac");" to "IVistaNode *pNode = pSG->LoadNode( "../../data/hippo.ac");".

20PythonDFNDemo
To run this sample, you must install Python and compile the Boost library:
Run $WORKSPACE\frameworks\boost_1_50_0_src\bootstrap.bat.
Configure the build for Visual Studio 2012 by adding the line "using msvc : 11.0 ;" to the file $WORKSPACE\frameworks\boost_1_50_0_src\tools\build\v2\user-config.jam.
Then run the Boost library compilation with the command .\b2 from the directory $WORKSPACE\frameworks\boost_1_50_0_src.

I suggest copying the libraries generated this way from the directory $WORKSPACE\frameworks\boost_1_50_0_src\stage\lib\, together with the libraries from the Python 3.4 libs directory, into $WORKSPACE\libraries.

8. Manually creating a project in Visual Studio 2012

Sample 3D application. Here I provide a simple way to create a basic 3D scene based on the ViSTA framework that no longer has any relationship with CMake; its compilation and configuration are based solely on Visual Studio 2012.

To do this, create a new blank project ("C++ Empty Project") and configure it according to the following scheme:

Configuration Properties:
General:
Use of MFC: Use Standard Windows Libraries,
Character Set: Use Multi-Byte Character Set.
Debugging:
Environment: PATH=$WORKSPACE/frameworks/vista1_14/lib;$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins;$WORKSPACE/frameworks/opensg1_8/lib;$WORKSPACE/frameworks/freeglut2_6/lib;%PATH%

C/C++:
General:
Additional Include Directories:
$WORKSPACE/frameworks/vista1_14/include;
$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins;
$WORKSPACE/frameworks/opensg1_8/include;
$WORKSPACE/frameworks/freeglut2_6/include;
$WORKSPACE/frameworks/zeromq2_20/include;
%(AdditionalIncludeDirectories);
Debug Information Format: Program Database,
Warning Level: Level3,
Multi-processor Compilation: Yes.

Optimization:

Optimization: Disabled,

Inline Function Expansion: Disabled.

Preprocessor: Preprocessor Definitions:

WIN32,

_WINDOWS,

_DEBUG,

DEBUG,

_CRT_SECURE_NO_WARNINGS,

OSG_WITH_GLUT,

OSG_WITH_GIF,

OSG_WITH_TIF,


OSG_WITH_JPG,

OSG_BUILD_DLL,

_OSG_HAVE_CONFIGURED_H_.

Code Generation:

Enable String Pooling: Yes,

Enable C++ Exceptions: Yes,

Basic Runtime Checks: Both,

Runtime Library: Multi-threaded Debug DLL,

Enable Enhanced Instruction Set: Streaming SIMD Extensions 2. Language:

Enable Run-Time Type Information: Yes. Precompiled Header:

Precompiled Header: Not Using Precompiled Header. Output Files:

ASM List Location: Debug. Advanced:

Compile As: Compile as C++ Code,

Disable Specific Warnings: 4251;4275;4503.

Command Line: /I"C:/pg/frameworks/vista1_14_vc10/include" /I"$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins" /I"$WORKSPACE/frameworks/opensg1_8/include" /I"$WORKSPACE/frameworks/freeglut2_6/include" /I"$WORKSPACE/frameworks/zeromq2_20/include" /Zi /nologo /W3 /WX- /MP /Od /Ob0 /Oy- /D "WIN32" /D "_WINDOWS" /D "_DEBUG" /D "DEBUG" /D "_CRT_SECURE_NO_WARNINGS" /D "OSG_WITH_GLUT" /D "OSG_WITH_GIF" /D "OSG_WITH_TIF" /D "OSG_WITH_JPG" /D "OSG_BUILD_DLL" /D "_OSG_HAVE_CONFIGURED_H_" /D "CMAKE_INTDIR=\"Debug\"" /D "_MBCS" /GF /Gm- /EHsc /RTC1 /MDd /GS /arch:SSE2 /fp:precise /Zc:wchar_t /Zc:forScope /GR /Fp"plik.dir\Debug\plik.pch" /Fa"Debug" /Fo"plik.dir\Debug\" /Fd"$WORKSPACE/workspace/vista/proj/bin/plik.pdb" /Gd /TP /wd"4251" /wd"4275" /wd"4503" /analyze- /errorReport:queue

Linker:
General:
Enable Incremental Linking: Yes,
Additional Library Directories:
$WORKSPACE/frameworks/vista1_14/lib;
$WORKSPACE/frameworks/vista1_14/lib/$(Configuration);
$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins;
$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins/$(Configuration);
$WORKSPACE/frameworks/opensg1_8/lib;
$WORKSPACE/frameworks/opensg1_8/lib/$(Configuration);
$WORKSPACE/frameworks/freeglut2_6/lib;
$WORKSPACE/frameworks/freeglut2_6/lib/$(Configuration);
%(AdditionalLibraryDirectories);
Link Library Dependencies: No.

Input:

Additional Dependencies:

kernel32.lib,


user32.lib,

gdi32.lib,

winspool.lib,

shell32.lib,

ole32.lib,

oleaut32.lib,

uuid.lib,

comdlg32.lib,

advapi32.lib,

VistaKernelD.lib,

VistaDataFlowNetD.lib,

VistaDeviceDriversBaseD.lib,

VistaToolsD.lib,

VistaMathD.lib,

VistaInterProcCommD.lib,

VistaAspectsD.lib,

VistaBaseD.lib,

glu32.lib,

opengl32.lib,

OSGWindowGLUTD.lib,

OSGSystemD.lib,

OSGBaseD.lib,

C:\pg\frameworks\freeglut2_6\lib\freeglutd.lib,

libzmq.lib.
Manifest File:
Generate Manifest: Yes.
Debugging:
Generate Debug Info: Yes.
System:
Subsystem: Console.
Advanced:
Import Library: C:/pg/workspace/vista/proj/bin/plik.lib.
Command Line: /OUT:"$WORKSPACE\workspace\vista\proj\bin\plikD.exe" /INCREMENTAL /NOLOGO /LIBPATH:"$WORKSPACE/frameworks/vista1_14/lib" /LIBPATH:"$WORKSPACE/frameworks/vista1_14/lib/Debug" /LIBPATH:"$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins" /LIBPATH:"$WORKSPACE/frameworks/vista1_14/lib/DriverPlugins/Debug" /LIBPATH:"$WORKSPACE/frameworks/opensg1_8/lib" /LIBPATH:"$WORKSPACE/frameworks/opensg1_8/lib/Debug" /LIBPATH:"$WORKSPACE/frameworks/freeglut2_6/lib" /LIBPATH:"$WORKSPACE/frameworks/freeglut2_6/lib/Debug" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "comdlg32.lib" "advapi32.lib" "VistaKernelD.lib" "VistaDataFlowNetD.lib" "VistaDeviceDriversBaseD.lib" "VistaToolsD.lib" "VistaMathD.lib" "VistaInterProcCommD.lib" "VistaAspectsD.lib" "VistaBaseD.lib" "glu32.lib" "opengl32.lib" "OSGWindowGLUTD.lib" "OSGSystemD.lib" "OSGBaseD.lib" "$WORKSPACE\frameworks\freeglut2_6\lib\freeglutd.lib" "libzmq.lib" /MANIFEST /ManifestFile:"plik.dir\Debug\plik.exe.intermediate.manifest" /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"$WORKSPACE/workspace/vista/proj/bin/plik.pdb" /SUBSYSTEM:CONSOLE /PGD:"$WORKSPACE\workspace\vista\proj\bin\plik.pgd" /TLBID:1 /DYNAMICBASE /NXCOMPAT /IMPLIB:"$WORKSPACE/workspace/vista/proj/bin/plik.lib" /MACHINE:X86 /ERRORREPORT:QUEUE

Standard settings:
Solution:
configfiles - configuration files,
data - shared assets.
To the solution the folder "configfiles" should be added, containing the following files (you can copy them, among others, from the demo applications shipped with the ViSTA source code):
\configfiles\display_cave.ini,

\configfiles\display_desktop.ini,

\configfiles\display_picasso.ini,

\configfiles\interaction_cave.ini,

\configfiles\interaction_desktop.ini,

\configfiles\interaction_picasso.ini,

\configfiles\vista.ini,

\configfiles\vista_cave.ini,

\configfiles\vista_desktop.ini,

\configfiles\vista_picasso.ini,

\configfiles\xml\keyboard_standardinput.xml,

\configfiles\xml\mouse_trackball.xml,

\configfiles\xml\mouse_trackball_continuous.xml,

\configfiles\xml\navigator_flystick.xml,

\configfiles\xml\spacenavigator_navigation.xml,

\configfiles\xml\ucp_cave.xml,

\configfiles\xml\ucp_picasso.xml.
Of course, not all of the .xml files are required - select them based on the required functionality.

9. 3D object import tests

Table 8.1. Results of importing 3D objects exported from Blender

Format   Size (B)     Load time (s)   Memory (MB)   Status
dxf      12,474,751   290             27.1          Empty
raw       6,606,318    10             49.2          OK, not smooth
fbx       4,404,775     -              -            -
stl       3,795,284     5             40            Empty
ply       3,097,440     -              -            -
wrl       2,769,077    12             56.5          OK, not smooth
obj       2,094,817    23             36            OK
x3d       2,087,561     -              -            -
blend       602,984     -              -            -
x           162,545     -              -            -
dae          76,107     -              -            -
3ds             590     3             30.2          Without main object

Table 8.2. Graphical results of importing 3D objects from Blender
(Result images for the file formats OBJ, 3DS, DXF, RAW, STL and WRL; the images are not reproduced here.)