Virtual Reality Modeling
from (Burdea 1996)
The VR physical modeling:
The Haptics Rendering Pipeline (revisited)
[Figure: the haptics rendering pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Force/Tactile Display; shown alongside the graphics rendering pipeline: Application → Scene Traversal → Geometry (view transform, lighting, projection) → Rasterizer (texturing) → Display]
adapted from (Popescu, 2001)
The Haptics Rendering Pipeline
[Pipeline diagram: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Force/Tactile Display; current stage: Collision Detection]
Bounding-box collision detection is used for fast response. There are two types of bounding boxes: fixed size, or variable size (adjusted to the enclosed object's orientation). Fixed size is computationally faster but less precise.
Collision detection:
[Figure: a variable-size bounding box versus a fixed-size bounding box around the same object]
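As a minimal sketch of the fixed-size, axis-aligned bounding-box test described above (the function name and tuple representation are illustrative assumptions, not code from the original):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned bounding-box overlap test: the boxes collide only
    if their extents overlap on every axis."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))
```

Because a single failed axis rejects the pair, the test is very fast, at the price of reporting some near-misses as potential collisions.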
[Figure: undetected collision: the ball at time t is on one side of a thin obstacle, and at time t + 1/fps it is already on the other side]
If the longest distance d satisfies d < v · (1/fps), then the collision is undetected (v is the ball's velocity along its trajectory).
Two-stage collision detection:
For more precise detection, a two-stage approach is used: an approximate (bounding-box) stage, followed by a slower exact collision detection stage.
[Figure: two-stage collision detection: bounding-box pruning of multi-body pairs; if boxes overlap (yes), run pair-wise exact collision detection; if an exact contact is found (yes), compute the collision response and feed it back to the simulation; otherwise (no) the simulation continues]
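The two-stage scheme can be sketched as follows; the dictionary layout and the sphere-sphere narrow phase are stand-ins chosen for illustration, not the book's implementation:

```python
from itertools import combinations

def boxes_overlap(a, b):
    """Broad phase: axis-aligned bounding boxes overlap on every axis."""
    return all(a['bb_min'][i] <= b['bb_max'][i] and
               b['bb_min'][i] <= a['bb_max'][i] for i in range(3))

def spheres_touch(a, b):
    """Narrow-phase stand-in: an exact sphere-sphere test."""
    d2 = sum((a['c'][i] - b['c'][i]) ** 2 for i in range(3))
    return d2 <= (a['r'] + b['r']) ** 2

def two_stage_collisions(objects):
    """Stage 1 prunes multi-body pairs by bounding box; stage 2 runs the
    slower exact test only on the surviving pairs."""
    hits = []
    for a, b in combinations(objects, 2):
        if boxes_overlap(a, b) and spheres_touch(a, b):
            hits.append((a['name'], b['name']))
    return hits

def sphere(name, cx, cy, cz, r):
    """Helper building a sphere with its enclosing bounding box."""
    return {'name': name, 'c': (cx, cy, cz), 'r': r,
            'bb_min': (cx - r, cy - r, cz - r),
            'bb_max': (cx + r, cy + r, cz + r)}
```

In practice the narrow phase would be a polygon-level test; the pruning stage is what keeps the pair count manageable.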
Surface cutting: An extreme case of surface "deformation" is surface cutting. It happens when the contact force exceeds a given threshold. When cutting occurs, one vertex gets a co-located twin; the twin vertices then separate according to spring/damper laws, and the cut enlarges.
[Figure: a cutting instrument splits a vertex into co-located twins V1 and V2; mesh before and after the cut]
Collision response – surface deformation
The Haptics Rendering Pipeline
[Pipeline diagram repeated; current stage: Force Calculation]
[Figure: the Haptic Interface Point (HIP) of the haptic interface penetrates an object polygon; the penetration distance d is measured from the surface to the HIP]
Force output for homogeneous elastic objects:

F = K · d,   for 0 ≤ d ≤ d_max
F = F_max,   for d_max ≤ d

where F_max is the haptic interface's maximum output force.

[Figure: force vs. penetration distance d; a hard object (steeper slope) reaches saturation at F_max at a smaller depth d_max1 than a soft object (d_max2)]
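The saturating elastic-force law above can be written as a small function (names and units are illustrative):

```python
def contact_force(d, K, F_max):
    """F = K*d for 0 <= d <= d_max, then saturation at the interface's
    maximum output force F_max (d_max is implicitly F_max / K)."""
    if d <= 0.0:
        return 0.0              # no penetration, no force
    return min(K * d, F_max)
```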
Force Calculation – Elastic objects with harder interior:

F = K1 · d,   for 0 ≤ d ≤ d_discontinuity
F = K1 · d_discontinuity + K2 · (d − d_discontinuity),   for d_discontinuity ≤ d

where d_discontinuity is the depth at which the object stiffness changes.

[Figure: force F vs. penetration distance d, with slope K1 up to d_discontinuity and a steeper slope K2 beyond it]
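A sketch of the piecewise-stiffness law above, with illustrative parameter names:

```python
def layered_force(d, K1, K2, d_disc):
    """Stiffness K1 up to the discontinuity depth d_disc, then K2 (the
    harder interior) beyond it; the force is continuous at d_disc."""
    if d <= 0.0:
        return 0.0
    if d <= d_disc:
        return K1 * d
    return K1 * d_disc + K2 * (d - d_disc)
```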
Force Calculation – Virtual pushbutton:

F = K1 · d · (1 − u_m) + F_r · u_m + K2 · (d − n) · u_n

where u_m and u_n are unit step functions at penetration depths m and n.

[Figure: pushbutton force F vs. penetration distance d: a spring-like rise up to m, a constant resistance F_r between m and n, then a stiff "virtual wall" slope beyond n]
Force Calculation – Plastic deformation:

F_initial = K · d,   for 0 ≤ d ≤ m
F = 0 during relaxation
F_subsequent = K1 · d · u_m,   for 0 ≤ d ≤ n
F = 0 during relaxation

where u_m is the unit step function at m.

[Figure: force vs. penetration distance, with the loading/relaxation loop between depths m and n]
Force Calculation – Virtual wall:
Virtual walls generate energy due to sampling-time effects; to avoid system instabilities, a damping term is added.
[Figure: force vs. time for a virtual wall, moving into the wall at velocity v0 versus moving away from the wall]

F = K_wall · x + B · v,   while moving into the wall
F = K_wall · x,   while moving away from the wall

where B is a directional damper.
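A sketch of the damped virtual-wall law; the sign convention (penetration depth x ≥ 0, penetration speed positive while moving deeper) is my assumption, since the slide's inequality signs did not survive extraction:

```python
def wall_force(x, v_pen, K_wall, B):
    """Spring force K_wall*x plus a directional damper: B acts only
    while the user moves into the wall (v_pen > 0), dissipating the
    energy the sampled-data system generates without resisting
    withdrawal."""
    if x <= 0.0:
        return 0.0                     # no contact outside the wall
    if v_pen > 0.0:                    # moving deeper into the wall
        return K_wall * x + B * v_pen
    return K_wall * x                  # moving out: spring term only
```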
The Haptics Rendering Pipeline
[Pipeline diagram repeated; current stage: Force Smoothing]
Force shading:

[Figure: contact forces on a real cylinder; non-shaded contact forces on its polygonal approximation; contact forces after shading]

F_smoothed = K_object · d · N,   for 0 ≤ d ≤ d_max
F_smoothed = F_max · N,   for d_max < d

where N is the direction of the contact force, obtained by vertex-normal interpolation.
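Force shading needs the interpolated normal N; a minimal barycentric normal interpolation could look like this (representing normals as 3-tuples is an assumption):

```python
def shaded_normal(bary, vertex_normals):
    """Interpolate the three vertex normals with barycentric weights and
    renormalize; the smoothed contact force is directed along this
    normal instead of the flat polygon normal."""
    n = [sum(w * vn[axis] for w, vn in zip(bary, vertex_normals))
         for axis in range(3)]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]
```

This is the haptic analogue of Phong shading in graphics: the polygonal surface feels smoothly curved even though its geometry is flat.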
The haptic mesh: A single HIP is not sufficient to capture the geometry of fingertip-object contact; the curvature of the fingertip and the deformation of the object need to be modeled realistically.
Screen sequence for squeezing an elastic virtual ball
[Figure: a haptic mesh of interface points over the fingertip; each mesh point i has its own penetration distance into the object]
Haptic mesh force calculation:
For each haptic interface point i of the mesh:

F_haptic-mesh_i = K_object · d_mesh_i · N_surface

where d_mesh_i is the interpenetration distance at mesh point i, and N_surface is the weighted surface normal of the contact polygon.
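Summing the per-mesh-point forces of the formula above into a resultant, as an illustrative sketch:

```python
def haptic_mesh_force(penetrations, K_object, N_surface):
    """Resultant of F_i = K_object * d_i * N_surface over all mesh
    points; penetrations holds the d_mesh_i values."""
    total = [0.0, 0.0, 0.0]
    for d in penetrations:
        for axis in range(3):
            total[axis] += K_object * d * N_surface[axis]
    return total
```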
The Haptics Rendering Pipeline
[Pipeline diagram repeated; current stage: Force Mapping]
Force mapping:
Force displayed by the Rutgers Master interface:

F_displayed = (Σ F_haptic-mesh_i) · cos θ

where θ is the angle between the mesh force resultant and the piston.
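The cos θ term is simply the projection of the resultant on the actuator axis, which a dot product computes directly (vector representation assumed):

```python
def displayed_force(resultant, piston_axis):
    """Magnitude of the mesh force resultant projected on the actuator
    (piston) axis: |F| * cos(theta) equals the dot product of the
    resultant with the unit axis vector."""
    dot = sum(f * a for f, a in zip(resultant, piston_axis))
    norm = sum(a * a for a in piston_axis) ** 0.5
    return dot / norm
```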
The Haptics Rendering Pipeline
[Pipeline diagram repeated; current stage: Haptic Texturing]
Tactile mouse:
[Figure: tactile patterns produced by the Logitech mouse: three force-vs-time waveforms used for haptic mouse texture simulation]
Surface haptic texture produced by the PHANToM interface:

[Figure: the Haptic Interface Point contacts an object polygon in the x-y plane of the haptic interface; a sinusoidal texture force is added along z]

F_texture = A · sin(m·x) · sin(n·y)

where A, m, n are constants: A gives the magnitude of the vibrations; m and n modulate the frequency of the vibrations in the x and y directions.
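The sinusoidal texture force is straightforward to sample (an illustrative sketch):

```python
import math

def texture_force(x, y, A, m, n):
    """F_texture = A * sin(m*x) * sin(n*y): a sinusoidal perturbation of
    the contact force; A sets the vibration magnitude, m and n the
    spatial frequency along x and y."""
    return A * math.sin(m * x) * math.sin(n * y)
```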
BEHAVIOR MODELING
The simulation level of autonomy (LOA) is a function of its components. Thalmann et al. (2000) distinguish three levels of autonomy: a simulation component can be "guided" (lowest), "programmed" (intermediate), or "autonomous" (highest).

Simulation LOA = f(LOA(Objects), LOA(Agents), LOA(Groups))
[Figure: Simulation LOA: each component type (interactive object, intelligent agent, group of agents) can be guided, programmed, or autonomous]
adapted from (Thalmann et al., 2000)
Interactive objects: Objects with behavior independent of the user's input (e.g., a clock). This is needed in large virtual environments, where it is impossible for the user to provide all required inputs.
[Figure: a system clock; an automatic door exhibiting reflex behavior]
Interactive objects: The fireflies in NVIDIA's Grove have behavior independent of the user's input; the user controls only the virtual camera.
Agent behavior: A behavior model composed of perception, emotions, behavior, and actions. Perception (through virtual sensors) makes the agent aware of its surroundings.
[Figure: virtual world → perception → emotions → behavior → actions]
Reflex behavior: A direct link between perception and actions, following behavior rules ("cells"); it does not involve emotions.
[Figure: virtual world → perception → behavior → actions, bypassing emotions; object behavior]
Agent behavior – avatars:
Another example of reflex behavior: "Dexter" at MIT (Johnson, 1991): a handshake with the user-controlled hand avatar, followed by a head turn of the autonomous virtual human.
If the user maps to a full-body avatar, virtual human agents can react through body-expression recognition; for example, dance (Swiss Institute of Technology, 1999; credit Daniel Thalmann).
[Figure: an autonomous virtual human dancing with the user-controlled avatar]
Emotional behavior: A subjective strong feeling (anger, fear) that follows perception. Two different agents can react to the same perception with different emotions, and can therefore take different actions.
[Figure: one virtual world, one perception; agent 1 forms Emotions 1 leading to Actions 1, agent 2 forms Emotions 2 leading to Actions 2]
Crowd behavior
(Thalmann et al., 2000)
Crowd behavior emphasizes group (rather than individual) actions. Crowds can have a guided LOA, when their behavior is defined explicitly by the user, or an autonomous LOA, with behaviors specified by rules and other complex methods (including memory).
[Figure: a political demonstration. Guided crowd: the user needs to specify intermediate path points. Autonomous crowd: the group perceives information about its environment and decides a path to follow to reach the goal (VC 5.3)]
MODEL MANAGEMENT
It is necessary to maintain interactivity and constant frame rates when rendering complex models. Several techniques exist: level-of-detail segmentation; cell segmentation; off-line computations; lighting and bump mapping at the rendering stage; portals.
Level of detail segmentation: The level of detail (LOD) relates to the number of polygons on the object's surface. Even if the object has high complexity, its detail may not be visible if the object is too far from the virtual camera (observer).
[Figure: the same tree with 27,000 polygons rendered near the camera and far from it, where the details are not perceived]
Static level of detail management: A simplified version of the object (fewer polygons) should therefore be used when it is far from the camera. There are several approaches: discrete geometry LOD; alpha LOD; geometric morphing ("geo-morph") LOD.
Discrete Geometry LOD: Uses several discrete models of the same virtual object. Models are switched based on their distance r from the camera (r ≤ r0; r0 ≤ r ≤ r1; r1 ≤ r ≤ r2; r2 ≤ r).
[Figure: concentric circles of radii r0 < r1 < r2 around the camera, with the LOD 0, LOD 1, and LOD 2 models used in successive rings]
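Distance-based model switching can be sketched as a threshold lookup; returning an index past the last ring (for the coarsest model, or as a cue to cull) is my assumption:

```python
def select_lod(r, thresholds):
    """Discrete geometry LOD: model 0 (most detailed) for r <= r0,
    model 1 for r0 < r <= r1, and so on; past the last circle we
    return the next index."""
    for lod, radius in enumerate(thresholds):
        if r <= radius:
            return lod
    return len(thresholds)
```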
Alpha Blending LOD: Discrete LOD has problems on the r = r0, r = r1, r = r2 circles, leading to "popping": objects appear and disappear suddenly. One solution is distance hysteresis. Another is model blending, where two models are rendered near the circles. A third solution to popping is alpha blending, changing the transparency of the object; fully transparent objects are not rendered.
[Figure: alpha-blending LOD: the object is opaque in the innermost ring (LOD 0), less opaque farther out (LOD 1, LOD 2), and fully transparent beyond r2; a hysteresis zone surrounds each switching circle]
Geometric Morphing LOD: Unlike discrete geometry LOD, which uses several models of the same object, geometric morphing uses only one complex model. The various LODs are obtained from the base model through mesh simplification. A triangulated polygon mesh with n vertices has approximately 2n faces and 3n edges.
[Figure: collapsing an edge merges vertices V1 and V2 into V1; mesh before and after simplification]
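A minimal edge collapse on an indexed triangle list; real simplifiers also choose which edge to collapse using an error metric, which is omitted in this sketch:

```python
def collapse_edge(faces, v_keep, v_remove):
    """Merge v_remove into v_keep across an indexed triangle list and
    drop the triangles that degenerate (repeated vertex)."""
    simplified = []
    for face in faces:
        face = tuple(v_keep if v == v_remove else v for v in face)
        if len(set(face)) == 3:        # keep only valid triangles
            simplified.append(face)
    return simplified
```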
Single-object adaptive LOD: Used when there is a single highly complex object that the user wants to inspect (such as in interactive scientific visualization). Static LOD will not work, since detail is lost where it is needed; for example, the sphere on the right loses shadow sharpness after LOD simplification.
[Figure: a sphere with 8,192 triangles (uniform high density) versus a sphere with 512 triangles after static LOD simplification; from (Xia et al., 1997)]
Single-object Adaptive Level of Detail: Sometimes edge collapse leads to problems, so vertices need to be split again to regain detail where needed. Xia et al. (1997) developed an adaptive algorithm that determines the level of detail based on the distance to the viewer as well as the normal direction (lighting).
[Figure: an edge collapse turns the refined mesh into the simplified mesh; a vertex split reverses it. V1 is the "parent" vertex; adapted from (Xia et al., 1997)]
Single-object Adaptive Level of Detail:
[Figure: a sphere with 8,192 triangles (uniform high density) takes 0.115 s to render; with 537 triangles (adaptive LOD) it takes 0.024 s, on an SGI RE2 single-R10000 workstation; from (Xia et al., 1997)]
Single-object Adaptive Level of Detail:
[Figure: a bunny with 69,451 triangles (uniform high density) takes 0.420 s to render; with 3,615 triangles (adaptive LOD) it takes 0.110 s, on the same workstation; from (Xia et al., 1997)]
Static LOD: Geometric LOD, alpha blending, and morphing all have problems maintaining a constant frame rate. This happens when new complex objects suddenly enter the viewing frustum.
[Figure: camera fly-by: between frame i and frame i+1, new objects at LOD 1 and LOD 2 enter the viewing frustum]
Architectural “walk-through” (UC Berkeley Soda Hall):
[Figure: camera path through the auditorium, from Start past points A, B, and C to End; with no LOD management the view reaches 72,570 polygons, with snapshots at frame times of 0.2 s and 1.0 s; frame-time plots over 250 frames compare no LOD management with static LOD management]
from (Funkhouser and Sequin, 1993)
Adaptive LOD Management: An algorithm that selects the LOD of visible objects based on a specified frame rate. The algorithm (Funkhouser and Sequin, 1993) is based on a benefit-to-cost analysis, where Cost(O, L, R) is the time needed to render object O at level of detail L in rendering mode R.

The cost for the whole scene must satisfy

Σ Cost(O, L, R) ≤ target frame time

where the cost for a single object is

Cost(O, L, R) = max(c1 · Polygons(O, L) + c2 · Vertices(O, L), c3 · Pixels(O, L))

and c1, c2, c3 are experimental constants depending on R and the type of computer.
Adaptive LOD Management (continued): Similarly, the benefit for the scene is the sum of the visible objects' benefits, Σ Benefit(O, L, R), where the benefit of a given object is

Benefit(O, L, R) = Size(O) · Accuracy(O, L, R) · Importance(O) · Focus(O) · Motion(O) · Hysteresis(O, L, R)

The algorithm tries to maximize each object's "value",

Value = Benefit(O, L, R) / Cost(O, L, R)

and objects with higher value (e.g., larger size) are rendered first.
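A greedy sketch of the benefit/cost selection; the actual Funkhouser-Sequin algorithm is an incremental optimization, so this simplified version is illustrative only (the candidate-tuple layout is my assumption):

```python
def choose_lods(candidates, frame_budget):
    """Pick at most one LOD per object, best value = benefit/cost
    first, until the frame-time budget is spent.  Each candidate is a
    (name, lod, benefit, cost) tuple."""
    chosen, budget = {}, frame_budget
    for name, lod, benefit, cost in sorted(
            candidates, key=lambda c: c[2] / c[3], reverse=True):
        if name not in chosen and cost <= budget:
            chosen[name] = lod
            budget -= cost
    return chosen
```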
[Figure: no detail elision, 72,570 polygons, versus the optimization algorithm, 5,300 polygons, at a 0.1 s target frame time (10 fps)]
from (Funkhouser and Sequin, 1993)
Level of detail segmentation – rendering mode:
[Figure: no detail elision, 19,821 polygons, versus optimization, 1,389 polygons, at a 0.1 s target frame time; darker gray means more detail]
from (Funkhouser and Sequin, 1993)
Cell segmentation:
Another method of model management, used in architectural walk-throughs. To maintain the "virtual building" illusion it is necessary to have at least 6 fps (Airey et al., 1990). Only the current "universe" needs to be rendered, which helps maintain interactivity and constant frame rates for complex models.
Cell segmentation – increased frame rate:
Buildings are large models that can be partitioned into "cells" automatically and off-line to speed up simulations at run time. Cells approximate rooms. Partitioning algorithms use a "priority" factor that favors occlusions (partitioning along walls).
[Figure: automatic floor plan partition (Airey et al., 1990)]
From (Funkhouser, 1993)
Page faults:
The building model resides in a fully associative cache. But cell segmentation alone will not work if the model is so large that it exceeds the available RAM; in that case, large delays occur whenever there is a page fault and data has to be retrieved from the hard disk.
[Figure: frame time (s) vs. frames, with large spikes at page faults]
Combined Cell, LOD and database methods:
[Figure: floor plan partition (Funkhouser, 1993); system diagram: User Interface, Visibility Determination, Detail Elision, Render, and Monitor, with Look-ahead Determination, Cache Management, and I/O Operations connected to the Database]
Database management:
It is possible to add database management techniques to prevent page faults and improve fps uniformity during the walk-through. One can estimate how far the virtual camera will rotate and translate over the next N frames and pre-fetch the appropriate objects from the hard disk.
Database management:
[Figure: floor plan visibility and highest LOD (Funkhouser, 1990); cells around the camera are loaded at LOD levels that decrease with distance, and the frame time (s) vs. frames plot stays uniform. LOD 0 is the lowest level of detail (loaded first); LOD 3 is the highest level of detail (loaded last)]