
Shadows

Dinesh Manocha

Computer Graphics

COMP-770 lecture

Spring 2009

What are Shadows?

From Webster’s dictionary:

Shad-ow (noun): partial darkness or obscurity within a part of space from which rays from a source of light are cut off by an interposed opaque body

Is this definition sufficient?

What are Shadows?

• Does the occluder have to be opaque to cast a shadow?
– transparency (no scattering)
– translucency (scattering)

• What about indirect light?
– reflection
– atmospheric scattering
– wave properties: diffraction

• What about volumetric or atmospheric shadowing?
– changes in density

Is this still a shadow?

What are Shadows Really?

• Is this definition sufficient?
• In practice, it is too general!
• We need some restrictions

Volumes of space that receive no light, or light that has been attenuated through obscuration

Common Shadow Algorithm Restrictions

• No transparency or translucency!
– Limited forms can sometimes be handled efficiently
– Backwards ray-tracing has no trouble with these effects, but it is much more expensive than typical shadow algorithms

• No indirect light!
– More sophisticated global illumination algorithms handle this at great expense (radiosity, backwards ray-tracing)

• No atmospheric effects (vacuum)!
– No indirect scattering
– No shadowing from density changes

• No wave properties (geometric optics)!

What Do We Call Shadows?

• Regions not completely visible from a light source

• Assumptions:
– Single light source
– Finite area light sources
– Opaque objects

• Two parts:
– Umbra: totally blocked from light
– Penumbra: partially obscured

[Figure: an area light source casting a shadow with an umbra surrounded by a penumbra]

Basic Types of Light & Shadows

area, direct & indirect area, direct only point, direct only directional, direct only

simplermore realistic

more realistic for small-scale scenes, directional is realistic for scenes lit by sunlight in space!

SOFT SHADOWS HARD or SHARP SHADOWS

Goal of Shadow Algorithms

• Shadow computation can be considered a global illumination problem
– this includes ray-tracing and radiosity!

• Most common shadow algorithms are restricted to direct light and point or directional light sources

• Area light sources are usually approximated by many point lights or by filtering techniques

Ideally, for all surfaces, find the fraction of light that is received from a particular light source

Global Shadow Component in Local Illumination Model

• Shadow_i is the fraction of light received at the surface
– For point lights, 0 (shadowed) or 1 (lit)
– For area lights, a value in [0,1]

• Ambient term approximates indirect light

Without shadows:

I = GlobalAmbient + \sum_{i=1}^{NumLights} Spot_i \, Dist_i \, (Ambient_i + Diffuse_i + Specular_i)

With shadows:

I = GlobalAmbient + \sum_{i=1}^{NumLights} Spot_i \, Dist_i \, Ambient_i + \sum_{i=1}^{NumLights} Shadow_i \, Spot_i \, Dist_i \, (Diffuse_i + Specular_i)
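To make the bookkeeping concrete, here is a minimal C++ sketch of how the per-light Shadow_i factor slots into this model. The Vec3 type and the per-light term arrays are hypothetical stand-ins for a renderer's own data, not anything prescribed by the slides.

    struct Vec3 { float r, g, b; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    static Vec3 operator*(float s, Vec3 v) { return {s * v.r, s * v.g, s * v.b}; }

    // Evaluate the local model at one surface point; each array holds one
    // precomputed term per light.
    Vec3 shade(Vec3 globalAmbient, int numLights,
               const float* spotDist,   // Spot_i * Dist_i attenuation per light
               const float* shadow,     // Shadow_i in [0,1] per light
               const Vec3* ambient, const Vec3* diffuse, const Vec3* specular)
    {
        Vec3 I = globalAmbient;
        for (int i = 0; i < numLights; ++i)
            // Ambient is attenuated but never shadowed; only the diffuse and
            // specular terms are scaled by the shadow fraction.
            I = I + spotDist[i] * (ambient[i] + shadow[i] * (diffuse[i] + specular[i]));
        return I;
    }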

What else does this say?

• Multiple lights are not really difficult (conceptually)
• Complex multi-light effects are many single-light problems summed together!
– Superposition property of the illumination model
• This works for shadows as well!
• Focus on single-source shadow computation
• Generalization is simple, but efficiency may be improved

I = GlobalAmbient + \sum_{i=1}^{NumLights} Spot_i \, Dist_i \, Ambient_i + \sum_{i=1}^{NumLights} Shadow_i \, Spot_i \, Dist_i \, (Diffuse_i + Specular_i)

Characteristics of Shadow Algorithms

• Light-source types
– Directional
– Point
– Area

• Light transfer types
– Direct vs. indirect
– Opaque only
– Transparency / translucency
– Atmospheric effects

• Geometry types
– Polygons
– Higher-order surfaces

Characteristics of Shadow Algorithms

• Computational precision (like visibility algorithms)
– Object precision (geometry-based, continuous)
– Image precision (image-based, discrete)

• Computational complexity
– Running time
– Speedups from static viewer, lights, scene
– Amount of user intervention (object sorting)

• Numerical degeneracies

Characteristics of Shadow Algorithms

• When shadows are computed
– During rendering of the fully-lit scene (additive)
– After rendering of the fully-lit scene (subtractive): not correct, but fast and often good enough

• Types of shadow/object interaction
– Between shadow-casting object and receiving object
– Object self-shadowing
– General shadow casting

Taxonomy of Shadow Algorithms

• Object-based
– Local illumination model (Warnock69, Gouraud71, Phong75)
– Area subdivision (Nishita74, Atherton78)
– Planar projection (Blinn88)
– Radiosity (Goral84, Cohen85, Nishita85)
– Lloyd (2004)

• Image-based
– Shadow-maps (Williams78, Hourcade85, Reeves87, Stamminger/Drettakis02, Lloyd07)
– Projective textures (Segal92)

• Hybrid
– Scan-line approach (Appel68, Bouknight70)
– Ray-tracing (Appel68, Goldstein71, Whitted80, Cook84)
– Backwards ray-tracing (Arvo86)
– Shadow-volumes (Crow77, Bergeron86, Chin89)

Good Surveys of Shadow Algorithms

Early complete surveys are found in Crow77 and Woo90

Recent survey on hard shadows: Lloyd 2007 (Ph.D. thesis)

Recent survey on soft shadows: Laine 2007 (Ph.D. thesis)

Survey of Shadow Algorithms

Focus is on the following algorithms:

– Local illumination

– Ray-tracing

– Planar projection

– Shadow volumes

– Projective textures

– Shadow-maps

Will briefly mention:
– Scan-line approach

– Area subdivision

– Backwards ray-tracing

– Radiosity

Local Illumination “Shadows”

• Backfacing polygons are in shadow (only lit by ambient)
• Point/directional light sources only
• Partial self-shadowing
– like backface culling is a partial visibility solution
• Very fast (often implemented in hardware)
• General surface types in almost any rendering system!

Local Illumination “Shadows”

• Typically not considered a shadow algorithm
• Just handles shadows of the most restrictive form
• Dramatically improves the look of other restricted algorithms

Local Illumination “Shadows”

Properties:
– Point or directional light sources
– Direct light
– Opaque objects
– All types of geometry (depends on rendering system)
– Object precision
– Fast, local computation (single pass)
– Only handles limited self-shadowing (convenient, since many algorithms do not handle any self-shadowing)
– Computed during normal rendering pass
– Simplest algorithm to implement

Ray-tracing Shadows

Only interested in shadow-ray tracing (shadow feelers)
– For a point P in space, determine if it is in shadow with respect to a single point light source L by intersecting the line segment PL (the shadow feeler) with the environment
– If the line segment intersects an object, then P is in shadow; otherwise, point P is illuminated by light source L

[Figure: light L, point P, and the shadow feeler (segment PL)]
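A minimal sketch of the feeler test under stated assumptions: Scene::intersectSegment is a hypothetical query (the slides do not prescribe an interface) that reports whether any opaque object crosses the open segment between two points.

    struct Point3 { float x, y, z; };

    // Hypothetical scene query: does any opaque object cross the open
    // segment (a, b), endpoints excluded?
    struct Scene { bool intersectSegment(const Point3& a, const Point3& b) const; };

    // P is in shadow of the point light at L exactly when the feeler PL is
    // blocked. (In practice P is offset slightly, or its own object is
    // tagged, to avoid the self-intersection problem discussed later.)
    bool inShadow(const Scene& scene, const Point3& P, const Point3& L)
    {
        return scene.intersectSegment(P, L);
    }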

Ray-tracing Shadows

• Arguably the simplest general algorithm
• Can even handle area light sources
– point-sample the area source: distributed ray-tracing (Cook84)

[Figure: shadow feelers from a point P to a point light L_i (blocked: Shadow_i = 0) and to five samples on an area light L_i, two of which reach the light: Shadow_i = 2/5]

Ray-tracing Shadows

Sounds great, what’s the problem?

– Slow
• Intersection tests are (relatively) expensive
• May be sped up with standard ray-tracing acceleration techniques

– Shadow feeler may incorrectly intersect the object touching P
• Depth bias
• Object tagging
– Don’t intersect the shadow feeler with the object touching P
– Works only for objects not requiring self-shadowing

Ray-tracing Shadows

How do we use the shadow feelers?

2 different rendering methods

– Standard ray-casting with shadow feelers

– Hardware Z-buffered rendering with shadow feelers

Ray-tracing Shadows

Ray-casting with shadow feelers

For each pixel:

• Trace a ray from the eye through the pixel center

• Compute the closest object intersection point P along the ray

• Calculate Shadow_i for the point by performing the shadow feeler intersection test

• Calculate the illumination at point P

[Figure: rays from the eye through each pixel, with shadow feelers from the hit points to the light]

Ray-tracing Shadows

Z-buffering with shadow feelers

• Render the scene into the depth-buffer (no need to compute color)

• For each pixel, determine if it is in shadow:
– “unproject” the screen-space pixel point to transform it into eye space (see the sketch below)
– Perform the shadow feeler test with the light in eye space to compute Shadow_i
– Store Shadow_i for each pixel

• Light the scene using the per-pixel Shadow_i values

[Figure: the eye’s depth-buffered view, with shadow feelers from the unprojected points to the light]
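One plausible realization of the “unproject” step uses the GLU helper (the slides do not name an API). gluUnProject maps window coordinates plus the stored depth back through the given matrices; passing an identity modelview yields eye-space coordinates, matching the slide.

    #include <GL/glu.h>

    // Map a pixel (and its stored depth, e.g. read back with
    // glReadPixels(..., GL_DEPTH_COMPONENT, ...)) back into the space
    // defined by model/proj; the result is where the shadow feeler starts.
    bool unprojectPixel(int px, int py, double depth,
                        const GLdouble model[16], const GLdouble proj[16],
                        const GLint viewport[4],
                        GLdouble& wx, GLdouble& wy, GLdouble& wz)
    {
        return gluUnProject(px + 0.5, py + 0.5, depth,
                            model, proj, viewport, &wx, &wy, &wz) == GL_TRUE;
    }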

Ray-tracing Shadows
Z-buffering with shadow feelers

How do we use the per-pixel Shadow_i values to light the scene?

Method 1: compute lighting at each pixel in software

• Deferred shading

• Requires object surface info (normals, materials)

• Could use a more complex lighting model

Ray-tracing Shadows
Z-buffering with shadow feelers

How do we use the per-pixel Shadow_i values to light the scene?

Method 2: use graphics hardware

For point lights:

• Shadow_i values are either 0 or 1

• Use the stencil buffer: stencil values = Shadow_i values

• Re-render the scene with the corresponding light on, using the stencil test to write only into lit pixels (stencil = 1). Perform additive blending; the ambient-lit scene should be rendered in the depth-computation pass.

For area lights:

• Shadow_i values are continuous in [0,1]

• Multiple passes and modulation blending

• Pixel contribution = Ambient_i + Shadow_i * (Diffuse_i + Specular_i)

Ray-tracing Shadows
Properties

– Point, directional, and area light sources

– Direct light (may be generalized to indirect)

– Opaque objects (thin-film transparency easily handled)

– All types of geometry (just need an edge intersection test)

– Hybrid: object precision (line intersection), image precision for generating pixel rays

– Slow, but many acceleration techniques are available

– General shadow algorithm

– Computed during illumination (additive, but subtractive is possible)

– Simple to implement

Planar Projection Shadows

• Shadows cast by objects onto planar surfaces

• Brute force: project the shadow-casting objects onto the plane and draw each projected object as a shadow (a projection matrix for this is sketched below)

[Figure: a directional light gives a parallel projection; a point light gives a perspective projection]
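The brute-force projection can be packed into one 4x4 matrix. This is the classic construction M = (p·l)I − l pᵀ for plane p and homogeneous light l (not spelled out on the slide): with lw = 1 it is a perspective projection from a point light, with lw = 0 a parallel projection from a directional light.

    // plane = (a,b,c,d) with ax + by + cz + d = 0; light = (lx,ly,lz,lw).
    // M is row-major for column vectors, so M*v lies on the plane along the
    // projector from the light. (Transpose before handing the result to
    // OpenGL's column-major glMultMatrixf.)
    void shadowMatrix(const float plane[4], const float light[4], float M[4][4])
    {
        float dot = plane[0]*light[0] + plane[1]*light[1]
                  + plane[2]*light[2] + plane[3]*light[3];
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                M[r][c] = (r == c ? dot : 0.0f) - light[r]*plane[c];
    }

A quick sanity check: a vertex already on the plane maps to itself (homogeneously), and the light position itself maps to the zero vector, as expected for a projection toward the plane.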

Planar Projection Shadows

Not sufficient:
– co-planar polygons (Z-fighting): needs a depth bias
– requires clipping to the relevant portion of the plane: needs shadow-receiver stenciling

Planar Projection Shadows
better approach: subtractive strategy

Render the scene fully lit by a single light

For each planar shadow receiver:

• Render the receiver: stencil the pixels covered

• Render the projected shadow casters in a shadow color with depth testing on, depth biasing (offset from the plane), modulation blending, and stenciling (to write only on the receiver and to avoid double pixel writes); see the sketch below
– Receiver stencil value = 1; only write where the stencil equals 1; change it to zero after modulating the pixel

[Figure: the receiver’s texture remains visible in the shadow]
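A hedged OpenGL sketch of this subtractive pass for one receiver; drawReceiver and drawCastersProjectedBy are hypothetical helpers, and the exact blend and offset values are illustrative.

    #include <GL/gl.h>

    void drawReceiver();                          // hypothetical: the planar receiver
    void drawCastersProjectedBy(const float* M);  // hypothetical: flattened casters

    void subtractivePlanarShadows(const float* shadowMat)
    {
        // 1. Tag the receiver's pixels with stencil value 1 (no color change).
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawReceiver();
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // 2. Modulate only where stencil == 1, zeroing the stencil afterwards
        //    so overlapping projected triangles cannot darken a pixel twice.
        glStencilFunc(GL_EQUAL, 1, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);       // multiply by the shadow color
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);            // depth bias against Z-fighting
        drawCastersProjectedBy(shadowMat);
        glDisable(GL_POLYGON_OFFSET_FILL);
        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
    }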

Planar Projection Shadows
problems with the subtractive strategy

• Called subtractive because it begins with full lighting and removes light in shadows (modulates)

• Can be more efficient than additive (avoids passes)

• Not as accurate as additive; it doesn’t follow the lighting model:
– Specular and diffuse components remain in shadow
– Modulates the ambient term
– The shadow color is chosen by the user

I = GlobalAmbient + \sum_{i=1}^{NumLights} ShadowColor_i \, Spot_i \, Dist_i \, (Ambient_i + Diffuse_i + Specular_i)

as opposed to the correct version:

I = GlobalAmbient + \sum_{i=1}^{NumLights} Spot_i \, Dist_i \, Ambient_i + \sum_{i=1}^{NumLights} Shadow_i \, Spot_i \, Dist_i \, (Diffuse_i + Specular_i)

Planar Projection Shadows
even better approach: additive strategy

• Draw the ambient-lit shadow-receiving scene (global and all lights’ local ambient)

• For each light source:
For each planar receiver:
– Render the receiver: stencil the pixels covered
– Render the projected shadow casters into the stenciled receiver area: depth testing on, depth biasing, stencil the pixels covered by shadow
– Re-render the receiver lit by the single light source (no ambient light): depth test set to EQUAL, additive blending, write only into stenciled areas on the receiver that are not in shadow

• Draw the shadow-casting scene: full lighting

Planar Projection Shadows
Properties

– Point or directional light sources

– Direct light

– Opaque objects (could fake transparency using subtractive)

– Polygonal shadow-casting objects, planar receivers

– Object precision

– Number of passes: L = num lights, P = num planar receivers
• subtractive: 1 fully lit pass, L*P special passes (no lighting)
• additive: 1 ambient-lit pass, 2*L*P receiver passes, L*P caster passes

Planar Projection Shadows
Properties

– Can take advantage of static components:
• static objects & lights: precompute the silhouette polygon from the light source
• static objects & viewer: precompute the first pass over the entire scene

– Visibility from the light is handled by the user (must choose casters and receivers)

– No self-shadowing (relies on local illumination)

– Both subtractive and additive strategies presented

– Conceptually simple, surprisingly difficult to get right (gives the techniques needed to handle more sophisticated multi-pass methods)

Shadow Volumes
What are they?

Volume of space in shadow of a single occluder with respect to a point light source

OR

Volume of space swept out by extruding an occluding polygon away from a point light source along the projector rays originating at the point light and passing through the vertices of the polygon

[Figure: a point light and an occluding triangle extruded into a 3D shadow volume]

Shadow Volumes
How do you use them?

• Parity test to see if a point P on a visible surface is in shadow:
– Initialize parity to 0
– Shoot a ray from the eye to point P
– Each time a shadow-volume boundary is crossed, invert the parity

• if parity = 1, P is in shadow
if parity = 0, P is lit

What are some potential problems?

[Figure: a ray from the eye crosses the shadow volume of an occluder lit by a point light; the parity along the ray goes 0 → 1 → 0 at the two boundary crossings]

Shadow Volumes
Problems with the Parity Test

• Eye inside a shadow volume
– Incorrectly shadows points (reversed parity)

• Self-shadowing of visible occluders
– Should a point on the occluder flip the parity? (consistent if not flipped)
– A point on the occluder should not flip the parity
– Touching the boundary is not counted as a crossing

• Multiple overlapping shadow volumes
– Incorrectly shadows points (incorrect parity)
– Is parity’s binary condition sufficient?

[Figure: three cases along eye rays, annotated with the parity values that go wrong in each case]

Shadow Volumes
Solutions to Parity Test Problems

• Eye inside a shadow volume
– Initialize parity to 0 when starting outside and 1 when inside

• Self-shadowing of visible occluders
– Do not flip parity when viewing the “in”-side of an occluder
– Do not flip parity when viewing the “out”-side of an occluder either

• Multiple overlapping shadow volumes
– A binary parity value is not sufficient; we need a general counter for boundary crossings: +1 entering a shadow volume, −1 exiting

[Figure: the same three cases, now with counts 0, 1, 2, 1, 0 and +1/−1 annotations at each boundary crossing]

Shadow Volumes
A More General Solution

Determine if point P is in shadow (sketched in code below):

– Initialize the boundary-crossing counter to the number of shadow volumes containing the eye point
Why? Because the ray must leave this many shadow volumes to reach a lit point

– Along the ray, increment the counter each time a shadow volume is entered and decrement it each time one is exited

– If the counter is > 0, P is in shadow

Special case when P is on an occluder:
– Do not increment or decrement the counter
– A point on the boundary does not count as a crossing

[Figure: a ray annotated +1, +1, −1, −1 at its boundary crossings, with counts 0, 1, 2, 1, 0 along its length]
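In sketch form, the counting test along one eye ray might look like this. The Crossing record and the pre-sorted crossing list are assumptions of this sketch, not part of the original algorithm statement.

    #include <vector>

    // One shadow-volume boundary crossing along the eye ray, sorted by t.
    struct Crossing { float t; bool entering; };  // entering = front-facing face

    bool pointInShadow(const std::vector<Crossing>& crossings,
                       float tP,                  // ray parameter of point P
                       int volumesContainingEye)
    {
        int count = volumesContainingEye;   // ray starts inside this many volumes
        for (const Crossing& c : crossings) {
            if (c.t >= tP) break;           // only crossings between eye and P
            count += c.entering ? 1 : -1;   // +1 entering, -1 exiting
        }
        return count > 0;                   // still inside a volume: P is shadowed
    }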

Shadow Volumes
More Examples

Can you calculate the final boundary count for these visible points?

[Figure: eye rays crossing several overlapping shadow volumes, with +1/−1 annotations at the boundaries and final counts of 0, 1, and 2 at the visible points]

Shadow Volumes
How do we use this information to find shadow pixels?

Could just use ray-casting (a ray through each pixel)

– Too slow; possibly even more primitives to intersect with

– Could use the silhouettes of complex objects to simplify the shadow volumes

[Figure: shadow-volume boundaries annotated + where rays enter and − where they exit, with the resulting per-pixel counts]

Shadow Volumes
Using Standard Graphics Hardware

Simple observations:

– For convex occluders, the shadow volumes form convex shapes

– Rays enter through front-facing shadow-volume boundaries and exit through back-facing ones

[Figure: the same scene, with front-facing boundaries marked + and back-facing boundaries marked −]

Shadow Volumes
Using Standard Graphics Hardware

Use standard Z-buffered rendering and the stencil buffer (8 bits) to calculate the boundary count for each pixel (see the sketch below):

– Create shadow volumes for each occluding object (should be convex)

– Render the ambient-lit scene; keep the depth values

– For each light source:
• Initialize the stencil values to the number of volumes containing the eye point
• Still using the Z-buffer depth test (strictly less-than), but with no depth update:
– Render the front-facing shadow-volume boundary polygons, incrementing the stencil values for all pixels covered by polygons that pass the depth test
– Render the back-facing boundary polygons, but decrement the stencil
• Pixels with a stencil value of zero are lit; re-render the scene with lighting on (no ambient, depth test set to equal)
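A hedged OpenGL sketch of the stencil passes for one light, assuming the depth buffer already holds the ambient-lit scene; the draw* helpers are hypothetical.

    #include <GL/gl.h>

    void drawShadowVolumeBoundaries();   // hypothetical: boundary polygons
    void drawSceneLitBySingleLight();    // hypothetical: lit scene, no ambient

    void stencilShadowVolumePasses(int volumesContainingEye)
    {
        glClearStencil(volumesContainingEye);  // # volumes holding the eye
        glClear(GL_STENCIL_BUFFER_BIT);

        glDepthMask(GL_FALSE);                 // depth test on, no depth writes
        glDepthFunc(GL_LESS);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glEnable(GL_CULL_FACE);

        glCullFace(GL_BACK);                   // front faces: entering, increment
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawShadowVolumeBoundaries();

        glCullFace(GL_FRONT);                  // back faces: exiting, decrement
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        drawShadowVolumeBoundaries();

        // Stencil == 0 means lit: add this light's contribution there.
        glCullFace(GL_BACK);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_EQUAL);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);           // additive
        drawSceneLitBySingleLight();
        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
        glDepthFunc(GL_LESS);
    }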

Shadow Volumes
Using Standard Graphics Hardware: step-by-step

• Create shadow volumes

• Initialize stencil buffer values to the # of volumes containing the eye

[Figure: per-pixel stencil values, initially 0]

Shadow Volumes
Using Standard Graphics Hardware: step-by-step

• Render the ambient lit scene

• Store the Z-buffer

• Set depth-test to strictly less-than

Shadow Volumes
Using Standard Graphics Hardware: step-by-step

• Render the front-facing shadow-volume boundary polygons
– Why front faces first? Unsigned stencil values

• Increment stencil values for covered pixels that pass the depth test

Shadow Volumes
Using Standard Graphics Hardware: step-by-step

• Render back-facing shadow-volume boundary polygons

• Decrement stencil values for pixels covered that pass depth-test

Shadow Volumes
Using Standard Graphics Hardware: step-by-step

• Pixels with a stencil value of zero are lit

• Set the depth test to equal

• Re-render the lit scene with no ambient into the lit pixels

Shadow Volumes
More Potential Problems

• Lots o’ geometry!
– Only create volumes for shadow-casting objects (approximation)
– Use only silhouettes

• Lots o’ fill!
– Reduce geometry
– Have a good “max distance”
– Clip to the view-volume

• Near-plane clipping

Shadow Volumes
Properties

– Point or directional light sources

– Direct light

– Opaque objects (could fake transparency using subtractive)

– Restricted to polygonal objects (could be generalized)

– Hybrid: object precision in the creation of shadow volumes, image precision in the per-pixel stencil evaluation

– Number of passes: L = num lights, N = number of tris
• additive: 1 ambient-lit, 3*N*L shadow-volume, 1 fully lit
• subtractive: 1 fully lit, 3*N*L shadow-volume, 1 image pass (modulation)
• Could be made faster by silhouette simplification and by hand-picking shadow casters and receivers

Shadow Volumes
Properties

– Can take advantage of static components:
• static objects & lights: precompute shadow volumes from the light sources
• static objects & viewer: precompute the first pass over the entire scene

– General shadow algorithm, but could be restricted for more speed

– Both subtractive and additive strategies presented

Projective Texture Shadows
What are Projective Textures?

Texture-maps that are mapped to a surface through a projective transformation of the vertices into the texture’s “camera” space

Projective Texture Shadows
How do we use them to create shadows?

Project a modulation image of the shadow-casting objects from the light’s point-of-view onto the shadow-receiving objects

[Figure: left, the light’s point-of-view and the resulting shadow projective texture (modulation image or light-map); right, the eye’s point-of-view with the projective texture applied to the ground plane (the self-shadowing is from another algorithm)]

Projective Texture Shadows
More details

Fast, subtractive method:

• For each light source:
– Create a light camera that encloses the shadowed area
– Render the shadow-casting objects into the light’s view; only need to create a light map (1 in light, 0 in shadow)
– Create a projective texture from the light’s view
– Render the fully-lit shadow-receiving objects with the modulation projective textures applied (need additive blending for all light sources except the first)

• Render the fully-lit shadow-casting objects

Projective Texture Shadows
More examples

Cast shadows from complex objects onto complex objects in only 2 passes over the shadow casters and 1 pass over the receivers (for 1 light)

Lighting for shadowed objects is computed independently for each light source and summed into a final image

Colored light sources: lit areas are modulated by a value of 1, and shadow areas can be any ambient modulation color

Projective Texture Shadows
Problems

• Does not use visibility information from the light’s view
– Objects must be depth-sorted
– Parts of an object that are not visible from the light also have the projective texture applied (ambient light appears darker on shadow-receiving objects)

• Receiving objects may already be textured
– Typically, only one texture can be applied to an object at a time

Projective Texture Shadows
Solutions… well, sort of...

• Does not use visibility information from the light’s view
– User selects shadow casters and receivers
– Casters can be receivers, and receivers can be casters
– Must create and apply projective textures in front-to-back order from the light
– Darker ambient lighting is accepted; finding these regions requires a more general shadow algorithm

• Receiving objects may already be textured
– Use two passes: first apply the base texture, then apply the projective texture with modulation blending
– Use multi-texture: this is what it is for! Avoids passes over the geometry!

Projective Texture Shadows
Properties

• Point or directional light sources

• Direct light (fake transparency, with different modulation colors)

• All types of geometry (depends on the rendering system)

• Image precision (image-based)

• For each light, 2 passes over shadow-casting objects (1 to create modulation image, 1 with full lighting), 1 pass over shadow receiving object (fully-lit w/ projective texture)

• More passes will be required for shadow-casting objects that are already textured

• Benefits mostly from static scene (precompute shadow textures)

• User must partition objects into casters and receivers (casters could be receivers and vice versa)

Projective Texture Shadows
How do we apply projective textures?

• All points on the textured surface must be mapped into the texture’s camera space (a projective transformation)

• The position on the texture camera’s viewplane window maps into the 2D texture-map

How can this be done efficiently?
With a slight modification to perspectively-correct texture-mapping

Projective Texture Shadows
Perspectively-incorrect Texture-mapping

• Relies on interpolating screen-space values along projected edges

• Vertices after the perspective transformation and perspective divide:
(x, y, z, w) → (x/w, y/w, z/w, 1)

A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1, t_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2, t_2)

I(t) = (1 − t)A + tB

Projective Texture Shadows
Perspectively-correct Texture-mapping

• Add a 3D homogeneous coordinate to the texture-coords: (s, t, 1)

• Divide all vertex components by w after the perspective transformation

• Interpolate all values, including 1/w

• Obtain perspectively-correct texture-coords (s’, t’) by applying another homogeneous normalization (divide the interpolated s/w and t/w terms by the interpolated 1/w term)

A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1/w_1, t_1/w_1, 1/w_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2/w_2, t_2/w_2, 1/w_2)

I(t) = (1 − t)A + tB

Final perspectively-correct values, by normalizing the homogeneous texture-coords:

(x’, y’, z’, s’, t’) = (I_x, I_y, I_z, I_{s/w} / I_{1/w}, I_{t/w} / I_{1/w})

Projective Texture Shadows
Projective Texture-mapping

• Texture-coords become 4D just like vertex coords:
(x, y, z, w) and (s, t, r, q)

• A full 4x4 matrix transformation is applied to the texture-coords

• Projective transformations are also allowed; another perspective divide is needed for the texture-coords:
Vertices: homogeneous space to screen-space
(x, y, z, w) → (x/w, y/w, z/w)
Texture-coords: homogeneous space to texture-space
(s, t, r, q) → (s/q, t/q, r/q)

• Requires another per-vertex transformation, but the per-pixel work is the same as in perspectively-correct texture-mapping (Segal92)

Projective Texture Shadows
Projective Texture-mapping

Given a vertex v, corresponding texture-coords t, and two 4x4 matrix transformations M and T (M = composite modeling, viewing, and projection transformation; T = texture-coords transformation matrix):

– Each vertex is represented as [ M*v, T*t ] = [ x y z w s t r q ]

– Transformed into screen space through a perspective divide of all components by w:
[ x y z w s t r q ] → [ x/w y/w z/w s/w t/w r/w q/w ]

– All values are linearly interpolated along edges (across the polygon face)

– Perform a per-pixel homogeneous normalization of the texture-coords by dividing by the interpolated q/w value:
[ x’ y’ z’ s’ t’ r’ ] = [ x/w y/w z/w (s/w)/(q/w) (t/w)/(q/w) (r/w)/(q/w) ]

– Same as perspectively-correct texture-mapping, but instead of dividing by the interpolated 1/w, divide by the interpolated q/w (Segal92)

Projective Texture Shadows
Projective Texture-mapping

A = (x_1/w_1, y_1/w_1, z_1/w_1, s_1/w_1, t_1/w_1, r_1/w_1, q_1/w_1)
B = (x_2/w_2, y_2/w_2, z_2/w_2, s_2/w_2, t_2/w_2, r_2/w_2, q_2/w_2)

I(t) = (1 − t)A + tB

Final perspectively-correct values, by normalizing the homogeneous texture-coords:

(x’, y’, z’, s’, t’, r’) = (I_x, I_y, I_z, I_{s/w} / I_{q/w}, I_{t/w} / I_{q/w}, I_{r/w} / I_{q/w})

Projective Texture Shadows
Projective Texture-mapping

So how do we actually use this to apply the shadow texture?

• Use the vertex’s original coords as the texture-coords

• Texture transformation:
T = LightProjection * LightViewing * NormalModeling
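In fixed-function OpenGL this composition might be loaded into the texture matrix roughly as follows. The extra scale/bias that maps clip space [-1,1] into texture space [0,1] is an implementation detail assumed here, not stated on the slide.

    #include <GL/gl.h>

    void loadProjectiveTextureMatrix(const float lightProjection[16],
                                     const float lightViewing[16],
                                     const float normalModeling[16])
    {
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.5f, 0.5f);   // bias: [-1,1] clip space ...
        glScalef(0.5f, 0.5f, 0.5f);       // ... to [0,1] texture space
        glMultMatrixf(lightProjection);   // light's projection (frustum)
        glMultMatrixf(lightViewing);      // world -> light space
        glMultMatrixf(normalModeling);    // object -> world space
        glMatrixMode(GL_MODELVIEW);
    }

The per-vertex texture coordinates are then the vertex’s own object-space coordinates, which fixed-function GL can generate with GL_OBJECT_LINEAR texture-coordinate generation.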

Shadow-Maps
for accelerating ray-traced shadow feelers

• Previously, shadow feelers had to be intersected against all objects in the scene

• What if we knew the nearest intersection point for all rays leaving the light?

• The depth-buffer of the scene rendered from a camera at the light gives us a discretized version of this

• This depth-buffer is called a shadow-map

• Instead of intersecting rays with objects, we intersect the ray with the light’s viewplane and look up the nearest depth value

• If the light’s depth value at this point is less than the depth to the eye-ray’s nearest intersection point, then this point is in shadow!

[Figure: an eye ray hits its nearest intersection point E; the corresponding light ray has its own nearest intersection point L. If L is closer to the light than E, then E is in shadow.]

Shadow-Maps
for accelerating ray-traced shadow feelers

Cool, we can really speed up ray-traced shadows now!

– Render from the eye view to accelerate first-hit ray-casting

– Render from the light view to store first hits from the light

– For each pixel-ray in the eye’s view, we can project the first hit point into the light’s view and check if anything intersects the shadow feeler with a simple table lookup!

– The shadow-map is discretized, but we can just use the nearest value

What are the potential problems?

Shadow-Maps
Problems with Ray-traced Shadow-Maps

• Still too slow
– requires many per-pixel operations
– does not take advantage of pixel coherence in the eye view

• Still has the self-shadowing problem
– needs a depth bias

• Discretization error
– Using the depth value nearest to the projected point may not be sufficient
– How can we filter the depth values? The standard way does not really make sense here.

Shadow-Maps
a faster way: the standard shadow-map approach

• Not normally used as a ray-tracing acceleration technique; normally used in a standard Z-buffered graphics system

• Two methods presented (Williams78):
– Subtractive: post-processing on the final lit image (like full-scene image warping)
– Additive: as implemented in graphics hardware (OpenGL extension on InfiniteReality)

Shadow-Maps
illustration of the basic idea

[Figure: the shadow-map from light 1, the shadow-map from light 2, and the final view]

Shadow-Maps
Subtractive

• Render the fully-lit scene

• Create the shadow-map: render depth from the light’s view

• For each pixel in the final image (see the sketch below):
– Project the point at each pixel from eye screen-space into light screen-space (keep the eye-point depth De)
– Look up the light depth value Dl
– Compare the depth values; if Dl < De, the eye-point is in shadow
– Modulate if the point is in shadow
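A minimal sketch of the per-pixel comparison, with hypothetical Matrix4/DepthImage types; the bias term anticipates the self-shadowing fix discussed later.

    struct Point3 { float x, y, z; };
    struct Matrix4 { float m[4][4]; };   // row-major, column vectors

    // Hypothetical shadow-map wrapper: nearest stored depth at light-space (x, y).
    struct DepthImage { float depthAt(float x, float y) const; };

    // Transform by M and do the perspective divide (w assumed nonzero).
    static Point3 project(const Matrix4& M, const Point3& p)
    {
        float v[4] = { p.x, p.y, p.z, 1.0f }, r[4];
        for (int i = 0; i < 4; ++i)
            r[i] = M.m[i][0]*v[0] + M.m[i][1]*v[1] + M.m[i][2]*v[2] + M.m[i][3]*v[3];
        return { r[0]/r[3], r[1]/r[3], r[2]/r[3] };
    }

    // Subtractive shadow-map test for one pixel: true means "modulate this pixel".
    bool pixelInShadow(const Point3& eyeSpaceP, const Matrix4& eyeToLightClip,
                       const DepthImage& shadowMap, float bias)
    {
        Point3 p = project(eyeToLightClip, eyeSpaceP);  // into light screen-space
        float De = p.z;                                 // depth of P from the light
        float Dl = shadowMap.depthAt(p.x, p.y);         // stored nearest depth
        return Dl + bias < De;                          // something nearer occludes P
    }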

Shadow-Maps
Subtractive: advantages

• Constant-time shadow computation!
Just like full-scene image warping: eye-view pixels are warped to the light view and then a depth comparison is performed

• Only a 2-pass algorithm:
1 eye pass, 1 light pass (and 1 constant-time image-warping pass)

• Deferred shading (for shadow computation)

Zhang98 presents a similar approach using a forward mapping (from light to eye, reversing this whole process)

Shadow-Maps
Subtractive: disadvantages

• Not as accurate as additive (same reasons)
– Specular and diffuse components remain in shadow
– Modulates the ambient term

• Has the standard shadow-map problems:
– Self-shadowing: a depth bias is needed
– Depth sampling error: how do we accurately reconstruct depth values from a point sampling?

Shadow-Maps
Additive

• Create the shadow-map: render depth from the light’s view
• Use the shadow-map as a projective texture!
• While scan-converting triangles:
– apply the shadow-map projective texture
– instead of modulating with the looked-up depth value Dl, compare the value against the r-value (De) of the transformed point on the triangle
– Compare De to Dl; if Dl < De, the eye-point is in shadow

Basically, we scan-convert the triangle in both eye and light spaces simultaneously and perform a depth comparison in light space against the previously stored depth values

Shadow-Maps
Additive: advantages

• Easily implemented in hardware
Only a slight change to standard perspectively-correct texture-mapping hardware: add an r-component compare op

• Fastest, most general implementation to date!
As fast as projective textures, but general!

Shadow-Maps
Additive: disadvantages

• Computes shadows on a per-primitive basis
All pixels covered by all primitives must go through the shadowing and lighting operation whether visible or not (no deferred shading)

• Still has the standard shadow-mapping problems
– Self-shadowing
– Depth sampling error

Shadow-Maps
Solving the main problems: self-shadowing

Use a depth bias during the transformation into light space:

– Add a z-translation towards the light source after the transformation from eye to light space

OR

– Add a z-translation towards the eye before transforming into light space

OR

– Translate the eye-space point along the surface normal before transforming into light space
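One common realization (an assumption here, not prescribed by the slide) is to apply polygon offset while rendering the shadow map itself, pushing the stored depths slightly away from the light.

    #include <GL/gl.h>

    void renderSceneDepthFromLight();   // hypothetical: depth-only light pass

    void buildShadowMapWithBias()
    {
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.1f, 4.0f);    // slope-scaled + constant bias (tunable)
        renderSceneDepthFromLight();
        glDisable(GL_POLYGON_OFFSET_FILL);
    }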

Shadow-Maps
Solving the main problems: depth sampling

Could just use the nearest sample, but how would you anti-alias depth?

Shadow-Maps
Depth sampling: normal filtering

• Averaging depth doesn’t really make sense (it is unrelated to the surface, especially at shadow boundaries!)

• Still a binary result (no anti-aliased, softer shadows)

Shadow-Maps
Depth sampling: percentage-closer filtering (Reeves87)

• Could average binary results of all depth map pixels covered

• Soft anti-aliased shadows

• Very similar to point-sampling across an area light source in ray-traced shadow computation
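A sketch of percentage-closer filtering over a (2*radius+1)^2 neighborhood; DepthImage::texel is a hypothetical nearest-texel lookup. Note that the binary comparison results are averaged, never the depths themselves.

    // Hypothetical shadow-map wrapper: stored depth at integer texel (s, t).
    struct DepthImage { float texel(int s, int t) const; };

    float percentageCloser(const DepthImage& shadowMap, int s, int t,
                           float De,          // depth of the point from the light
                           float bias, int radius)
    {
        int lit = 0, total = 0;
        for (int dt = -radius; dt <= radius; ++dt) {
            for (int ds = -radius; ds <= radius; ++ds) {
                ++total;
                // Compare first (binary shadow test), then average the results.
                if (shadowMap.texel(s + ds, t + dt) + bias >= De)
                    ++lit;   // this sample does not occlude the point
            }
        }
        return float(lit) / float(total);   // Shadow_i fraction in [0,1]
    }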

Shadow-Maps
How do you choose the samples?

The quadrilateral represents the area covered by a pixel’s projection onto a polygon, after being projected into the shadow-map

Scanline Algorithms
the classic by Bouknight and Kelley

• Project edges of shadow casting triangles onto receivers

• Use shadow-volume-like parity test during scanline rasterization

Area-Subdivision Algorithms
based on Atherton-Weiler clipping

• Find actual visible polygon fragments (geometrically) through generalized clipping algorithm

• Create model composed of shadowed and lit polygons

• Render as surface detail polygons


Multiple Light Sources
for any single-light algorithm

• Accumulate all fully-lit single-light images into a single image through a summing blend op (standard accumulation buffer or blending operations); see the sketch below

• The global ambient-lit scene should be added in separately

• Very easy to implement

• Could be inefficient for some algorithms

• Use the higher accuracy of the accumulation buffer (usually 12 bits per color component)
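With the OpenGL accumulation buffer, the summation might look like this; drawGlobalAmbientScene and drawSceneWithSingleLight stand in for whatever single-light shadow algorithm is used.

    #include <GL/gl.h>

    // Hypothetical helpers: each draws a complete image into the framebuffer.
    void drawGlobalAmbientScene();
    void drawSceneWithSingleLight(int i);   // any single-light shadow algorithm

    void accumulateLights(int numLights)
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        drawGlobalAmbientScene();
        glAccum(GL_ACCUM, 1.0f);            // add the ambient image
        for (int i = 0; i < numLights; ++i) {
            drawSceneWithSingleLight(i);
            glAccum(GL_ACCUM, 1.0f);        // sum at accumulation-buffer precision
        }
        glAccum(GL_RETURN, 1.0f);           // write the summed image back
    }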

Area Light Sources
for any point-light algorithm

• Soft or “fuzzy” shadows (penumbra)

• Some algorithms have some “natural” support for these

• For restricted algorithms, we can always sample the area light source with many point light sources: jitter and accumulate

• Very expensive: many “high quality” passes to obtain something fuzzy

• Not really feasible in most interactive applications

• Convolution and image-based methods are usually more efficient here

Backwards Ray-tracing

• Big topic: sorry, no time

Radiosity

• Big topic: sorry, no time

References

Appel, A. “Some Techniques for Shading Machine Renderings of Solids,” Proc. AFIPS SJCC, Vol 32, 1968, pgs 37-45.

Arvo, J. “Backward Ray Tracing,” in A.H. Barr, ed., Developments in Ray Tracing, Course Notes 12 for SIGGRAPH 86, Dallas, TX, August 18-22, 1986.

Atherton, P.R., Weiler, K., and Greenberg, D. “Polygon Shadow Generation,” SIGGRAPH 78, pgs 275-281.

Bergeron, P. “A General Version of Crow’s Shadow Volumes,” IEEE CG&A, 6(9), September 1986, pgs 17-28.

Blinn, Jim. “Jim Blinn’s Corner: Me and My (Fake) Shadow,” IEEE CG&A, vol 8, no 1, Jan 1988, pgs 82-86.

Bouknight, W.J. “A Procedure for Generation of Three-Dimensional Half-Toned Computer Graphics Presentations,” CACM, 13(9), September 1970, pgs 527-536. Also in FREE80, pgs 292-301.

Bouknight, W.J. and Kelly, K.C. “An Algorithm for Producing Half-Tone Computer Graphics Presentations with Shadows and Movable Light Sources,” SJCC, AFIPS Press, Montvale, NJ, 1970, pgs 1-10.

Chin, N., and Feiner, S. “Near Real-Time Shadow Generation Using BSP Trees,” SIGGRAPH 89, pgs 99-106.

References

Cohen, M.F., and Greenberg, D.P. “The Hemi-Cube: A Radiosity Solution for Complex Environments,” SIGGRAPH 85, pgs 31-40.

Cook, R.L. “Shade Trees,” SIGGRAPH 84, pgs 223-231.

Cook, R.L., Porter, T., and Carpenter, L. “Distributed Ray Tracing,” SIGGRAPH 84, pgs 127-145.

Crow, Frank. “Shadow Algorithms for Computer Graphics,” SIGGRAPH 77.

Goldstein, R.A., and Nagel, R. “3-D Visual Simulation,” Simulation, 16(1), January 1971, pgs 25-31.

Goral, C.M., Torrance, K.E., Greenberg, D.P., and Battaile, B. “Modeling the Interaction of Light Between Diffuse Surfaces,” SIGGRAPH 84, pgs 213-222.

Gouraud, H. “Continuous Shading of Curved Surfaces,” IEEE Trans. on Computers, C-20(6), June 1971, pgs 623-629. Also in FREE80, pgs 302-308.

Hourcade, J.C. and Nicolas, A. “Algorithms for Antialiased Cast Shadows,” Computers & Graphics, 9(3), 1985, pgs 259-265.

Nishita, T. and Nakamae, E. “An Algorithm for Half-Tone Representation of Three-Dimensional Objects,” Information Processing in Japan, Vol. 14, 1974, pgs 93-99.

Nishita, T., and Nakamae, E. “Continuous Tone Representation of Three-Dimensional Objects Taking Account of Shadows and Interreflection,” SIGGRAPH 85, pgs 23-30.

References

Reeves, W.T., Salesin, D.H., and Cook, R.L. “Rendering Antialiased Shadows with Depth Maps,” SIGGRAPH 87, pgs 283-291.

Segal, M., Korobkin, C., van Widenfelt, R., Foran, J., and Haeberli, P. “Fast Shadows and Lighting Effects Using Texture Mapping,” Computer Graphics, 26(2), July 1992, pgs 249-252.

Warnock, J. “A Hidden-Surface Algorithm for Computer Generated Half-Tone Pictures,” Technical Report TR 4-15, NTIS AD-753 671, Computer Science Department, University of Utah, Salt Lake City, UT, June 1969.

Whitted, T. “An Improved Illumination Model for Shaded Display,” CACM, 23(6), June 1980, pgs 343-349.

Williams, L. “Casting Curved Shadows on Curved Surfaces,” SIGGRAPH 78, pgs 270-274.

Woo, Andrew, Pierre Poulin, and Alain Fournier. “A Survey of Shadow Algorithms,” IEEE CG&A, Nov 1990, pgs 13-32.

Zhang, H. “Forward Shadow Mapping,” Rendering Techniques 98 (Proceedings of the 9th Eurographics Rendering Workshop).

Acknowledgements

Mark Kilgard (nVidia): for various pictures from presentation slides (www.opengl.org)

Advanced OpenGL Rendering course notes (www.opengl.org)
