Lighting for Games
Kenneth L. Hurley
Agenda
Introduction to Lighting
What is Radiosity?
Lightmaps
Per-Pixel Lighting
High Dynamic Range Images
Low Dynamic Range Images
BRDFs
Introduction to Lighting
Ambient Lighting
I = Ia x Ka
Introduction to Lighting
Diffuse Lighting
I = Ip x Kd x (N . L)
Introduction to Lighting
Phong Shading
I = Ks x (R . V)^n

Reflection Calculation
R = (2 x N x (N . L)) - L
Radiosity
What is Radiosity?
Objects reflect light at different wavelengths
Can create a scattered lighting effect
Lightmaps are determined from radiosity solutions
Ray tracing with diffuse reflection calculations is usually used to determine the radiosity solution
Lightmaps
Encodes a diffuse lighting solution in a separate texture
Think of an interior building wall
Brick surface pattern on walls may be common to many walls and highly repeated
Diffuse lighting solution is different for each wall, but typically low resolution
Light maps decouple surface texture from the diffuse lighting contribution
http://hcsoftware.sourceforge.net/RadiosGL/RadiosGL.html
[Figure: lightmap only, decal only, and the combined scene — the lightmap modulated with the decal texture]
Lightmaps in Quake2
[Figure: Specular lighting contribution (per-vertex lighting), modulated with the gloss map texture, added to the diffuse lighting contribution (per-vertex lighting), gives the final combined result]
Gloss Map Example
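Per texel, the combine in the diagram above is a multiply-add. A minimal C sketch for one color channel (`gloss_combine` is an illustrative name; the saturate to [0,1] mirrors fixed-function combiner behavior):

```c
/* final = specular * gloss + diffuse, clamped to [0,1] per channel */
static float gloss_combine(float specular, float gloss, float diffuse) {
    float v = specular * gloss + diffuse;
    return v > 1.0f ? 1.0f : v;
}
```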
Per Pixel Lighting Overview
Introduction to per-pixel lighting
Normal maps
How to create them
Tangent or “surface-local” space
Why we need it
How to use it
Things to watch out for
Animation & Other Topics
Per-Pixel Lighting
Per-Pixel lighting is the next leap in visual quality after simple multi-texturing
It allows more apparent surface detail than would be possible with triangles alone
DX7 HW with DOT3 was a huge leap in per-pixel capability
DX8 HW increases performance again, and adds completely new capabilities
Examples
A single quad lit per-pixel
Simple geometry, high detail
Reflective bumps
Per-Pixel Lighting / Bump Mapping
Bump Mapping is a subset of Per-Pixel Lighting
These slides will discuss them interchangeably
Most older Bump Mapping examples were only performing diffuse directional lighting
Bump Mapping / Per-Pixel Lighting can also be used to achieve diffuse and/or specular point lights, spotlights and volumetric lights
Normal Maps are Bump Maps
Height maps are popular (3DS Max, Maya, ..)
Normal maps are better

[Figure: a height map and the corresponding normal map]
Creating Normal Maps
Normal maps are easy to create from height maps
Find the slope along each axis: dHeight/dU, dHeight/dV
Cross product of the slopes gives the normal vector
Convert normal vector (X,Y,Z) in [-1,1] to an R,G,B color in [0,1]
X → R, Y → G, Z → B
Z is “up” out of the image plane
RGB = ( 0.5, 0.5, 1.0 ) corresponds to XYZ = ( 0, 0, 1 )
XYZ = ( 0, -1, 0 ) → RGB = ( 0.5, 0.0, 0.5 )
Surface normals mostly point up out of the bump map plane, so normal maps are mostly blue
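The [-1,1] → [0,1] packing above can be written as a pair of one-line helpers (illustrative names):

```c
/* pack a normal component [-1,1] into a color component [0,1] */
static float xyz_to_rgb(float c) { return 0.5f * c + 0.5f; }
/* and unpack it again */
static float rgb_to_xyz(float c) { return 2.0f * c - 1.0f; }
```

For example, xyz_to_rgb applied to (0, -1, 0) gives (0.5, 0.0, 0.5), matching the slide.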
[Figure: simulated surface]
Creating Normal Maps From Height Maps
Simplest: Use the 4 nearest neighbors

dz/du  = ( B.z - A.z ) / 2.0f   // U gradient
dz/dv  = ( D.z - C.z ) / 2.0f   // V gradient
Normal = Normalize( (dz/du) x (dz/dv) )   // x denotes cross product
[Diagram: height samples A and B along U, C and D along V, around the center texel, with the resulting surface normal and the U, V, Z axes]
Creating Normal Maps From Height Maps
Make sure your height map uses the full range of gray values
Get smoother results by sampling a larger area around each point
3x3, 5x5, …
NVIDIA provides three tools:
Normal Map Generation Tool (best sampling)
BumpMaker (simple 2-neighbor sampling)
Photoshop plug-in
Creating Normal Maps From Geometry
A more esoteric approach
Can be done in a DCC app
Model surface detail in 3D
Create detail up from a flat surface
Render the surface with red, green, and blue directional lights, one color for each 3D axis
Need negative lights as well as positive
Orthographic projection
Creating Normal Maps From Geometry
5 lights, positive & negative
Ambient = ( ½, ½, ½ )
[Diagram: +R, -R, +G, -G, and +B directional lights arranged around the flat surface]
Normal Map Applied to Geometry
We now have a normal vector for each pixel of the object
Use the normal in the standard N • L + N • H lighting equation
The normal map vector is relative to the flat triangle it is on. It is NOT a normal in world or object space!
N • L must have the Normal and Light Vector in the same coordinate system!
The Light Vector
With vertex lighting, we had
A normal vector per vertex
A light vector per vertex
So far, we’ve got a normal vector per pixel
We need a light vector for every pixel!
Start with the vector to the light at each vertex
HW iterates that vector across each triangle
Iterated as a color or texture coordinate
Interpolated Vector -- Watch Out!
We’re interpolating between vectors linearly
The interpolated vector is not normalized
It can be shorter than unit length
Only noticeable when the light is close to the object
[Diagram: two normalized vectors at the endpoints; the linearly interpolated vector between them is not normalized]
Solution – Re-Normalize the Vector
Do this only if you have to
Only if the distance from the tri to the light is less than the longest edge of the tri, or some other metric
What if you don’t?
Highlights are dimmer
In rare cases you will notice a darkening near the light
Use normalization cube map
Pixel Shaders: Use one step of Newton-Raphson technique to re-normalize
Developed by Scott Cutler at NVIDIA
Normalization Cube Map
Access the cube map with an un-normalized vector (U,V,W)
The result is an RGB normalized vector in the same direction
Input ( 0, 0, 0.8 ) → RGB ( 127, 127, 255 ), which is a normalized vector for per-pixel lighting
[Diagram: cube map faces +X, -X, +Y, -Y, +Z, -Z, addressed by the (U,V,W) vector]
Normalization Cube Map
Cube map doesn’t need to be huge
32x32x8
64x64x16
www.nvidia.com/Developer
“Simple Dotproduct3 Bump Mapping” demo
Newton-Raphson Re-Normalization
One step of a numerical technique for normalizing a vector
DX8 Pixel Shaders (or OGL Register Combiners)
Faster than a normalization cube map
Numerical method:
Normalize( V ) ≈ V/2 * ( 3 - V • V )
when V is close to unit length
Great when the angle between the interpolated vectors of a tri is no more than about 40º
That’s a big difference, so this is valid for most models & circumstances
Newton-Raphson in DX8
Approximate V/2 * ( 3 - V • V ):
V/2 * (3 – V•V)
= 1.5V – 0.5V * (V•V)
= V + 0.5V – 0.5V * (V•V)
= V + 0.5V * ( 1 – ( V • V ) )

Pixel Shader code: V = t0 vector

def c0, 0.5, 0.5, 0.5, 0.5
mul r0, t0, c0        // 0.5 * V
dp3 r1, t0, t0        // V DOT V
mad r0, 1-r1, r0, t0  // V + (1 - V•V) * 0.5V
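The same step in plain C, for reference against the shader above (`nr_normalize_step` is an illustrative name for this sketch):

```c
/* One Newton-Raphson step: V' = V + 0.5*V*(1 - V.V) = V * (1.5 - 0.5*(V.V)).
   Pulls V toward unit length when it is already close to unit length. */
static void nr_normalize_step(float v[3]) {
    float d = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
    float s = 1.0f + 0.5f * (1.0f - d);
    v[0] *= s;
    v[1] *= s;
    v[2] *= s;
}
```

A vector of length 0.9 comes out with length about 0.9855 — much closer to 1, and a unit vector is left unchanged.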
N • L Per-Pixel
Can visualize the light vector x,y,z as an RGB color
Same [-1,1] → [0,1] conversion as for the normal vector
Per-Pixel Lighting
[Figure: Normal map • Light Vector, L = per-pixel N • L result]
What Coordinate System?
The normal vector is expressed relative to each triangle
This is “surface-local” space, aka “texture space”
It’s a 3D basis, consisting of three axis vectors:
S, T, S x T ( x = cross product )
Texture space depends on
The geometric position of the vertices
The U,V coordinates of the vertices, which determine how the normal map is applied
[Diagram: per-triangle S, T, and SxT basis vectors]
How to Calculate Texture Space
NVIDIA sample code!
The D3DX utility library for DX8.1 will do it for you!
If you must know…
For each tri, find the derivatives of the U and V texture coordinates with respect to X, Y, and Z
S vector = dU/dX, dU/dY, dU/dZ
T vector = dV/dX, dV/dY, dV/dZ
Then take S x T
Now we have an S, T, SxT texture space basis for each triangle
S, T, SxT is a transform from Object Space into Texture Space
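A common way to compute the per-triangle basis is from edge deltas of the positions and UVs. This C sketch is a standard tangent-basis derivation under that approach, not the NVIDIA sample or D3DX source; all names are illustrative:

```c
/* Compute S (direction of increasing U), T (direction of increasing V),
   and SxT for one triangle with positions p0..p2 and UVs uv0..uv2. */
static void tri_texture_space(const float p0[3], const float p1[3], const float p2[3],
                              const float uv0[2], const float uv1[2], const float uv2[2],
                              float S[3], float T[3], float SxT[3]) {
    float e1[3], e2[3];
    for (int i = 0; i < 3; ++i) { e1[i] = p1[i] - p0[i]; e2[i] = p2[i] - p0[i]; }
    float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
    float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];
    float r = 1.0f / (du1*dv2 - du2*dv1);   /* assumes non-degenerate UVs */
    for (int i = 0; i < 3; ++i) {
        S[i] = r * (dv2*e1[i] - dv1*e2[i]);
        T[i] = r * (du1*e2[i] - du2*e1[i]);
    }
    SxT[0] = S[1]*T[2] - S[2]*T[1];         /* cross product S x T */
    SxT[1] = S[2]*T[0] - S[0]*T[2];
    SxT[2] = S[0]*T[1] - S[1]*T[0];
}
```

For a triangle in the XY plane with UVs matching XY, this produces S = +X, T = +Y, SxT = +Z, as expected.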
Resultant Texture Space
Express texture space per-vertex
For each vertex’s S vector, average the S vectors of the tris it belongs to
Same for the T and SxT vectors
Analogous to computing vertex normals from face normals!
[Diagram: per-vertex S, T, and SxT bases, averaged from the adjacent triangles]
Add It to Your Geometry
Add the S, T, SxT vectors to your vertex format (FVF)
We can now transform the object space Light Vector into texture space
This puts L in the same space as our normal map vectors, so N • L lighting will work
DX7: Must transform the light vector in SW
Stuff it into the Diffuse or Specular color for iteration, or a 3D texture coord for the Normalization Cube Map
DX8: Use a Vertex Shader to transform light vector at each vertex
Put it into a color or texture coord for iteration
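The per-vertex transform is just three dot products of L against the basis vectors — the same three dp3 instructions the GeForce 3 vertex shader uses later in the deck. A C sketch (`to_texture_space` is an illustrative name):

```c
/* Transform an object-space light vector L into texture space using the
   per-vertex S, T, SxT basis. */
static void to_texture_space(const float S[3], const float T[3], const float SxT[3],
                             const float L[3], float out[3]) {
    out[0] =   S[0]*L[0] +   S[1]*L[1] +   S[2]*L[2];   /* dp3 with S   */
    out[1] =   T[0]*L[0] +   T[1]*L[1] +   T[2]*L[2];   /* dp3 with T   */
    out[2] = SxT[0]*L[0] + SxT[1]*L[1] + SxT[2]*L[2];   /* dp3 with SxT */
}
```

With the identity basis (S = X, T = Y, SxT = Z), the light vector comes through unchanged.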
DX7 vs. DX8 Hardware Implementation
DX7 hardware
Write the light vector to a color for iteration
TextureStageState setup example:

COLORARG0  D3DTA_DIFFUSE   // light vec
COLORARG1  D3DTA_TEXTURE   // normal map
COLOROP    D3DTOP_DOTPRODUCT3

DX8 hardware
Write the light vector to a texture coord for iteration
Various Pixel Shader program approaches:

tex t0               // normal map
texcoord t1          // light vector
dp3 r0, t0_bx2, t1   // expand unsigned vals
GeForce I, II Details
Remember: Under DX8, GeForce I & II have a new temporary result register
Also new triadic ops & a 3rd argument:
D3DTOP_MULTIPLYADD, D3DTOP_LERP

VertexBuffer->Lock();  write light vector to color or texture coord;  VertexBuffer->Unlock()

N • L * BaseTexture:
Stage 0, COLORARG0  D3DTA_DIFFUSE   // light vec
Stage 0, COLORARG1  D3DTA_TEXTURE   // normal map
Stage 0, COLOROP    D3DTOP_DOTPRODUCT3
Stage 1, COLORARG0  D3DTA_CURRENT   // dot3 result
Stage 1, COLORARG1  D3DTA_TEXTURE   // base tex
Stage 1, COLOROP    D3DTOP_MODULATE
GeForce 3 Approach
FVF = { pos, nrm, diffuse, t0, S, T, SxT }
Declare vertex shader: S → v4; T → v5; SxT → v6
SetVertexShaderConst( C_L, vLightPosObjSpace.. )

vs.1.1
dp3 oD1.x, v4, c[C_L]
dp3 oD1.y, v5, c[C_L]
dp3 oD1.z, v6, c[C_L]
mov oD1.w, c[CV_ONE]
ps.1.1
tex t0 // base
tex t1 // normal map
dp3 r0, t1_bx2, v1_bx2
mul r0, r0, t0
// plenty of slots left if you
// want to do normalization
Animation
Keyframe:
Don’t blend between radically different keys
Interpolate S, T, SxT & re-normalize (VShader)
Matrix Palette Skinning:
Animate the S, T, SxT vectors with the same transform as for the normal vector
A Vertex Shader program makes this trivial
Try using the vertex Normal in place of SxT if you need room
Final Bump Map Thoughts
Once you have texture space, you’re all set for many other effects
Normal maps can be created and modified on the fly very quickly with DX8 hardware!
Normal Map + Detail Normal Map for added detail
Similar to texture + detail texture
Per-pixel lighting adds tremendous detail
High Dynamic Range Images
Developed by Paul E. Debevec and Jitendra Malik
http://www.debevec.org
Radiance can vary beyond the precision of 8 bits
Encodes radiance in floating point values
Demo at the site uses GeForce2
Commercial licensing required
Low Dynamic Range Images
Simple lighting encoded in a cubemap
Low precision, but can be effective for diffuse lighting
Take high resolution photographs of a mirrored ball from as many as 6 angles
Low Dynamic Range Images
Align images into cubemap faces.
Low Dynamic Range Images
Run through a diffuse convolution filter
Low Dynamic Range Images
Results
BRDFS
Principles of BRDF Lighting

BRDF stands for Bi-directional Reflectance Distribution Function
A BRDF is a function of the incoming light direction and the outgoing view direction

[Diagram: light direction L at angle θL and view direction V at angle θV to the surface normal N, with the light, the observer, and the surface]
What is a BRDF?
In 3D, a direction D can be represented in spherical coordinates (θD, φD)
A BRDF is a 4D function: BRDF( θL, φL, θV, φV )
Basic Idea:
Approximate the 4D function with lower dimensional functions
“Separate” the BRDF into products of simpler functions
BRDF(L,V) ≈ G1(L)*H1(V) + G2(L)*H2(V) + …
Minnaert reflections are a little easier
Only encode (L • N) and (V • N)
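A toy C sketch of such a separation: each term is a product of a function of L alone and a function of V alone (in practice G1/H1/G2/H2 would be texture lookups). The specific factors here — a Lambert-like G1/H1 and a crude view-dependent G2/H2 with N fixed at +Z — are illustrative placeholders, not a fitted BRDF:

```c
/* light-only factors */
static float G1(const float L[3]) { return L[2] > 0.0f ? L[2] : 0.0f; }         /* ~ N.L */
static float G2(const float L[3]) { return L[2] > 0.0f ? 0.25f * L[2] : 0.0f; }
/* view-only factors */
static float H1(const float V[3]) { (void)V; return 1.0f; }
static float H2(const float V[3]) { return V[2] > 0.0f ? V[2] : 0.0f; }         /* ~ N.V */

/* BRDF(L,V) ~ G1(L)*H1(V) + G2(L)*H2(V) */
static float brdf_approx(const float L[3], const float V[3]) {
    return G1(L)*H1(V) + G2(L)*H2(V);
}
```

The point of the separation is that each 2D factor fits in a texture, so the 4D function collapses into a couple of texture lookups and a multiply-add per term.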
Multi-Texture BRDF Approximations
BRDF Examples
References
Computer Graphics at University of Leeds, http://www.comp.leeds.ac.uk/cuddles/hyperbks/Rendering/index.html
Paul E. Debevec and Jitendra Malik. Recovering High Dynamic Range Radiance Maps from Photographs. In SIGGRAPH 97, August 1997.
Questions…
?
www.nvidia.com/Developer