
Volume 24 (2005), number 1 pp. 83–109 COMPUTER GRAPHICS forum

Acquisition, Synthesis, and Rendering of Bidirectional Texture Functions

G. Müller, J. Meseth, M. Sattler, R. Sarlette and R. Klein

University of Bonn, Institute for Computer Science II

Abstract
One of the main challenges in computer graphics is still the realistic rendering of complex materials such as fabric or skin. The difficulty arises from the complex mesostructure and reflectance behavior defining the unique look-and-feel of a material. A wide class of such realistic materials can be described as 2D texture under varying light and view direction, namely, the Bidirectional Texture Function (BTF). Since an easy and general method for modeling BTFs is not available, current research concentrates on image-based methods, which rely on measured BTFs (acquired real-world data) in combination with appropriate synthesis methods. Recent results have shown that this approach greatly improves the visual quality of rendered surfaces and therefore the quality of applications such as virtual prototyping. This state-of-the-art report (STAR) presents in detail the techniques for the main tasks involved in producing photo-realistic renderings using measured BTFs.

Keywords: bidirectional texture function, reflectance measurement, image-based rendering, surface appearance.

ACM CCS: I.3.3 Picture/Image Generation: Digitizing and scanning, I.4.1 Digitization and Image Capture: Reflectance, I.3.7 [Three-Dimensional Graphics and Realism]: Color, shading, shadowing, and texture.

1. Introduction

The realistic visualization of materials and object surfaces is one of the main challenges in today's computer graphics. Traditionally, the geometry of a surface is modeled explicitly (e.g., with triangles) only up to a certain scale, while the microstructure responsible for the reflectance behavior of a material is simulated using relatively simple analytic BRDF models. For special groups of materials, such as metals or shiny plastics, these models are able to generate a more or less correct approximation of real appearance, but they are neither suited nor designed for all materials out there in the real world. Furthermore, these models are not easy to parameterize in order to achieve a desired effect, because the parameters may not correspond to any intuitive or physical meaning. Things become even more complicated if one wants to simulate the so-called mesostructure of a complex inhomogeneous material. These structures lie between the macro-scale geometry modeled by triangles and the micro-scale geometry modeled by analytical BRDF models, and they play a very important role in defining and transporting the unique look-and-feel of a material. If they are not reproduced accurately, the images will look artificial.

Driven by these limitations of traditional modeling and the enormous growth of memory capacity (256 MB of random access memory is almost standard equipment on current off-the-shelf graphics hardware), image-based methods become increasingly important. The idea behind image-based rendering and modeling is to generate images from measurements of the real world (e.g., photographs) in order to simplify or even avoid the time-consuming and artistic process of modeling reality by hand. However, pure image-based representations lack the kind of control over the scene that traditional modeling offers and are therefore still limited to special domains. This explains the fact that approaches mixing classical geometric and image-based modeling, such as texture mapping and its extensions like bump mapping, remain the most popular techniques for rendering mesostructure and are used in every field from real-time rendering to global illumination. Nevertheless, textures and bump maps are able to represent only a fraction of the appearance of complex materials.

© The Eurographics Association and Blackwell Publishing Ltd 2005. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

Submitted June 2004; reviewed December 2004; revised January 2005; accepted January 2005.


Figure 1: In principle, there is no conceptual difference between using BTFs and 2D textures. The compression steps can be omitted, but in fact they are absolutely necessary to achieve acceptable frame rates and processing times.

In fact, it is assumed that the material is nearly flat, and visually important light- and view-dependent effects such as shadowing, masking, and self-interreflections are not captured.

These effects are included in the Bidirectional Texture Function (BTF), a 6D texture representation introduced by Dana et al. [1], which extends common textures by a dependence on light and view direction. Using densely sampled BTFs instead of ordinary surface representations like 2D textures makes no real conceptual difference; the pipeline consists of the following main steps, which are also illustrated in Figure 1:

• Texture acquisition,
• Synthesis,
• Rendering.

Between these steps, compression can be inserted to reduce the storage requirements of the texture. These compression steps are optional for 2D texturing, but it will become obvious in the following that they are absolutely necessary for BTFs to achieve acceptable frame rates and processing times. As shown in Figure 2, the visual impression of complex materials generated by rendering from a sampled BTF is improved significantly compared to simple diffuse texture mapping. So why is the use of ordinary 2D textures still more widespread?

First, current acquisition systems are expensive and the measurement process is time consuming, as the direction-dependent parameters (light and view direction) have to be controlled very accurately; otherwise, the resulting data will be incorrect. Furthermore, the size of measured BTFs lies in a range from hundreds of megabytes to several gigabytes. This hampers both synthesis and rendering, so that only effective compression techniques can provide a solution.

Because of these limitations, BTF rendering is still not mature enough for industrial applications. Nevertheless, there is a growing demand for interactive photo-realistic material visualization in industry. For special applications, such as high-end virtual reality environments, BTF rendering can already satisfy these demands. Simple material representations like 2D textures or bump maps will sooner or later be replaced by more complex representations that reproduce all the subtle effects of general light-material interaction. This STAR intends to give an overview of the extent to which current BTF research can contribute to this process.

This paper is structured as follows: Sections 2–5 cover in detail the aforementioned main steps of using image-based material representations, i.e., acquisition, compression, synthesis, and rendering of BTFs. In Section 6, current and future applications of BTFs are discussed, and the last section gives a summary and concludes the STAR.

2. Acquisition

As shown by Dischler [2], who introduced BTFs to the computer graphics community for the first time, BTFs can be used for the efficient and high-quality rendering of modeled mesostructure. In this sense, BTFs serve the same purpose as other material representations that intend to reduce the geometric complexity of modeled surface detail, such as bump mapping, relief texture mapping, and horizon mapping, to name a few. But beyond that, the BTF is a general image-based representation of the physical appearance of real-world materials and captures effects that are very difficult or even impossible to model and simulate (e.g., light transport within a complex surface). Therefore, for the rest of this paper we will focus on measured BTFs.

The acquisition of the BTF of real-world materials requires a complex and controlled measurement environment. As BTF acquisition is a physical measurement of real-world reflection, special attention has to be paid to device calibration and image registration; otherwise, the measurements will contain inaccuracies that may generate visible rendering artifacts. In this section, current techniques for accurate measurement of BTFs are presented. As an introduction, the following subsection gives the theoretical background and a short overview of related image-based techniques for physical reflectance measurement.

Figure 2: Comparing simple texture mapping and rendering from a measured BTF.

Figure 3: The parameters of general light–material interaction.

2.1. Principles of reflectance measurement

To measure the reflectance behavior of a surface, a set of parameters that describe the process of light scattering at a surface has to be defined. For this purpose, let us imagine the path of a photon through a surface, as illustrated in Figure 3. It hits the surface at time t_i, at position x_i, and with an associated wavelength λ_i. Given a fixed local coordinate frame at every surface point, the incoming direction of the photon can be described as (θ_i, φ_i). The photon travels through the material and leaves the surface at position x_r, at time t_r, with possibly changed wavelength λ_r, in direction (θ_r, φ_r).

According to this description, the scattering of light is a process involving 12 parameters! Since the measurement of a 12-dimensional function is currently not practical, some simplifying assumptions have to be made. Typically, these are the following (the parameter count is tallied below):

• light transport takes zero time (t_i = t_r),
• the reflectance behavior of the surface is time invariant (t_0 = t_i = t_r),
• the interaction does not change the wavelength (λ_i = λ_r),
• the wavelength is discretized into three color bands, i.e., red, green, and blue (consider only λ_{r,g,b}).
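Tallying the parameters makes the reduction explicit; the following count is a sketch of the argument, with each surface position contributing two parameters:

\[
\underbrace{2}_{x_i} + \underbrace{1}_{t_i} + \underbrace{1}_{\lambda_i} + \underbrace{2}_{(\theta_i, \phi_i)} + \underbrace{2}_{x_r} + \underbrace{1}_{t_r} + \underbrace{1}_{\lambda_r} + \underbrace{2}_{(\theta_r, \phi_r)} = 12
\]

Dropping t_i and t_r (zero-time, time-invariant) and fixing λ_i = λ_r per color band removes four parameters, leaving 12 − 4 = 8.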

We have now arrived at an 8-dimensional function, which is called the bidirectional scattering-surface reflectance distribution function (BSSRDF) [3]. It describes the light transport between every pair of points on the surface for any incoming and outgoing direction. Figure 4 shows further simplifications of the BSSRDF obtained by fixing some of the parameters. For these simpler models, measurement systems have been proposed, and since this work is related to BTF measurement, a short overview of some of them is given in the following paragraphs.

Figure 4: A hierarchy of reflectance functions.

Inclusion of subsurface scattering. A practical model for rendering the BSSRDF has been proposed by Jensen et al. [4]. Their model is based on the dipole approximation to a diffusion model. For homogeneous materials, the parameters of this model (the absorption cross-section σ_a and the reduced scattering cross-section σ′_s) can be derived from a single HDR image. They achieved good results for materials such as marble and even fluids such as milk. Recently, Goesele et al. [5] presented a measurement setup for translucent inhomogeneous objects, which takes multiple images of an object illuminated with a narrow laser beam. They had to assume diffuse surface reflection and strong subsurface scattering because they did not measure the angular dependency.

Uniform and opaque materials. For many applications, it is convenient to drop the spatial dependence and consider reflection as taking place on an infinitesimal surface element. This process is described by the 4D BRDF, and the classical measurement device for this quantity is the gonioreflectometer, which samples the angular dependency sequentially by positioning a light source and a detector at various directions relative to the sample [3]. Several methods attempted to reduce measurement times by exploiting CCD chips to take several BRDF samples at once. Ward [6] used a hemispherical half-silvered mirror and a camera with a fish-eye lens to acquire the whole exitant hemisphere of a flat probe at once. Alternatively, one can take images of a curved sample, as done by Marschner et al. [7] and Matusik et al. [8]. The latter work was tailored to measuring isotropic BRDFs. It also demonstrated how measurement times can be significantly reduced by using sophisticated learning algorithms and a database of densely acquired BRDFs.

Reflectance fields. Pure image-based rendering considers the flow of light independently of a physical surface and became popular with the works on light fields [9] and lumigraphs [10]. They observed that the 5D plenoptic function (the pencil of light rays flowing through points in space) can be described by a 4D function if the viewer moves in unoccluded space outside or inside a virtual surface (e.g., a cube) over which the light field is parameterized. The variation of this light field with a 4D light field of radiance incident at the virtual surface is described by the 8D reflectance field [11]. If parameterized over a physical surface, this structure is equivalent to the BSSRDF. Fixing the incident light field gives the light field as originally introduced by Levoy et al. [9] and Gortler et al. [10], which has been sampled using arrays of cameras. Parameterized over a physical surface, it is called the surface light field [12,13]; it is measured using many images of an object with known geometry taken from different viewpoints.

In order to capture the lighting variability of the reflectance field, many images of the scene under varying lighting conditions have to be taken. Debevec et al. [11] built a so-called light stage, which records the appearance of a human face while a light source rotates around it. They assumed infinitely distant lighting and a fixed view, ending up with spatially varying 2D reflectance functions to reduce the dimensionality of the reflectance field. For measuring the reflectance functions of small and nearly planar objects, Malzbender et al. [14] constructed a hemispherical gantry equipped with 50 strobe light sources. They also introduced Polynomial Texture Maps (PTM), a compact representation for the acquired data that is especially suited for diffuse materials. A very complex acquisition setup was built by Matusik et al. [15]. It captures the object from varying viewpoints as well and handles objects with complex silhouettes using multi-background matting techniques, thus actually (sparsely) sampling a 6D slice of the reflectance field. Masselus et al. [16] again fixed the viewpoint but instead used a spatially located light basis, which enabled relighting of the scene with a full 4D incident light field.


Bump maps. Bump mapping, first introduced by Blinn [17], is a very popular extension of simple texture mapping. Bump maps consist of an array of normals that are used instead of the normals of the base geometry and can be used to simulate a bumpy surface with small height variations. The normals of a bumpy and not too specular surface can be reliably measured using photometric stereo. This technique captures three or more images of a surface from a single viewpoint, illuminated from different directions. Rushmeier et al. [18], for example, built a setup that takes five images lit from different directions. Even low-cost graphics boards are now able to render bump maps in real time, and they are used extensively, for example, in video games. However, due to its inherent limitations, many visually important effects are not reproduced by bump mapping.
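As an aside, the core of photometric stereo fits in a few lines: with at least three images under known light directions and a Lambertian surface assumption, the albedo-scaled normal follows from a per-pixel least-squares solve. The sketch below is a standard textbook formulation, not the specific setup of Rushmeier et al. [18]:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover albedo and normals from images under known lighting.

    images     : (m, H, W) intensities, one image per light direction
    light_dirs : (m, 3) unit light vectors
    Lambertian model: I = L @ g, where g = albedo * normal per pixel.
    """
    m, H, W = images.shape
    I = images.reshape(m, -1)                           # (m, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                  # (H*W,)
    normals = (g / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return albedo.reshape(H, W), normals
```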

2.2. BTF measurement

After this short introduction to general reflectance measurement, we now concentrate on the BTF as a measured 6D slice of the general light-scattering function of a surface S:

\[
\mathrm{BTF}_{rgb}(x, \theta_i, \phi_i, \theta_r, \phi_r) := \int_{S} \mathrm{BSSRDF}_{rgb}(x_i, x, \theta_i, \phi_i, \theta_r, \phi_r) \, dx_i
\]

In this sense, the BTF integrates subsurface scattering from neighboring surface locations, as is done by most existing measurement setups. Nevertheless, this definition also allows the mathematical interpretation of the BTF as a spatially varying BRDF. The existing work can be grouped roughly into two categories.

The first group captures the geometry of a small object and its reflection properties parameterized over the surface, i.e., the spatially varying BRDF. The works of Lensch et al. [19] and Furukawa et al. [20] fall into this category. They capture the geometry of the object using laser scanners and take several images under varying light and viewing conditions. The methods differ in their data representation: while Furukawa et al. map the images onto the triangles of the model and compress the appropriately reparameterized data using tensor product expansion, Lensch et al. fit the data to a parametric reflection model. In order to cope with insufficient sample density, they used an iterative clustering procedure.

The second group aims at capturing the appearance of an opaque material independently of geometry. These methods have in common that they capture the BTF of a planar material sample. The acquired data can be used instead of simple 2D textures and mapped onto arbitrary geometry. Since this STAR is concerned with BTFs as a generalized texture representation, these methods are described in detail in the following.

Figure 5: Capturing the BTF of a planar sample using a gonioreflectometer-like setup with a fixed light source, a sample holder, and a moving camera.

Figure 6: One image of the sample ROUGH PLASTIC from the CUReT database.

2.2.1. Gonioreflectometer-like setup with CCD chips

The most common approaches use a gonioreflectometer-like setup with a CCD chip instead of a spectrometer in order to capture the spatial variation of reflection. This approach has proved to be reliable, and several variations of it have been published. However, its drawback is the long measurement time.

The first measurement system that used such a gonioreflectometer-like setup, as depicted in Figure 5, was presented in the pioneering work of Dana et al. [1]. Their system takes 205 images of isotropic materials, which is too sparse a sampling for high-quality rendering, in particular for rough surfaces and materials with strong specular peaks. Even though they mentioned the possibility of using the data for rendering, the original intent of the work was to build a material database for computer-vision tasks such as texture recognition, texture segmentation, and shape-from-texture. They measured 61 real-world surfaces and made them available through the CUReT database [21]. Figure 6 shows a sample from the database. Similar, but improved, versions of the measurement system were described in [22] and [23]. Some measurements of the latter system are now also publicly available through the BTF database Bonn [24]. We describe this system in greater detail in the following.

Figure 7: Measured BTF samples; from left to right (top row): CORDUROY, PROPOSTE, STONE, WOOL, and WALLPAPER. Bottom row: perspective views (θ = 60°, φ = 144°) of the material and sample holder, with illumination from (θ = 60°, φ = 18°). Note the mesostructure and changing colors.

Measurement setup. The measurement setup is designed to conduct an automatic measurement of a BTF that also allows automatic alignment and postprocessing of the captured data. A high-end digital still camera is used as the image sensor. The complete setup, especially all metallic parts of the robot, is covered with black cloth or matte paint with strongly diffuse scattering characteristics.

The system uses planar samples with a maximum size of 10 × 10 cm². In spite of this restriction, measurement of many different material types is possible, e.g., fabrics, wallpapers, tiles, and even car interior materials. As shown in Figure 8, the laboratory consists of an HMI (Hydrargyrum Medium Arc Length Iodide) bulb, a robot holding the sample, and a rail-mounted CCD camera (Kodak DCS Pro 14N). Table 1 shows the sampling density of the upper hemisphere for light and view directions, which results in n = 81 unique directions for camera and light position. Hence, 81² = 6561 pictures of a sample are taken.

Figure 7 shows several measured samples. The top row shows frontal views of the different samples, whereas the bottom row shows oblique views; in the latter case, the mesostructure of the samples becomes especially visible. Each raw image is 12 megabytes in size (losslessly compressed) with a resolution of 4500 × 3000 pixels (Kodak DCR 12-bit RGB format). With this setup, the measuring time is about 14 hours, where most of the time is needed for the data transfer from the camera to the host computer.

Figure 8: Measurement setup of the Bonn system, consisting of an HMI lamp, a CCD camera, and a robot with a sample holder.

Calibration. To achieve high-quality measurements, theequipment has to be calibrated.

• To compensate for positioning errors of the robot and the rail system, the sample holder mounted on the robot arm has to be tracked using the camera. Experiments determined that these errors are small in the described setup; therefore, marker points placed on the sample holder are detected only during the post-processing phase, allowing a software jitter correction.

• A geometric calibration has to be applied to the camera to reduce the geometric distortion caused by its optical system. Details and further references on camera calibration can be found, e.g., in [25].

• For each sample to be measured, the aperture of the camera is adjusted such that the number of saturated or dark pixels in the pictures is minimized, given a fixed aperture during the measurement process.

• To achieve the best possible color reproduction, the combination of camera and light source has to be color calibrated. For the measurement of the camera color profile, a special CCD-camera standard card (GretagMacbeth Color Checker DC) is used.

Table 1: Sampling of viewing and illumination angles of the BTF database Bonn. (* = only one image is taken, at φ = 0°.)

θ [°]    Δφ [°]    No. of images
 0        –*          1
15        60          6
30        30         12
45        20         18
60        18         20
75        15         24
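As a quick plausibility check, the following plain-Python sketch (ours, not from the paper) enumerates the sampling of Table 1 and reproduces the direction count, the number of images per measurement, and the 1.2 GB dataset size mentioned below:

```python
# Hemisphere sampling of Table 1: for each polar angle theta, a ring of
# azimuth angles phi is sampled with spacing delta_phi (degrees).
rings = [(0, None), (15, 60), (30, 30), (45, 20), (60, 18), (75, 15)]

directions = []
for theta, delta_phi in rings:
    if delta_phi is None:
        directions.append((theta, 0))        # single image at phi = 0
    else:
        directions.extend((theta, phi) for phi in range(0, 360, delta_phi))

assert len(directions) == 81                 # unique light/view directions
print("images per sample:", len(directions) ** 2)    # 81 * 81 = 6561

# Raw volume of the final 256x256 dataset: 6561 images, 3 bytes per texel.
print("dataset size (GB):", 6561 * 256 * 256 * 3 / 2**30)   # ~1.2
```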

Data postprocessing. After the measurement, the raw image data are converted into a set of rectified, registered images capturing the appearance of the material for varying light and view directions. A complete set of discrete reflectance values for all measured light and view directions can then be assigned to each texel of a 2D texture.

Registration is done by projecting all sample images onto the plane defined by the frontal view (θ = 0, φ = 0). To enable automatic registration, borderline markers were attached to the sample holder plate (Figure 7). After converting a copy of the raw data to a binary image, standard image-processing tools are used to detect the markers. In the following steps, the mapping that takes these markers to the positions of the markers in the frontal view is computed and used to fill the registered image with the appropriate colors.
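The mapping in question is a planar homography; below is a minimal sketch using OpenCV's homography routines. The marker coordinates and file names are hypothetical, and the actual pipeline detects the markers automatically as described above:

```python
import cv2
import numpy as np

# Detected pixel positions of four border markers in an oblique image,
# and their known positions in the rectified frontal view (hypothetical).
markers_oblique = np.array([[102, 88], [1403, 95], [1390, 1420], [110, 1411]],
                           dtype=np.float32)
markers_frontal = np.array([[0, 0], [1024, 0], [1024, 1024], [0, 1024]],
                           dtype=np.float32)

# Homography mapping the oblique view onto the frontal-view plane.
H, _ = cv2.findHomography(markers_oblique, markers_frontal)

# Warp the raw image into the registered frontal parameterization.
raw = cv2.imread("sample_t60_p144.png")              # hypothetical file
registered = cv2.warpPerspective(raw, H, (1024, 1024))
cv2.imwrite("registered.png", registered)
```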

To convert the 12-bit RGB images stored in the camera manufacturer's proprietary format to standard 8-bit RGB file formats, the standard color profiles provided with the camera SDK (look and output profiles) and the camera (tone curve profile) are applied to the images. The most appropriate 8-bit color range is extracted after applying an exposure gain to the converted data.

After this postprocessing step, the final textures are cut out of the raw reprojected images and resized appropriately (256 × 256 pixels for the probes in the database, up to about 800 × 800 in principle). A final dataset with a spatial resolution of 256 × 256 pixels amounts to 1.2 GB.

2.2.2. Using video cameras

Koudelka et al. [26] presented a system resembling the aforementioned ones, but it fixes the image sensor (a Canon XL-1 digital video camera) and moves the light source (a white LED mounted on a robot arm). The employed hemisphere sampling results in a dataset of about 10,000 images. Due to the use of a video camera with relatively low resolution compared to a high-end still camera, a measurement takes about 10 hours. Samples from this system are publicly available for research purposes and include interesting natural materials such as lichen or moss and man-made materials such as carpet or even LEGO™ bricks.

2.2.3. Using mirrors

Inspired by BRDF measurement techniques, it has also been proposed to use mirrors for BTF measurement in order to avoid hemispherical movements or to take several measurements in parallel.

An approach using a concave parabolic mirror has been published by Dana and Wang [27]. In this device, the parabolic mirror is focused on the sample; thus, an image of the mirror captures the appearance of the surface point in focus as seen from different viewing directions. Spatial variation of the sample is captured by planar movement of the mirror (or the sample). Illumination from different directions is achieved by focusing a light beam on the appropriate spot of the mirror. With this setup, no hemispherical movement is required and the resulting data are of high quality. But a high spatial resolution (comparable to the gonioreflectometer-like devices, which achieve about 300 DPI) requires an enormous number of images. Please also note that interreflections and subsurface scattering from neighboring parts of the surface are not integrated. Furthermore, for a substantial range of azimuthal angles φ, the covered range of the polar angle θ is restricted by the size of the mirror; for φ = π, the presented prototype can capture polar angles only up to 22.6°.

Han et al. [28] presented a measurement system based on a kaleidoscope, which allows capturing several images of the whole sample at once. The advantages in measurement time and registration precision (no moving parts) are accompanied by a number of disadvantages. Multiple reflections on the mirrors (which are not perfect reflectors) cause low image quality and lead to a difficult color calibration. Slight asymmetries in the configuration of the mirrors result in registration errors. Angular sampling and spatial resolution are also coupled, i.e., a higher angular resolution leads to a lower spatial resolution. Radloff [29] analyzed different kaleidoscope configurations by simulation and built several prototypes, but due to the mentioned difficulties in building a perfect kaleidoscope, the quality of the results turned out to be rather low.

2.2.4. Using a camera array

Figure 9: Sketch of the proposed camera array. 151 digital cameras with built-in flash lights are mounted on a gantry, focusing on the sample, which is placed in the center of the hemisphere.

For fast, high-quality acquisition of BTFs, our recent approach proposes an array of 151 digital still cameras mounted on a hemispherical gantry (see Figure 9 for a sketch of the setup). A similar gantry with mounted light sources was used by Malzbender et al. [14] to capture PTMs. Although the setup is costly to build, a camera array is capable of measuring many samples in a short time. Because of the parallel structure of the acquisition, the example setup would be capable of capturing a BTF dataset of 151² = 22801 images in less than 1 hour. No moving parts are needed; therefore, the region of interest (ROI) is known for every camera and can be extracted at subpixel precision. Hence, there is no need for a time-consuming detection of the ROI, and the post-processing (reprojection, geometric correction, and color correction) is fast enough to be done in parallel with the measurement. The angular resolution depends on the number of cameras, and the spatial resolution on the imaging chips. This results in a high angular resolution; every measured direction represents an average solid angle of only 0.04161 steradians. The spatial resolution would be up to 280 DPI for a resulting BTF texture size of 1024 × 1024 pixels. As light sources, the built-in flash lights of the cameras are used.

2.2.5. Discussion

Currently, only the standard gonioreflectometer-like measurement setups have proven that they can capture high-quality BTFs reliably. Their drawback is speed: several hours per sample is too long and makes measured BTFs an expensive resource. Using mirrors may be a promising approach in the future, but the current systems are far from reaching the resolution and quality of the gonioreflectometer-like setups. A camera array will greatly reduce measurement times while keeping quality and resolution, at the expense of the cost of a large number of cameras.

3. Compression

Because of its huge size, the pure image-based representation of a BTF, consisting of the thousands of images taken during the measurement process, is suitable neither for rendering nor for synthesis. In order to achieve real-time frame rates and acceptable synthesis times, some sort of data compression has to be applied. Such a method should of course preserve as many of the relevant features of the BTF as possible, but it should also exploit the redundancy in the data in an efficient way and provide a fast, preferably real-time, decompression stage. An optimal method would achieve high compression rates with low error and real-time decompression. For integration into current real-time rendering systems, an implementation of the decompression stage on modern GPUs would also be of great value.

Figure 10: Two arrangements of the BTF data: as a set of images (left) and as a set of ABRDFs (right).

Most existing compression techniques interpret the BTF, as shown in Figure 10, either as a collection of discrete textures

\[
\{ T_{(v,l)} \}_{(v,l) \in M}
\]

where M denotes the discrete set of measured view and light directions, or as a set of spatially varying apparent BRDFs (ABRDFs; the term was introduced in a paper by Wong et al. [30]):

\[
\{ B_x \}_{x \in I \subset \mathbb{N}^2}
\]

Note that ABRDFs do not fulfill physically required properties such as reciprocity, since they include scattering effects from other parts of the surface. As illustrated in Figure 11, they can also contain a factor (n · l) between incident direction and surface normal, and strong effects from meso-scale shadowing and masking.

3.1. Fitting Analytical BRDF Models

As mentioned in the introduction to Section 3, BTFs can be understood as spatially varying ABRDFs. Therefore, a natural approach to compressing BTFs is a pixel-wise representation using BRDF models fitted to the synthetic or measured BTF data. Candidate BRDF models need to be efficiently computable to achieve real-time capabilities. Fitting either the widely used Phong model [31], the Blinn model [32], the model of Ward [6], or the generalized cosine-lobe model of Lafortune et al. [33] to the measured data therefore leads to straightforward extensions from BRDF to BTF representations.

Figure 11: An ABRDF (right) from the PLASTERSTONE BTF (left). While the reflectance of the white plaster alone is quite regular, the holes introduce strong meso-scale shadowing and masking.

3.1.1. Lafortune lobes

The simplest BTF model based on analytic function fitting was published by McAllister et al. [22] and is directly based on the Lafortune model [33]. Lafortune et al. propose to approximate the BRDF by a sum of lobes

\[
s(v, l) = (v^T \cdot M \cdot l)^n \tag{1}
\]

with v and l denoting the local view and light direction, respectively, while the general 3 × 3 matrix M and the exponent n define the lobe.

To fit these parameters to the reflectance properties of a synthetic or measured BRDF, nonlinear fitting methods such as the Levenberg–Marquardt algorithm [34] are employed. Fitting the complete matrix allows for very general BRDFs but is extremely time consuming. Therefore, McAllister et al. decided to employ a more restricted, diagonal matrix D, since fitting and rendering efforts are significantly reduced without major loss in rendering quality. Thus, they use the following spatially varying lobes:

\[
s_x(v, l) = (v^T \cdot D_x \cdot l)^{n_x} \tag{2}
\]

This results in the following BTF approximation:

\[
\mathrm{BTF}(x, v, l) \approx \rho_{d,x} + \sum_{j=1}^{k} \rho_{s,x,j} \cdot s_{x,j}(v, l) \tag{3}
\]

where ρ_d and ρ_s denote the diffuse and specular albedo (specified as RGB values) and k is the number of lobes.

The model requires only a few parameters to be stored per pixel, resulting in a very compact material representation (about 2 MB per material, depending on the spatial resolution and the number of lobes). Because of the expensive nonlinear minimization, the number of Lafortune lobes is practically limited to about four. Therefore, the method achieves satisfactory results only for a very limited range of materials with minor surface height variation.
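For illustration, here is a minimal numpy evaluation of the per-texel model of Equation (3); the array layout and names are our assumptions, and a real renderer would evaluate this per color channel on the GPU:

```python
import numpy as np

def eval_lafortune_btf(rho_d, rho_s, D, n, v, l):
    """Evaluate Equation (3) for a single texel.

    rho_d : (3,)   diffuse RGB albedo
    rho_s : (k, 3) specular RGB albedo per lobe
    D     : (k, 3) diagonal entries of the per-lobe matrix D_x
    n     : (k,)   per-lobe exponents
    v, l  : (3,)   unit view / light vectors in the local frame
    """
    # Lobe values s_{x,j}(v, l) = (v^T D_j l)^{n_j}; the base is clamped
    # to zero to avoid negative values under non-integer exponents.
    dots = np.maximum((v * D * l).sum(axis=1), 0.0)       # (k,)
    return rho_d + (rho_s * (dots ** n)[:, None]).sum(axis=0)

# Tiny usage example with k = 2 made-up lobes:
rgb = eval_lafortune_btf(
    rho_d=np.array([0.30, 0.25, 0.20]),
    rho_s=np.array([[0.4, 0.4, 0.4], [0.1, 0.1, 0.1]]),
    D=np.array([[-1.0, -1.0, 1.0], [-0.8, -0.8, 1.2]]),
    n=np.array([20.0, 4.0]),
    v=np.array([0.0, 0.0, 1.0]),
    l=np.array([0.3, 0.0, 0.954]),
)
```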

3.1.2. Scaled Lafortune lobes

BRDF models were not designed for spatially varying ABRDFs, which can contain strong effects from meso-scale shadowing and masking (Figure 11). Therefore, more specialized models for ABRDFs were developed, which also try to model some of these effects.

Daubert et al. [35] proposed a material representation that is also based on the Lafortune model but includes an additional multiplicative term T_x(v) modeling occlusion. Following their proposal, the BTF is evaluated as follows:

\[
\mathrm{BTF}(x, v, l) \approx T_x(v) \left( \rho_{d,x} + \sum_{j=1}^{k} s_{x,j}(v, l) \right) \tag{4}
\]

The view-dependent lookup table T is defined per pixel and, therefore, the model requires significantly more parameters to be stored. It is thus necessary to combine this method with quantization approaches when handling materials that require significant spatial resolution. The model, as originally presented, was intended to represent the three channels of the RGB model independently by fitting individual Lafortune lobes and lookup tables for each color channel.

3.1.3. Reflectance functions

As a qualitative improvement over the previous method, Meseth et al. [36] published an approach to BTF rendering based on fitting a set of functions only to the spatially varying reflectance functions of the BTF and performing a simple linear interpolation for view directions not contained in the measured set. Following this proposal, the BTF is evaluated as follows:

\[
\mathrm{BTF}(x, v, l) \approx \sum_{v' \in N(v)} w_{x,v'} \cdot RF_{x,v'}(l) \tag{5}
\]

Here, N(v) denotes the set of closest view directions (a subset of the measured view directions), w_{x,v'} denotes the spatially varying interpolation weight, and RF_{x,v'} is the spatially varying reflectance function for view direction v', which is approximated either by a biquadratic polynomial following the Polynomial Texture Map approach of Malzbender et al. [14] or by adopting Lafortune lobes as follows:

\[
RF_{x,v}(l) \approx \rho_{d,x} + \rho_{s,x,v} \cdot \sum_{i=1}^{k} s_{x,v,i}(l) \tag{6}
\]

with s_{x,v,i} similar to a spatially varying Lafortune lobe but for a fixed view direction, and k being the number of lobes.

Since the reflectance functions are fitted per pixel and per measured view direction, the number of parameters necessary to evaluate the model is higher than for the scaled Lafortune lobes model. Like the model of McAllister et al. [22], the approach is designed for efficient rendering; therefore, the lobes are intended to compute luminance values that scale the RGB color albedo instead of fitting individual lobes for each color channel. Unlike previous methods based on function fitting, the approach requires interpolation between view directions, since the reflectance functions are defined for fixed view directions (a sketch of such an interpolation is given below).
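A small sketch of the view interpolation in Equation (5): pick the nearest measured view directions and weight them by inverse angular distance. The particular weighting scheme is our assumption for illustration; the paper does not prescribe this one:

```python
import numpy as np

def interp_weights(v, measured_views, k=3):
    """Indices and normalized weights of the k measured view directions
    closest to the query direction v (all unit vectors)."""
    cos_angles = measured_views @ v               # (n,) dot products
    nearest = np.argsort(-cos_angles)[:k]         # k closest directions
    angles = np.arccos(np.clip(cos_angles[nearest], -1.0, 1.0))
    w = 1.0 / (angles + 1e-6)                     # inverse-angle weights
    return nearest, w / w.sum()

# Usage: BTF(x, v, l) ~ sum_j w[j] * RF[x, nearest[j]](l)
```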

3.1.4. Reflectance function polynomials

Recently, Filip and Haindl [37] suggested an even more accurate model based on the idea of Lafortune lobes. Instead of approximating the reflectance functions by summing lobe contributions as Meseth et al. [36] do, they interpolate view- and light-dependent polynomials:

\[
RF_{x,v}(l) \approx \sum_{l' \in N(l)} w_{l'} \sum_{i=1}^{k} a_{i,x,v,l'} \left( \rho_{s,v} \cdot s_{x,v}(l) \right)^{i-1} \tag{7}
\]

Here, a denotes the coefficients of the polynomial, s_{x,v} is defined as in Equation (6), and w_{l'} denotes the interpolation weights for the contributions of the nearest light directions N(l), a subset of the measured light directions.

Although the approximation quality is superior to the previously mentioned approaches based on analytic function fitting, and the data requirements are comparable to those of Meseth et al. [36], the evaluation of the BTF requires substantially more computation time due to the necessary interpolation of both view and light direction. Especially if applied to each color channel individually, as intended by the authors, this drawback severely limits use in real-time applications. Other application areas, such as texture synthesis (for which the model was intended) or offline rendering, might still find this method useful.

3.2. Linear Basis Decomposition

Using parametric BRDF models fitted to the measured data per pixel has some drawbacks concerning realism. Many models were originally designed to model only a particular class of uniform materials, and all models are only an approximation of real reflectance under simplifying assumptions about the underlying physical process (refer to the recent work of Matusik et al. [38] on data-driven reflectance modeling for a more detailed discussion of this topic). The situation becomes even worse for apparent BRDFs, since they contain additional complex effects resulting from the surrounding mesostructure.

One way to overcome this problem is to relax the restricting assumptions of BRDF modeling and to interpret the measured data as a multidimensional signal. Then general signal-processing techniques such as principal component analysis (PCA) [39] can be applied. PCA minimizes the variance in the residual signal and provides, in a least-squares sense, the optimal affine-linear approximation of the input signal. We use the terms PCA and singular value decomposition (SVD) synonymously during the rest of this section, since the principal components of the centered data matrix X are the columns of the matrix V, with X = UΣV^T being the SVD of X.

PCA has been widely used in the field of image-based rendering to compress image data. For example, Nishino et al. [40] applied PCA to the reparameterized images of an object viewed from different poses and obtained so-called eigen-textures. Matusik et al. [15] compressed the pixels of the captured reflectance field by applying PCA to 8 × 8 image blocks.

The various BTF compression methods that use PCA differ mainly in two points: (i) the slices of the data to which PCA is applied independently, and (ii) how these slices are parameterized.

3.2.1. Per-texel matrix factorization

One approach especially suited for real-time rendering applies PCA to the per-texel ABRDFs. Such methods were developed in the context of real-time rendering of arbitrary BRDFs, at a time when the Phong model was the state of the art in real-time rendering. The original idea, as introduced by Kautz and McCool [41], can be stated as follows: given the 4D BRDF B_x, find a factorization into a set of 2D functions:

\[
B_x(v, l) \approx \sum_{j=1}^{c} g_{x,j}(\pi_1(v, l)) \, h_{x,j}(\pi_2(v, l)) \tag{8}
\]

The functions π_1 and π_2 are projection functions that map the 4D BRDF parameters (v, l) to a 2D space. These projection functions have to be chosen carefully, because the parameterization significantly affects the quality of low-term factorizations. Given such a factorization, real-time reconstruction of the BRDF on graphics hardware becomes easy, since the functions g_{x,j} and h_{x,j} can be stored in texture maps and combined during rendering. A trade-off between quality and speed is possible by controlling the number of terms c.
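A minimal numpy sketch of such a factorization via truncated SVD, using the trivial "view times light" parameterization for simplicity; the parameterization choice and array layout are our assumptions:

```python
import numpy as np

def factorize_brdf(samples, c):
    """Factor a sampled BRDF into c term pairs as in Equation (8).

    samples : (n_view, n_light) matrix of BRDF values, i.e. the trivial
              parameterization pi_1 = view, pi_2 = light.
    Returns g (n_view, c) and h (c, n_light) with samples ~ g @ h.
    """
    U, s, Vt = np.linalg.svd(samples, full_matrices=False)
    g = U[:, :c] * s[:c]          # fold singular values into g
    h = Vt[:c, :]
    return g, h

# Usage: reconstruct and check the truncation error.
X = np.random.rand(81, 81)        # stand-in for measured ABRDF samples
g, h = factorize_brdf(X, c=4)
rms = np.sqrt(np.mean((X - g @ h) ** 2))
```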

Several methods to find such factorizations have been proposed. Given the sampled values of the BRDF arranged in a 2D matrix X, the SVD of X provides the solution with the lowest RMS error. But the resulting functions contain negative values, which may be problematic for a GPU implementation, and the RMS error may not be the perceptually optimal error metric. As an alternative, McCool et al. [42] presented a technique called homomorphic factorization (HF). Instead of using a sum of products, they approximate the BRDF by an arbitrary number of positive factors:

\[
B_x(v, l) \approx \prod_{j=1}^{c} p_{x,j}(\pi_j(v, l)) \tag{9}
\]

A solution is computed by minimizing the RMS error in logarithmic space, which in fact minimizes the relative RMS error of the original minimization problem; this was found to be perceptually more desirable. Furthermore, the resulting factors are positive, and an arbitrary number of projection functions π_j can be used, which allows for highly customized factors that capture certain BRDF features. This is of special importance for the ABRDFs from a measured BTF: they contain horizontal and vertical features, such as shadowing and masking, and also diagonal features, such as specular peaks. Depending on the parameterization, a simple single-term expansion can capture only the one or the other.

Recently, Suykens et al. [43] presented a method called chained matrix factorization (CMF), which encompasses both previous factorization methods by accommodating the following general factorization form:

\[
B_x(v, l) \approx \prod_{j=1}^{d} \sum_{k=1}^{c_j} P_{x,j,k}(\pi_{j,1}(v, l)) \, Q_{x,j,k}(\pi_{j,2}(v, l)) \tag{10}
\]

Such a chained factorization is computed by a sequence of simple factorizations, for example SVD, each time in a different parameterization. As a comparison, the authors approximated the ABRDFs of synthetic BTFs using CMF and HF. In floating-point precision they reported similar approximation errors, but the factors computed by CMF had a much smaller dynamic range and thus could be safely discretized into the 8-bit textures used for rendering on graphics hardware. Furthermore, they stated that CMF is easier to compute and implement.

The compression ratio of per-texel matrix factorization depends on the size of the matrix X, i.e., on the sampling of the angular space, and on the number of factors. Please note that these techniques were originally designed for BRDF rendering and for scenes containing a few (maybe some hundred) BRDFs; a single BTF with 256 × 256 texels contains 64k ABRDFs! Hence, to reduce the memory requirements for real-time rendering, a clustering of the factors may be necessary.

Figure 12: RMS error of the full BTF-matrix factorization depending on the number of terms c. There is a direct correspondence between the magnitude and decrease of the error and the BTF complexity.

3.2.2. Full BTF-matrix factorization

The per-texel factorization methods from the previous section have the disadvantage that they do not exploit inter-texel coherence. This can be accomplished by applying a PCA to the complete BTF data arranged in a |M| · |I| matrix

\[
X = \left( B_{x_0}, B_{x_1}, \ldots, B_{x_{|I|}} \right)
\]

Keeping only the first c eigenvalues results in the following BTF approximation:

\[
\mathrm{BTF}(x, v, l) \approx \sum_{j=1}^{c} g_j(x) \, h_j(v, l) \tag{11}
\]

This approach was used in the works of Liu et al. [44] and Koudelka et al. [26].

The remaining issue is how to choose c. Ramamoorthi [45] showed, by an analytic PCA construction, that about five components are sufficient to reconstruct the lighting variability in images of a convex object with Lambertian reflectance. Therefore, for nearly diffuse and relatively flat samples, a good reconstruction quality can be expected for low c. However, as illustrated in Figure 12, this will not be sufficient for complex BTFs containing considerable self-shadowing, masking, and obviously nondiffuse reflectance. Therefore, Koudelka et al. chose c = 150 to represent all significant effects with sufficient fidelity. To reduce the size of the resulting dataset even further, they stored the basis vectors as JPG images, resulting in very high compression rates. But of course, real-time reconstruction from this representation is not possible.
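A common heuristic for choosing c, shown here as our illustration rather than a method from the cited works, is to keep enough components to retain a fixed fraction of the singular-value energy:

```python
import numpy as np

def choose_c(X, energy=0.95):
    """Smallest number of PCA terms that retains the given fraction of
    the squared singular-value energy of the centered data matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)       # center the data
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values only
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)
```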

An alternative approach to full BTF-matrix factorization was presented by Vasilescu and Terzopoulos [46]. They arranged the BTF data in a 3-mode tensor and applied multilinear analysis (3-mode SVD), which corresponds to the application of standard SVD to different arrangements of the data. It would be beyond the scope of this paper to introduce the necessary concepts of tensor algebra, but it is worth noting that the resulting reconstruction formula provides a more flexible dimensionality reduction by allowing the represented variation in view and lighting directions to be reduced independently. Compared to standard PCA with the same number of components, the method leads to a higher RMS error, but the authors claim that keeping more variation in the viewing direction gives perceptually more satisfactory results.

A serious problem of the full BTF-matrix factorization methods is the size of the matrix X, which can easily reach several gigabytes in floating-point precision. In this case, even the computation of the covariance matrix XX^T would be a very lengthy operation, and special out-of-core routines have to be implemented. As an alternative, the factorization can be computed for a subset of the data only.

3.2.3. Per-view factorization

The full BTF-matrix factorization suffers from memory problems during computation, and reconstruction is only fast and correct for relatively simple materials. Therefore, Sattler et al. [23,47] published a method that deals with these problems. Because their original intention was to visualize cloth BTFs, which exhibit a significant amount of depth variation and hence a highly nonlinear view dependence, they use an approach similar to the work of Meseth et al. [36] in the sense that they apply PCA to slices of the BTF with fixed view direction. This leads to the following set of data matrices:

\[
X_{v_i} := \left( T_{(v_i, l_0)}, T_{(v_i, l_1)}, \ldots, T_{(v_i, l_{M_{v_i}})} \right)
\]

with M_{v_i} denoting the number of sampled light directions for the given view direction v_i. The PCA is applied to all matrices X_{v_i} independently, which poses no computational problems compared to the full BTF matrix. Keeping only the first c eigenvalues gives the following BTF approximation:

\[
\mathrm{BTF}(x, v, l) \approx \sum_{j=1}^{c} g_{v,j}(l) \, h_{v,j}(x) \tag{12}
\]

Compared to Equation (11), the value of c can be set much lower (the authors chose c between 4 and 16), which enables interactive or even real-time rendering frame rates with good visual quality. However, the memory requirements are much higher: for example, c = 16 for each of the 81 measured view directions leads to more than 1200 terms that have to be stored.

3.2.4. Per-cluster factorization

As already mentioned, a complex BTF contains highly nonlinear effects such as self-shadowing, self-occlusion, and nondiffuse reflectance. Nevertheless, many high-dimensional datasets exhibit locally linear behavior. Applying per-texel or per-view factorization implicitly exploits this observation by selecting fixed subsets of the data and approximating these subsets with an affine-linear subspace. A more general approach would choose these subsets depending on the data. This is the idea behind the local PCA method, which was introduced by Kambhatla and Leen [48] to the machine-learning community in competition with classical nonlinear/neural-network learning algorithms. It combines clustering and PCA, using the reconstruction error as the metric for choosing the best cluster.

Figure 13: Comparing full matrix factorization and per-cluster matrix factorization. From left to right: original frontal view of the PROPOSTE BTF, reconstruction from full matrix factorization (FMF) with c = 8 terms, reconstruction from per-cluster matrix factorization (PCMF) with 32 clusters and 8 terms per cluster. Second row: enhanced and inverted difference images.

Recently, Müller et al. [49] applied this method to BTF compression and proposed the following approximation:

\[
\mathrm{BTF}(x, v, l) \approx \sum_{j=1}^{c} g_{k(x),j}(x) \, h_{k(x),j}(v, l) \tag{13}
\]

The operator k(x) is the cluster-index lookup for a given position x. In this case, clustering is performed in the space of ABRDFs, which was found to be better suited for real-time rendering than clustering in the space of images. The number of clusters can then be chosen according to computational resources and quality requirements. Figure 13 compares per-cluster factorization to full matrix factorization with the same number of terms c. Good results were obtained for cluster counts between 16 and 32, which is much smaller than the fixed number of per-view slices (e.g., 81) used in per-view factorization.
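A compact sketch of local PCA in the spirit of Kambhatla and Leen: each ABRDF is assigned to the cluster whose PCA subspace reconstructs it best. Cluster count, term count, and the simple alternation scheme are illustrative assumptions:

```python
import numpy as np

def local_pca(abrdfs, n_clusters=32, c=8, iters=10, seed=0):
    """abrdfs: (n_texels, n_angles) matrix, one ABRDF per row.
    Returns cluster labels and per-cluster (mean, basis) models."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(abrdfs))
    for _ in range(iters):
        models, errors = [], []
        for k in range(n_clusters):
            members = abrdfs[labels == k]
            if len(members) == 0:   # re-seed empty clusters randomly
                members = abrdfs[rng.integers(len(abrdfs), size=c + 1)]
            mean = members.mean(axis=0)
            # Cluster PCA basis: top-c right singular vectors.
            _, _, Vt = np.linalg.svd(members - mean, full_matrices=False)
            basis = Vt[:c]
            models.append((mean, basis))
            # Reconstruction error of ALL texels under this cluster model.
            d = abrdfs - mean
            recon = (d @ basis.T) @ basis
            errors.append(((d - recon) ** 2).sum(axis=1))
        # Reassign each texel to the cluster with minimal error.
        labels = np.argmin(np.stack(errors), axis=0)
    return labels, models
```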

3.3. Discussion

The following discussion is based on Figure 14.

Figure 14: Comparison of selected BTF compression methods (see Section 3 for details). Top row: original and reconstructed ABRDFs. Second row: inverted difference images. Bottom row: storage requirements of the compressed representations using parameters suitable for interactive rendering. From left to right: original ABRDF of the PLASTERSTONE BTF, scaled Lafortune lobes with two lobes (SLAF), reflectance functions with per-view polynomial (RF), per-texel chained matrix factorization with four factors (CMF), per-view matrix factorization with eight terms (PVMF), per-cluster matrix factorization with 32 clusters and 8 terms (PCMF). The original dataset consists of 6561 8-bit RGB images of 256 × 256 pixels. Numerical precision is 16 bit for PCMF and 8 bit for all other methods.

Obviously, the quality of the reconstruction from linear basis decompositions is better than from parametric reflectance models, even if additional parameters such as per-view scaling values or even a full fit per view are used. Furthermore, the quality gained by this additional complexity does not justify the increased memory requirements. The qualitatively best result is achieved by per-view factorization, but unfortunately its memory requirements are very high, since a set of textures and weights has to be stored for every measured view direction. Per-texel matrix factorization does not exploit spatial coherence, and thus its quality is not as good as that of per-view or per-cluster factorization, even at considerable memory cost; furthermore, the chained resampling steps can introduce additional resampling error. Suykens et al. [43] propose k-means clustering of the factors across the spatial dimension to reduce memory requirements. This could obviously be applied to every per-texel BTF compression method, but for complex materials without uniform areas it will introduce cluster artifacts. These artifacts are reduced by applying PCA in each cluster, as done in the per-cluster factorization method. Hence, this method seems to offer a good compromise among reconstruction cost, visual quality, and memory requirements.

4. Synthesis

Current acquisition systems as presented in Section 2 can capture only small material samples up to a few centimeters in extent. In practical applications, such a sample should cover a much larger geometry, e.g., a fabric BTF covering a seat. The simplest way to accomplish this would be a simple tiling of the sample, but not all materials are suitable for tiling, and typically the resulting repetitive patterns are not visually pleasing. A more sophisticated approach would generate a new, larger texture that locally resembles, i.e., looks like, the original texture but contains no obvious repetition artifacts.

Such an approach is called texture synthesis by example and has been the subject of ongoing research for the last 10 years.

4.1. 2D-Texture Synthesis by Example

The first attempts at using texture synthesis for computer graphics tried to model the underlying stochastic process and sample from it; the methods of Heeger and Bergen [50] and De Bonet [51] are of this kind. Unfortunately, only a limited class of textures could be reproduced adequately with this technique, due to the potential complexity of the stochastic process.

Therefore, methods based on heuristic sampling have recently become popular, because they are faster and much simpler to implement. They do not try to model the global texture statistics but instead attempt to enforce the statistics locally, for a single texel or a small patch.

In practice, the conditional distribution for a texel or patch to synthesize given its already synthesized neighborhood is approximated by finding texels or patches with similar neighborhoods in the sample texture and choosing from this candidate set. This approach has been popularized by the work of Efros et al. [52] and Wei et al. [53].
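As an illustration, the following sketch performs scan-line, per-texel synthesis with a causal neighborhood in the spirit of these methods; the neighborhood shape, the noise initialization, and the brute-force candidate search are simplifying assumptions (practical implementations accelerate the search, e.g., with tree-structured vector quantization [53]).

```python
# Minimal sketch of per-texel texture synthesis by neighborhood matching.
import numpy as np

def synthesize(sample, out_h, out_w, half=2, seed=1):
    """sample: (H, W, C) exemplar; returns an (out_h, out_w, C) texture."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=np.float32)
    H, W, C = sample.shape
    # Initialize the output with random exemplar texels (noise)
    out = sample[rng.integers(H, size=out_h)[:, None],
                 rng.integers(W, size=out_w)[None, :]].copy()

    def causal(img, y, x):
        # L-shaped neighborhood: rows above plus texels left of the center
        block = img[y - half:y + 1, x - half:x + half + 1].reshape(-1, C)
        return block[: half * (2 * half + 1) + half].ravel()

    # Precompute the neighborhoods of all valid exemplar candidates
    ys, xs = np.mgrid[half:H - half, half:W - half]
    ys, xs = ys.ravel(), xs.ravel()
    cand = np.stack([causal(sample, y, x) for y, x in zip(ys, xs)])

    for y in range(half, out_h):
        for x in range(half, out_w - half):
            # Copy the exemplar texel whose neighborhood matches best
            d = np.square(cand - causal(out, y, x)).sum(axis=1)
            best = int(d.argmin())
            out[y, x] = sample[ys[best], xs[best]]
    return out
```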

These methods synthesize the new texture texel-by-texel and are well suited for textures with small and not too regular features. A natural extension to this texel-based texture synthesis is patch-based texture synthesis, which was introduced by Efros and Freeman [54] and copies texture patches at once into the synthesized texture while trying to minimize the visible seams between the patches.

Figure 15: In the left image the stone BTF is tiled, resulting in visible repetitive patterns. On the right, pixel-based synthesis was applied.

The aforementioned methods work surprisingly well for many types of textures and can easily be applied to higher-dimensional textures such as BTFs. In order to achieve feasible running times, dimensionality reduction techniques such as Principal Component Analysis (PCA) have to be applied. Figure 15 shows the result of applying pixel-based synthesis to a stone BTF whose dimension was reduced using PCA, in comparison to simple tiling.
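A minimal sketch of this reduction step follows: each texel's stacked ABRDF samples are projected onto a few principal components, producing a small multi-channel feature texture on which a standard 2D synthesis algorithm can operate. The array shapes and the channel count are assumptions for illustration.

```python
# Sketch of PCA-based dimensionality reduction of a BTF prior to synthesis.
import numpy as np

def btf_to_feature_texture(btf, n_channels=8):
    """btf: (H, W, n_abrdf), one stacked ABRDF sample vector per texel.
    Returns the coefficient texture plus the data needed to invert it."""
    H, W, n = btf.shape
    data = btf.reshape(H * W, n)
    mean = data.mean(axis=0)
    # Leading principal directions of the texel population
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    basis = vt[:n_channels]                  # (n_channels, n_abrdf)
    coeffs = (data - mean) @ basis.T         # per-texel PCA coefficients
    return coeffs.reshape(H, W, n_channels), mean, basis

# After synthesizing a larger coefficient texture c, the full ABRDF of a
# texel is recovered as: abrdf = c[y, x] @ basis + mean
```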

4.2. Application to BTF synthesis

A first approach to BTF synthesis was made by Liu et al. [55]. They recovered an approximate height field of the measured sample by shape-from-shading methods. After synthesizing a new height field with similar statistical properties using standard 2D texture synthesis methods, they render a gray image given a desired view and light direction. With the help of a reference image from the BTF measurement with the same view and light direction, they generate a new image by combining the rendered gray image and the reference image. A serious problem of the method is the required height field, which is difficult to generate for BTFs with complex surface structures and strong nondiffuse reflectance components.

A following publication by Tong et al. [56] targeted this drawback by synthesizing the whole BTF at once. To reduce the huge computational costs related to the high dimensionality of BTFs, they computed textons [57], i.e., texels with representative features obtained by clustering. Synthesis was afterward done in a per-pixel manner based on a precomputed dot-product matrix, which was used to efficiently compute distances between textons and linear combinations of textons.

In a more recent publication, Koudelka et al. [26] applied the image quilting algorithm of Efros and Freeman [54] to dimension-reduced BTFs.

5. Rendering

Generally, accurate and physically plausible rendering algorithms have to compute a solution of the rendering equation at every surface point x (neglecting emissive effects):

$$L_r(x, v) = \int_{\Omega_i} \rho_x(v, l)\, L_i(x, l)\,(n_x \cdot l)\, dl \qquad (14)$$

Here, $\rho_x$ denotes the BRDF, $L_i$ denotes incoming radiance, $n_x$ is the surface normal, and $\Omega_i$ is the hemisphere over x.

Including measured BTFs into the rendering Equation (14) is very simple:

$$L_r(x, v) = \int_{\Omega_i} BTF(x, v, l)\, L_i(x, l)\,(n_x \cdot l)\, dl \qquad (15)$$

Now the measured BRDF at point x is simply looked up from the BTF. Here we assume that a mapping from the 3D surface to the 2D spatial texture domain already exists. Please note that the BTF also models meso-scale geometry. However, since this information is projected into the BTF, the rendering will not be correct, for example, at object silhouettes.

Figure 16: Path-traced image of a Max-Planck bust covered with a LEATHER BTF compressed using per-cluster matrix factorization. The walls are covered with PLASTERSTONE and the floor with FLOOR-COVERING.

5.1. Solving the rendering equation

Currently, there are two popular approaches that are primarily used to solve the rendering equation.

5.1.1. Monte Carlo sampling

The first approach tries to solve Equation (14) accurately using Monte Carlo sampling methods. Many variants of this approach, such as path tracing [58], distribution ray tracing [59] and photon mapping [60], to mention a few, have been developed over the years. Despite recent advances toward interactive global illumination (e.g., [61]), accurate solutions of arbitrarily complex scenes can still take hours or even days to compute.

Obviously, Equation (15) can be solved using Monte Carlo sampling as well. In this case, the renderer simply has to be extended to support a particular BTF compression scheme, which in fact corresponds to the implementation of the decompression stage on the CPU. This is possible for any compression method introduced in Section 3. Figure 16 shows a path-traced image of a scene containing objects covered with compressed BTFs.
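As a CPU-side illustration, the following sketch estimates Equation (15) at a single surface point with cosine-weighted hemisphere sampling; btf_eval and incident_radiance are assumed callbacks standing in for the decompression stage and the scene's light transport, not an actual renderer API.

```python
# Monte Carlo estimate of Eq. (15) at one surface point, in the local frame
# where the surface normal is the z-axis.
import numpy as np

def shade_btf(x_uv, view_dir, btf_eval, incident_radiance,
              n_samples=256, seed=0):
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    # Cosine-weighted directions on the hemisphere; pdf(l) = (n·l) / pi
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi),
                     np.sqrt(1.0 - u1)], axis=1)
    total = 0.0
    for l in dirs:
        # The (n·l) factor cancels against the pdf, leaving a factor pi
        total += btf_eval(x_uv, view_dir, l) * incident_radiance(x_uv, l)
    return np.pi * total / n_samples
```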

5.1.2. Approximate solutions for real-time rendering

The second approach makes several simplifying assumptions a priori so that the integral can be solved more quickly and is amenable to hardware implementation. The goal is to reduce the integral in Equation (14) to a finite sum containing only very few terms.

Point lights. The most popular simplification is using only a set $\mathcal{L} = \{l_j\}$ of point or directional light sources and discarding general interreflections (i.e., the recursion in Equation (14)). For notational simplicity, the term $l_j$ encodes both intensity and direction of the light source. Since these lights are given in global coordinates and the BRDF is usually given in the local frame at x, we also need the local coordinates l:

$$L_r(x, v) \approx \sum_{j=1}^{|\mathcal{L}|} \rho_x(v, l_j)\, G(x, l_j)\,(n_x \cdot l_j)\, l_j \qquad (16)$$

The geometry term $G(x, l_j)$ contains the visibility function and an attenuation factor depending on the distance. For the rest of this STAR we will neglect the visibility function in the geometry term, because interactive large-scale shadowing is an independent research topic in its own right (for a survey refer to, e.g., [62]). Using Equation (16), a scene can be rendered in real-time using today's consumer graphics hardware. Arbitrary BRDFs can be implemented using the programmable vertex- and fragment-shader capabilities. In the case of BTF rendering, the following sum has to be computed:

$$L_r(x, v) \approx \sum_{j=1}^{|\mathcal{L}|} BTF(x, v, l_j)\, G(x, l_j)\,(n_x \cdot l_j)\, l_j \qquad (17)$$

In fact, the challenge of developing a fast BTF-rendering algorithm for point lights reduces to the implementation of a fragment program that evaluates the BTF for given parameters. For several of the compression schemes presented in Section 3, such an implementation has been proposed, and we will go into detail in Section 5.4.

Infinitely distant illumination. Another widely used simplification assumes infinitely distant incident illumination and no interreflections. In this case, the dependency of incident lighting on the surface position x can be removed:

$$L_r(x, v) \approx \int_{\Omega_i} \rho_x(v, l)\, L_i(l)\,(n_x \cdot l)\, dl \qquad (18)$$

For special types of BRDFs, this integral can be precomputed (e.g., [63]) and the results can be displayed in real-time. Another approach is to reduce the integral to a finite sum by projecting light and BRDF onto a basis such as Spherical Harmonics (SH) [64] or wavelets [65] and keeping only a finite number of terms. Using this approach, even transport effects such as shadows and interreflections can be precomputed and projected onto the basis.

Special care has to be taken when transferring such an approach to BTF rendering. The methods were originally designed only for special types of BRDFs, or the results are only computed per vertex. Hence, only few algorithms for BTF rendering using distant illumination have been published so far. The details will be discussed in Section 5.5.

5.2. BTF rendering using Real-Time Raytracing

The recent advances in computation power and improved algorithms allow for interactive ray-tracing even on a single desktop PC [61]. Real-time performance can be achieved using PC clusters. As in Section 5.1.1, the integration of a particular BTF compression scheme into such a system corresponds to the implementation of the decompression stage on the CPU. Figure 20 shows a car interior covered with measured BTFs and rendered by the OpenRT real-time ray-tracing engine [66].

5.3. BTF rendering using graphics hardware

To incorporate BTFs into current real-time rendering systems, the evaluation of the BTF should be done on the graphics processing unit (GPU), i.e., integrated into the fragment-shading process. To achieve this, the compressed representation of the BTF must be stored in textures, and the reconstruction is performed using the programmable units of the graphics board. The parameters for BTF evaluation are the standard 2D texture coordinates x, the local view direction v, and, in the case of point light sources, the local light direction l.

A crucial point in all BTF rendering methods is interpolation. Since a measured BTF is discrete in all six dimensions, smooth renderings require interpolation in each dimension. To achieve high frame-rates, it is indispensable that at least some of these dimensions are interpolated using built-in hardware interpolation. For the other dimensions, either the nearest neighbor must be chosen or the interpolation of the nearest neighbors has to be executed explicitly in the fragment shader. In both cases, there has to be an operator N(·) that supplies the nearest samplings. Such a look-up can be performed on the GPU using dependent texture look-ups. To perform explicit interpolation in the fragment shader, the corresponding interpolation weights τ_i should also be precomputed and stored in textures.
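A minimal sketch of such a nearest-sampling operator follows, using angular proximity to pick the closest measured directions and to derive convex weights τ; in the actual systems this information is precomputed into textures or cube maps rather than evaluated per query.

```python
# Sketch of the operator N(·) with interpolation weights τ.
import numpy as np

def nearest_directions(query, measured, k=3):
    """query: unit 3-vector; measured: (M, 3) unit vectors of the angular
    sampling. Returns the indices of the k nearest samples and weights τ."""
    cos = measured @ query                   # alignment with each sample
    idx = np.argsort(-cos)[:k]               # k most aligned directions
    w = np.clip(cos[idx], 0.0, None) + 1e-8  # proximity-based raw weights
    return idx, w / w.sum()                  # τ normalized to sum to one

# Explicit interpolation of a discrete BTF slice then reads
#   value = sum(tau_i * slice[idx_i]) over the returned samples.
```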

In the following sections, we will present and discuss the existing BTF rendering methods that achieve interactive or even real-time frame-rates by exploiting the capabilities of current graphics hardware.

5.4. Interactive rendering of BTFs with point lights

5.4.1. Parametric reflectance models

Efficient implementations of the parametric reflectance models from Section 3.1 have been presented in various publications. McAllister et al. [22] describe a real-time evaluation scheme for Equation (3). Coefficients of the spatially varying Lafortune lobes are stored in 2D textures. Evaluation can efficiently be done using vertex and fragment shaders. Since Lafortune lobes are continuous in the angular domain, no interpolation is required in the angular domain. Spatial interpolation has to be done explicitly in the fragment shader (magnification) or is left to the multisampling and built-in MIP-mapping capabilities of graphics hardware, although MIP-maps have to be built manually (e.g., by fitting new sets of Lafortune lobes for each BTF resolution).

As an extension to the model of McAllister et al., the BTF model of Daubert et al. [35] only requires an additional evaluation of the view-dependent visibility term. Although a significant number of parameters is required to store this term, it can easily be encoded in a texture. Interpolation of the view direction can be achieved using graphics hardware. Spatial interpolation is done as in the previous approach.

The even more complex model of Meseth et al. [36] is evaluated very similarly to the previous two with the significant difference that the discretization of the view direction requires an additional manual interpolation (as denoted in Equation (5)). Therefore, two cube maps are utilized, which store for each texel in the cube map (representing a certain view direction) the three closest view directions and the respective interpolation weights. Interpolation is achieved by evaluating the reflectance functions for the three closest view directions and interpolating the results based on the interpolation factors stored in the cube map. Spatial interpolation can be done exactly as in the previous approaches.

Efficiently evaluating the BTF model of Filip and Haindl [37] constitutes an even more complex problem since it requires interpolation of both view and light direction, effectively requiring the evaluation of the basic model nine times. Since the model was not intended for real-time rendering, no such algorithm has been proposed yet, and it is questionable whether the improved rendering quality can compensate for the significantly increased rendering costs.

5.4.2. Per-Texel matrix factorization

Generally, the rendering algorithms for BRDF factorizations [41, 42] can be used, with the difference that the factors now change in every fragment. Suykens et al. [43] detailed how this can be accomplished:

Every factor is reparameterized and stored in a parabolic map [67]. Then all these factors are stacked into 3D textures and normalized to a range between 0 and 1. The resulting scale factors are stored in a scale map. A particular factor is selected in its 3D texture using transformed 2D texture coordinates or, if clustered factors are employed, by the values from an index map. To avoid mixing of neighboring factors in the 3D texture, the z-coordinate has to be chosen carefully. A value within a particular factor is indexed using the appropriately reparameterized local view and light directions. While interpolation inside a factor is supported by the hardware, spatial interpolation between neighboring texels is not implemented. Equation (10) can now be executed in a fragment shader.

Figure 17: Schematic rendering using the per-view matrix factorization.

5.4.3. Per-View matrix factorization

Sattler et al. [47] demonstrated how Equation (12) can be implemented using a combination of CPU and GPU computations.

Figure 17 shows the basic approach: combination of the eigen-textures $h_{v,j}(x)$ with the appropriate weights $g_{v,j}(l)$ using the multi-texturing capabilities of modern graphics boards. This combination is done for every triangle vertex. A smooth transition is ensured by blending the resulting three textures over the triangle using a fragment program [68].

While the interpolation in the spatial domain is done by the graphics hardware, light and view direction are interpolated explicitly. To interpolate between the nearest measured light directions, the term

$$BTF(x, v, l) \approx \sum_{l \in N(l)} \tau_l \sum_{j=1}^{c} g_{v,j}(l)\, h_{v,j}(x) \qquad (19)$$

has to be evaluated. In order to speed up this computation, the weights $\lambda_{v,j}(l) = \sum_{l \in N(l)} \tau_l \cdot g_{v,j}(l)$ are computed on the CPU per frame and sent to the GPU, resulting in the term

$$BTF(x, v, l) \approx \sum_{j=1}^{c} \lambda_{v,j}(l)\, h_{v,j}(x) \qquad (20)$$

To perform view interpolation, different sets of eigen-textures have to be combined, resulting in an expensive combination:

$$BTF(x, v, l) \approx \sum_{v \in N(v)} \tau_v \sum_{j=1}^{c} \lambda_{v,j}(l)\, h_{v,j}(x). \qquad (21)$$
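The division of labor between CPU and GPU in Equations (19)-(21) can be sketched as follows; the array shapes and the einsum-based formulation are illustrative assumptions standing in for the actual texture layout and fragment program.

```python
# Sketch of per-view factorized BTF evaluation (Eqs. 19-21).
import numpy as np

def lambda_weights(g, light_idx, light_tau):
    """g: (n_views, n_lights, c) measured weights g_{v,j}(l).
    Folds the light interpolation into λ (Eq. 20), once per frame on the CPU."""
    return np.einsum('l,vlc->vc', light_tau, g[:, light_idx, :])

def eval_btf(h, lam, view_idx, view_tau):
    """h: (n_views, c, H, W) eigen-textures h_{v,j}(x). Blends the nearest
    measured views (Eq. 21); on the GPU this is plain multi-texturing."""
    out = np.zeros(h.shape[2:])
    for tau_v, v in zip(view_tau, view_idx):
        out += tau_v * np.einsum('c,chw->hw', lam[v], h[v])
    return out
```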

5.4.4. Full BTF-Matrix factorization

To evaluate Equation (11), the factors $g_j(x)$ and $h_j(v, l)$ have to be evaluated, whereas the factors $g_j(x)$ can be stored as simple 2D textures and the factors $h_j(v, l)$ as 4D textures. But unfortunately, neither 4D textures nor their full interpolation is currently supported by graphics hardware.

Therefore, Liu et al. [44] store the factors $h_j(v, l)$ in stacks of 3D textures. The tri-linear filtering capabilities of graphics hardware can now be exploited to interpolate the view direction and the polar angle of the light direction. The final step for 4D filtering is performed manually by blending two tri-linearly filtered values with the closest azimuth angles in light direction. As usual, the values of $h_j(v, l)$ parameterized over the hemispheres of view and light directions have to be resampled and reparameterized in order to be stored in textures. In order to avoid the fragment shader's online effort of calculating the reparameterized local light and view directions, which are necessary for accessing the 3D textures and interpolation weights, the mapping is precomputed and stored in a cube map.

5.4.5. Per-Cluster matrix factorization

Apart from the additional cluster look-up, evaluating Equation (13) is essentially the same as evaluating (11). Hence, the real-time rendering algorithm for Equation (13) presented by Schneider [69] is similar in style to the approach presented in the previous section. He also stores the factors $h_{k(x),j}(v, l)$ in stacks of 3D textures and accesses them through reparameterized local light and view directions. The cluster index introduces an additional dependent texture look-up. Since existing graphics boards only support hardware interpolation of fixed-point values, every factor is quantized to 8 bit separately, yielding scaling factors $s_{j,k}$. As compensation, the factors $g_j(x)$, which can be stored as floating-point values since they require no interpolation, have to be divided by the corresponding scaling factor $s_{j,k}$.
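The per-factor quantization can be sketched as below; the range handling and the returned offset are simplifying assumptions. The essential point is that the scale removed from the fixed-point factor is pushed into the floating-point factor $g_j(x)$.

```python
# Sketch of 8-bit quantization of one factor with scale compensation.
import numpy as np

def quantize_factor(h):
    """h: float array (one factor). Returns uint8 data plus scale/offset."""
    lo, hi = float(h.min()), float(h.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((h - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Inverse mapping, conceptually applied after hardware filtering."""
    return q.astype(np.float32) * scale + lo
```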

MIP-mapping the BTF can simply be implemented by executing the shader instructions twice (once for the currently best MIP-map level and once for the next best MIP-map level) and interpolating the resulting colors. Bilinear spatial interpolation is currently not supported, since the additional overhead is prohibitive. Fortunately, hardware-supported full-screen antialiasing can reduce potential artifacts significantly.

5.5. Interactive rendering of BTFs with distant illumination

We consider relighting of BTFs with distant illumination according to Equation (18). Typically, this distant illumination is represented by an environment map. Such an environment map can either be computed by the graphics hardware or captured from a natural environment by taking pictures (e.g., of a metallic sphere) [70,71].

5.5.1. Parametric reflectance models

Combining parametric reflectance models for BTFs with image-based lighting relies on the concept of prefiltered environment maps, which was first applied to the diffuse reflection model by Miller and Hoffman [72] and Greene [73]. For a diffuse BTF with spatially varying reflection coefficients ($BTF(x, v, l) = \rho_d(x)$), Equation (18) reduces to

$$L_r(x, v) = \int_{\Omega_i} \rho_d(x)\, L_i(l)\,(n_x \cdot l)\, dl = \rho_d(x) \int_{\Omega_i} L_i(l)\,(n_x \cdot l)\, dl = \rho_d(x)\, D(n_x). \qquad (22)$$
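A brute-force numerical sketch of this diffuse prefiltering follows; env is an assumed callback mapping a world direction to environment radiance, and the Monte Carlo quadrature stands in for the CPU precomputation into a cube map.

```python
# Sketch of precomputing the diffuse prefiltered environment map D(n), Eq. (22).
import numpy as np

def prefilter_diffuse(env, normals, n_samples=4096, seed=0):
    """normals: (K, 3) unit normals (e.g., cube-map texel directions).
    Returns the Monte Carlo estimate of D(n) for each normal."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, 3))             # uniform sphere directions
    l = v / np.linalg.norm(v, axis=1, keepdims=True)
    radiance = np.array([env(d) for d in l])
    out = []
    for n in normals:
        cos = np.clip(l @ n, 0.0, None)             # clamp below the horizon
        # pdf = 1/(4*pi) over the sphere, hence the 4*pi factor
        out.append(4.0 * np.pi * np.mean(radiance * cos))
    return np.array(out)
```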

The prefiltered environment map $D(n_x)$ can be precomputed on the CPU, stored in a cube texture map, and used during rendering. Kautz and McCool [74] extended the concept to isotropic BRDFs. Since for nondiffuse cases the prefiltered result becomes view-dependent, they approximated Equation (18) as follows:

$$L_r(x, v) \approx (n_x \cdot p(x, v)) \int_{\Omega_i} BTF(x, v, l)\, L_i(l)\, dl \qquad (23)$$

$$\approx (n_x \cdot p(x, v))\, S(x, v) \qquad (24)$$

where p(x, v) denotes the peak direction, i.e., the light direction with maximum influence on $L_r(x, v)$. The accuracy of this approximation increases with the specularity of the spatially varying ABRDFs. With this assumption, the specular prefiltered environment S(x, v) can be computed analogously to the diffuse case.

McAllister et al. [22] applied this concept to Equation (3). The diffuse and specular terms are considered individually, resulting in two prefiltered environment maps: the diffuse map $D_x(n_x)$ and a specular one, which is computed based on the ideas of Kautz and McCool [74]. Note that whenever x is written as an index, we refer to instances discretely sampled in the spatial domain. First, the peak direction

$$p_x(v) = v^t \cdot D_x$$

of each lobe is computed. Then the specular illumination part (for ease of notation, a 1-lobe approximation is assumed) is rewritten as follows:

$$L_{r,s}(x, v) \approx \int_{\Omega_i} \rho_{s,x} \cdot (p_x(v) \cdot l)^{n_x}\, L_i(l)\,(n_x \cdot l)\, dl$$
$$\approx \rho_{s,x} \cdot (n_x \cdot p_x(v)) \int_{\Omega_i} (p_x(v) \cdot l)^{n_x}\, L_i(l)\, dl$$
$$\approx \rho_{s,x} \cdot (n_x \cdot p_x(v))\, \|p_x(v)\|^{n_x}\, S(p_x(v), n_x) \qquad (25)$$

with

$$S(p_x(v), n_x) = \int_{\Omega_i} \left( \frac{p_x(v)}{\|p_x(v)\|} \cdot l \right)^{n_x} L_i(l)\, dl \qquad (26)$$

S(p, n) denotes the specular prefiltered environment map.

Evaluating Approximation (25) can efficiently be done using fragment shaders by first computing $p_x(v)$, looking up the respective specular prefiltered value from a cube map, evaluating the specular part, and adding the diffuse contribution. Special care has to be taken only concerning the specular exponent, which is represented in discrete versions in S(p, n) only. One can either choose the closest exponent or interpolate between the two closest ones.
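For a single texel, that evaluation can be sketched as follows; specular_map is an assumed lookup into the prefiltered cube map, exponents is the assumed list of discretized specular exponents, and D_lobe stands in for the diagonal lobe matrix of Equation (3).

```python
# Sketch of the specular part of Eq. (25) for one texel and view direction.
import numpy as np

def specular_term(rho_s, D_lobe, n_exp, view, normal, specular_map, exponents):
    p = view @ D_lobe                         # peak direction p_x(v) = v^t D_x
    norm = float(np.linalg.norm(p))
    if norm == 0.0:
        return 0.0
    # Pick the closest stored exponent (interpolating both would also work)
    k = int(np.abs(np.asarray(exponents) - n_exp).argmin())
    S = specular_map(p / norm, exponents[k])  # prefiltered lookup S(p, n)
    return rho_s * max(normal @ p, 0.0) * norm ** n_exp * S
```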

As for point-like sources, the continuity of the Lafortune lobes in the angular domain requires additional interpolation in the spatial domain only. Again, graphics hardware features such as multisampling and MIP-mapping are employed for this task.

Whereas the extension of this approach to the model of Daubert et al. [35] requires only an additional evaluation of the visibility term, extending it to the reflectance function-based model of Meseth et al. [36] requires interpolation of the results of the reflectance functions corresponding to the closest measured view directions. Nevertheless, all three approaches allow for real-time rendering.

5.5.2. Bi-Scale radiance transfer

Precomputed Radiance Transfer (PRT), as originally introduced by Sloan et al. [64], is evaluated per vertex only and interpolated across the triangle. In a follow-up work, Sloan et al. [75] extended PRT to also support spatially varying reflectance across the triangle. They projected the BTF per pixel and fixed view direction onto the SH basis, generating a Radiance Transfer Texture (RTT), which represents the per-view response of the material to the spherical harmonics basis. To cover large geometry with the memory-intensive texture, they used the synthesis algorithm of Tong et al. [56] to generate an ID map over the mesh which references into the RTT.

Generation of the RTT and the ID map. The generation of the RTT B(x, v) is accomplished by projecting the BTF onto the first c elements of the SH basis $\{Y_j\}_{j \in \mathbb{N}}$:

$$B(x, v)_j = \int_{\Omega_i} BTF(x, v, l)\, Y_j(l)\, dl \qquad (27)$$

Thus, the RTT is a 4D array of c Spherical Harmonics coefficients (typically c = 25).
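The projection of Equation (27) can be sketched numerically as follows; to stay self-contained, only the first two SH bands (four coefficients) are written out by hand, whereas c = 25 would use the bands up to l = 4. btf_slice is an assumed callback returning the ABRDF value at fixed (x, v).

```python
# Sketch of projecting one (x, v) slice of the BTF onto the SH basis, Eq. (27).
import numpy as np

def sh_basis4(d):
    """First four real spherical harmonics at a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([0.282095,            # Y_{0,0}
                     0.488603 * y,        # Y_{1,-1}
                     0.488603 * z,        # Y_{1,0}
                     0.488603 * x])       # Y_{1,1}

def project_rtt(btf_slice, n_samples=2048, seed=0):
    """Returns the coefficient vector B(x, v)_j by Monte Carlo quadrature
    over the hemisphere (z > 0 in the local frame)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
    up = dirs[dirs[:, 2] > 0.0]           # keep the upper hemisphere
    coeffs = np.zeros(4)
    for l in up:
        coeffs += btf_slice(l) * sh_basis4(l)
    # Uniform hemisphere pdf = 1/(2*pi), hence the 2*pi factor
    return coeffs * (2.0 * np.pi / max(len(up), 1))
```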

The ID map is generated using BTF synthesis over densely resampled geometry called a meso-mesh. This step assigns every vertex an ID into the RTT. Then a texture atlas for the original coarse mesh is generated, and for every texel in this atlas the nearest vertex in the meso-mesh is found and the corresponding ID is assigned to the texel.

Real-time rendering. Standard PRT rendering is performed per vertex. That means computation of a matrix-vector multiplication between the transfer matrix and the incident lighting SH coefficients. The resulting transferred lighting vector is interpolated across the triangle. To compute exitant radiance per fragment, the interpolated transferred lighting vector is dot-multiplied with the corresponding vector in the RTT, which is accessed by the ID map and the local view direction. This is done in a fragment program. Each texel of the RTT is stored in an 8 × 8 texture block encoding the view dependence. This allows smooth bilinear interpolation across views using built-in hardware interpolation. Interpolation between spatial samples is not performed.

Since the RTTs are not compressed, the method supports only sparse samplings. They used 8 × 8 view samples and 64 × 64 spatial samples, which results in 64³ · 25 SH coefficients that have to be stored per color band.

5.5.3. Per-View matrix factorization

Sattler et al. [47] also proposed a method to relight the per-view factorized BTFs with environment maps. The main idea is to discretize the integral in Equation (18) using a hemicube [76], which leads to

$$L_r(x, v) \approx \sum_{l \in H_i} \rho_x(v, l)\, H_{i,x}(l)\,(n_x \cdot l) \qquad (28)$$

where $H_{i,x}$ denotes the discretized hemicube. $H_{i,x}(l)$ returns the color in the hemicube over x at direction l, or zero if the direction is occluded. The hemicube is precomputed and stored in a visibility map as follows:

Hemicube precomputation. The hemicube $H_{i,x}$ stores a discretization of the hemisphere at the vertex x. Figure 18 (left) shows an unfolded hemicube. Using a color-coded environment map (Figure 18, middle), a look-up table into a high dynamic range map (Figure 18, right) is created. This allows easy exchange of the environment map. By rendering the geometry in white color into the hemicube, the visibility function is computed and self-shadowing can be supported. Since the directions in the visibility map do not necessarily correspond to the measured light directions in the BTF, the map furthermore stores the four nearest measured light directions and the corresponding interpolation weights.

A hemicube pixel now stores the following information:

• visibility of a pixel of the environment map and, if it is visible, the position of this pixel in the map,

• the four nearest measured directions with respect to the direction represented by this pixel,

• the corresponding interpolation weights.

Figure 18: Hemicube computation. Visibility map (left) with rendered color-coded lookup environment map (middle). White color in the visibility map stands for occlusion caused by the mesh. On the right side an HDR environment is shown, which is mapped onto the color-coded one.

Rendering algorithm. Given the nearest measured view direction v at vertex x, substituting the BRDF in Equation (28) by the per-view factored BTF representation including the foreshortening term yields the following sum:

$$L_r(x, v) \approx \sum_{l \in H_i} \left( \sum_{j=1}^{c} \lambda_{v,j}(l)\, h_{v,j}(x) \right) H_{i,x}(l) \qquad (29)$$

The factors $\lambda_{v,j}(l)$ are from Equation (20). As in Equation (19), the factors $\gamma_{v,j} = \sum_{l \in H_i} \lambda_{v,j}(l)\, H_{i,x}(l)$ are precomputed per vertex and sent to the GPU, where the final expression is evaluated:

$$L_r(x, v) \approx \sum_{j=1}^{c} \gamma_{v,j}\, h_{v,j}(x) \qquad (30)$$

which again is only a linear combination of basis textures. For view interpolation, the same calculations as in Equation (21) have to be applied.

Since $\gamma_{v,j}$ is computed for all vertices x of the geometry, a vector U holding all $\gamma_{x,v,j}$ can be introduced:

$$U = (\gamma_{x_0,v_0,1}, \ldots, \gamma_{x_0,v_0,c}, \ldots) \qquad (31)$$

This vector has to be calculated once per environment map and allows real-time changes of the viewing position. A drawback of this method is that changing the environment map, or even a simple rotation, implies a complete recalculation of U. Depending on the visibility map resolution and the number of vertices, this computation can take too long for interactive changes of lighting. Reducing the visibility map resolution adaptively to achieve interactive change rates introduces under-sampling artifacts of the environment map during motion, which can be compensated, once the change stops, by using an adaptively higher resolution for the visibility map.
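The per-vertex precomputation of γ can be sketched as follows; the hemicube is modeled as a flat list of pixel records carrying the environment radiance (zero if occluded) together with the precomputed nearest light directions and weights, which is an assumed data layout rather than the actual visibility map format.

```python
# Sketch of the per-vertex precomputation gamma_{v,j} feeding Eq. (30).
import numpy as np

def gamma_weights(lam, hemicube_pixels):
    """lam: (n_lights, c) weights λ_{v,j}(l) for the nearest measured view.
    hemicube_pixels: iterable of (radiance, light_idx, light_tau) records,
    where radiance is zero for occluded directions. Returns γ of shape (c,)."""
    gamma = np.zeros(lam.shape[1])
    for radiance, idx, tau in hemicube_pixels:
        if radiance == 0.0:
            continue                  # occluded pixels contribute nothing
        # Interpolate λ between the four nearest measured light directions
        gamma += radiance * (np.asarray(tau) @ lam[np.asarray(idx)])
    return gamma

# Per-vertex exitant radiance then follows Eq. (30) as a plain linear
# combination: L_r ≈ sum_j gamma[j] * h_{v,j}(x) over the eigen-textures.
```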

5.5.4. Per-Cluster matrix factorization

In Section 5.5.2, the Bi-Scale Radiance Transfer method for image-based lighting of models covered with uncompressed BTFs was reviewed. To remove the limitation on the BTF resolution, both in the angular and spatial domain, Muller et al. [77] presented a combination of local-PCA-compressed BTFs with image-based lighting.

Data representation. Similar to Sloan et al. [75], they compute Radiance Transfer Textures but encode them using the local PCA method [48]. In addition, similar to Sloan et al. [78], they apply local PCA to the transfer matrices of the mesh vertices. An approximate solution to Equation (15) can now be computed as the weighted sum of dot products between the PCA factors of the RTT and the transfer matrices.

Real-time rendering. Since the dot products remain constant as long as only the camera is moving in the scene (i.e., neither the mesh nor the environment nor the BTF is changing), they can efficiently be precomputed on the CPU and afterward stored in a texture. Since precomputation and upload times of the dot products do not allow interactive rendering for very high quality settings, the number of RTT and transfer matrix components is reduced in dynamic situations. These reductions in quality do not significantly influence the perceived quality of the rendered results as long as high-quality solutions are presented in static cases.

As in Sloan et al. [78], rendering requires clusters of triangles to be rendered independently, where a triangle belongs to a certain cluster if at least one of its vertices belongs to the respective cluster. This increases the rendering time slightly, but the overhead is negligible for smooth meshes.

Most rendering power is spent in the fragment program, which computes the weighted sum. Interpolation in the angular domain of the BTF can be achieved by standard filtering features; spatial interpolation and filtering can be achieved using standard multisampling and MIP-mapping of the BTF.

5.6. Discussion

In our experience, almost all real-time BTF rendering methods pose a great challenge to current graphics boards, and the performance differences vary greatly depending on the board and driver versions. Therefore, a rigorous comparison of the different rendering methods using, for example, frame-rates does not seem possible currently and is beyond the scope of this STAR. Instead, we will give some general “hardware-independent” properties of the algorithms, which might help in judging the strengths and weaknesses of the methods.

The method of McAllister et al. [22] is mainly suited for nearly flat and specular materials with spatially varying reflection properties. Since it uses only slightly more memory than ordinary diffuse textures and can be rendered quickly, it fits into current rendering systems. This is not true for the methods that use additional data to also capture depth-varying BTFs, such as [35–37]. Their memory consumption and increased rendering cost restrict them to special domains, and as pointed out in Section 3, the visual quality of the used analytical models still remains questionable.

Far better visual quality is offered by the methods based on matrix factorization. But using PCA alone, as done by Liu et al. [44], is only real-time for very few terms, and thus only applicable to simple materials. Therefore, a segmentation of the data into subsets may be necessary. Using a segmentation per fixed view direction, as done by Sattler et al. [47], provides excellent visual quality but requires too much texture memory to be used in complex scenes with many materials. Spatial clustering as in the method of Muller et al. [49] or [43] reduces the memory requirements drastically while retaining high quality, but these methods face the insufficiently solved problem of spatial interpolation and MIP-mapping. This can result in decreased quality for texture magnification and minification. A problem of all matrix-factorization-based methods is the required random access to many textures, which can form a bottleneck on current graphics architectures.

6. Applications

Complex materials are part of everyone's day-to-day life. Since only few of them can adequately be modeled by a single BRDF, the applicability of BTFs in computer graphics related areas such as the entertainment and movie industry or industrial prototyping is very wide, although the requirements on BTF rendering vary drastically. In the following two subsections, application areas of realistic and predictive BTF rendering will be presented, and special requirements and suitable existing or not yet discovered algorithms will be identified.

6.1. Realistic BTF Rendering

Although the word “realistic” is known to everyone, very different understandings of its meaning exist. By “realistic BTF rendering” we refer to rendering methods based on BTFs that produce believable images, ones that are very likely to be realistic. In this sense, the term “realistic rendering” is contrary to “predictive rendering,” which aims at producing images that reflect the result observed in reality. In the context of BTFs, the degree of correctness of the prediction is evaluated, for example, based on the difference between the visualization of measured materials and the appearance of their actual counterparts in reality.

In accordance with this definition, realistic BTF rendering is suitable for applications that produce purely virtual results, like the game or movie industry, whose products are never expected to be seen other than through a projection device.

Today, computer games already feature elaborate graphics leading to astonishing effects, which is expected to improve even further in the future due to the continuous growth of computation power of graphics processing units (GPUs). Since the strictest requirement in computer games is real-time performance, games are just starting to employ true BRDF rendering due to the increased rendering effort compared to standard texturing combined with simple, hardware-accelerated light models. Nevertheless, upcoming games such as “Universal Combat: Edge to Edge” by 3000AD, Inc., will profit from enhanced rendering quality and realism and are very likely to achieve better sales records due to this feature. Obviously, replacing BRDF rendering by BTF rendering will increase image quality even further, and therefore it appears to be just a question of time until the release of the first game utilizing BTF rendering.

Figure 19: Photorealistic textile visualization. The fabrics were rendered from measured BTFs and lit by the environment.

Despite the ever-growing capabilities of CPUs and especially GPUs, BTF rendering algorithms for computer games need to fulfill two criteria: first and most important, they need to be simple enough to achieve real-time frame rates. Second, the compression rate of the BTF model needs to be high, since games typically utilize lots of materials simultaneously. Therefore, BTF rendering methods such as the one of McAllister et al. [22], or intermediate methods between bump-mapping and BTF rendering like the Polynomial Texture Maps [14] or the approach of Kautz et al. [79], might be the first to be utilized, at least for some kinds of materials for which their approximation quality turns out to be sufficient.

The movie industry, in contrast to the game industry, is much more focused on high-quality images and less concerned about rendering time. Nevertheless, since movies often feature visually rich scenes of huge extent with a very large number of actors or objects, special requirements are put on the memory consumption of material representations. Therefore, rendering algorithms based on representations offering high compression rates, high-quality reconstructions, and easy adjustability of quality, like the one of Muller et al. [49], will most likely be included in future movie production tools.

Unfortunately, existing movies do not feature BTF rendering yet, most probably due to missing acquisition systems built for industrial use and the lack of companies offering BTF acquisition or modeling services. An approximate impression of the realism achieved by employing BTFs is conveyed by movies featuring BRDF rendering, which has become a very common feature in special effects for movies. For the movie “Matrix Reloaded,” even measured BRDFs have been used [80].

Examples of BTF applications targeting realistic rendering, e.g., for movie production, include rendering of trees with correct shading and shadowing [81], rendering of feathers and birds [82], or textiles [35, 47, 79]. Figure 19 shows an example of visualizing cloth in an HDR environment.

6.2. Predictive BTF Rendering

In contrast to the above realistic rendering approach, the term predictive rendering denotes a much more difficult task: the results need not only be believable and possible, but they need to be closely related to a tangible counterpart. A prominent example application area of predictive BTF rendering is product design: designers want to be able to judge the appearance of the developed object already in early design phases, before a real prototype is built. Renderings produced for these kinds of users need to faithfully predict the real appearance of the object in order to be useful.

Figure 20: Visualization of the interior of a Mercedes C class using measured BTFs. This image was rendered using the OpenRT real-time ray-tracing engine [66].

Industrial products aiming in this direction already exist on the market. Maya from Alias Systems offers a combination of modeling tools and sophisticated rendering, which creates highly realistic images for materials such as plastic. Delcam plc offers a complete CAD solution including the possibility to render images using a material library including highly reflective, transparent, and bumpy materials. Lumiscaphe offers a software solution called Patchwork 3D for creating images from CAD models, which are textured by the user using a provided selection of materials that approximate the BTF of real materials.

To fully achieve predictive results, many other requirements apart from correct material representations need to be met. In Klein et al. [83], an acquisition and rendering pipeline for realistic reflectance data in Virtual Reality (VR) is described. Among the necessary stages leading to predictive rendering are material and light source acquisition, texturing (since materials need to be applied to surface models as in reality), realistic lighting simulation including global illumination, realistic rendering, and real-time tone mapping. The paper describes existing partial solutions but concludes that many issues still have to be solved to achieve the final goal. As an example, Figure 20 shows a high-quality visualization of the interior of a Mercedes C class including measured BTFs. Although the image looks highly realistic in some parts, inadequate texturing, missing color and tone mapping, and artificial point-light illumination make it appear far from similar to the real interior.

In terms of existing BTF rendering algorithms, a comparison and evaluation of existing algorithms with respect to suitability for VR applications was published by Meseth et al. [84]. Unfortunately, due to the number of recent publications in this area, the comparison is far from complete. Another problem is that the paper measures rendering quality as simple RGB color differences, which does not comply with human judgment of similarity. Due to the large number of factors influencing apparent color similarity and their complex dependencies (which are studied intensively by color scientists), more accurate psychophysical evaluations of this kind require time-intensive experiments. Such experiments, although targeting only comparable questions, were for example conducted by McNamara et al. [85], Drago and Myszkowski [86], and Drago et al. [87].

To conclude, BTF rendering can currently improve the quality of rendered images significantly, representing an important yet not by itself sufficient step toward predictive image generation. Despite the fast advances in computer graphics, we do not expect sufficient quality for predictive rendering to be achieved in the near future due to missing methods, evaluation tests, and integrated products. Nevertheless, the target does not seem out of reach and therefore will certainly be tackled in the coming years due to the immense industrial needs.

7. Summary and conclusion

In this STAR, we have presented research aiming at using BTFs as a generalized, image-based material representation for rendering. We have demonstrated that important steps toward this goal have already been made:

• The acquisition of BTFs using gonioreflectometer-like setups is reliable, and results are now publicly available. But since the measurement time for one material lies in the range of several hours, BTFs are still a rare and expensive resource. Therefore, the development of fast and cheap methods for BTF measurement will be a very interesting field for future research.

• Numerous methods for BTF compression are available. Depending on the desired application area, almost every method has its justification. Methods based on parametric reflectance models are suited for real-time rendering in large and complex scenes since they can be rendered fast and have minor memory requirements. For high-quality demands, methods based on more general compression techniques such as linear basis decomposition should be used. Although even in this case real-time rendering is already possible, the optimal BTF model for rendering has not been found yet.

• Synthesis of BTFs can be accomplished by applying well-known algorithms for 2D-texture synthesis.

• High-quality BTF rendering, even with complex image-based illumination, is now possible in real-time. A problem remaining for future work is the fast interpolation of all six BTF dimensions on graphics hardware.

Finally, we gave an overview of current and future application areas for BTFs. There is no doubt that simple material representations such as 2D textures or bump-maps will sooner or later be replaced by more complex representations that capture all the subtle effects of general light-material interaction. This STAR has shown that the BTF has the potential to be such a representation.

Acknowledgments

This work was partially funded by the European Union under the project RealReflect (IST-2001-34744). Some of the used high-dynamic-range images were taken from Paul Debevec's webpage [71]. The textiles shown in this work were generated in the scope of the BMBF project “Virtual Try-On” with the cloth simulation engine TuTex and provided by M. Wacker, M. Keckeisen, and S. Kimmerle from the WSI/GRIS at the University of Tübingen. For further details we refer to [88, 89]. The car model was kindly provided by DaimlerChrysler. Special thanks go to Roland Wahl, Marcin Novotni, and Ferenc Kahlesz for proofreading.

References

1. K. J. Dana, B. van Ginneken, S. K. Nayar and J. J. Koenderink. Reflectance and texture of real-world surfaces. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 151–157. 1997.

2. J.-M. Dischler. Efficiently rendering macrogeometric surface structures using bi-directional texture functions. In Rendering Techniques '98, Springer Computer Science. 1998.

3. F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg and T. Limperis. Geometric considerations and nomenclature for reflectance. Monograph 160. October 1977.

4. H. W. Jensen, S. R. Marschner, M. Levoy and P. Hanrahan. A practical model for subsurface light transport. In Proceedings of SIGGRAPH 2001, pp. 511–518. August 2001.

5. M. Goesele, H. P. Lensch, J. Lang, C. Fuchs and H.-P. Seidel. DISCO: Acquisition of translucent objects. In Proceedings of SIGGRAPH, August 2004.

6. G. J. Ward. Measuring and modeling anisotropic reflection. In Proceedings of SIGGRAPH, ACM Press, pp. 265–272. 1992.

7. S. R. Marschner, S. H. Westin, E. P. F. Lafortune, K. E. Torrance and D. P. Greenberg. Image-based BRDF measurement including human skin. In Proceedings of EGRW '99, pp. 139–152. 1999.

8. W. Matusik, H. Pfister, M. Brand and L. McMillan. Efficient isotropic BRDF measurement. In Proceedings of EGRW '03, pp. 241–247. 2003.


9. M. Levoy and P. Hanrahan. Light field rendering. Computer Graphics 30, Annual Conference Series, 31–42. 1996.

10. S. J. Gortler, R. Grzeszczuk, R. Szeliski and M. F. Cohen. The lumigraph. Computer Graphics 30, Annual Conference Series, 43–54. 1996.

11. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin and M. Sagar. Acquiring the reflectance field of a human face. Proceedings of ACM SIGGRAPH 2000. 2000.

12. G. S. P. Miller, S. M. Rubin and D. Ponceleon. Lazy decompression of surface light fields for precomputed global illumination. In Proceedings of EGRW '98, pp. 281–292. 1998.

13. D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin and W. Stuetzle. Surface light fields for 3D photography. In Proceedings of SIGGRAPH, pp. 287–296. 2000.

14. T. Malzbender, D. Gelb and H. Wolters. Polynomial texture maps. In Proceedings of SIGGRAPH, ACM Press, pp. 519–528. 2001.

15. W. Matusik, H. Pfister, A. Ngan, P. Beardsley, R. Ziegler and L. McMillan. Image-based 3D photography using opacity hulls. In Proceedings of SIGGRAPH, ACM Press, pp. 427–437. 2002.

16. V. Masselus, P. Peers, P. Dutré and Y. D. Willems. Relighting with 4D incident light fields. ACM Trans. Graph. 22(3):613–620, 2003.

17. J. F. Blinn. Simulation of wrinkled surfaces. In Proceedings of SIGGRAPH '78, ACM Press, pp. 286–292. 1978.

18. H. E. Rushmeier, G. Taubin and A. Gueziec. Applying shape from lighting variation to bump map capture. In Proceedings of EGRW '97, Springer-Verlag, pp. 35–44. 1997.

19. H. P. Lensch, J. Kautz, M. Goesele, W. Heidrich and H.-P. Seidel. Image-based reconstruction of spatially varying materials. Proceedings of EGRW '01. 2001.

20. R. Furukawa, H. Kawasaki, K. Ikeuchi and M. Sakauchi. Appearance based object modeling using texture database: acquisition, compression and rendering. In Proceedings of EGRW '02, Eurographics Association, pp. 257–266. 2002.

21. Columbia-Utrecht Reflectance and Texture Database: http://www1.cs.columbia.edu/cave/curet/.

22. D. McAllister, A. Lastra and W. Heidrich. Efficient rendering of spatial bi-directional reflectance distribution functions. Graphics Hardware 2002. 2002.

23. M. Hauth, O. Etzmuss, B. Eberhardt, R. Klein, R. Sarlette, M. Sattler, K. Dauber and J. Kautz. Cloth Animation and Rendering. In Eurographics 2002 Tutorials, 2002.

24. BTF Database Bonn: http://btf.cs.uni-bonn.de.

25. Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11):1330–1334. 2000.

26. M. L. Koudelka, S. Magda, P. N. Belhumeur and D. J. Kriegman. Acquisition, compression and synthesis of bidirectional texture functions. In Texture 2003. 2003.

27. K. J. Dana and J. Wang. Device for convenient measurement of spatially varying bidirectional reflectance. Journal of the Optical Society of America A, 1–12. January 2004.

28. J. Y. Han and K. Perlin. Measuring bidirectional texture reflectance with a kaleidoscope. ACM Trans. Graph. 22(3):741–748, 2003.

29. J. Radloff. Obtaining the Bidirectional Texture Reflectance of Real-World Surfaces by means of a Kaleidoscope. Tech. rep., Department of Computer Science, Rhodes University, 2004.

30. T.-T. Wong, P.-A. Heng, S.-H. Or and W.-Y. Ng. Image-based rendering with controllable illumination. In Proceedings of EGRW '97, Springer-Verlag, pp. 13–22. 1997.

31. B. T. Phong. Illumination for computer generated pictures. Communications of the ACM 18(6):311–317, 1975.

32. J. F. Blinn. Models of light reflection for computer synthesized pictures. Computer Graphics 11(3):192–198. 1977.

33. E. P. F. Lafortune, S.-C. Foo, K. E. Torrance and D. P. Greenberg. Non-linear approximation of reflectance functions. In Proceedings of SIGGRAPH, ACM Press/Addison-Wesley Publishing Co., pp. 117–126. 1997.

34. W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992.

35. K. Daubert, H. Lensch, W. Heidrich and H.-P. Seidel. Efficient cloth modeling and rendering. In Proceedings of EGRW '01, pp. 63–70. 2001.


36. J. Meseth, G. Muller and R. Klein. Reflectance field based real-time, high-quality rendering of bidirectional texture functions. Computers and Graphics 28(1):103–112, February 2004.

37. J. Filip and M. Haindl. Non-linear reflectance model for bidirectional texture function synthesis. In ICPR 2004. 2004.

38. W. Matusik, H. Pfister, M. Brand and L. McMillan. A data-driven reflectance model. ACM Trans. Graph. 22(3):759–769. 2003.

39. W. Press, S. Teukolsky, W. Vetterling and B. Flannery. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. ISBN 0-521-43108-5. Cambridge University Press, 1992.

40. K. Nishino, Y. Sato and K. Ikeuchi. Eigen-texture method: appearance compression based on 3D model. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 1, pp. 618–624. 1999.

41. J. Kautz and M. McCool. Interactive Rendering with Arbitrary BRDFs using Separable Approximations. In Tenth Eurographics Workshop on Rendering, pp. 281–292. 1999.

42. M. D. McCool, J. Ang and A. Ahmad. Homomorphic Factorization of BRDFs for High-Performance Rendering. In Proceedings of SIGGRAPH, ACM Press, pp. 171–178. 2001.

43. F. Suykens, K. vom Berge, A. Lagae and P. Dutré. Interactive Rendering of Bidirectional Texture Functions. In Eurographics 2003, pp. 463–472. September 2003.

44. X. Liu, Y. Hu, J. Zhang, X. Tong, B. Guo and H.-Y. Shum. Synthesis and Rendering of Bidirectional Texture Functions on Arbitrary Surfaces. IEEE Transactions on Visualization and Computer Graphics 10(3):278–289. 2004.

45. R. Ramamoorthi. Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object. IEEE Transactions on Pattern Analysis and Machine Intelligence, October 2002.

46. M. A. O. Vasilescu and D. Terzopoulos. TensorTextures: multilinear image-based rendering. In Proceedings of SIGGRAPH. August 2004.

47. M. Sattler, R. Sarlette and R. Klein. Efficient and realistic visualization of cloth. Proceedings of the Eurographics Symposium on Rendering 2003. 2003.

48. N. Kambhatla and T. K. Leen. Dimension reduction by local principal component analysis. Neural Comput. 9(7):1493–1516. 1997.

49. G. Muller, J. Meseth and R. Klein. Compression and real-time Rendering of Measured BTFs using local PCA. In Vision, Modeling and Visualisation 2003, pp. 271–280. November 2003.

50. D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of SIGGRAPH, ACM Press, pp. 229–238. 1995.

51. J. S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In Proceedings of SIGGRAPH, ACM Press/Addison-Wesley Publishing Co., pp. 361–368. 1997.

52. A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In Proceedings of the International Conference on Computer Vision, Volume 2, IEEE Computer Society, p. 1033. 1999.

53. L.-Y. Wei and M. Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of SIGGRAPH, ACM Press/Addison-Wesley Publishing Co., pp. 479–488. 2000.

54. A. A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of SIGGRAPH, ACM Press, pp. 341–346. 2001.

55. X. Liu, Y. Yu and H.-Y. Shum. Synthesizing bidirectional texture functions for real-world surfaces. In Proceedings of SIGGRAPH, ACM Press, pp. 97–106. 2001.

56. X. Tong, J. Zhang, L. Liu, X. Wang, B. Guo and H.-Y. Shum. Synthesis of bidirectional texture functions on arbitrary surfaces. In Proceedings of SIGGRAPH, ACM Press, pp. 665–672. 2002.

57. T. Leung and J. Malik. Recognizing surfaces using three-dimensional textons. In Proceedings of the International Conference on Computer Vision, Volume 2, IEEE Computer Society, p. 1010. 1999.

58. J. T. Kajiya. The rendering equation. In Proceedings of SIGGRAPH, ACM Press, pp. 143–150. 1986.

59. R. L. Cook, T. Porter and L. Carpenter. Distributed ray tracing. In Proceedings of SIGGRAPH, ACM Press, pp. 137–145. 1984.

60. H. W. Jensen. Global illumination using photon maps. In Proceedings of EGRW '96, Springer-Verlag, pp. 21–30. 1996.


61. I. Wald, T. J. Purcell, J. Schmittler, C. Benthin and P. Slusallek. Realtime Ray Tracing and its use for Interactive Global Illumination. In Eurographics State of the Art Reports. 2003.

62. J.-M. Hasenfratz, M. Lapierre, N. Holzschuch and F. Sillion. A survey of real-time soft shadow algorithms. In Eurographics State-of-the-Art Report. 2003.

63. J. Kautz, P.-P. Vazquez, W. Heidrich and H.-P. Seidel. A Unified Approach to Prefiltered Environment Maps. In 11th Eurographics Workshop on Rendering, pp. 185–196. 2000.

64. P.-P. Sloan, J. Kautz and J. Snyder. Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments. In Proceedings of SIGGRAPH, ACM Press, pp. 527–536. 2002.

65. R. Ng, R. Ramamoorthi and P. Hanrahan. All-Frequency Shadows using Non-Linear Wavelet Lighting Approximation. ACM Transactions on Graphics 22(3):376–381. 2003.

66. I. Wald, C. Benthin and P. Slusallek. OpenRT: A Flexible and Scalable Rendering Engine for Interactive 3D Graphics. Tech. Rep. TR-2002-01, Saarland University, 2002.

67. W. Heidrich. View-independent environment maps. In Proceedings of Eurographics/SIGGRAPH Workshop on Graphics Hardware '98. 1998.

68. W.-C. Chen, J.-Y. Bouguet, M. H. Chu and R. Grzeszczuk. Light field mapping: Efficient representation and hardware rendering of surface light fields. In Proceedings of ACM SIGGRAPH 2002. 2002.

69. M. Schneider. Real-Time BTF Rendering. In Proceedings of CESCG 2004. 2004.

70. P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH 97 Conference Proceedings, ACM SIGGRAPH, Addison Wesley, pp. 369–378. August 1997.

71. Light Probe Image Gallery: http://www.debevec.org/probes/.

72. G. S. Miller and C. R. Hoffman. Illumination and Reflection Maps: Simulated Objects in Simulated and Real Environments. In SIGGRAPH '84 Advanced Computer Graphics Animation seminar notes. 1984.

73. N. Greene. Environment Mapping and Other Applications of World Projections. IEEE Computer Graphics and Applications 6(11):21–29, 1986.

74. J. Kautz and M. D. McCool. Approximation of Glossy Reflection with Prefiltered Environment Maps. In Graphics Interface 2000, pp. 119–126. 2000.

75. P.-P. Sloan, X. Liu, H.-Y. Shum and J. Snyder. Bi-Scale Radiance Transfer. ACM Transactions on Graphics 22(3):370–375. 2003.

76. M. F. Cohen and D. P. Greenberg. The hemi-cube: A radiosity solution for complex environments. Computer Graphics (Proceedings of SIGGRAPH '85) 19(3):31–40. July 1985.

77. G. Muller, J. Meseth and R. Klein. Fast Environmental Lighting for Local-PCA Encoded BTFs. In Computer Graphics International 2004 (CGI 2004). June 2004.

78. P.-P. Sloan, J. Hall, J. Hart and J. Snyder. Clustered Principal Components for Precomputed Radiance Transfer. ACM Transactions on Graphics 22(3):382–391. 2003.

79. J. Kautz, M. Sattler, R. Sarlette, R. Klein and H.-P. Seidel. Decoupling BRDFs from surface mesostructures. In Graphics Interface, pp. 177–184. 2004.

80. G. Borshukov. Measured BRDF in Film Production: Realistic Cloth Appearance for 'The Matrix Reloaded.' In SIGGRAPH 2003 Sketches. 2003.

81. A. Meyer, F. Neyret and P. Poulin. Interactive Rendering of Trees with Shading and Shadows. In 12th Eurographics Workshop on Rendering. July 2001.

82. Y. Chen, Y. Xu, B. Guo and H.-Y. Shum. Modeling and rendering of realistic feathers. In Proceedings of SIGGRAPH, ACM Press, pp. 630–636. 2002.

83. R. Klein, J. Meseth, G. Muller, R. Sarlette, M. Guthe and A. Balazs. RealReflect: Real-time Visualization of Complex Reflectance Behaviour in Virtual Prototyping. In Proceedings of Eurographics 2003 Industrial and Project Presentations. 2003.

84. J. Meseth, G. Muller, M. Sattler and R. Klein. BTF rendering for virtual environments. In Virtual Concepts 2003, pp. 356–363. November 2003.

85. A. McNamara, A. Chalmers, T. Troscianko and I. Gilchrist. Comparing real & synthetic scenes using human judgements of lightness. In Proceedings of EGRW '00, Springer-Verlag, pp. 207–218. 2000.


86. F. Drago and K. Myszkowski. Validation Proposal for Global Illumination and Rendering Techniques. Computers and Graphics 25(3):511–518. 2001.

87. F. Drago, W. Martens, K. Myszkowski and H.-P. Seidel. Perceptual Evaluation of Tone Mapping Operators with Regard to Similarity and Preference. Research Report MPI-I-2002-4-002, Max-Planck-Institut für Informatik, August 2002.

88. O. Etzmuss, M. Keckeisen and W. Strasser. A Fast Finite Element Solution for Cloth Modelling. Proceedings of Pacific Graphics, 2003.

89. S. Kimmerle, M. Keckeisen, J. Mezger and M. Wacker. TuTex: A Cloth Modelling System for Virtual Humans. In Proceedings 3D Modelling. 2003.
