
Eurographics Symposium on Rendering (2005), Kavita Bala and Philip Dutré (Editors)

Reflectance Sharing: Image-based Rendering from a Sparse Set of Images

Todd Zickler1, Sebastian Enrique2, Ravi Ramamoorthi2, and Peter Belhumeur2

1 Division of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
2 Computer Science Department, Columbia University, New York, NY, USA

Abstract

When the shape of an object is known, its appearance is determined by the spatially-varying reflectance function defined on its surface. Image-based rendering methods that use geometry seek to estimate this function from image data. Most existing methods recover a unique angular reflectance function (e.g., BRDF) at each surface point and provide reflectance estimates with high spatial resolution. Their angular accuracy is limited by the number of available images, and as a result, most of these methods focus on capturing parametric or low-frequency angular reflectance effects, or allowing only one of lighting or viewpoint variation. We present an alternative approach that enables an increase in the angular accuracy of a spatially-varying reflectance function in exchange for a decrease in spatial resolution. By framing the problem as scattered-data interpolation in a mixed spatial and angular domain, reflectance information is shared across the surface, exploiting the high spatial resolution that images provide to fill the holes between sparsely observed view and lighting directions. Since the BRDF typically varies slowly from point to point over much of an object's surface, this method enables image-based rendering from a sparse set of images without assuming a parametric reflectance model. In fact, the method can even be applied in the limiting case of a single input image.

1. Introduction

Given a set of images of a scene, image-based rendering (IBR) methods strive to build a representation for synthesizing new images of that scene under arbitrary illumination and viewpoint. One effective IBR representation consists of the scene geometry coupled with a reflectance function defined on that geometry. At a given point under given illumination conditions, the reflectance function assigns a radiance value to each exitant ray, so once the geometry and reflectance function are known, realistic images can be synthesized under arbitrary viewpoint and (possibly complex and near-field) illumination.

This approach to IBR involves two stages: recovery of both geometry and reflectance. Yet, while great strides have been made at recovering object shape (e.g., laser scanners and computer vision techniques), less progress has been made at recovering reflectance properties. Recovering reflectance is difficult because that is where the high dimensionality of IBR is rooted. At each surface point, reflectance is described by a four-dimensional function of the view and lighting directions, termed the bi-directional reflectance distribution function (BRDF). The BRDF generally changes spatially over an object's surface, and recovering this spatially-varying BRDF (or 6D SBRDF) without further assumptions generally requires a set of images large enough to densely sample high-frequency radiometric events, such as sharp specular highlights, at each point on the surface. This set consists of a near-exhaustive sampling of images of the scene from all viewpoints and lighting directions, which can be tens of thousands of images or more.

In previous work, recovering spatial reflectance has been made tractable in two different ways. The first is to approximate reflectance using an analytic BRDF model, thereby simplifying the problem from that of recovering a 4D function at each point to that of estimating a handful of parameters (e.g., [MLH02, SWI97, YDMH99, Geo03]). Since they only recover a few parameters at each point, these methods are able to provide reflectance estimates from a small number of images. They require the selection of a specific parametric BRDF model a priori, however, which limits their accuracy and generality.

© The Eurographics Association 2005.


T. Zickler, S. Enrique, R. Ramamoorthi & P. Belhumeur / Reflectance Sharing

The second category of methods avoids the restrictions of parametric BRDF models by: i) using a much larger number of input images, and ii) recovering only a subset of the reflectance function. For example, Wood et al. [WAA+00] use over 600 images of an object under fixed (complex) illumination to estimate the 2D view-dependent reflectance variation, and Debevec et al. [DHT+00] use 2048 images to measure the 2D lighting-dependent variation with fixed viewpoint. A full 4D (view and illumination) reflectance function is measured by Matusik et al. [MPBM02], who use more than 12,000 images of an object with known visual hull; but even this large number of images provides only a sparse sampling of the appearance variation at each point, and as a result, images of the object can be synthesized using only low-frequency illumination environments.

This paper presents an alternative approach to estimating spatial reflectance, one that combines the benefits of both parametric methods (i.e., sparse images) and non-parametric methods (i.e., arbitrary reflectance functions). In developing this approach, it makes the following technical contributions.

• SBRDF estimation is posed as a scattered-data interpolation problem, with images providing dense 2D slices of data embedded in the mixed spatial and angular domain.

• This interpolation problem is solved by introducing: i) a new parameterization of the BRDF domain, and ii) a non-parametric representation of reflectance based on radial basis functions (RBFs).

• This representation is easily adapted to handle: i) homogeneous BRDF data, ii) spatially-varying reflectance from multiple images, and iii) spatially-varying reflectance from a single input image.

Since it is non-parametric, the proposed method does not assume a priori knowledge of the SBRDF and is flexible enough to represent arbitrary reflectance functions, including those with high-frequency specular effects. This approach is also very different from previous non-parametric techniques (e.g., [WAA+00, DHT+00, MPBM02]) that interpolate reflectance only in the angular dimensions, estimating a unique reflectance function at each point. Instead, we simultaneously interpolate in both the spatial and angular dimensions. This enables a controlled exchange between spatial and angular information, effectively giving up some of the spatial resolution in order to fill the holes between sparsely observed view and illumination conditions. Since reflectance typically varies slowly from point to point over much of an object's surface, this means that we can often obtain visually pleasing results from a drastically reduced set of images. Additionally, the method degrades gracefully as the number of input images is reduced (see Fig. 6), and as shown in Fig. 13, it can even be applied in the extreme case of a single input image.

2. Main Ideas and Related Work

The proposed method for SBRDF estimation builds on three principal observations.

Smooth Spatial Variation. Most existing methods recover a unique BRDF at each point and thereby provide an SBRDF with very high spatial resolution. Many parametric methods have demonstrated, however, that the number of input images can be reduced if one is willing to accept a decrease in spatial resolution. This has been exploited, for example, by Yu et al. [YDMH99] and Georghiades [Geo03], who assume that specular BRDF parameters are constant across a surface. Similarly, Sato et al. [SWI97] estimate the specular parameters at only a small set of points, later interpolating these parameters across the surface. Lensch et al. [LKG+01] present a novel technique in which reflectance samples at clusters of surface points are used to estimate a basis of (1-lobe) Lafortune models. The reflectance at each point is then uniquely expressed as a linear combination of these basis BRDFs.

Similar to these approaches, our method trades spatial resolution for an increase in angular resolution. The difference, however, is that we implement this exchange using a non-parametric representation. We begin by assuming that the SBRDF varies smoothly in the spatial dimensions, but we also demonstrate how this can be relaxed to handle rapid spatial variation in terms of a multiplicative texture. (In cases where the shape of the BRDF itself changes rapidly, we currently assume that discontinuities are given as input.)

Curved Surfaces. Techniques for image-based BRDF measurement [LKK98, MWL+99, MPBM03] exploit the fact that a single image of a curved, homogeneous surface represents a very dense sampling of a 2D slice of the 4D BRDF. In this paper, we extend this idea to the spatially-varying case, where an image provides a 2D slice in the higher-dimensional SBRDF domain. Our results demonstrate that, like the homogeneous case, surface curvature (along with smooth spatial variation) can be exploited to increase the angular resolution of the SBRDF. (For near-planar surfaces where curvature is not available, more angular reflectance information can be obtained using near-field illumination and perspective views [KMG96].)

Angular Compressibility. While it is a multi-dimensional function, a typical BRDF varies slowly over much of its angular domain. This property has been exploited for 3D shape reconstruction [HS03], efficient BRDF acquisition [MPBM03], efficient rendering of BRDFs [MAA01, JM03], and efficient evaluation of environment maps [CON99, RH02]. Here, we exploit compressibility by assuming that the BRDF typically varies rapidly only in certain dimensions, such as the half-angle.

The three ideas of this section have been developed in very different contexts, and this paper combines and expands them to solve a novel problem: estimating non-parametric SBRDFs from sparse images. The fusion of these ideas is enabled by the BRDF parameterization of Sect. 3, and an interpolation approach that unifies the treatment of spatial and angular dimensions (Sects. 4–6).




2.1. Assumptions

We exploit scene geometry to reduce the number of input images required to accurately represent appearance. Thus, unlike pure light field techniques [GGSC96, LH96], the method requires a set of images of an object with known geometry, viewpoint, and either point-source or directional illumination. A number of suitable acquisition systems have been described in the literature. (See, e.g., [SWI97, DHT+00, MPBM02].)

In addition, global effects such as sub-surface scattering and interreflection are not explicitly considered in our formulation. For directional illumination and orthographic views, however, some of these effects will be absorbed into our representation and can be reproduced when rendered under the same conditions. (See Sect. 7.) In this case, our use of the term SBRDF is synonymous with the non-local reflectance field defined by Debevec et al. [DHT+00].

Finally, in this paper we restrict our attention to isotropic BRDFs. While the ideas of exploiting spatial coherence and using RBFs to interpolate scattered reflectance samples can be applied to the anisotropic case, this would require a parameterization which is different from that presented in Sect. 3 and is left for future work.

3. Notation and BRDF Parameterization

At the core of our approach is the interpolation of scattered data in multiple (3-6) dimensions. The success of any interpolation technique depends heavily on how the SBRDF is parameterized. This section introduces some notation and presents one possible parameterization. Based on this parameterization, our interpolation technique is discussed in Sects. 5 and 6.

The SBRDF is a function of six dimensions and is written f(~x, ~θ), where ~x = (x, y) ∈ R^2 is the pair of spatial coordinates that parameterize the surface geometry (a surface point is written ~s(x, y)), and ~θ ∈ Ω×Ω are the angular coordinates that parameterize the double-hemisphere of view/illumination directions in a local coordinate frame defined on the tangent plane at a surface point (i.e., the BRDF domain). A common parameterization of the BRDF domain is ~θ = (θi, φi, θo, φo), which represents the spherical coordinates of the light and view directions in the local frame. When the BRDF is isotropic, the angular variation reduces to a function of three dimensions, commonly parameterized by (θi, θo, φo − φi). In this work, we restrict ourselves to this isotropic case and consider the SBRDF to be a function defined on a 5D domain. In the special case when the SBRDF is a constant function of the spatial dimensions (i.e., f(~x, ~θ) = f(~θ)), we say that the surface is homogeneous and is described by a 4D (or isotropic 3D) function.

The BRDF domain can be parameterized in a number of ways, and as discussed below, one good choice is Rusinkiewicz's halfway/difference parameterization [Rus98], shown in Fig. 1(a). Using this parameterization in the isotropic case, the BRDF domain is

Figure 1: (a) The halfway/difference parameterization of Rusinkiewicz. In the isotropic case, the BRDF domain is parametrized by (θh, φd, θd) [Rus98]. (b) The mapping defined by Eq. (1) that creates a parameterization suitable for interpolation.

~θ = (θh, φd, θd) ∈ [0, π/2) × [0, π) × [0, π/2). Note that φd is restricted to [0, π), since φd ↦ φd + π by reciprocity.

The existence of singularities at θh = 0 and θd = 0 and the required periodicity (φd ↦ φd + π) make the standard halfway/difference parameterization unsuitable for most interpolation techniques. Instead, we define the mapping (θh, φd, θd) ↦ (u, v, w) as

(u, v, w) = ( sin θh cos 2φd , sin θh sin 2φd , 2θd/π ).   (1)

This mapping is shown in Fig. 1(b). It eliminates the singularity at θh = 0 and ensures that the BRDF f(u, v, w) satisfies reciprocity. In addition, the mapping is such that the remaining singularity occurs at θd = 0 (i.e., where the light and view directions are equivalent). This configuration is difficult to create in practice, making it unlikely to occur during acquisition. During synthesis, it must be handled with care.
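The mapping of Eq. (1) is straightforward to implement. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def uvw_from_half_diff(theta_h, phi_d, theta_d):
    """Map Rusinkiewicz's isotropic half/difference angles to the
    (u, v, w) interpolation domain of Eq. (1)."""
    u = np.sin(theta_h) * np.cos(2.0 * phi_d)
    v = np.sin(theta_h) * np.sin(2.0 * phi_d)
    w = 2.0 * theta_d / np.pi
    return np.array([u, v, w])
```

Because φd enters only through cos 2φd and sin 2φd, the angles φd and φd + π map to the same point, so any function of (u, v, w) satisfies reciprocity by construction; likewise u = v = 0 whenever θh = 0 regardless of φd, which removes that singularity.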

3.1. Considerations for Image-based Acquisition

The halfway/difference parameterization has been shown to increase compression rates, since common features such as specular and retro-reflective peaks are aligned with the coordinate axes [Rus98]. The modified parameterization of Eq. (1) maintains this property, since specular events cluster along the w-axis, and retro-reflective peaks occur in the plane w = 0.

These parameterizations are useful in IBR for an additional reason: for image-based data, they separate the sparsely- and densely-sampled dimensions of the BRDF. (Another parameterization that shares this property is proposed by Marschner [Mar98].) To see this, note that for orthographic projection and distant lighting, or more generally when scene relief is relatively small, a single image of a curved surface provides BRDF samples lying in a plane of constant θd, since this angle is independent of the surface normal. The image represents a nearly continuous sampling of θh and φd in this plane. Thus, a set of images provides dense sampling of (θh, φd) but only as many samples of θd as there are images. (The orthographic/directional case is considered for illustrative purposes; it is not required by the method.)
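The constant-θd property is easy to check numerically: the difference angle θd is the angle between the light direction and the halfway vector, which involves only the light and view directions, never the surface normal. A small sketch (helper names are ours):

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)

def theta_d(l, v):
    """Difference angle: the angle between the light direction l and the
    halfway vector h = (l + v)/||l + v||. No surface normal appears, so
    theta_d is constant across an orthographic, distant-light image."""
    l, v = normalize(l), normalize(v)
    h = normalize(l + v)
    return np.arccos(np.clip(np.dot(l, h), -1.0, 1.0))
```

Since h bisects l and v, θd is simply half the angle between them. The angles θh and φd, in contrast, are measured in the local frame at each point, so they sweep out a dense 2D set as the normal varies over a curved surface.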

Conveniently, the irregular sampling obtained from image-based data corresponds well with the behavior of general BRDFs, which vary slowly in the sparsely sampled θd-dimension, especially when θd is small. At the same time, by imaging curved surfaces, we ensure that the sampling rate of the half-angle θh is high enough to accurately recover the high-frequency variation (e.g., due to specular highlights) that is generally observed in that dimension.

4. Scattered Data Interpolation

Recall that our goal is to estimate a continuous SBRDF f(~x, ~θ) from a set of samples fi ∈ R^5 drawn from images of a surface with known geometry. Our task is complicated by the fact that, as discussed in the previous section, the input samples are very non-uniformly distributed.

There are many methods for interpolating scattered data in this relatively high-dimensional space, but for our problem, interpolation using radial basis functions provides the most attractive choice. Given a set of samples, an RBF interpolant is computed simply by solving a linear system of equations, and its existence and uniqueness are guaranteed with few restrictions on the sample points. Thus, unlike homogeneous BRDF representations such as spherical harmonics, Zernike polynomials, wavelets, and the basis of Matusik et al. [MPBM03], an RBF representation does not require a local preprocessing step to resample the input data at regular intervals.

Additional important properties of this method are: i) the cost of computing an RBF interpolant is dimension-independent, and ii) the size of the representation does not grow substantially as the dimension increases. This is in direct contrast to piecewise polynomial splines (e.g., [Ter83]) and local methods like polynomial regression (e.g., [MWL+99]) and the push/pull algorithm of Gortler et al. [GGSC96]. These other methods require either a triangulation of the domain or a tabulation of function values, both of which become computationally prohibitive in high dimensions. (Jaroszkiewicz and McCool [JM03] handle this by approximating the high-dimensional SBRDF by a product of 2D functions, each of which is triangulated independently.)

4.1. Radial Basis Functions

To briefly review RBF interpolation (see, e.g., [Pow92, Buh03]), consider a general function g(~x), ~x ∈ R^d, from which we have N samples gi at sample points ~xi. This function is approximated as a sum of a low-order polynomial and a set of scaled, radially symmetric basis functions centered at the sample points:

g(~x) ≈ ĝ(~x) = p(~x) + ∑_{i=1}^{N} λi ψ(‖~x − ~xi‖),   (2)

where p(~x) is a polynomial of order n or less, ψ : R+ → R is a continuous function, and ‖·‖ is the Euclidean norm. The sample points ~xi are referred to as centers, and the RBF interpolant ĝ satisfies the interpolation conditions ĝ(~xi) = g(~xi).

Given a choice of n, an RBF ψ, and a basis for the polynomials of order n or less, the coefficients of the interpolant are determined as the solution of the linear system

[ Ψ    P ] [ ~λ ]   [ ~g ]
[ P^T  0 ] [ ~c ] = [ 0  ],   (3)

where Ψij = ψ(‖~xi − ~xj‖), ~λi = λi, ~gi = gi, Pij = pj(~xi) where pj are the polynomial basis functions, and ~ci = ci are the coefficients in this basis of the polynomial term in ĝ. This system is invertible (and the RBF interpolant is uniquely determined) in arbitrary dimensions for many choices of ψ, with only mild conditions on n and the locations of the data points [Duc77, Mic86].
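As a concrete sketch (our own code, not the authors'), the system of Eq. (3) with the linear kernel ψ(r) = r and a degree-1 polynomial term can be assembled and solved directly with a dense solver:

```python
import numpy as np

def fit_rbf(X, g, kernel=lambda r: r):
    """Solve Eq. (3) for the RBF weights lambda and polynomial
    coefficients c, returning the interpolant g_hat.
    X: (N, d) array of centers; g: (N,) sampled values."""
    N, d = X.shape
    # Psi_ij = psi(||x_i - x_j||)
    Psi = kernel(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    # Degree-1 polynomial basis [1, x_1, ..., x_d] at each center.
    P = np.hstack([np.ones((N, 1)), X])
    A = np.block([[Psi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([g, np.zeros(d + 1)])
    sol = np.linalg.solve(A, rhs)
    lam, c = sol[:N], sol[N:]

    def g_hat(x):
        r = np.linalg.norm(X - x, axis=-1)
        return c[0] + x @ c[1:] + kernel(r) @ lam
    return g_hat
```

By construction the interpolant reproduces the samples exactly, ĝ(~xi) = gi, and the side conditions P^T ~λ = 0 make the solution unique.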

In many cases we can benefit from using radially asymmetric basis functions (which are stretched in certain directions), and here we use them to manage the irregularity in our sampling pattern. (Recall from Fig. 1 that the (u, v) dimensions are sampled almost continuously, while we have only as many samples of w as we have images.) Following Dinh et al. [DTS01], an asymmetric radial function is created by scaling the Euclidean distance in Eq. (2), so that the basis functions become

ψ(‖M(~x−~xi)‖), (4)

where M ∈ R^{d×d}. In our case we choose M = diag(1, 1, mw). For mw < 1, the basis functions are elongated in the w dimension, which is appropriate since our sampling rate is much lower in that dimension. The appropriate value of this parameter depends on the angular density of the input images, and empirically we have found that typical values for mw are between 0.1 and 0.5.
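In code (our notation), the anisotropic distance of Eq. (4) simply rescales the w coordinate before measuring Euclidean distance:

```python
import numpy as np

m_w = 0.25                     # within the reported 0.1-0.5 range
M = np.diag([1.0, 1.0, m_w])   # M = diag(1, 1, m_w) from the text

def scaled_distance(x, xi):
    """||M (x - x_i)||: separation along w counts for only m_w as much,
    elongating each basis function along the sparsely sampled w axis."""
    return np.linalg.norm(M @ (x - xi))
```

Two samples that share (u, v) but differ in w by 0.4 are treated as if they were only 0.1 apart, so the interpolant smooths aggressively across images while remaining sharp in (θh, φd).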

When the number of samples is large (i.e., N > 10,000), solving Eq. (3) requires care and can be difficult (or impossible) using direct methods. This limitation has been addressed quite recently: iterative fitting methods [BP95] and fast multipole methods (FMMs) for efficient evaluation [BN92] have been developed for many choices of ψ in many dimensions. In some cases, solutions for systems with over half a million centers have been reported [CBC+01]. The next sections include investigations of the number of RBF centers required to accurately represent image-based reflectance data, and we find this number to be sufficiently small to allow the use of direct methods.

5. Homogeneous Surfaces

In this section, we apply RBF interpolation to homogeneous surfaces, where we seek to estimate a global BRDF that is not spatially-varying. The resulting BRDF representation may be useful for interpolating image-based BRDF data (e.g., [LKK98, MWL+99, MPBM03]).

As discussed in Sect. 3, in the case of homogeneous BRDF data, reflectance is a function of three dimensions, (u, v, w). In R^3, a good choice for ψ is the linear (or biharmonic) RBF, ψ(r) = r, with n = 1, since in this case the interpolant from Eq. (3) exists for any non-coplanar data, is unique, minimizes a generalization of the thin-plate energy, and is therefore the smoothest in some sense [Duc77, CBC+01]. The BRDF is expressed as

f(~θ) = c1 + c2 u + c3 v + c4 w + ∑_{i=1}^{N} λi ‖~θ − ~θi‖,   (5)

where ~θi = (ui, vi, wi) represents a BRDF sample point from the input images, and ~λ and ~c are found by solving Eq. (3).

As a practical consideration, since each pixel represents a sample point ~θi, even with modest image resolution, using all available samples as RBF centers is computationally prohibitive. Much of this data is redundant, however, and an accurate BRDF representation can be achieved using only a small fraction of these centers. A sufficient subset of centers could be chosen using knowledge of typical reflectance phenomena. (To represent sharp specular peaks, for example, RBF centers are generally required near θh = 0.) Alternatively, Carr et al. [CBC+01] present an effective greedy algorithm for choosing this subset without assuming prior knowledge, and a slightly modified version of the same algorithm is applied here. The procedure begins by randomly selecting a small subset of the sample points ~θi and fitting an RBF interpolant to these. Next, this interpolant is evaluated at all sample points and used to compute the radiance residuals, εi = (fi − f(~θi)) cos θi, where θi is the angle between the surface normal at the sample point and the illumination direction. Finally, points where εi is large are appended as additional RBF centers, and the process is repeated until the desired fitting accuracy is achieved.
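The greedy loop can be sketched as follows. This driver is our own illustrative scaffolding (names, threshold, and batch sizes are placeholders), into which any interpolation routine `fit(points, values)` can be plugged:

```python
import numpy as np

def greedy_centers(points, values, cos_i, fit, tol=1e-2, k0=50, k_add=25):
    """Greedy center selection in the spirit of Carr et al. [CBC+01]:
    fit to a small random subset, compute the cosine-weighted radiance
    residuals eps_i = |f_i - f_hat(theta_i)| * cos_i at all samples,
    and append the worst-fit points until the residuals fall below tol."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(points), size=k0, replace=False)
    while True:
        f_hat = fit(points[idx], values[idx])
        resid = np.abs(values - np.array([f_hat(p) for p in points])) * cos_i
        if resid.max() < tol:
            return idx, f_hat
        chosen = set(idx.tolist())
        new = [i for i in np.argsort(resid)[::-1] if i not in chosen][:k_add]
        if not new:               # every sample is already a center
            return idx, f_hat
        idx = np.concatenate([idx, new])
```

The loop terminates because each round either meets the tolerance or adds previously unused centers; in the worst case all samples become centers and the residual drops to zero.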

It should be noted that an algorithmic choice of center locations could increase the efficiency of the resulting representation, since center locations would not necessarily need to be stored for each material. This would require assumptions about the function being approximated, however, and here we choose to emphasize generality over efficiency by using the greedy algorithm.

5.1. Evaluation

To evaluate the BRDF representation in Eq. (5), we perform an experiment in which we compare it to both parametric BRDF models and to a non-linear basis (the isotropic Lafortune model [LFTG97]). The models are fit to synthetic images of a sphere, and their accuracy is measured by their ability to predict the appearance of the sphere under novel conditions. (Many other representations, such as wavelets and the Matusik bases, are excluded from this comparison because they require dense, uniform samples.)

The input images simulate data from image-based BRDF measurement systems such as those described in Refs. [LKK98, MWL+99, MPBM03]. They are orthographic, directional-illumination images with a resolution of 100 × 100, and are generated such that θd is uniformly distributed in [0, π/2]. The accuracy of the recovered models is measured by the relative RMS radiance error over 21 images, also uniformly spaced in θd, that are not used as input. For these simulations, we use both specular and diffuse reflectance, one drawn from measured data (the metallic-blue BRDF, courtesy of Matusik et al. [MPBM03]), and the other generated using the physics-based Oren-Nayar model [ON94].

Figure 2: Accuracy of the RBF representation as the number of centers is increased using a greedy algorithm. The input is 10 images of a sphere synthesized using the metallic-blue BRDF measured by Matusik et al. This is compared to the isotropic Lafortune representation with an increasing number of lobes. Less than 1000 centers are sufficient to represent the available reflectance information using RBFs, whereas the limited flexibility of the Lafortune basis and the existence of local minima in the non-linear fitting process limit the accuracy of the Lafortune representation. (See text for details.)

Figure 2 shows the accuracy of increasingly complex RBF and Lafortune representations fit to ten input images. The complexity of the RBF representation is measured by the number of centers selected by the greedy algorithm, and that of the Lafortune model is measured by the number of generalized cosine lobes. An unusually large number of lobes is shown (two or three lobes is typical) so that the resulting Lafortune and RBF representations have comparable degrees of freedom. It is important to note, however, that the size of each representation is different for equivalent complexities; an N-lobe isotropic Lafortune model requires 3N + 1 parameters, while an N-center RBF interpolant requires 4N + 4.

Since the basis functions of the Lafortune model are designed for representing BRDFs (and are therefore embedded with knowledge of general reflectance behavior), they provide a reasonably good fit with a small number of lobes. For example, a 6-lobe Lafortune model (19 parameters) yields the same RMS error as a 300-center RBF model (1204 parameters). In addition to being compact, the Lafortune model has the advantage of being more suitable for direct rendering [MLH02]. But the accuracy of this representation is fundamentally limited; the lack of flexibility and the existence of local minima in the required non-linear fitting process prevent the Lafortune model from accurately representing the reflectance information available in the input images.

In contrast, RBFs provide a general linear basis, and given a sufficient number of centers, they can represent any ‘smooth’ function with arbitrary accuracy (see, e.g., [Buh03]). In this example, the RBF representation converges with fewer than 1000 centers, suggesting that only a small fraction of the available centers are required to summarize the reflectance information in the ten input images.

© The Eurographics Association 2005.

Figure 3: Top: Error in the estimated BRDF for an increasing number of images of a metallic-blue sphere. (Plot: relative RMS radiance error vs. number of input images, 2–12, for the RBF, Ward, and Lafortune representations.) As the number of images increases, the RBF representation (with 1000 centers) approaches the true BRDF, whereas the isotropic Ward model [War92] and the Lafortune representation are too restrictive to provide an accurate fit. Bottom: Synthesized images (ACTUAL, RBF, LAFORTUNE, WARD) using the three BRDF representations estimated from 12 input images. The angle between the source and view directions is 140°.

Similar conclusions can be drawn from a second experiment in which we investigate the accuracy of these (and other) representations with a fixed level of complexity and an increasing number of input images. The results are shown in Figs. 3 and 4 for predominantly specular and diffuse reflectance. (Here, six lobes are used in the Lafortune representation since the results do not change significantly with additional lobes.) Since RMS error is often not an accurate perceptual metric, these figures also include synthetic spheres rendered with the recovered models. This experiment demonstrates the flexibility of the RBF representation, which captures both the Fresnel reflection in Fig. 3 and the retro-reflection in Fig. 4. Parametric models do not typically afford this flexibility—while it may be possible to find a parametric model that fits a specific BRDF quite well, it is very difficult to find a model that accurately fits general BRDFs.

6. Inhomogeneous Surfaces

The previous section suggests that RBFs can provide a useful representation for homogeneous BRDFs. In this section, we show that this same representation can be adapted to handle spatially-varying reflectance as well. In this case, it enables an exchange between spatial and angular resolution (a process we refer to as reflectance sharing), and it can drastically reduce the number of required input images. We begin by assuming that the 5D SBRDF varies smoothly in the spatial dimensions, and in the next section, we show how this can be generalized to handle rapid spatial variation in terms of a multiplicative albedo or texture.

Figure 4: Top: Error in the estimated BRDF for an increasing number of input images of a diffuse Oren-Nayar sphere. (Plot: relative RMS radiance error vs. number of input images, 2–12, for the RBF, Lambertian, and Lafortune representations.) Again, the 1000-center RBF representation approaches the true BRDF, whereas the Lafortune and Lambertian BRDF models are too restrictive to accurately represent the data. Bottom: Synthesized images (ACTUAL, RBF, LAFORTUNE, LAMBERTIAN) comparing the three BRDF representations estimated from 12 input images. The angle between the source and view directions is 10°.

In the homogeneous case, the BRDF is a function of three dimensions, and the linear RBF ψ(r) = r yields a unique interpolant that minimizes a generalization of the thin-plate energy. Although optimality cannot be proved, this RBF has been shown to be useful in higher dimensions as well, since it provides a unique interpolant in any dimension for any n [Pow92]. In the spatially-varying case, the SBRDF is a function of five dimensions, and we let ~q = (x,y,u,v,w) be a point in its domain. Using the linear RBF with n = 1, the SBRDF is given by

f(\vec{q}) = p(\vec{q}) + \sum_{i=1}^{N} \lambda_i \, \|\vec{q} - \vec{q}_i\|, \qquad (6)

where p(\vec{q}) = c_1 + c_2 x + c_3 y + c_4 u + c_5 v + c_6 w.
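As a concrete illustration, Eq. (6) can be evaluated at a batch of 5D SBRDF coordinates with a few lines of NumPy. This is a sketch with arbitrary, made-up coefficients, not the paper's fitted values:

```python
import numpy as np

def eval_sbrdf(q, centers, weights, poly):
    """Evaluate the linear-RBF SBRDF interpolant of Eq. (6).

    q       : (M, 5) query points (x, y, u, v, w)
    centers : (N, 5) RBF centers q_i
    weights : (N,)   RBF weights lambda_i
    poly    : (6,)   polynomial coefficients c_1..c_6
    """
    # Degree-1 polynomial term p(q) = c1 + c2*x + c3*y + c4*u + c5*v + c6*w.
    p = poly[0] + q @ poly[1:]
    # Linear RBF psi(r) = r: pairwise Euclidean distances to the centers.
    r = np.linalg.norm(q[:, None, :] - centers[None, :, :], axis=2)
    return p + r @ weights

# Tiny example with hypothetical parameters.
rng = np.random.default_rng(0)
centers = rng.random((100, 5))
weights = 0.01 * rng.standard_normal(100)
poly = np.array([0.2, 0.0, 0.0, 0.1, 0.1, 0.3])
vals = eval_sbrdf(rng.random((8, 5)), centers, weights, poly)
print(vals.shape)  # (8,)
```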

We can use any parameterization of the surface ~s, and there has been significant recent work on determining good parameterizations for general surfaces (e.g., [LSS+98, GGH02]). The ideal surface parameterization is one that preserves distance, meaning that ‖~x1 − ~x2‖ is equivalent to the geodesic distance between ~s(~x1) and ~s(~x2). For simplicity, here we treat the surface as the graph of a function, so that ~s(x,y) = (x, y, s(x,y)), (x,y) ∈ [0,1] × [0,1].

The procedure for recovering the parameters in Eq. (6) is almost exactly the same as in the homogeneous case. The coefficients of f are found by solving Eq. (3) using a subset of the SBRDF samples from the input images, and this subset is chosen using a greedy algorithm. Radially asymmetric basis functions are realized using M = diag(m_xy, m_xy, 1, 1, m_w), where m_xy controls the exchange between spatial and angular reflectance information. When m_xy ≪ 1, the basis functions are elongated in the spatial dimensions, and the recovered reflectance function approaches a single BRDF (i.e., a homogeneous representation) with rapid angular variation.
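A minimal sketch of this fitting step follows, assuming the standard augmented RBF interpolation system with a degree-1 polynomial tail and an anisotropic metric M; the greedy center selection is omitted, and all data here are toy values:

```python
import numpy as np

def fit_rbf(samples, values, M_diag):
    """Solve for RBF weights and polynomial coefficients so that the
    interpolant of Eq. (6) reproduces `values` at `samples`, using the
    anisotropic distance ||M (q - q_i)|| with M = diag(M_diag)."""
    n, d = samples.shape
    s = samples * M_diag            # apply the diagonal metric once
    A = np.linalg.norm(s[:, None] - s[None, :], axis=2)  # psi(r) = r
    P = np.hstack([np.ones((n, 1)), samples])            # [1, x, y, u, v, w]
    # Standard augmented system: [A P; P^T 0] [lambda; c] = [f; 0].
    top = np.hstack([A, P])
    bot = np.hstack([P.T, np.zeros((d + 1, d + 1))])
    sol = np.linalg.solve(np.vstack([top, bot]),
                          np.concatenate([values, np.zeros(d + 1)]))
    return sol[:n], sol[n:]         # weights lambda_i, coefficients c

# Toy data: the interpolant must reproduce the samples exactly.
rng = np.random.default_rng(1)
q = rng.random((40, 5))
f = rng.random(40)
M_diag = np.array([0.3, 0.3, 1.0, 1.0, 0.5])   # (m_xy, m_xy, 1, 1, m_w)
lam, c = fit_rbf(q, f, M_diag)
dists = np.linalg.norm((q * M_diag)[:, None] - (q * M_diag)[None, :], axis=2)
pred = dists @ lam + c[0] + q @ c[1:]
print(np.max(np.abs(pred - f)) < 1e-6)  # True: exact interpolation
```

The degree-1 polynomial and its side conditions are what make the system with the conditionally positive definite linear kernel solvable.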


Figure 5: Top: Accuracy of the SBRDF recovered by reflectance sharing using the RBF representation in Eq. (6) as the number of centers is increased using a greedy algorithm. (Plot: relative RMS radiance error vs. number of centers, 0–2000.) The input is 10 synthetic images of a hemisphere (five of which are shown) with linearly varying roughness.

When m_xy ≫ 1, we recover a near-Lambertian representation in which the BRDF at each point approaches a constant function of ~θ. Appropriate values of m_xy depend on the choice of surface parameterization, and we found typical values to be between 0.2 and 0.4 for the examples in this paper.

6.1. Evaluation

The SBRDF representation of Eq. (6) can be evaluated using experiments similar to those for the homogeneous case. Here, spatial variation is simulated using images of a hemisphere with a Cook-Torrance BRDF [CT81] with a linearly varying roughness parameter. Five images of the hemisphere are shown in Fig. 5, and they demonstrate how the highlight sharpens from left to right across the surface.

The graph in Fig. 5 shows the accuracy of the recovered SBRDF as a function of the number of RBF centers when it is fit to images of the hemisphere under ten uniformly distributed illumination directions. The error is computed over 40 images that are not used as input. Fewer than 2000 centers are needed to accurately represent the spatial reflectance information available in the input images. This is a reasonably compact representation, requiring roughly 12,000 parameters. For comparison, an SBRDF representation for a 10,000-vertex surface consisting of two unique Lafortune lobes at each vertex is roughly five times as large.
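The sizes quoted above can be checked with back-of-the-envelope arithmetic (a sketch; the parameter layouts follow the counts given in the text):

```python
# 2000-center RBF SBRDF: one weight per center, one 5D center point
# per center, plus 6 polynomial coefficients (Eq. 6).
n_centers = 2000
rbf_size = n_centers + 5 * n_centers + 6
print(rbf_size)  # 12006, i.e. roughly 12,000 parameters

# Per-vertex Lafortune SBRDF: two isotropic lobes (3 parameters each)
# plus one constant term at each of 10,000 vertices.
lafortune_size = 10_000 * (2 * 3 + 1)
print(lafortune_size)                       # 70000
print(round(lafortune_size / rbf_size, 1))  # 5.8, i.e. roughly five times
```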

Figure 6 contrasts reflectance sharing with conventional methods that interpolate only in the angular dimensions, estimating a separate BRDF at each point. This ‘no sharing’ technique is used by Matusik et al. [MPBM02], and is similar in spirit to Wood et al. [WAA+00], who also estimate a unique view-dependent function at each point. (In this discussion, angular interpolation in the BRDF domain is assumed to require known geometry, which is different from lighting interpolation (e.g., [DHT+00]) that does not. For the particular example in Fig. 6, however, the ‘no sharing’ result can be obtained without geometry, since it is a special case of fixed viewpoint.)

Figure 6: Estimating the spatially-varying reflectance function from a sparse set of images. Top frame: five images of a hemisphere under illumination conditions not used as input. Middle frame: appearance predicted by reflectance sharing with two and five input images. (The input is shown in Fig. 5; the two left-most images are used for the two-image case.) Bottom frame: appearance predicted by interpolating only in the angular dimensions with 5, 50 and 150 input images. At least 150 images are required to obtain a result comparable to the five-image reflectance sharing result.

For both the reflectance sharing and ‘no sharing’ cases, the SBRDF is estimated from images with fixed viewpoint and uniformly distributed illumination directions such as those in Fig. 5, and it is used to predict the appearance of the surface under novel lighting. The top frame of Fig. 6 shows the actual appearance of the hemisphere under five novel conditions, and the lower frames show the reflectance sharing and ‘no sharing’ results obtained from increasing numbers of input images. Note that many other methods—most notably that of Lensch et al. [LKG+01]—are excluded from this comparison because they require the selection of a specific parametric model and therefore suffer from the limitations discussed in Sect. 5.

In this example, reflectance sharing reduces the number of required input images by more than an order of magnitude. Five images are required for good visual results using the RBF representation, whereas at least 150 are needed if one does not exploit spatial coherence. Figure 6 also shows how differently the two approaches degrade with sparse input. Reflectance sharing provides a smooth SBRDF whose accuracy gradually decreases away from the convex hull of input samples. (For example, the sharp specularity on the right side of the surface is not accurately recovered when only two input images are used.) In contrast, when interpolating only in the angular dimensions, a small number of images provides only a small number of reflectance samples at each point; and as a result, severe aliasing or ‘ghosting’ occurs when the surface is illuminated by high-frequency environments like the directional illumination shown here.

Figure 7: Actual and predicted appearance of the hemisphere under fixed illumination and changing view. Given five input images from a single view (bottom of Fig. 5), the reflectance sharing method recovers a full SBRDF, including view-dependent effects.

Even when the input images are captured from a single viewpoint, our method recovers a full SBRDF, and as shown in Fig. 7, view-dependent effects can be predicted. This is made possible by spatial sharing (since each surface point is observed from a unique view in its local coordinate frame) and by reciprocity (since we effectively have observations in which the view and light directions are exchanged).

7. Generalized Spatial Variation

This section considers generalizations of the radial basis function SBRDF model by softening the requirement for spatial smoothness, and applies the model to image-based rendering of a human face.

Rapid spatial variation can be handled using a multiplicative albedo or texture by writing the SBRDF as

f(\vec{x}, \vec{\theta}) = a(\vec{x}) \, d(\vec{x}, \vec{\theta}),

where a(~x) is an albedo map for the surface and d(~x,~θ) is a smooth function of five dimensions. As an example, consider the human face in Fig. 8(a). The function a(~x) accounts for rapid spatial variation due to pigment changes, while d(~x,~θ) models the smooth spatial variation that occurs as we transition from a region where skin hangs loosely (e.g., the cheek) to where it is taut (e.g., the nose).

In some cases, it is advantageous to express the SBRDF as a linear combination of 5D functions. For example, Sato et al. [SWI97] and many others use the dichromatic model of reflectance [Sha85] in which the BRDF is written as the sum of an RGB diffuse component and a scalar specular component that multiplies the source color. We employ the dichromatic model here, and compute the emitted radiance using

I_k(\vec{x}, \vec{\theta}) = s_k \left( a_k(\vec{x}) \, d_k(\vec{x}, \vec{\theta}) + g(\vec{x}, \vec{\theta}) \right) \cos\theta_i, \qquad (7)

Figure 8: (a,b) Specular and diffuse components of a single input image. (c) Geometry used for SBRDF recovery and rendering.

where ~s = {s_k}, k = R, G, B, is an RGB unit vector that describes the color of the light source. In Eq. (7), a single function g is used to model the specular reflectance component, while each color channel of the diffuse component is modeled separately. This is significantly more general than the usual assumption of a Lambertian diffuse component, and it can account for changes in diffuse color as a function of ~θ, such as the desaturation of the diffuse component of skin at large grazing angles witnessed by Debevec et al. [DHT+00].
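A minimal sketch of evaluating Eq. (7) per surface point is shown below; the callables `d_k` and `g` are hypothetical stand-ins for the recovered RBF interpolants:

```python
import numpy as np

def radiance(x, theta, source_rgb, albedo, d_k, g, cos_theta_i):
    """Dichromatic shading of Eq. (7) for one surface point.

    source_rgb : RGB unit vector s describing the source color
    albedo     : RGB albedo a(x)
    d_k        : callable returning the RGB normalized diffuse SBRDF
    g          : callable returning the scalar specular SBRDF
    """
    diffuse = albedo * d_k(x, theta)   # per-channel diffuse term
    specular = g(x, theta)             # one specular term for all channels
    return source_rgb * (diffuse + specular) * cos_theta_i

# Toy stand-ins for the recovered interpolants.
d_k = lambda x, t: np.array([0.8, 0.6, 0.5])
g = lambda x, t: 0.2
rgb = radiance(None, None, np.array([0.6, 0.6, 0.52]),
               np.array([0.9, 0.7, 0.6]), d_k, g, 0.8)
print(rgb.round(3))  # [0.442 0.298 0.208]
```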

Finally, although not used in our examples, more general spatial variation can be modeled by dividing the surface into a finite number of regions, where each region has spatial reflectance as described above. This technique is used, for example, in Refs. [LKG+01, JM03].

7.1. Data Acquisition and SBRDF Recovery

For real surfaces, we require geometry and a set of images taken from known viewpoint and directional illumination. In addition, in order to estimate the separate diffuse and specular reflection components in Eq. (7), the input images must be similarly decomposed. Specular/diffuse separation can be performed in a number of ways (e.g., [SWI97, NFB97]), one of which is by placing linear polarizers on both the camera and light source and exploiting the fact that the specular component preserves the linear polarization of the incident radiance. Two exposures are captured for each view/lighting configuration, one with the polarizers aligned (to observe the sum of specular and diffuse components), and one with the source polarizer rotated by 90° (to observe the diffuse component only). The specular component is then given by the difference between these two exposures. (See, e.g., [DHT+00].)
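The separation step reduces to per-pixel arithmetic on the two exposures. A sketch (the clipping step is an added precaution against sensor noise pushing the difference slightly negative):

```python
import numpy as np

def separate(parallel_img, cross_img):
    """Split two polarized exposures into specular and diffuse components.

    parallel_img : exposure with camera and source polarizers aligned
                   (specular + diffuse)
    cross_img    : exposure with the source polarizer rotated 90 degrees
                   (diffuse only)
    """
    diffuse = cross_img
    # Specular component = difference of the two exposures.
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    return specular, diffuse

# Toy 2x2 grayscale example.
par = np.array([[0.9, 0.5], [0.4, 0.3]])
crs = np.array([[0.6, 0.5], [0.4, 0.1]])
spec, diff = separate(par, crs)
print(spec)
```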

Geometry can also be recovered in a number of different ways, and one possibility is photometric stereo, since it provides the precise surface normals required for reflectometry. Figure 8 shows an example of a decomposed image along with the corresponding geometry, which is recovered using a variant of photometric stereo.

Given the geometry and a set of decomposed images, the representation in Eq. (7) can be fit as follows. First, the effects of shadows and shading are computed, shadowed pixels are discarded, and shading effects are removed by dividing by cos θ_i. The RGB albedo a(~x) in Eq. (7) is estimated as the median of the diffuse samples at each surface point, and normalized diffuse reflectance samples are computed by dividing by a(~x). The resulting normalized diffuse samples are used to estimate the three functions d_k(~x,~θ) in Eq. (7) using the RBF techniques described in Sect. 4.1. The samples from the specular images are similarly used to compute g.
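The preprocessing just described can be sketched per surface point as follows (hypothetical array shapes, one row per observation of the point; not the paper's implementation):

```python
import numpy as np

def preprocess_point(diffuse_rgb, cos_theta_i, shadowed):
    """Prepare one surface point's diffuse samples for RBF fitting.

    diffuse_rgb : (M, 3) diffuse samples of the point across images
    cos_theta_i : (M,)   incident cosines, for shading removal
    shadowed    : (M,)   boolean mask of shadowed observations
    """
    keep = ~shadowed                     # discard shadowed pixels
    # Remove shading: divide each sample by cos(theta_i).
    samples = diffuse_rgb[keep] / cos_theta_i[keep, None]
    # Albedo a(x): per-channel median of the unshaded samples.
    albedo = np.median(samples, axis=0)
    # Normalized samples feed the fit of d_k(x, theta) in Eq. (7).
    normalized = samples / albedo
    return albedo, normalized

obs = np.array([[0.40, 0.30, 0.20],
                [0.45, 0.33, 0.22],
                [0.05, 0.04, 0.03]])     # last row is in shadow
cosines = np.array([0.8, 0.9, 0.1])
albedo, norm = preprocess_point(obs, cosines, np.array([0, 0, 1], bool))
print(albedo)
```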

7.2. Rendering

In order to synthesize images under arbitrary view and illumination, the SBRDF coordinates ~q at each surface point are determined by the spatial coordinates ~x, the surface normal, and the view and lighting directions in the local coordinate frame. The radiance emitted from that point toward the camera is then given by Eq. (7). Because Eq. (7) involves sums over a large number of RBF centers for each pixel, image synthesis can be slow. This process can be accelerated, however, using programmable graphics hardware and precomputation.

Hardware Rendering Using the GPU. Equation (7) is well suited to implementation in graphics hardware because the same calculations are done at each pixel. For example, a vertex program can compute each ~q, and these can be interpolated as texture coordinates for each pixel. A fragment program can then perform the computation in Eq. (6), which is simply a sequence of distance calculations. Implementing the sum in Eq. (7) is straightforward, since it is simply a modulation by the albedo map and source color.

For the results in this section, we take this approach and use one rendering pass for each RBF center, accumulating their contributions. On a GeForce FX 5900, rendering a 512×512 image with 2000 centers (and 2000 rendering passes) is reasonably efficient, taking approximately 30 seconds. Further optimizations are possible, such as considering the contributions of multiple RBF centers in each pass.

Real-Time Rendering with Precomputed Images. To enable real-time rendering with complex illumination, images synthesized using the RBF representation can be used as input to methods based on precomputed images. One example is the double-PCA image relighting technique of Nayar et al. [NBB04]. This algorithm is conceptually similar to clustered PCA methods like those in Refs. [CBCG02, SHHS03], and allows real-time relighting of specular objects with complex illumination and shadows (obtained in the input images using shadow mapping in graphics hardware).

We emphasize that despite these gains in efficiency, the RBF representation does not compete with parametric representations for rendering purposes. Instead, it should be viewed as a useful intermediate representation between acquisition and rendering.

7.3. Results

As a demonstration, the representation of Eq. (7) was used to model a human face, which exhibits diffuse texture in addition to smooth spatial variation in its specular component.

Figure 9: Actual and synthetic images for a novel illumination direction. The image on the right was rendered using the reflectance representation in Eq. (7) fit to four input images. Left inset shows a polar plot of the input (+) and output (◦) lighting directions, with concentric circles representing angular distances of 10° and 20° from the viewing direction.

The spatially-varying reflectance function estimated from four camera/source configurations is shown in Figs. 9–11. For these results, two polarized exposures were captured in each configuration, and the viewpoint remained fixed throughout. For simplicity, the subject's eyes remained closed. (Accurately representing the spatial discontinuity at the boundary of the eyeball would require the surface to be segmented as mentioned in Sect. 6.) The average angular separation of the light directions is 21°, spanning a large area of frontal illumination. (See Fig. 9.) This angular sampling rate is considerably less dense than in previous work; approximately 150 source directions would be required to cover the sphere at this rate, compared to over 2000 source directions used by Debevec et al. [DHT+00].

In the recovered SBRDF, 2000 centers were used for each diffuse color channel, and 5000 centers were used for the specular component. (Figure 11 shows scatter plots of the specular component as the number of RBF centers is increased.) Each diffuse channel requires the storage of 2006 coefficients—the weights for 2000 centers and six polynomial coefficients—and 2000 sample points ~q_i, each with five components. This could be reduced, for example, by using the same center locations for all three color channels. The specular component requires 5006 coefficients and 5000 centers, so the total size is 66,024 single-precision floating-point numbers, or 258 kB. This is a very compact representation of both view and lighting effects.
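These storage figures follow directly from the counts above (a simple tally, assuming 4 bytes per single-precision float):

```python
def channel_size(n_centers, dim=5):
    # Weights plus polynomial coefficients (1 + dim of them), plus the
    # stored center coordinates themselves (n_centers * dim values).
    return (n_centers + 1 + dim) + n_centers * dim

# Three diffuse channels at 2000 centers, one specular at 5000.
total_floats = 3 * channel_size(2000) + channel_size(5000)
print(total_floats)              # 66024
print(total_floats * 4 / 1024)   # 257.90625, i.e. roughly 258 kB
```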

Figure 9 shows real and synthetic images of the surface under novel lighting conditions, and shows how a smooth SBRDF is recovered despite the extreme sparsity of the input images. Most importantly, the disturbing ghosting effects observed in the ‘no sharing’ results of Fig. 6 are avoided. (The accompanying video includes animations of view and lighting variation.) Figure 10 shows that the recovered SBRDF is indeed spatially-varying. The graph on the right of this figure is a (θh, φd) scatter plot of the specular SBRDF on the tip of the nose (in transparent blue) and on the cheek (in red), and it shows that the recovered specular lobe on the nose is substantially larger.

Figure 10: Spatial variation in the estimated specular reflectance function. Left: synthesized specular component used to generate the image in the right of Fig. 9. Right: magnitude of the estimated specular SBRDF at two surface points. Plots are the SBRDF as a function of (θh, φd) for θd = 5°, with red and transparent-blue plots representing the indicated points on the cheek and nose. (Large values of θh near the origin are outside the convex hull of input samples and are not displayed.) For comparison, the inset shows a Cook-Torrance lobe fit to the reflectance of the nose.

While this synthetic result is plausible, careful examination of Fig. 9 reveals that it deviates from the actual image (the relative RMS difference is 9.5%). For example, the spatial discontinuity in the specular component at the boundary of the lips is smoothed over due to the assumption of smooth spatial variation; and more generally, with such a limited number of input samples, the representation is sensitive to noise caused by extreme interreflection and subsurface scattering, motion of the subject during acquisition, calibration errors in the source positions and relative strengths, and errors in the geometry. The accuracy could be improved, for example, by using a high-speed acquisition system such as that of Debevec et al. [DHT+00], and by identifying spatial discontinuities (perhaps using clustering techniques or by using diffuse color as a cue for segmentation).

We emphasize, however, that only four input images are used, and it would be difficult to improve the results without further assumptions. Even a parametric method like that of Lensch et al. [LKG+01] may not perform well in this case, since little more than a Lambertian albedo value could be fit reliably from the four (or fewer) reflectance samples available at each point.

Finally, Fig. 12 shows synthetic images with a novel viewpoint, again demonstrating that a full SBRDF is recovered despite the fact that only one viewpoint is used as input.

7.4. A Special Case: One Input Image

Due to its dimension-independence, the RBF representation can also be adapted to the extreme case when only one input image is available. In one (orthographic, directional illumination) image, all reflectance samples lie on a hyperplane of constant w, reducing the dimension of the SBRDF by one.

Figure 11: Estimated SBRDF on the cheek (red) and nose (blue) as the number of RBF centers is increased using the greedy algorithm (panels: 3500, 4500, and 5000 centers). The 5000-center plots are the same as those on the right of Fig. 10.

Thus, we can use a simplified SBRDF representation, computing the surface radiance according to

I_k(\vec{q}) = s_k \left( a_k(\vec{x}) + \sum_{i=1}^{N} \lambda_i \, \|\vec{q} - \vec{q}_i\| \right) \cos\theta_i, \qquad (8)

where \vec{q} = (x, y, u, v).

In this case, the diffuse component is modeled as Lambertian, and the albedo a(~x) is estimated directly from the reflectance samples in the diffuse component of the input image (after shading and shadows are removed). The specular component is estimated from the specular reflectance samples using the same fitting procedure as the multi-image case. Figure 13 shows an example of a 2000-center SBRDF under natural lighting from environment maps. These were rendered using precomputation [NBB04] as discussed in Sect. 7.2, and the accompanying video demonstrates real-time manipulation of complex lighting.
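A sketch of the single-image evaluation of Eq. (8) for one surface point follows; the centers and weights are toy values standing in for the result of the specular fit:

```python
import numpy as np

def radiance_single_image(q, albedo, centers, weights,
                          source_rgb, cos_theta_i):
    """Surface radiance for the one-image model of Eq. (8).

    q       : (4,)   SBRDF coordinates (x, y, u, v)
    albedo  : (3,)   Lambertian RGB albedo a(x) at this point
    centers : (N, 4) RBF centers q_i from the specular fit
    weights : (N,)   RBF weights lambda_i
    """
    # Specular term: weighted sum of distances to the 4D centers.
    specular = weights @ np.linalg.norm(centers - q, axis=1)
    return source_rgb * (albedo + specular) * cos_theta_i

# Hypothetical values for a single pixel.
rng = np.random.default_rng(2)
rgb = radiance_single_image(rng.random(4),
                            np.array([0.7, 0.5, 0.4]),
                            rng.random((50, 4)),
                            0.02 * rng.standard_normal(50),
                            np.ones(3), 0.9)
print(rgb.shape)  # (3,)
```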

Since a single image is used as input, only a 2D subset of the angular variation is recovered, and Fresnel effects are ignored. (As done by Debevec et al. [DHT+00], this representation could be enhanced to approximate Fresnel effects by using a data-driven microfacet model with an assumed index of refraction.) Also, by using a complete environment map, we necessarily extrapolate the reflectance function beyond the convex hull of input samples, where it is known to be less accurate. Despite these limitations, the method obtains reasonable results, and they would be difficult to improve without assuming a specific parametric BRDF model (as in, e.g., Ref. [BG01]).

8. Conclusions and Future Work

This paper presents a method for exploiting spatial coherence to estimate a non-parametric, spatially-varying reflectance function from a sparse set of images of known geometry. Reflectance estimation is framed as a scattered-data interpolation problem in a joint spatial/angular domain, an approach that allows the exchange of spatial resolution for an increase in angular resolution of the reflectance function.

This paper also presents a flexible representation of reflectance based on radial basis functions (RBFs), and shows how this representation can be adapted to handle: i) homogeneous BRDF data, ii) smooth spatially-varying reflectance from multiple images, iii) spatial variation with texture, and iv) a single input image. When using this representation, the recovered reflectance model degrades gracefully as the number of input images decreases.

Figure 12: Synthesized images for two novel viewpoints. Even though the input images are captured from a single viewpoint, a complete SBRDF is recovered, including view-dependent effects.

Figure 13: Images synthesized using the SBRDF representation in Eq. (8) estimated from the single (decomposed) image shown in Fig. 8. These were rendered in real-time using the methods discussed in Sect. 7.2.

The most immediate practical issue for future work involves computational efficiency. We have demonstrated that the RBF representation is a useful intermediate representation of spatially-varying reflectance, since it can be used in combination with current rendering techniques based on precomputed information. To improve this, it may be possible to develop real-time rendering techniques directly from the RBF representation. For example, fast multipole methods can be used to reduce the evaluation of Eq. (2) from O(N²) to O(N log N) [BN92]. This may be a viable alternative to using factored forms of BRDFs [MAA01, JM03] and may provide a practical approach for real-time rendering of surfaces with spatially-varying, non-parametric reflectance.

The increasing use of measured reflectance data in computer graphics requires efficient methods to acquire and represent such data. In this context, appropriate parameterizations, representations, and signal-processing techniques are likely to be crucial. This paper presents a step in this direction by providing a method for using sparse, image-based datasets to accurately recover reflectance.

References

[BG01] S. Boivin and A. Gagalowicz. Image-based rendering of diffuse, specular and glossy surfaces from a single image. In Proc. ACM SIGGRAPH, pp. 107–116, 2001.

[BN92] R.K. Beatson and G.N. Newsam. Fast evaluation of radial basis functions: I. Comput. Math. Appl., 24:7–19, 1992.

[BP95] R. Beatson, G. Goodsell, and M. Powell. On multigrid techniques for thin plate spline interpolation in two dimensions. Lect. Appl. Mathematics, 32:77–97, 1995.

[Buh03] M.D. Buhmann. Radial basis functions. Cambridge University Press, 2003.

[CBC+01] J. Carr, R. Beatson, J. Cherrie, T. Mitchell, W. Fright, B. McCallum, and T. Evans. Reconstruction and representation of 3D objects with radial basis functions. In Proc. ACM SIGGRAPH, pp. 67–76, 2001.

[CBCG02] W. Chen, J. Bouguet, M. Chu, and R. Grzeszczuk. Light field mapping: Efficient representation and hardware rendering of surface light fields. ACM Trans. Graphics (Proc. ACM SIGGRAPH), 21(3):447–456, 2002.

[CON99] B. Cabral, M. Olano, and P. Nemec. Reflection space image based rendering. In Proc. ACM SIGGRAPH, pp. 165–170, 1999.

[CT81] R. Cook and K. Torrance. A reflectance model for computer graphics. Computer Graphics (Proc. ACM SIGGRAPH), 15(3):307–316, 1981.

[DHT+00] P. Debevec, T. Hawkins, C. Tchou, H.P. Duiker, W. Sarokin, and M. Sagar. Acquiring the reflectance field of a human face. In Proc. ACM SIGGRAPH, pp. 145–156, 2000.

[DTS01] H. Dinh, G. Turk, and G. Slabaugh. Reconstructing surfaces using anisotropic basis functions. In Proc. IEEE Int. Conf. Computer Vision, pp. 606–613, 2001.

[Duc77] J. Duchon. Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Schempp and K. Zeller, eds., Constructive Theory of Functions of Several Variables, pp. 85–100. Springer-Verlag, 1977.

[Geo03] A.S. Georghiades. Recovering 3-D shape and reflectance from a small number of photographs. In Rendering Techniques 2003 (Proc. Eurographics Symposium on Rendering), pp. 230–240, 2003.

[GGH02] Xianfeng Gu, Steven Gortler, and HuguesHoppe. Geometry images. ACM Trans. Graphics(Proc. ACM SIGGRAPH), 21(3):355–361, 2002. 6

[GGSC96] S.J. Gortler, R. Grzeszczuk, R. Szeliski, andM.F. Cohen. The lumigraph. In Proc. ACM SIGGRAPH,pp. 43–54, 1996. 3, 4

[HS03] A. Hertzmann and S. Seitz. Shape and material byexample: A photometric stereo approach. In Proc. IEEEConf. Computer Vision and Pattern Recognition, 2003. 2

[JM03] R. Jaroszkiewicz and M. McCool. Fast extrac-tion of BRDFs and material maps from images. In Proc.Graphics Interface, pp. 1–10, 2003. 2, 4, 8, 11

[KMG96] K. Karner, H. Mayer, M. Gervautz. An imagebased measurement system for anisotropic reflectance.Computer Graphics Forum, 15(3):119–128, 1996. 2

[LFTG97] E. Lafortune, S. Foo, K. Torrance, D. Green-berg. Non-linear approximation of reflectance functions.In Proc. ACM SIGGRAPH, pp. 117–126, 1997. 5

[LH96] M. Levoy and P. Hanrahan. Light field rendering.In Proc. ACM SIGGRAPH, pp. 31–42, 1996. 3

[LKG+01] H. Lensch, J. Kautz, M. Goesele, W. Heidrich,and H.-P. Seidel. Image-based reconstruction of spatiallyvarying materials. In Rendering Techniques 2001 (Proc.Eurographics Rendering Workshop), 2001. 2, 7, 8, 10

[LKK98] R. Lu, J. Koenderink, A. Kappers. Optical prop-erties (bidirectional reflection distribution functions) ofvelvet. Applied Optics, 37:5974–5984, 1998. 2, 4, 5

[LSS+98] Aaron Lee, W. Sweldens, P. Schroder,L. Cowsar, and D. Dobkin. MAPS: Multiresolutionadaptive parameterization of surfaces. In Proc. ACMSIGGRAPH, pp. 95–104, 1998. 6

[MAA01] M. McCool, J. Ang, and A. Ahmad. Homomor-phic factorization of BRDFs for high-performance render-ing. In Proc. ACM SIGGRAPH, p. 171–178, 2001. 2, 11

[Mar98] S. Marschner. Inverse rendering for computergraphics. PhD thesis, Cornell University, 1998. 3

[Mic86] C. A. Micchelli. Interpolation of scattered data:distance matrices and conditionally positive definite func-tions. Constructive Approximation, 1:11–22, 1986. 4

[MLH02] D.K. McAllister, A. Lastra, and W. Heidrich. Efficient rendering of spatial bi-directional reflectance distribution functions. In Graphics Hardware 2002 (Proc. Eurographics/ACM SIGGRAPH Hardware Workshop), pp. 79–88, 2002. 1, 5

[MPBM02] W. Matusik, H. Pfister, M. Brand, and L. McMillan. Image-based 3D photography using opacity hulls. ACM Trans. Graphics (Proc. ACM SIGGRAPH), 21(3):427–437, 2002. 2, 3, 7

[MPBM03] W. Matusik, H. Pfister, M. Brand, and L. McMillan. A data-driven reflectance model. ACM Trans. Graphics (Proc. ACM SIGGRAPH), 22(3):759–769, 2003. 2, 4, 5

[MWL+99] S. Marschner, S. Westin, E. Lafortune, K. Torrance, and D. Greenberg. Image-based BRDF measurement including human skin. In Rendering Techniques ’99 (Proc. Eurographics Rendering Workshop), pp. 139–152, 1999. 2, 4, 5

[NBB04] S.K. Nayar, P.N. Belhumeur, and T.E. Boult. Lighting sensitive display. ACM Trans. Graphics, 23(4):963–979, 2004. 9, 10

[NFB97] S.K. Nayar, X. Fang, and T. Boult. Separation of reflection components using color and polarization. Int. Journal of Computer Vision, 21(3):163–186, 1997. 8

[ON94] M. Oren and S. Nayar. Generalization of Lambert’s reflectance model. In Proc. ACM SIGGRAPH, pp. 239–246, 1994. 5

[Pow92] M.J.D. Powell. The theory of radial basis function approximation in 1990. In W. Light, editor, Advances in Numerical Analysis, Vol. II, pp. 105–210. Oxford Science Publications, 1992. 4, 6

[RH02] R. Ramamoorthi and P. Hanrahan. Frequency space environment map rendering. ACM Trans. Graphics (Proc. ACM SIGGRAPH), 21(3):517–526, 2002. 2

[Rus98] S. Rusinkiewicz. A new change of variables for efficient BRDF representation. In Rendering Techniques ’98 (Proc. Eurographics Rendering Workshop), pp. 11–22, 1998. 3

[Sha85] S. Shafer. Using color to separate reflection components. Color Research and Application, 10(4):210–218, 1985. 8

[SHHS03] P. Sloan, J. Hall, J. Hart, and J. Snyder. Clustered principal components for precomputed radiance transfer. ACM Trans. Graphics (Proc. ACM SIGGRAPH), 22(3):382–391, 2003. 9

[SWI97] Y. Sato, M. Wheeler, and K. Ikeuchi. Object shape and reflectance modeling from observation. In Proc. ACM SIGGRAPH, pp. 379–387, 1997. 1, 2, 3, 8

[Ter83] D. Terzopoulos. Multilevel computational processes for visual surface reconstruction. Computer Vision, Graphics and Image Processing, 24:52–96, 1983. 4

[WAA+00] D. Wood, D. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. Salesin, and W. Stuetzle. Surface light fields for 3D photography. In Proc. ACM SIGGRAPH, pp. 287–296, 2000. 2, 7

[War92] G.J. Ward. Measuring and modeling anisotropic reflection. Computer Graphics (Proc. ACM SIGGRAPH), 26(2):265–272, 1992. 6

[YDMH99] Y. Yu, P. Debevec, J. Malik, and T. Hawkins. Inverse global illumination: recovering reflectance models of real scenes from photographs. In Proc. ACM SIGGRAPH, pp. 215–224, 1999. 1, 2

© The Eurographics Association 2005.