Initial Review: Applications of Adaptive Blue Noise Sampling within Monte Carlo Path Tracing

Declan Russell, Research Engineer, Bournemouth University

Richard Southern, EngD Supervisor, Bournemouth University

Ian Stephenson, EngD Supervisor, Bournemouth University

Figure 1: Example Blue Noise Sampling Distribution generated with [Jiang et al. 2015]

Abstract

This report is a summary of the first year of my EngD undertaken at the CDE. It covers the work completed over this duration along with the research carried out within the field of sampling used in Monte Carlo ray tracing. Monte Carlo ray tracing is a widely adopted image synthesis technique used ubiquitously in industry. Although capable of producing high quality, physically correct images, Monte Carlo ray tracing often needs thousands of samples, requiring large computation times, to achieve such results. Strategic sample placement has been proven to improve the convergence rate of Monte Carlo ray tracing, requiring fewer samples to produce results comparable in quality. We explore the use of blue noise sampling over two different dimensions of Monte Carlo ray tracing: the generation of primary rays and the sampling of BxDFs. We achieve this with the use of an adaptive SPH based method for creating blue noise samples [Jiang et al. 2015]. Firstly, in the primary ray dimension, we use this to adapt samples based on information about the scene retrieved via each sample. This increases the density of samples in regions of higher frequency noise where traditional sampling techniques may under sample. Secondly, we achieve importance sampling by generating samples over a hemisphere that adapt to fit a BxDF function. Using these techniques we have seen improvements in combating aliasing and in convergence rates when compared to traditional low discrepancy sequences and blue noise. In this report we present our proposed ideas and explain the potential improvements when applied in rendering; however, this technique could also be applied to arbitrary signal reconstruction problems.

Keywords: Blue Noise, Sampling, Ray Tracing

Concepts: • Computing methodologies → Ray tracing;



1 Introduction

This report serves as a summary of the work completed in the first year of the Engineering Doctorate program that I have undertaken as a member of the Centre for Digital Entertainment. I will use this to give an outline of the field of research I propose to investigate, the work carried out so far and where I plan to take this work in the remaining years of the qualification. I will finish the report with a plan with set milestones to work towards in the following 3 years of the EngD program.

Choosing an appropriate topic of research is incredibly important when embarking on the next 3 years of the EngD qualification. The topic should satisfy a number of criteria, including relevance to industry and originality that provides an addition to knowledge, while remaining within an achievable but suitably challenging scope.

1.1 Aims & Objectives

There are a number of aims and objectives that I propose to complete over the course of the following 3 years of the EngD program.

• Analyse the current standard of research available in the selected research area to gain an understanding of the cutting edge.

• Discuss with supervisors, both industrial and academic, the potential paths that could be taken to build upon and improve the cutting edge.

• Prototype research undertaken in an appropriate environment and compare our method with the cutting edge.

• Publish results to academia to achieve an addition to knowledge.

1.2 Proposed Area of Research & Justification

Beginning the EngD there were a number of areas of computer graphics that I had experience with.



However, the area I found most enjoyable and that I deemed to hold appropriate research potential is physically based rendering techniques. The most prominent of these is the family of techniques, such as the widely adopted method of path tracing, that fall under the category known as ray tracing. Ray tracing has been used for decades to achieve high quality, physically accurate images in computer graphics. It is ubiquitous across production standard renderers used in a variety of areas including product design, visual effects and architecture. Although a vastly developed field, ray tracing still cannot be considered a solved problem. Even with industry standard renderers, ray tracing is for the most part an offline process, with some renders taking years of computation time to generate a single frame. This makes the field an interesting problem, as improving this performance can save companies money and production time, and opens the possibility of using this technique to better the quality of real time graphics.

Coming into my studies I had experience with creating my own GPU accelerated ray tracer, which in simple scenes could provide fast online results. After naively pursuing the assumption that the entirety of ray tracing's heavy computation problems could be solved by some proficient parallel algorithm, it quickly became apparent that this was not the case. There are numerous areas of ray tracing, such as filtering, shading or even integration, that each individually have their own deep problems with no complete solution. After much debate my supervisors and I eventually settled on first investigating the area of sampling methods and how they can improve performance in ray tracing; our progress is outlined in the following sections of this report.

2 Overview of the Year & Work Completed

The main focus for this year was to gain a deeper understanding in the area of physically based rendering. This began by reading the most recent developments published in the latest SIGGRAPH proceedings. After gaining an overview of the current research trends and areas that held research potential, my supervisors and I agreed on a couple of areas within sampling theory used in ray tracing to investigate. These include the generation of primary ray and shading point samples. Since then we have made progress in both these areas of research, which I will demonstrate along with our prototypes and results in the following sections of this report.

I have attended a number of courses and conferences over the last year. In November I completed the Rendering and Shading short course at Bournemouth University. This helped to reinforce my understanding of the foundations of physically based rendering, covering a variety of areas including camera models, shading models, sampling and signal reconstruction techniques. This course also introduced me to Pixar's production renderer Renderman, a renderer popular across various institutions including of course Pixar, Industrial Light and Magic (ILM) and the Moving Picture Company (MPC). Throughout the duration of the course I was taught the fundamentals of how to use the Renderman renderer and tasked with putting them into practice with a set assignment. I have since furthered my knowledge of Renderman, learning how to create a number of Renderman plugins using their C++ framework, which has been particularly useful in prototyping some of the research outlined in this report.

In terms of conferences, in the last year I have attended BFX, ACM SIGGRAPH Anaheim and ACM SIGGRAPH London. These served not only as a chance to further my knowledge of the most recent developments in the field but also as invaluable networking experiences. With an abundance of industry professionals at these events, I had the opportunity to gain insights into industrial developments, which it has become apparent do not always follow the same trends as academic research. Furthermore I was able to exchange ideas about potential research to pursue that would be relevant and attractive to industry. Finally, I also attended the "Research Jam" hosted by the CDE at the Italian Villa in Bournemouth. This was a 3 day retreat in which the CDE cohort worked with the Bournemouth University Dementia Institute to develop and prototype research to improve the lives of dementia sufferers. The cohort was divided into teams, which gave a great opportunity to collaborate with fellow peers, improve teamwork skills and gain an insight into others' approaches to research.

3 Background: Monte Carlo Ray Tracing

Monte Carlo ray tracing is the most popular technique for generating high quality, physically plausible images. It is the practice of simulating the transport of light rays as they propagate through some explicitly defined scene. The simulated light rays are used to compute the radiance apparent throughout the scene via Monte Carlo integration. Monte Carlo integration is used ubiquitously across many signal reconstruction methods and is one of the core pillars of ray tracing. It is used as the main method across the various dimensions of ray tracing to integrate the incoming radiance at various points in the scene, hence the name "Monte Carlo ray tracing". It has the very attractive property of being an integration method that is indifferent to how many dimensions you are working in, making Monte Carlo integration an exception to the curse of dimensionality which restricts other integration methods. It is this property that allows traditional ray tracing to be a practical method for generating images, given that most if not all scenes that we want to render lie in multidimensional space. I will use this section to give some background to Monte Carlo integration and provide context for the sampling research presented in this report.

3.1 Probability Theory

Monte Carlo integration, named by John von Neumann in reference to the Monte Carlo casino in Monaco where many games of chance are played, is the process of using random samples of some function to approximate the result of its integral. Therefore, to begin, I will introduce some probability theory. A random variable X is the result of some random process that generates numbers. Random variables can either be discrete (e.g. the roll of a die) or continuous, taking on real values in R. The Cumulative Distribution Function (CDF) is the probability of the random variable X being less than or equal to some selected value x:

cdf(x) = P{X ≤ x}

The Probability Density Function (PDF) describes the relative probability of the random variable taking some value x. The PDF is equivalent to the derivative of the CDF:

pdf(x) = d/dx cdf(x)

PDFs are always nonnegative and integrate to 1 over their domains. The expected value E[f(x)] of a function f(x) is the average value of the function with distribution pdf(x) over some domain D, defined as:

E[f(x)] = ∫_D f(x) pdf(x) dx

It is this that is the basis of how Monte Carlo integration computesthe expected values of arbitrary integrals.


3.2 The Monte Carlo Estimator

As mentioned previously, the basic Monte Carlo estimator uses random sampling of a function to compute an estimate of the expected value of an integral. To introduce the estimator we will use an example in the one-dimensional case. Suppose we wish to integrate a function f(x) over the domain [a, b]:

F = ∫_a^b f(x) dx

Given a set of N uniform random variables Xi ∈ [a, b], the Monte Carlo estimator F_N is defined as:

F_N = (b − a)/N Σ_{i=1}^{N} f(Xi)

The expected value E[F_N] is in fact equal to the integral. This can easily be shown in just a few steps. Notably, the PDF of our random variables Xi must be equal to 1/(b − a), since the PDF must integrate to one over the domain [a, b]. Therefore, with some manipulation of the equation, we can prove our statement:

E[F_N] = E[ (b − a)/N Σ_{i=1}^{N} f(Xi) ]

       = (b − a)/N Σ_{i=1}^{N} E[f(Xi)]

       = (b − a)/N Σ_{i=1}^{N} ∫_a^b f(x) pdf(x) dx

       = 1/N Σ_{i=1}^{N} ∫_a^b f(x) dx

       = ∫_a^b f(x) dx

We can relax the restriction to uniform random numbers with asmall generalization. Having this generalization is very importantas choosing an appropriate PDF to draw samples from to solve anintegral has a large impact on reducing variance in Monte Carlointegration.

F_N = 1/N Σ_{i=1}^{N} f(Xi) / pdf(Xi)

Monte Carlo integration can very simply be extended to multiple dimensions. We simply select our random samples Xi from a multidimensional PDF and the same estimator is used. For example, in a three dimensional integral using uniform variables where Xi = (xi, yi, zi), our estimator simply becomes:

F_N = (x1 − x0)(y1 − y0)(z1 − z0)/N Σ_i f(Xi)

where our PDF is a constant value

pdf(X) = 1 / ((x1 − x0)(y1 − y0)(z1 − z0))

However, due to the nature of Monte Carlo integration, the error decreases at a rate of O(1/√N), where N is the number of samples taken. This means that to reduce the error by half we must quadruple the number of samples, so solving the integral to a high level of precision may require large numbers of samples and hence long computation times. While other integration methods converge significantly faster than Monte Carlo in one dimension, those techniques become exponentially worse as dimensions increase, whereas Monte Carlo is naturally indifferent to the number of dimensions, making it the appropriate choice for such problems.
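As a concrete illustration of the general estimator and its O(1/√N) convergence, the following sketch integrates an arbitrary one-dimensional function with uniform samples. It assumes NumPy is available; the integrand, interval and seeds are illustrative choices, not part of our method.

import numpy as np

def f(x):
    # An arbitrary illustrative integrand.
    return 1.0 + np.cos(4.0 * x * x)

def mc_estimate(n, a=0.0, b=4.0, seed=0):
    # Draw N uniform samples over [a, b]; pdf(x) = 1 / (b - a).
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n)
    pdf = 1.0 / (b - a)
    # General estimator: F_N = (1/N) * sum f(X_i) / pdf(X_i)
    return np.mean(f(x) / pdf)

# Error decreases as O(1/sqrt(N)): quadrupling N roughly halves the error.
for n in (100, 400, 1600, 6400):
    print(n, mc_estimate(n, seed=n))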

3.3 Signal Sampling

Now that we have outlined how we integrate our image function with Monte Carlo integration, I will give some insight into how our sampling strategies can play an important role in reducing the error in our estimator. Aliasing is one of the biggest practical problems in signal reconstruction. If a function is not sampled at a sufficiently high sampling rate, this can lead to a loss of data, with the reconstruction of the signal not accurately representing the original function, as seen in figure 2.

Figure 2: Example of aliasing caused by high frequency information lost via inappropriate sampling. (a) The function y = 1 + cos(4x²); (b) the function y = 1 + cos(4x²) sampled at the interval 0.25.

To avoid loss of data, the sampling theorem tells us that to avoid such aliasing we must sample at a minimum rate of twice the maximum frequency present within the signal. This minimum rate is known as the Nyquist rate. Although it may seem obvious to sample every signal at the Nyquist rate, it is not always that easy. The majority of signals in rendering may not have a band limit, and therefore it is impossible to sample at a high enough rate to avoid loss of data.

Although we may not always be able to sample at the required frequency to avoid aliasing, there are a number of ways we can approach sampling signals to reduce the visual impact of the aliasing.


These are suitably known as antialiasing techniques. Firstly, non-uniform sampling is where we vary the space between samples. Although still incapable of obtaining correct results for signals without a band limit, it tends to mask aliasing artefacts as noise, which is much more visually pleasing. You can see this in figure 3; notice the noise, especially in the background of the uniform sampling, compared to the much smoother non-uniform sampling.

(a) Uniform Sampling

(b) Non-uniform Sampling

Figure 3: Comparison of images generated with 1 sample per pixel using uniform (3a) and non-uniform (3b) sampling.

Although random non-uniform sampling proves better, using purely random samples to sample a function still has its flaws. Samples often cluster, causing an uneven distribution over the sampling domain, see figure 4a. This in itself can also cause loss of important information about our function. Strategic sampling methods look to avoid this, with the goal of generating stochastic and yet evenly distributed samples. These include low discrepancy techniques such as jitter (4b) and blue noise (4c) sampling, which will be discussed in sections 4.1 and 4.2 respectively.

Although these methods prove a good middle ground for sampling the general case, ideally we would want to vary our sampling rate such that it increases in regions of high frequency and decreases in regions of low frequency. This is known as adaptive super sampling. If we can identify the regions in which frequencies exceed the Nyquist limit, then we can increase the sampling density of these areas without the expense of having to increase the sampling rate across the entirety of our domain. For more information on sampling theory and Monte Carlo integration see [Pharr and Humphreys 2010; Jarosz 2008; Veach 1997].

4 Previous Work

Now that I have given some background into the theory of samplingI will use this section to outline previous developments in strategicsample generation.

(a) Random Sampling (b) Jitter Sampling

(c) Blue Noise Sampling

Figure 4: Various anti-aliasing sampling distributions.

4.1 Low Discrepancy Sampling

The first and most basic strategic sampling method theorized is that of jittered sampling. This is the process of applying random perturbations to uniform sample distributions. Latin Hypercube Sampling (LHS) was first described by McKay in 1979 [McKay et al. 1979] and involves dividing our n-dimensional sample space into M axis-aligned hyperplanes. Samples are generated within a selection of these hyperplanes, abiding by the rule that only one sample can be taken within its relative hyperplane. An abstraction of this method which holds higher efficiency is Orthogonal Array Sampling [Owen 1992]. Although simple to implement, with improved results in comparison to pure random sampling, there have been criticisms regarding the performance and precision of this sampling method. Further developments in this decade take the form of N-Rooks [Shirley 1991]. This method involves placing a jittered sample in each cell of the diagonal of a grid over our sample domain; the x and y coordinates are then randomly shuffled. However, the distribution N-Rooks generates is deemed poor and only a small improvement over pure random sampling. This was improved in 1994 by Chiu with Multi-Jittered sampling [Chiu et al. 1994].
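The following is a minimal sketch of jittered (stratified) sampling over the unit square: one random sample is placed inside each cell of an n-by-n grid. It assumes NumPy; grid size and seed are illustrative.

import numpy as np

def jittered_samples(n, seed=0):
    rng = np.random.default_rng(seed)
    cells = np.arange(n)
    # Integer cell corners for every (i, j) pair, plus a random offset inside each cell.
    ix, iy = np.meshgrid(cells, cells, indexing="ij")
    offsets = rng.random((n, n, 2))
    xy = np.stack([ix, iy], axis=-1) + offsets
    return (xy / n).reshape(-1, 2)   # n*n stratified points in [0, 1)^2

samples = jittered_samples(8)        # 64 jittered 2D samples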

More recent developments in low discrepancy sampling include the Hammersley and (0-2) sequences [Pharr and Humphreys 2010], randomized Halton sequences [Okten et al. 2012] and, the most popular in production renderers, the Sobol sequence [Bratley and Fox 1988]. These sequences generate well-spaced quasi-random samples in large numbers of dimensions. This is very useful in production rendering, as it is very common that multidimensional samples must be generated to take into account sampling in screen space, time and lens space. Although these sequences fit well within the Latin Hypercube [Pharr and Humphreys 2010], it is difficult to avoid sample clustering in adjacent sample regions, and in some cases regular patterns can form when a large number of samples are generated.


Further reading on this subject can be found in [Veach 1997; Jarosz 2008].

4.2 Blue Noise

Blue noise sampling, named because its power spectrum resembles that of the signal generated by blue light, is a technique popular across several areas of graphics such as image synthesis and geometry processing. Blue noise is a close approximation to Poisson disk sampling, in which no two samples are generated closer than a specified distance. This is an exceptionally desirable trait for samples used in image synthesis techniques, as studies have shown that the rods and cones within the human eye are distributed in a similar way. This sampling distribution produces stochastic samples that are evenly spread in the spatial domain, giving us an even distribution over our domain while also reducing aliasing issues by masking them as noise, which, as mentioned previously, appears more visually pleasing.

There are a number of methods for creating various profiles of blue noise, with the simplest being the dart throwing algorithm [Cook 1986; Mitchell 1987]. Various other techniques include Lloyd's relaxation [Lloyd 1982], Poisson disk sampling [McCool and Fiume 1992] and Capacity Constrained Voronoi Tessellations (CCVT) [Balzer et al. 2009]. However, these techniques tend to have a high level of computation and are also limited in the sense that you cannot change the blue noise profile each method produces. Later developments have achieved more efficient methods, reducing computational complexity to O(N log N), see [Jones 2006; Du and Emelianenko 2006; Dunbar and Humphreys 2006]. More recently there has been research into parallelized techniques for creating blue noise to improve on computation, such as [Wei 2008], and in 2015 a technique was proposed using a smoothed particle hydrodynamics (SPH) approach to creating blue noise [Jiang et al. 2015]. This method gives a level of flexibility by allowing the user to vary blue noise profiles by changing a single parameter. Furthermore this technique gives the potential for high levels of performance, being based upon SPH, which can be easily parallelized to run efficiently at large scales.
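For reference, a minimal sketch of the dart throwing idea [Cook 1986] follows: candidate points are drawn at random and rejected if they fall closer than a minimum radius r to any already accepted point. It assumes NumPy, uses a brute-force O(N²) distance test, and the radius and attempt budget are illustrative; it is meant only to show the principle, not an efficient implementation.

import numpy as np

def dart_throwing(r, max_attempts=20000, seed=0):
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(max_attempts):
        p = rng.random(2)
        # Keep the candidate only if it is at least r away from every accepted point.
        if all(np.linalg.norm(p - q) >= r for q in accepted):
            accepted.append(p)
    return np.array(accepted)

points = dart_throwing(0.05)   # Poisson disk style distribution over the unit square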

5 Smoothed Particle Hydrodynamics

5.1 Introduction

Smoothed particle hydrodynamics (SPH) is a widely adopted method used in fluid simulation. SPH serves as an approximation to solve the Navier-Stokes equations, which provide a mathematical model of most fluids occurring in nature.

ρ du/dt = −∇p + µ∇²u + f

This equation solves for the acceleration of the fluid du/dt by accumulating the pressure force −∇p, the viscosity µ∇²u and other external forces f. SPH was first used for fluids by [Monaghan 1994] and uses a collection of approximations to solve each of these terms. In these terms a scalar Ai, which represents an attribute of a particle at location xi, is approximated with a set of neighbouring points j at locations xj using a symmetric smoothing kernel Wij, where h is the smoothing length of our kernel.

A(x) = Σ_j Aj (mj / ρj) W(x − xj, h)

∇A(x) = Σ_j Aj (mj / ρj) ∇W(x − xj, h)

Where ∫ W(x, h) dx = 1 and W(x, h) = 0 when ||x|| > h. Following these approximations allows us to solve for the unknown forces and hence the acceleration of our fluid.

ρi = Σ_j mj Wij

−∇pi = −mi Σ_j mj (pi/ρi + pj/ρj) ∇Wij

p = k(ρ − ρ0)

Where p is our pressure calculated from our density ρ along with auser defined rest density ρ0 and gas constant k.
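The per-particle density and pressure sums above can be sketched as follows. This is an illustrative sketch assuming NumPy, a brute-force neighbour search, and a poly6-style kernel as a stand-in smoothing kernel; the kernel and constants are assumptions and not necessarily those used by [Jiang et al. 2015].

import numpy as np

def W(r, h):
    # Poly6-style kernel in 2D; the constant 4/(pi*h^8) makes it integrate to 1 over the disk.
    q = np.clip(h * h - r * r, 0.0, None)
    return (4.0 / (np.pi * h ** 8)) * q ** 3

def densities(x, m, h):
    # rho_i = sum_j m_j * W_ij, evaluated for all particles at once (x: (N, 2), m: (N,)).
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (m[None, :] * W(d, h)).sum(axis=1)

def pressures(rho, rho0, k):
    # p = k * (rho - rho0), with user defined rest density rho0 and gas constant k.
    return k * (rho - rho0)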

5.2 Generating Adaptive Blue Noise Samples with SPH

Using the approach presented in [Jiang et al. 2015] it is possible to generate various blue noise profiles, which each have their own advantages and disadvantages when it comes to noise and aliasing. Jiang conveniently designs her algorithm such that the blue noise profiles generated can be controlled simply by varying a single input parameter, see figure 5. This parameter, known as the density difference, is the difference between the rest density ρ0 and the actual density ρ, which is calculated as the mean density of all particles. To control the density difference, which for our use we will name ρΔ, at every step in the simulation we must calculate ρ and set ρ0 such that it is lower than ρ by the user defined ρΔ, or more simply expressed by the equation below.

ρ0 = ρ − ρΔ

The simulation is considered converged when particle displacement||x′ − x|| is less than ε where x′ is the updated point position. ε isset as ε = 0.01d where d is the average distance between adjacentparticles.

(a) Blue noise samples generated (b) Fourier transform power spectrum generated using [Wei and Wang 2011]

Figure 5: Blue noise samples and analysis generated with a density difference of 30

This method also supports adaptive sampling, as seen in figure 6. This is achieved by applying a distance field scale s(x) to sample properties. This scalar field could be based on any arbitrary property. For example, in image stippling one would want high densities of particles in darker regions of an image; therefore you could use the scalar s(x) = 1/√I(x), where I(x) is the image intensity at location x. We use this to warp the distance between samples when computing our SPH calculations. The distance between two particles now becomes:


s(xi, xj) = 2(xi − xj) / (s(xi) + s(xj))

This is then used in our smoothing kernel described in section 5.1:

Wij = W(s(xi, xj), h)

Therefore, as we increase our scale s(x), our smoothing kernels cover larger distances, causing an increase in pressure around the particle, pushing neighbouring particles away and resulting in sparser particle placement.
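A minimal sketch of this warped distance and scaled kernel is shown below. It assumes NumPy and a user-supplied per-point scale function s and radially symmetric kernel W (such as the poly6-style kernel sketched earlier); both are assumptions for illustration.

import numpy as np

def scaled_distance(xi, xj, s):
    # s(x_i, x_j) = 2 (x_i - x_j) / (s(x_i) + s(x_j)); larger s spreads particles apart.
    return 2.0 * (xi - xj) / (s(xi) + s(xj))

def scaled_kernel(xi, xj, s, h, W):
    # W_ij = W(|s(x_i, x_j)|, h) replaces the unwarped distance in the SPH sums.
    return W(np.linalg.norm(scaled_distance(xi, xj, s)), h)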

(a) Original apple image (b) Stippled apple image

Figure 6: Adaptive blue noise samples used to recreate 6a through image stippling 6b

6 BxDFs

6.1 Introduction

The term BxDF encompasses a family of mathematical models that describe the way light scatters when interacting with a surface. This family includes the bidirectional reflectance distribution function (BRDF), the bidirectional transmission distribution function (BTDF) and the bidirectional scattering distribution function (BSDF), which encompasses both BRDFs and BTDFs. These functions are abstractions that model light reflectance, transmission, and general scattering respectively. During the integration used in ray tracing we use our BxDF model to compute the scattered light at a point on a surface based upon the incoming illumination from light sources in a scene. The reflectance of real world surfaces can be generalized into a mixture of four categories: diffuse, glossy specular, perfect specular and retro-reflective.

The simplest distribution to model is diffuse. Diffuse surfaces scatter light equally in all directions and can be thought of as matte materials such as paper. Glossy is a representation of specular surfaces such as plastics. Perfect specular scatters incident light in a single outgoing direction and represents mirror or glass like surfaces. Finally, retro-reflective surfaces primarily reflect light back towards the incident direction, as seen in materials like velvet.

The most important functionality of our BxDF is to calculate theratio of reflected radiance I given the incident direction wi and re-flected direction wr of our light. One simple example of this is thediffuse BRDF which evaluates to the cosine of the angle betweenthe incoming light direction and the normal of the surface N .

I(wi) = wi ·N

Figure 7: (a) Diffuse, (b) glossy diffuse, (c) perfect specular, and (d) retro reflective distribution lobes [Pharr and Humphreys 2010]

As the surface normal tends towards the incoming light direction, the reflected radiance tends towards 1, as can be seen in figure 8.

Figure 8: Sphere with diffuse BRDF applied.

As mentioned before, our diffuse lobe scatters light equally in all directions. However, because our BRDF gives low radiance for rays that tend towards the plane perpendicular to our normal, this can lead to tracing rays that ultimately have very little impact upon our final computed colour. Therefore it is a common technique to improve performance by stratifying our samples to trace more rays in directions for which our BRDF will compute a higher value of radiance.

6.2 Sampling BxDFs

In order to integrate over our surface we must draw random samples over our domain to be used in our Monte Carlo estimator. The way we sample our BxDF function can have a very large effect on the quality of our results. Importance sampling has been proven to be a very effective variance reduction technique, therefore it is very practical to use a sampling distribution that roughly resembles our integrand. Effectively we want some function f(ξ), where ξ is a uniformly distributed random number, that generates these samples. I will briefly discuss the currently used methods for achieving this in the following sections. For a more in-depth explanation of these techniques refer to [Pharr and Humphreys 2010, pg 643].

6.2.1 Inversion Method

The inversion method is made up of four steps. Firstly we are required to compute our cumulative distribution function (CDF).


This can be obtained by integrating the probability density function (PDF) of our BxDF. We will notate this P(x) = ∫_0^x p(x′) dx′, where P(x) is our CDF and p(x′) is our PDF. Secondly we compute the inverse of this CDF, P⁻¹(x), and generate a uniformly distributed random number ξ. Finally we compute our importance sample Xi where Xi = P⁻¹(ξ).

This method works quite well and, although it requires extra design thought when implementing BxDFs, it is still fairly simple to do in most cases. However, the inversion method does assume that this analytic solution is possible. Sometimes it may not be possible to integrate our BxDF function in order to create our CDF, or it may not be possible to invert our CDF.
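As a worked example, the standard cosine-weighted hemisphere mapping for the diffuse lobe can be derived with the inversion method: integrating and inverting the marginal and conditional CDFs of pdf(ω) = cos θ / π gives cos θ = √ξ1 and φ = 2πξ2. The sketch below assumes NumPy and a local frame whose z axis is the surface normal; it illustrates the inversion technique rather than any particular renderer's implementation.

import numpy as np

def sample_cosine_hemisphere(u1, u2):
    # Inverting the CDF of pdf(w) = cos(theta) / pi gives cos(theta) = sqrt(u1).
    cos_theta = np.sqrt(u1)
    sin_theta = np.sqrt(1.0 - u1)
    phi = 2.0 * np.pi * u2
    direction = np.array([sin_theta * np.cos(phi),
                          sin_theta * np.sin(phi),
                          cos_theta])
    pdf = cos_theta / np.pi
    return direction, pdf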

6.2.2 Rejection Method

As mentioned in the previous section, it may not be possible to create an analytic solution to generate samples that correspond with our integral. The rejection method can create samples requiring only the information provided by the radiance function and the PDF of our BxDF. Suppose we are drawing samples from a function f(x) with PDF p(x) that satisfies f(x) < cp(x), where c is some scalar constant. The rejection method goes as follows:

loop forever:
    sample X from p's distribution
    if ξ < f(X) / (c p(X)) then
        return X

We repeatedly choose pairs of random variables (X, ξ) until the inequality is satisfied, giving a higher probability of acceptance within the higher regions of f(x). This technique is essentially dart throwing and will work for any arbitrary BxDF function. However, its efficiency depends on how closely the distribution cp(x) fits f(x), leaving functions with poorly approximated PDFs with greater amounts of wasted computation when searching for samples.
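A runnable version of the loop above might look as follows. It assumes NumPy; f, the proposal sampler sample_p, its PDF p, and the constant c with f(x) < c·p(x) are all supplied by the caller and are hypothetical placeholders here.

import numpy as np

def rejection_sample(f, sample_p, p, c, seed=0):
    rng = np.random.default_rng(seed)
    while True:
        x = sample_p(rng)                     # draw X from p's distribution
        if rng.random() < f(x) / (c * p(x)):  # accept with probability f(X)/(c p(X))
            return x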

6.2.3 Metropolis Sampling

Metropolis sampling [Metropolis et al. 1953] is a very powerful technique which can generate samples proportional to a function's value. However, sequential samples are often correlated, making it difficult to guarantee an even sample distribution across our integral. Metropolis sampling is also difficult to implement and is one of the rarer methods used in industry. Therefore we will not be pursuing this area in this report.

7 Our Methods

In our research we explore practical uses for adaptive blue noise sampling in various dimensions of Monte Carlo path tracing. The first of these is generating samples for shading points. These are points where rays intersect objects within our scene and must generate new rays based on properties of the object's surface. Secondly we explore generating primary ray samples. These are the first rays generated, originating from the camera and projected into the scene. In the following sections we outline the techniques we have been developing to improve on these two areas.

7.1 BxDF Sampling

To improve upon the current standard of BxDF sampling outlined in section 6.2, we propose a technique that again adopts the blue noise sampling method of [Jiang et al. 2015] and adapts it for use with BxDF models. Our method constrains particles in our SPH simulation to lie upon the surface of an appropriate lobe shape, e.g. a hemisphere in the case of diffuse materials, to mimic the distribution over a BxDF lobe. Constraining our simulation can be easily accomplished by mapping particles back to the surface of the lobe after every step of the simulation. This provides an even distribution of stochastic samples across our lobe.

However, without importance sampling, blue noise samples alone would give vastly inferior results to the previously mentioned techniques in section 6.2. Our method achieves importance sampling of arbitrary BxDFs in a very simple manner. As described in section 5.2, Jiang's method supports adaptive sampling through the use of some defined scaling function s(xi) to warp the relative distance between particles used in our smoothing kernel. We design this scaling function to be inversely equivalent to our BxDF. Following Jiang's method this scaling function would be:

s(xi) = 1/I(xi) (1)

Where I(xi) is the radiance intensity of our BxDF at location xi across our lobe. This in turn results in samples becoming denser as I(xi) tends towards its maximum value. Using this method of adaptive sampling gives us the ability to support any arbitrary BxDF without the need for any analytic development. An example of this can be seen in figure 11a, where we use a simple diffuse BRDF with I(xi) = xi · n, where n is the normal of our surface.

However, we found in our tests that using equation 1 causes discontinuities when used with an acceleration structure such as a spatial hash table. When using an acceleration structure, space is partitioned into equally divided cells. With an adaptive distance scale per particle, the bandwidths of our smoothing kernels vary relative to every particle, making it impossible to select a single size for the subdivisions of our acceleration structure. If a kernel bandwidth is larger than our hash subdivisions, neighbouring particles can be under sampled, causing areas of low density and large particle clustering. On the other hand, if our kernel bandwidth is too small, this can cause areas of high pressure, causing particles to cluster at the boundaries, see figure 9.

(a) Scaling function that causes low densities. Notice the pooling of particles towards the bottom of our simulation. (b) Scaling function that causes high pressure regions. Notice the grid like patterns caused.

Figure 9: Distance scaling functions that cause discontinuities when used with an acceleration structure

To help avoid these discontinuities we propose some possible solutions; however, these solutions are currently by no means entirely reliable and require some experimentation to achieve the desired effect. Firstly, to avoid areas of low density around the boundary, we simply remove the boundary by running our simulation over a sphere such that our surface becomes continuous, see figure 10. We then simply use the particles generated on one of the hemispheres of our sphere as our sample positions.


To avoid wasting the computation of simulating the particles on our second hemisphere, we simply use this hemisphere to generate another batch of samples for our next point of integration.

Figure 10: Diffuse samples generated over a sphere

Secondly, we currently limit our scaling function between minimum and maximum boundaries. This helps avoid creating kernel bandwidths that are too small and thus causing particles to cluster at the boundaries of our hash. We currently use the scaling function:

s(xi) = a(1− I(xi)) + cmin

a = cmax − cmin

Where cmin and cmax are the minimum and maximum values re-spectively that we wish our scaling function to be limited to.
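A minimal sketch of this clamped scaling function is given below. Here I is assumed to be the BxDF intensity normalised to [0, 1], and the clamp values are illustrative assumptions.

def bxdf_scale(I, c_min=0.5, c_max=2.0):
    a = c_max - c_min
    # High intensity -> small scale -> denser samples over that part of the lobe.
    return a * (1.0 - I) + c_min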

(a) Diffuse (b) Blinn

Figure 11: Top down view of various BxDF models approximated with our adaptive blue noise samples

7.2 Primary Ray Sampling

Primary ray samples, which can be thought of as the first dimension of sampling in our Monte Carlo path tracing, are the rays generated from our camera into our scene. We are essentially trying to integrate the radiance that enters our camera from our explicit scene into a discrete grid of colours, i.e. our pixels, which can otherwise be thought of as point samples of our image function. We can generate primary ray samples by distributing points across a 2-dimensional plane. This plane varies in size depending on properties of the camera such as the field of view, the width and height of the image plane and the shape of the camera. We then use this distribution as the ray directions from our camera to sample the scene for use with Monte Carlo integration. As mentioned previously, generating a good sampling distribution can significantly improve the performance of Monte Carlo integration.

Again, in our method we use the technique proposed in [Jiang et al. 2015] to create our blue noise sampling distribution due to its speed and flexibility. We simulate the total number of samples that we will need for our integration at the same time across a 2-dimensional plane. We have found that, to solve boundary discontinuities, using ghost particles at our boundary worked better than the originally proposed method. This involves creating a boundary out of particles around the borders of our simulation. These particles are taken into account in our SPH simulation, causing high pressure when particles move too close and therefore pushing particles away. This border is displayed in blue in figure 12.

(a) Original apple image (b) Samples generated

Figure 12: Adaptive blue noise samples generated and adapted based on the variance measured when producing image 12a

After convergence has been achieved, we randomly select a single sample, if available, for every pixel in our image plane for use in our Monte Carlo integration. Every sample selected for use in integration is then frozen in our simulation. This avoids successive samples being generated in the same place as previously taken samples. After samples have been traced, we can use the information retrieved in each pass to adapt our successive samples to better fit our image function. Currently we have investigated using the variance measured at every pixel after each pass of integration. We use this variance to adapt the successive samples generated within our scene by varying our scalar distance field relative to the variance of each pixel using the equation:

s(x) = 1 / (aV(x) + cmin)

a = cmax − cmin

Where cmin and cmax are the minimum and maximum values respectively that we wish our scaling function to be limited to, and V(x) is the variance at pixel location x. This means that in areas where the variance increases, so does the density of samples. We have found that these areas generally include high frequency regions such as hard edges of objects and sharp changes in textures appearing in a scene, as you can see in figure 12, an example of samples generated using this method. Currently we only look at using variance to adapt our particles; however, this could be extended to other properties of our scene, as discussed in the upcoming future work section.
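A minimal sketch of this variance-driven scale follows; it assumes the per-pixel variance V has been normalised to [0, 1], and the clamp values are illustrative assumptions.

def variance_scale(V, c_min=0.5, c_max=2.0):
    a = c_max - c_min
    # High variance -> small scale -> denser primary ray samples at that pixel.
    return 1.0 / (a * V + c_min)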


8 Results

8.1 BxDF Sampling

So far we have created a plugin for Pixar's Renderman that produces samples for the diffuse reflection model. Our current unoptimized simulation takes approximately 60 seconds to converge 100,000 samples on an NVidia 970m graphics card and an Intel Core i7-4710MQ CPU @ 2.50GHz. Due to this large convergence time, in this early implementation we use this sampling distribution for every point of integration. We then randomly shuffle the samples such that different samples are selected at each point of integration. Although this proves a good test for early results, it causes our samples to become increasingly correlated the more samples we integrate with.

Figure 13: Root mean squared error of our blue noise samples (blue) and the inversion method (right) when compared to our ground truth image

We can see this in figures 13 and 14: with a small number of BxDF samples our method outperforms inversion. However, with our current implementation, as the sample count is increased we see certain levels of correlation and become outperformed by the inversion method.

8.2 Primary Ray Sampling

For our current experiments with primary ray sample generation, we use Monte Carlo integration and point sampling to scale a large resolution image to a smaller resolution image. This essentially mimics what we would be doing in traditional path tracing, evaluating some image function with point samples; however, for early experiments this is easier to implement and removes the computation of actually having to trace rays around a scene. For the results in this report we scale a 3000x3000 pixel image to a 200x200 pixel image. With our current implementation we generate 240,000 samples in approximately 10 seconds on an NVidia 970m graphics card and an Intel Core i7-4710MQ CPU @ 2.50GHz. This number of samples can be thought of as 6 samples per pixel for a 200x200 pixel image.

Needless to say, as you can see in figure 15, blue noise is far superior to simple jitter; however, there are finer differences between blue noise and our variance adapted blue noise. Our method appears to produce better results around the edges of objects. This is apparent if you focus around the edges of the apple's stalk displayed in figure 16.

Figure 14: Renders of a diffuse sphere produced with Pixar's Renderman at 2, 4, 6, 8 and 10 BxDF samples, using our blue noise sampling (left), the inversion method with a low discrepancy random sequence (right) and our reference image produced with 500 primary ray samples and 10 BxDF samples (centre). All renders were done using a single primary ray sample.


(a) Ground Truth (b) Jitter

(c) Blue Noise (d) Adaptive Blue Noise

Figure 15: Comparison of different sampling techniques when scaling a 3000x3000 pixel image to 200x200 using 240,000 samples.

(a) Blue Noise (b) Ground Truth (c) Adaptive Blue Noise

Figure 16: Close up on the stalk of the apple. Notice the aliasing on the edge of the stalk in 16a compared to 16c.

However, due to the large areas of high variance caused by the chequerboard running off into the distance at the top of the image, we suffer from some sub-sampling and aliasing issues. This is caused by our method identifying a very large area of high variance and adapting our particles such that they can converge closer together. Unfortunately, at this point there is a lack of samples to create an even distribution around this area, leading to clustering and a distribution similar to that of white noise.

9 Future Work

9.1 BxDF Sampling

The most important work to focus on currently is to increase the speed of convergence of our particle simulation. As mentioned in section 8.1, our simulation takes 60 seconds to converge. This is not a viable computation time when we could have thousands of shading points across our surface that need to be integrated. The simplest approach to this is to first naively vary the parameters in our SPH simulation to increase the speed at which particles converge. However, we have also discussed the idea of having multiple online simulations running concurrently. Using these simulations, we would select the simulation that has generated samples with conditions closest to our current shading point. For example, with Blinn we could compare half angle vectors and select the simulation converged with the most similar vector.

This simulation would then become the starting point of convergence, rather than starting from white noise, meaning that samples would be closer to convergence as soon as the simulation begins.

Further areas of interest would be to research accounting for scene properties, such as light positions, in our scaling function, thereby also making our method viable for generating multiple importance samples. This is unachievable with the traditional inversion method and hard to achieve with Metropolis without causing correlations in successive samples. For more on multiple importance sampling refer to [Pharr and Humphreys 2010, pg 690].

9.2 Primary Ray Sampling

There are a number of ideas I would like to investigate to progress this research. So far we have only looked at using variance as our scaling function. It could however be beneficial to use other properties retrieved from our scene. This could include geometric information for edge detection, light intensity if you want to focus on specular areas of a scene, or we could even use some pre-baked information such as occlusion maps to minimise samples sent to areas where light is not apparent. However, before investigating these areas I think it is important to fix the subsampling issue discussed at the end of section 8.2.

10 Proposed Time Scale and Milestones

Planning ahead, it is beneficial to set milestones. This helps plan what work lies ahead, making time management more efficient and also providing clear goals to work towards. I will break this section up into three parts consisting of short term, mid term and long term milestones, to help set a level of priority for each milestone and give an estimated time scale.

10.1 Short Term

The first milestone that we are focusing on currently is acquiring an industrial placement. This is the most important milestone, as it is required to be obtained before the end of the second year to successfully complete the EngD program. We have been in talks with a number of companies, all with varying levels of success; the most prominent of these include MPC, Double Negative, Framestore, Solid Angle and Drilling Systems. Unfortunately, after almost being successful with Framestore and Drilling Systems, they have both pulled out. MPC and Solid Angle however are still possible options.

MPC is a visual effects company whose headquarters are based in Soho, London. MPC already has ties with the CDE and a couple of placements embedded there. They have their rendering research department based in London, which has ongoing work in various fields. An example of this is an ongoing project they have called Decko. This is their real time GPU renderer, which runs on NVidia's OptiX, a library I have previous experience with.

Solid Angle is a company newly owned by Autodesk which has offices in Madrid and London. They make the widely adopted production renderer called Arnold, used by various companies such as Framestore, The Mill and Industrial Light and Magic. They produce a vast amount of research in various areas of rendering, much of which is on public record. Coincidentally, they have just published a technical talk at SIGGRAPH 2016 in the area of blue noise sampling distributions [Georgiev and Fajardo 2016].


We are currently in negotiations with Solid Angle and are waiting for a response on whether the CDE is something that Autodesk is able to support.

10.2 Mid Term

Mid term milestones mainly focus on publishing the work we have currently outlined in this report. There are a number of conferences and journals in which we could publish, depending on the date the work is completed. The first of these is Eurographics 2017. The Eurographics conference has a larger focus on areas of computer graphics such as rendering than other conferences; however, the deadline for submission is on the 7th of October 2016, which is unlikely to be enough time to complete the paper. Secondly there is the SIGGRAPH 2017 conference on Computer Graphics and Interactive Techniques. SIGGRAPH is one of the most esteemed conferences in computer graphics, for which the deadline for technical papers is on the 17th of January 2017. Following this, there are always a number of journals we could submit to, such as IEEE.

Following this I would begin work on my next set of papers. After talking with a few industry professionals, light sampling is reportedly still a largely unresolved problem. Given my current research in sampling this seems like a suitable area to explore.

10.3 Long Term

Long term milestones are the most flexible of all milestones, given that we cannot predict what may lie in the future. The general plan is to continue research within the area of blue noise and rendering. There has been some discussion of collaboration with Thomas Bashford-Rogers, a postdoc from Warwick University. His research focuses on creating accurate computer graphics, in areas including light transport, importance sampling and Markov chain Monte Carlo (MCMC). More information about Thomas can be found on his website.

Major deadlines that should be mentioned include the Transfer and the Viva Voce. The Transfer will happen at the end of my second year, around the 7th of October 2017. This is an important document that I should regularly put time aside for throughout the second year. The third and fourth years following the transfer will be spent working on the proposed research project, preparing relevant data for my thesis. The project should be finished in time for my fourth year so that I can focus on writing up my thesis, ready for presenting at the Viva Voce, which will occur around the end of my fourth year.

10.4 Future Conferences & Courses

In the near future, at this year's BFX festival, SideFX, the creators of Houdini, are giving a master class. I plan to attend this master class as I believe Houdini could be very beneficial for quickly prototyping future research. Its modular design gives it a lot of flexibility and it excels in simulations, for example fluids, which coincidentally I currently use in my research. This would also be an opportunity to network and attend some technical talks at BFX in the process. Other than BFX and the Houdini master class, I have no major plans in terms of conferences or courses to attend. Attending conferences will mainly be dictated by whether I have published at the conference, requiring me to present, or whether the content at a conference could be beneficial to my research.

11 Summary

In this document I have presented an overview of the work that I have completed to date in the EngD program.

I have introduced the field of sampling within Monte Carlo ray tracing and outlined developments in the field to give some context to the research that I have undertaken. I have given an in-depth description of the research I have developed, its current progress and where I plan to take it in the future. Finally, I have given what I believe to be achievable milestones, including placements, conferences and academic deadlines, to work towards for the remainder of the next 3 years of the EngD program.

References

BALZER, M., SCHLOMER, T., AND DEUSSEN, O. 2009. Capacity-constrained point distributions. ACM Transactions on Graphics 28, 3, 1.

BRATLEY, P., AND FOX, B. L. 1988. ALGORITHM 659: implementing Sobol's quasirandom sequence generator. ACM Transactions on Mathematical Software 14, 1, 88–100.

CHIU, K., SHIRLEY, P., AND WANG, C. 1994. Multi-jittered sampling. Graphics Gems IV.

COOK, R. L. 1986. Stochastic sampling in computer graphics. ACM Transactions on Graphics 5, 1, 51–72.

DU, Q., AND EMELIANENKO, M. 2006. Acceleration schemes for computing centroidal Voronoi tessellations. Numerical Linear Algebra with Applications 13, 2-3, 173–192.

DUNBAR, D., AND HUMPHREYS, G. 2006. A spatial data structure for fast Poisson-disk sample generation. ACM Transactions on Graphics 25, 3, 503.

GEORGIEV, I., AND FAJARDO, M. 2016. Blue-noise dithered sampling. ACM SIGGRAPH 2016 Talks.

JAROSZ, W. 2008. Efficient Monte Carlo Methods for Light Transport in Scattering Media. PhD thesis, University of California, San Diego.

JIANG, M., ZHOU, Y., WANG, R., SOUTHERN, R., AND ZHANG, J. J. 2015. Blue noise sampling using an SPH-based method. ACM Transactions on Graphics 34, 6, 1–11.

JONES, T. R. 2006. Efficient Generation of Poisson-Disk Sampling Patterns. Journal of Graphics, GPU, and Game Tools 11, 27–36.

LLOYD, S. P. 1982. Least Squares Quantization in PCM. IEEE Transactions on Information Theory 28, 2, 129–137.

MCCOOL, M., AND FIUME, E. 1992. Hierarchical Poisson Disk Sampling Distributions.

MCKAY, M. D., BECKMAN, R. J., AND CONOVER, W. J. 1979. Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 21, 2, 239–245.

METROPOLIS, N., ROSENBLUTH, A. W., ROSENBLUTH, M. N., TELLER, A. H., AND TELLER, E. 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics 21, 6, 1087–1092.

MITCHELL, D. P. 1987. Generating antialiased images at low sampling densities. ACM SIGGRAPH Computer Graphics 21, 4, 65–72.

MONAGHAN, J. 1994. Simulating free surface flows with SPH.

OKTEN, G., SHAH, M., AND GONCHAROV, Y. 2012. Random and Deterministic Digit Permutations of the Halton Sequence. Springer Proceedings in Mathematics and Statistics 23, January 2009, 609–622.

OWEN, A. B. 1992. Orthogonal Arrays for Computer Experiments, Integration and Visualization.

PHARR, M., AND HUMPHREYS, G. 2010. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann.

SHIRLEY, P. 1991. Discrepancy as a Quality Measure for Sample Distributions, 1–7.

VEACH, E. 1997. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Department of Computer Science, Stanford University.

WEI, L.-Y., AND WANG, R. 2011. Differential domain analysis for non-uniform sampling. ACM Transactions on Graphics 30, 4, 1.

WEI, L.-Y. 2008. Parallel Poisson disk sampling. ACM Transactions on Graphics 27, 3, 1.