Introduction to Computer Graphics
PPT: 8 Illumination models, transparency, shadows, halftones.
Pravalith Sama U#59260651
Photorealism in Computer Graphics:
• Photorealism in computer graphics involves:
• Accurate representations of surface properties, and
• Good physical descriptions of the lighting effects
• Modeling the lighting effects that we see on an object is a complex process, involving principles of both physics and psychology
• Physical illumination models involve material properties, object position relative to light sources and other objects, the features of the light sources, and so on
Illumination and Rendering
• An illumination model in computer graphics
• also called a lighting model or a shading model
• used to calculate the color of an illuminated position on the surface of an object
• an approximation of the physical laws
• A surface-rendering method determines the pixel colors for all projected positions in a scene.
Light Sources
• Point light sources
• Emit radiant energy from a single point
• Specified with a position and the color of the emitted light
• Infinitely distant light sources
• A large light source, such as the sun, that is very far from the scene
• Little variation in its directional effects
• Specified with a color value and a fixed direction for the light rays
Light Sources
• Directional light sources
• Produce a directional beam of light
• Spotlight effects
• Area light sources
Intensity Attenuation
• As light moves away from a light source its intensity diminishes. At any distance d_l away from the light source the intensity diminishes by a factor of 1/d_l^2
• However, using the factor 1/d_l^2 alone does not produce very good results, so we use something different.
Intensity Attenuation
• We use instead an inverse quadratic function of the form:

  f_radatten(d_l) = 1 / (a_0 + a_1 d_l + a_2 d_l^2)

where the coefficients a_0, a_1, and a_2 can be varied to produce optimal results.
• The intensity attenuation is not applied to light sources at infinity because all points in the scene are at a nearly equal distance from a far-off source
Directional Light Sources & Spotlights
• To turn a point light source into a spotlight we simply add a vector direction and an angular limit θ_l, measured from the cone axis
• We can denote V_light as the unit vector in the direction of the light and V_obj as the unit vector from the light source to an object
• The dot product of these two vectors gives the cosine of the angle α between them:

  V_obj · V_light = cos α

• If this angle is inside the light's angular limit (cos α ≥ cos θ_l) then the object is within the spotlight
Angular Intensity Attenuation
• As well as light intensity decreasing as we move away from a light source, it also decreases angularly
• A commonly used function for calculating angular attenuation is:

  f_angatten(φ) = cos^(a_l) φ,   0° ≤ φ ≤ θ_l

where the attenuation exponent a_l is assigned some positive value and the angle φ is measured from the cone axis
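As a concrete sketch of the two attenuation terms and the spotlight test above (my own illustration, not from the slides; all function names are invented):

```python
import math

def radial_attenuation(d, a0=1.0, a1=0.0, a2=0.0):
    """Inverse-quadratic radial attenuation f_radatten(d) = 1/(a0 + a1*d + a2*d^2)."""
    return 1.0 / (a0 + a1 * d + a2 * d * d)

def angular_attenuation(cos_phi, cos_cutoff, al):
    """Angular attenuation cos(phi)**a_l inside the cone, 0 outside it."""
    return cos_phi ** al if cos_phi >= cos_cutoff else 0.0

def spotlight_factor(light_pos, light_dir, obj_pos, cutoff_deg, al):
    """Spotlight test: compare V_obj . V_light against cos(theta_l).
    light_dir is assumed to already be a unit vector."""
    v = [obj_pos[i] - light_pos[i] for i in range(3)]
    d = math.sqrt(sum(c * c for c in v))
    v_obj = [c / d for c in v]                 # unit vector from light toward object
    cos_phi = sum(v_obj[i] * light_dir[i] for i in range(3))
    return angular_attenuation(cos_phi, math.cos(math.radians(cutoff_deg)), al)
```

A point on the cone axis gets the full factor, while a point outside the angular limit gets zero.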
Flaps And Cones
• To restrict a light's effects to a limited area of the scene, Warn implemented flaps and cones. Flaps, modeled after the barn doors found on professional photographic lights, confine the effects of a light in the x, y, and z coordinates. Each light has six flaps, corresponding to user-specified minimum and maximum values in each coordinate, and each flap has a flag indicating whether it is on or off.
• In the examples given below, the x flap and the y flap are used to restrict the light.
• As shown in the second figure, a cone with a generating angle gamma may also be used to restrict a light source.
Illumination model
• The colours that we perceive are determined by
the nature of the light reflected from an object• For example, if white light is shone onto a
green object most wavelengths are absorbed, while green light is reflected from the object
Surface Lighting Effects
• The amount of incident light reflected by a surface depends
on the type of material• Shiny materials reflect more of the incident light and dull
surfaces absorb more of the incident light• For transparent surfaces some of the light is also
transmitted through the material
• An illumination model computes the lighting effects for a surface using its various optical properties: degree of transparency, color reflectance, surface texture
• The reflection (Phong illumination) model describes the way incident light reflects from an opaque surface
• Diffuse, ambient, and specular reflections
• A simple approximation of actual physical models
Diffuse Reflection
• Surfaces that are rough or grainy tend to reflect
light in all directions• This scattered light is called diffuse reflection
Specular Reflection
• In addition to diffuse reflection, some of the reflected light is concentrated into a highlight or bright spot
• This is called specular reflection
Ambient Light
• A surface that is not exposed to direct light may
still be lit up by reflections from other nearby objects – ambient light
• The total reflected light from a surface is the sum of the contributions from light sources and reflected light
Example
Lambert’s Cosine Law
• Diffuse surfaces follow Lambert’s Cosine Law
• Lambert’s Cosine Law - reflected energy from a
small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal.
• Think about surface area and # of rays
Basic Illumination Model
• We will consider a basic illumination model which
gives reasonably good results and is used in most graphics systems
• The important components are:
• Ambient light
• Diffuse reflection
• Specular reflection
• For the most part we will consider only monochromatic light
Ambient Light
• To incorporate background light we simply set a general
brightness level for a scene
• This approximates the global diffuse reflections from various surfaces within the scene
• We will denote this value as I_a
• Multiple reflections from nearby (light-reflecting) objects yield a uniform illumination
• A form of diffuse reflection independent of the viewing direction and the spatial orientation of a surface
• Ambient illumination is constant for an object:

  I = k_a I_a

  I_a: the incident ambient intensity
  k_a: the ambient reflection coefficient, the proportion reflected away from the surface
Diffuse Reflection
• First we assume that surfaces reflect incident light with
equal intensity in all directions• Such surfaces are referred to as ideal diffuse reflectors or
Lambertian reflectors
• A parameter kd is set for each surface that determines the fraction of incident light that is to be scattered as diffuse reflections from that surface
• This parameter is known as the diffuse-reflection coefficient or the diffuse reflectivity
• kd is assigned a value between 0.0 and 1.0
• 0.0: dull surface that absorbs almost all light• 1.0: shiny surface that reflects almost all light
Diffuse Reflection – Ambient Light
• For background lighting effects we can assume
that every surface is fully illuminated by the
scene’s ambient light I_a
• Therefore the ambient contribution to the diffuse reflection is given as:

  I_ambdiff = k_d I_a

• Ambient light alone is very uninteresting so we need some other lights in a scene as well
Diffuse Reflection (cont…)
• When a surface is illuminated by a light source,
the amount of incident light depends on the orientation of the surface relative to the light source direction
• The angle between the incoming light direction and a surface normal is referred to as the angle
of incidence given as θ
• So the amount of incident light on a surface is given as:

  I_l,incident = I_l cos θ

• So we can model the diffuse reflections as:

  I_l,diff = k_d I_l,incident = k_d I_l cos θ

• Assuming we denote the normal for a surface as N and the unit direction vector to the light source as L, then:

  N · L = cos θ

• So:

  I_l,diff = k_d I_l (N · L)   if N · L > 0
  I_l,diff = 0                 if N · L ≤ 0
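A minimal Python sketch of the diffuse term above (an illustration; the function names are my own, and N and L are assumed to be unit vectors):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def diffuse_reflection(kd, Il, N, L):
    """I_l,diff = kd * Il * (N . L), clamped to zero when the light is
    behind the surface (N . L <= 0)."""
    nl = dot(N, L)
    return kd * Il * nl if nl > 0 else 0.0
```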
Combining Ambient And Incident Diffuse Reflections
• To combine the diffuse reflections arising from ambient and incident light, most graphics packages use two separate diffuse-reflection coefficients:
• k_a for ambient light
• k_d for incident light
• The total diffuse reflection equation for a single point source can then be given as:
  I_diff = k_a I_a + k_d I_l (N · L)   if N · L > 0
  I_diff = k_a I_a                     if N · L ≤ 0
Examples
Specular Reflection
• The bright spot that we see on a shiny surface is the result of near-total reflection of the incident light in a concentrated region around the specular-reflection angle
• The specular-reflection angle equals the angle of the incident light
• A perfect mirror reflects light only in the specular-reflection direction
• Other objects exhibit specular reflections over a finite range of viewing positions around the vector R
The Phong Specular Reflection Model
• The Phong specular-reflection model, or simply the Phong model, is an empirical model for calculating the specular-reflection range, developed in 1973 by Bui Tuong Phong
• The Phong model sets the intensity of specular reflection as proportional to the angle between the viewing vector and the specular reflection vector
• So, the specular reflection intensity is proportional to cos^(n_s) Φ
• The angle Φ can be varied between 0° and 90° so that cosΦ varies from 1.0 to 0.0
• The specular-reflection exponent, ns is determined by the type of surface we want to display• Shiny surfaces have a very large value (>100)• Rough surfaces would have a value near 1
The graphs below show the effect of ns on the angular range in which we can expect to see specular reflections
• So the specular-reflection intensity is given as:

  I_l,spec = k_s I_l cos^(n_s) Φ

• Remembering that V · R = cos Φ, we can say:

  I_l,spec = k_s I_l (V · R)^(n_s)   if V · R > 0 and N · L > 0
  I_l,spec = 0.0                     if V · R < 0 or N · L ≤ 0
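A Python sketch of the specular term above (an illustration, not from the slides; it uses the standard reflection vector R = 2(N·L)N − L, which the slides do not spell out, and assumes unit vectors throughout):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    """Specular-reflection direction R = 2 (N . L) N - L."""
    nl = dot(N, L)
    return tuple(2 * nl * N[i] - L[i] for i in range(3))

def phong_specular(ks, Il, N, L, V, ns):
    """I_l,spec = ks * Il * (V . R)^ns; zero when the light is behind the
    surface or the viewer is outside the highlight lobe."""
    if dot(N, L) <= 0:
        return 0.0
    vr = dot(V, reflect(L, N))
    return ks * Il * vr ** ns if vr > 0 else 0.0
```

Viewing straight down the reflection direction gives the full highlight; viewing perpendicular to it gives none.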
Example
Half Vector
• An alternative way of computing Phong lighting is:

  I_spec = k_s I_l (N · H)^(n_s)

• H (halfway vector): halfway between V and L, H = (V + L) / |V + L|
• Gives a fuzzier highlight
Multiple Light Sources
• We can place any number of light sources in a scene
• We compute the diffuse and specular reflections as sums of the contributions from the various sources
  I = I_ambdiff + Σ_{l=1}^{n} ( I_l,diff + I_l,spec )
    = k_a I_a + Σ_{l=1}^{n} I_l [ k_d (N · L) + k_s (V · R)^(n_s) ]
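The full multi-light sum above can be sketched in Python as follows (my own illustration; names are invented, N, V, and the light directions L are assumed to be unit vectors, and each light is an (I_l, L) pair):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def shade(ka, Ia, kd, ks, ns, N, V, lights):
    """Total intensity I = ka*Ia + sum over lights of Il*[kd*(N.L) + ks*(V.R)^ns].
    lights is a list of (Il, L) pairs with L a unit vector toward the light."""
    I = ka * Ia                          # ambient term
    for Il, L in lights:
        nl = dot(N, L)
        if nl <= 0:                      # light behind the surface: no contribution
            continue
        I += Il * kd * nl                # diffuse term
        R = tuple(2 * nl * N[i] - L[i] for i in range(3))
        vr = dot(V, R)
        if vr > 0:
            I += Il * ks * vr ** ns      # specular term
    return I
```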
Transparency
• Having the property of transmitting light without
appreciable scattering so that bodies lying beyond are seen clearly
• Or allowing the passage of a specified form of radiation (as X-rays or ultraviolet light)
• Light travels in straight lines. But when it hits an object it can do one of three things: it can be reflected, absorbed, or transmitted.
• Light can be reflected, transmitted, or absorbed…but why?
• It depends on whether the object is transparent, translucent or opaque
Translucent
• Translucent materials transmit some light but cause it to
spread in all directions• You can see light through these materials but not clearly• Lampshades, frosted glass, sheer fabrics, notebook paper
Transparent
• The windows on a school bus,
• A clear empty glass,
• A clear window pane,
• The lenses of some eyeglasses,
• Clear plastic wrap,
• The glass on a clock,
• A hand lens,
• Colored glass…
• ALL of these are transparent. Yes, we can see through them because light passes through each of them.
OPAQUE
• Opaque materials do not allow any light to pass through them
• Construction paper, Trees, Animals
Snell’s law
• Light bends according to the physics principle of least time: light travels from point A to B by the fastest path.
• When passing from a material of index n_1 to one of index n_2, Snell’s law gives the angle of refraction:

  n_1 sin θ_1 = n_2 sin θ_2

where θ_1 and θ_2 are the angles from the perpendicular. Light travels farther in the faster material; if the indices are equal, the light doesn’t bend.
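Snell’s law is easy to apply numerically. The sketch below (my own illustration; the function name is invented) solves for θ_2 and also handles the case where no refracted ray exists (total internal reflection):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2), angles measured from
    the perpendicular (surface normal).  Returns the refraction angle in
    degrees, or None on total internal reflection."""
    s = (n1 / n2) * math.sin(math.radians(theta1_deg))
    if abs(s) > 1.0:
        return None                    # no real solution: total internal reflection
    return math.degrees(math.asin(s))
```

For example, light entering glass (n ≈ 1.5) from air at 30° refracts to about 19.5°, bending toward the normal in the slower material.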
What are Shadows?
• Partial darkness or obscurity within a part of space from which rays from a source of light are cut off by an interposed opaque body.
• Regions not completely visible from a light source
• Assumptions:
• Single light source
• Finite-area light sources
• Opaque objects
• Two parts:
• Umbra: totally blocked from light
• Penumbra: partially obscured
Basic Types of Light & Shadows
Ray tracing — for each pixel:
• Trace a ray from the eye through the pixel center
• Compute the closest object intersection point P along the ray
• Calculate Shadow_i for the point by performing a shadow-feeler intersection test
• Calculate the illumination at point P
Image-space approach:
• Render the scene into the depth buffer (no need to compute color)
• For each pixel, determine if it is in shadow:
• “unproject” the screen-space pixel point to transform it into eye space
• Perform the shadow-feeler test with the light in eye space to compute Shadow_i
• Store Shadow_i for each pixel
• Light the scene using the per-pixel Shadow_i values
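The shadow-feeler test above can be sketched with sphere occluders (my own illustration, not from the slides; the sphere-intersection routine and all names are invented). The feeler is a ray from the surface point toward the light, blocked if any occluder intersects it before the light:

```python
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """Does the ray origin + t*direction (direction unit length) hit the
    sphere for some t in (eps, max_t)?  Solves |o + t d - c|^2 = r^2."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c             # a == 1 because direction is unit length
    if disc < 0.0:
        return False
    sq = math.sqrt(disc)
    return any(1e-6 < t < max_t for t in ((-b - sq) / 2.0, (-b + sq) / 2.0))

def in_shadow(point, light_pos, spheres):
    """Shadow feeler: cast a ray from the surface point toward the light and
    report whether any occluding sphere blocks it before the light."""
    d = [light_pos[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    u = [x / dist for x in d]          # unit direction toward the light
    return any(ray_hits_sphere(point, u, c, r, dist) for c, r in spheres)
```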
Shadow Volumes
• Volume of space in shadow of a single occluder with respect to a point light source
• OR• Volume of space swept out by extruding an occluding
polygon away from a point light source along the projector rays originating at the point light and passing through the vertices of the polygon
• Parity test to see if a point P on a visible surface is in shadow (assuming the eye is outside every shadow volume):
• Initialize parity to 0
• Shoot a ray from the eye to point P
• Each time a shadow-volume boundary is crossed, invert the parity
• If parity = 1 (an odd number of crossings), P is in shadow
• If parity = 0, P is lit
Light intensities
• The term intensity describes the rate at which light spreads over a surface of a given area some distance from a source. The intensity varies with the distance from the source and the power of the source. Power is a property of the light source that describes the rate at which light energy is emitted by the source.
• How light travels from a source through space is represented in the figure: light energy emitted by the source (S) travels outward in all directions, and the rays indicate the straight-line paths of photons through space.
Gamma correction
• The eye is sensitive to ratios of intensity rather than absolute intensity levels. For example, the difference between a 50 W and a 100 W bulb (2x) appears larger than the difference between a 100 W and a 150 W bulb (1.5x)
• Logarithmically spaced light-energy values appear to have equal brightness steps
• Most displays output light intensity as L = k N^γ, where N is the number of electrons in the beam and γ is typically in the range 2.2 to 2.5
• To account for this, computed intensity values are inverse-mapped using a gamma correction: I = (L/k)^(1/γ)
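The inverse mapping above is a one-liner in practice (a sketch with invented names; in real systems the correction is usually baked into a lookup table):

```python
def gamma_correct(L, gamma=2.2, k=1.0):
    """Inverse-map a desired light output L (display response L = k * N**gamma)
    to the framebuffer value N = (L/k)**(1/gamma)."""
    return (L / k) ** (1.0 / gamma)
```

For instance, with γ = 2.0 a desired output of 0.25 requires a framebuffer value of 0.5, since the display squares its input.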
Intensity levels for high-quality viewing:
• Visible light: electromagnetic energy in the 400-700 nm range
• Hue: the dominant spectral component (red, green, yellow)
• Saturation: excitation purity - determines how far the color is from gray - red is very saturated, pink is less so
• Lightness: perceived achromatic intensity (luminance) of a reflecting object
• Brightness: luminance of a self-luminous object
• The eye has three types of cones, with peaks at blue, green, and red
• The cones’ response to blue light is much weaker than to the other spectra
• Rods are used to see in low-light conditions and are sensitive at very low light levels.
Halftone techniques:
• The halftone process is a data-conversion process
• Digital data conversion isn’t exclusively the domain of image processing, but it is the soul of a halftone. A word of text would not be the same word if the 7-bit ASCII notation of that word changed, but changing the numerical value of a color may increase its accuracy. The halftone process is a method of changing data to ensure that the data accurately represents the image across different devices. The color red on one device would not be red when reproduced on a different device if the data describing red remained the same.
1. The halftone image contains a variety of densities that simulate continuous-tone imagery through the use of a colorant deposited on media.
2. Halftone gets its name from an early process of breaking black dots in half, and into half again, through the use of a screen that blocks out half the light.
• Halftone can be defined as a subtractive color process that is implemented on a printer. It is a subtractive process because light is reflected at maximum luminosity from a white sheet of paper; the addition of ink to the paper decreases the amount of reflected light.
Resolution of halftone screens
• The resolution of a halftone screen is measured in lines per inch (lpi). This is the number of lines of dots in one inch, measured parallel with the screen's angle. Known as the screen ruling, the resolution of a screen is written either with the suffix lpi or a hash mark; for example, "150 lpi" or "150#".
• The higher the pixel resolution of a source file, the greater the detail that can be reproduced. However, such increase also requires a corresponding increase in screen ruling or the output will suffer from posterization. Therefore file resolution is matched to the output resolution.
Multiple screens and color halftoning
• When different screens are combined, a number of distracting visual effects can occur, including the edges being overly emphasized, as well as a moiré pattern. This problem can be reduced by rotating the screens in relation to each other. This screen angle is another common measurement used in printing, measured in degrees clockwise from a line running to the left (9 o'clock is zero degrees).
• Halftoning is also commonly used for printing color pictures. The general idea is the same, by varying the density of the four secondary printing colors, cyan, magenta, yellow and black any particular shade can be reproduced.
• In this case there is an additional problem that can occur. In the simple case, one could create a halftone using the same techniques used for printing shades of grey, but in this case the different printing colors have to remain physically close to each other to fool the eye into thinking they are a single color. To do this the industry has standardized on a set of known angles, which result in the dots forming into small circles or rosettes.
• The dots cannot easily be seen by the naked eye, but can be discerned through a microscope or a magnifying glass.
Digital halftoning
• Digital halftoning has been replacing photographic halftoning since the 1970s when "electronic dot generators" were developed for the film recorder units linked to color drum scanners made by companies such as Crosfield Electronics, Hell and Linotype-Paul.
• In the 1980s, halftoning became available in the new generation of imagesetter film and paper recorders that had been developed from earlier "laser typesetters". Unlike pure scanners or pure typesetters, imagesetters could generate all the elements in a page including type, photographs and other graphic objects. Early examples were the widely used Linotype Linotronic 300 and 100 introduced in 1984, which were also the first to offer PostScript RIPs in 1985.
• All halftoning uses a high frequency/low frequency dichotomy. In photographic halftoning, the low frequency attribute is a local area of the output image designated a halftone cell. Each equal-sized cell relates to a corresponding area (size and location) of the continuous-tone input image. Within each cell, the high frequency attribute is a centered variable-sized halftone dot composed of ink or toner. The ratio of the inked area to the non-inked area of the output cell corresponds to the luminance or graylevel of the input cell. From a suitable distance, the human eye averages both the high frequency apparent gray level approximated by the ratio within the cell and the low frequency apparent changes in gray level between adjacent equally spaced cells and centered dots.
• Digital halftoning uses a raster image or bitmap within which each monochrome picture element or pixel may be on or off, ink or no ink. Consequently, to emulate the photographic halftone cell, the digital halftone cell must contain groups of monochrome pixels within the same-sized cell area. The fixed location and size of these monochrome pixels compromises the high frequency/low frequency dichotomy of the photographic halftone method. Clustered multi-pixel dots cannot "grow" incrementally but in jumps of one whole pixel. In addition, the placement of that pixel is slightly off-center. To minimize this compromise, the digital halftone monochrome pixels must be quite small, numbering from 600 to 2,540, or more, pixels per inch. However, digital image processing has also enabled more sophisticated dithering algorithms to decide which pixels to turn black or white, some of which yield better results than digital halftoning. Digital halftoning based on some modern image processing tools such as nonlinear diffusion and stochastic flipping has also been proposed recently.
Halftone Dots• The word “dot” was first used in the graphic arts
to refer to the tiny pattern of dots that can simulate a continuous tone image using solid ink. Developed in the mid- to late-1800s, this technique – and the use of the term “dot” – predated the computer graphics revolution by more than a century.
Dot patterns • As you can see in the image below, a photograph can
create a smooth gradation of values from black to white and all shades of gray in between. This is not the case, however, with most printing methods, including offset lithography and desktop digital printing. These technologies can only print areas of solid ink. The ink is never diluted, nor is white ink added to the mix to make shades of gray. The only way to reproduce shades of gray in print is to break the image up into tiny dots that appear to blend into a continuous tone when viewed with the naked eye. Such an image, composed of a pattern of tiny dots, is called a halftone. The dots themselves are known as halftone dots.
Example
• The process begins with a film negative of the original image. Light passes through the negative and then through a screen, usually a plate of glass with a grid of horizontal and vertical lines etched onto its surface. After passing through the screen, the light exposes another piece of film. The screen functions as a diffraction grating, breaking the light into tiny discrete rays, which create the pattern of dots. The result is a duplicate film negative with a pattern of solid dots instead of continuous shades of gray. The duplicate negative is then used to create a plate for the offset printing process.
Lines Of Dots• The halftone process introduces another bit of printing
terminology that often gets confused with the others. If you look at Figure 1, you’ll see that halftone dots are arranged in orderly rows or lines, usually oriented at an angle to the paper. In the conventional halftone process, the spacing of these lines of dots remains constant throughout the image; only the size of the dots varies to create different shades of gray.
• The spacing of lines of halftone dots is known as the screen frequency or line screen and is expressed in lines-per-inch (LPI), i.e., the number of lines (rows) of dots in an inch. Although this is a form of resolution, it is quite different from the resolution of a digital image, which will be discussed below. Remember that this halftone process predates digital imaging by a hundred years.
• Although the line screen remains constant throughout a single image (and usually for an entire printed piece) it is possible to use different line screens for different printed pieces
Electronic Halftones
• Imagine that the printing surface (paper, film, or a plate) is
divided into a grid of tiny spaces. Each of these little spaces corresponds to the smallest possible mark that the laser device can create. If the laser strikes a specific space, it is turned “on” to create a black spot or printer element. In order to generate a halftone pattern, i.e., halftone dots arranged in orderly lines, the printer divides its pattern of spots into a grid of vertical columns and horizontal rows. At the intersection of each row and column is a cluster of printer spots known as a halftone cell. The printer can turn on or off the spots within each cell to create halftone dots of varying sizes. If only a few spots are turned on within each cell, it produces a small halftone dot, which gives the appearance of a light gray. As more spots are turned on within each cell, the halftone dots become larger, producing darker grays.
Example
• The spacing of these tiny spots or elements is the printer’s resolution. I prefer the term spots-per-inch (SPI), referring to the number of tiny spots or printer elements that the device can lay down in a linear inch. Unfortunately, most printer manufacturers use the more familiar expression dots-per-inch or DPI, a trend that began with the first dot matrix printers in the 1970s. This has led to significant confusion between printer spots and halftone dots. In fact, many graphics professionals have reversed the terminology, using the word dot (and DPI) to refer to the tiny marks or elements made by the printer and the word spot for what were traditionally called halftone dots. The situation is further complicated by the fact that SPI is also used as a measure of the resolution of a digital scanner.
Dithering techniques
• Dithering is the attempt by a computer program to
approximate a color from a mixture of other colors when the required color is not available. For example, dithering occurs when a color is specified for a Web page that a browser on a particular operating system can't support. The browser will then attempt to replace the requested color with an approximation composed of two or more other colors it can produce. The result may or may not be acceptable to the graphic designer. It may also appear somewhat grainy since it's composed of different pixel intensities rather than a single intensity over the colored space
Halftone Approximation
• Not all devices can display all colors
• e.g. GIF is only 256 colors
• Idea: with few available shades, produce the illusion of many colors/shades
• Technique: Halftone Approximation• Example: How do we do greyscale with black-and-
white monitors?
Halftone approximation
• Technique: Dithering
• Idea: create meta-pixels, grouping base pixels into 3x3s or 4x4s
• Example: a 2x2 dither matrix for grayscale
Threshold Dithering
• For every pixel: if the intensity < 0.5, replace with
black, else replace with white• 0.5 is the threshold• This is the naïve version of the algorithm
• To keep the overall image brightness the same, you should:• Compute the average intensity over the image• Use a threshold that gives that average• For example, if the average intensity is 0.6, use a
threshold that is higher than 40% of the pixels, and lower than the remaining 60%
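Both variants can be sketched in a few lines of Python (my own illustration; the function name and parameters are invented). The brightness-preserving version sorts the pixel values and picks the threshold so that the fraction of white output pixels matches the average input intensity:

```python
def threshold_dither(image, preserve_brightness=True):
    """Threshold dithering on a 2D list of floats in [0, 1].  The naive
    version thresholds at 0.5; the brightness-preserving version picks a
    threshold so the fraction of white pixels matches the average intensity."""
    flat = sorted(p for row in image for p in row)
    if preserve_brightness:
        avg = sum(flat) / len(flat)
        # index such that roughly (1 - avg) of the pixels fall below the threshold
        k = min(max(int(round((1.0 - avg) * len(flat))), 0), len(flat) - 1)
        t = flat[k]
    else:
        t = 0.5
    return [[1 if p >= t else 0 for p in row] for row in image]
```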
Naïve Threshold Algorithm
Brightness Preserving Algorithm
Ordered Dithering
• Break the image into small blocks
• Define a threshold matrix
• Use a different threshold for each pixel of the block
• Compare each pixel to its own threshold
• The thresholds can be clustered, which looks• like newsprint• The thresholds can be “random” which looks better
Clustered Dithering
  .6875  .3125  .5625  .1875
  .1250  .9375  .8125  .5000
  .4375  .8750 1.0000  .0625
  .2500  .6250  .3750  .7500
Dot Dispersion
  .3125  .5625  .5000  .7500
  .9375  .0625  .8750  .2500
  .4375  .6875  .3750  .6250
  .8125  .1875 1.0000  .1250
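Ordered dithering with a threshold matrix like the clustered one above is a simple per-pixel comparison against the tiled matrix (a sketch with invented names; the matrix values are the sixteenths k/16 from the clustered example):

```python
# 4x4 clustered-dot threshold matrix (values are k/16 for k = 1..16)
CLUSTERED = [
    [0.6875, 0.3125, 0.5625, 0.1875],
    [0.1250, 0.9375, 0.8125, 0.5000],
    [0.4375, 0.8750, 1.0000, 0.0625],
    [0.2500, 0.6250, 0.3750, 0.7500],
]

def ordered_dither(image, matrix=CLUSTERED):
    """Compare each pixel (floats in [0, 1]) against its threshold from the
    tiled matrix; pixels at or above their threshold turn on."""
    n = len(matrix)
    return [[1 if image[y][x] >= matrix[y % n][x % n] else 0
             for x in range(len(image[y]))]
            for y in range(len(image))]
```

A uniform 50% gray block turns on exactly half the pixels, clustered around the low-threshold entries of the matrix.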
Pattern Dithering
• Compute the intensity of each sub-block and index
a pattern
• NOT the same as before• Here, each sub-block has one of a fixed number of patterns
– pixel is determined only by average intensity of sub-block• In ordered dithering, each pixel is checked against the
dithering matrix before being turned on
• Used when display resolution is higher than image resolution – not uncommon with printers• Use 3x3 output for each input pixel
Floyd-Steinberg Dithering
• Start at one corner and work through image pixel by
pixel• Usually scan top to bottom in a zig-zag
• Threshold each pixel• Compute the error at that pixel: The difference
between what should be there and what you did put there• If you made the pixel 0, e = original; if you made it 1, e =
original-1• Propagate error to neighbors by adding some
proportion of the error to each unprocessed neighbor• A mask tells you how to distribute the error
• Easiest to work with floating point image• Convert all pixels to 0-1 floating point
Example
Color Dithering
• All the same techniques can be applied, with
some modification• Example is Floyd-Steinberg:
• Uniform color table• Error is difference from nearest color in the color table• Error propagation same as that for greyscale
• Each color channel treated independently
Example
• Issues with Dithering
• Image is now 4x in size
• How do we keep the image the same size?
• Technique: Error Diffusion
• Idea: when approximating pixel intensity, keep track of the error and try to make up for errors with later pixels
Error diffusion
• Error diffusion is a type of halftoning in which
the quantization residual is distributed to neighboring pixels that have not yet been processed. Its main use is to convert a multi-level image into a binary image, though it has other applications.
• Unlike many other halftoning methods, error diffusion is classified as an area operation, because what the algorithm does at one location influences what happens at other locations. This means buffering is required, and complicates parallel processing. Point operations, such as ordered dither, do not have these complications.
• Error diffusion has the tendency to enhance edges in an image. This can make text in images more readable than in other halftoning techniques.
Algorithm description
• Error diffusion takes a monochrome or color image and reduces the number of quantization levels. A popular application of error diffusion involves reducing the number of quantization states to just two per channel. This makes the image suitable for printing on binary printers such as black and white laser printers.
• In the discussion which follows, it is assumed that the number of quantization states in the error diffused image is two per channel, unless otherwise stated.
One-dimensional error diffusion
• The simplest form of the algorithm scans the image one row at a time and one pixel at a time. The current pixel is compared to a half-gray value. If it is above the value a white pixel is generated in the resulting image. If the pixel is below the half way brightness, a black pixel is generated. The generated pixel is either full bright, or full black, so there is an error in the image. The error is then added to the next pixel in the image and the process repeats.
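The one-row algorithm above fits in a few lines of Python (a sketch; the function name is invented):

```python
def error_diffusion_1d(row):
    """One-dimensional error diffusion: threshold each pixel at half gray
    and carry the quantization residual to the next pixel."""
    out, err = [], 0.0
    for p in row:
        v = p + err
        q = 1.0 if v >= 0.5 else 0.0   # generated pixel: full bright or full black
        out.append(q)
        err = v - q                    # residual added to the next pixel
    return out
```

A row of uniform 50% gray comes out as an alternating black/white pattern, preserving the average intensity.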
Two-dimensional error diffusion
• One dimensional error diffusion tends to have severe image artifacts that show up as distinct vertical lines. Two dimensional error diffusion reduces the visual artifacts. The simplest algorithm is exactly like one dimensional error diffusion, except half the error is added to the next pixel, and one quarter of the error is added to the pixel on the next line below, and one quarter of the error is added to the pixel on the next line below and one pixel forward.
• The kernel is:

    #    1/2
   1/4   1/4

  where "#" denotes the pixel currently being processed, 1/2 of the error goes to the next pixel, 1/4 to the pixel directly below, and 1/4 to the pixel below and one forward.
• Further refinement can be had by dispersing the error farther away from the current pixel, as in larger error-diffusion matrices such as the Floyd-Steinberg kernel. The sample image at the start of this article is an example of two-dimensional error diffusion.
Error diffusion with several gray levels
• Error Diffusion may also be used to produce output images with more than two levels (per channel, in the case of color images). This has application in displays and printers which can produce 4, 8, or 16 levels in each image plane, such as electrostatic printers and displays in compact mobile telephones. Rather than use a single threshold to produce binary output, the closest permitted level is determined, and the error.
Example
• Consider a 2D image/primitive
• Goals: spread errors out in x and y; nearby pixels get more error than far-away pixels
• Floyd-Steinberg error-distribution method:
• Let the current pixel be (x, y) and let S[x][y] be its shade
• To draw it, we round the pixel to the nearest shade K and set err = S[x][y] - K
• Then, diffuse the error throughout the surrounding pixels:
  S[x + 1][y]     += (7/16) err
  S[x - 1][y - 1] += (3/16) err
  S[x][y - 1]     += (5/16) err
  S[x + 1][y - 1] += (1/16) err
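The full 2D algorithm can be sketched in Python (my own illustration; the slides write the next scanline as y - 1, while the sketch below stores rows top-to-bottom so the next scanline is y + 1 — the weights are the same):

```python
def floyd_steinberg(image):
    """Floyd-Steinberg dithering on a 2D list of floats in [0, 1]; rows are
    processed top to bottom, so the 'next line' here is y + 1."""
    h, w = len(image), len(image[0])
    s = [row[:] for row in image]            # working copy; errors accumulate here
    for y in range(h):
        for x in range(w):
            old = s[y][x]
            new = 1.0 if old >= 0.5 else 0.0 # round to the nearest shade
            s[y][x] = new
            err = old - new
            if x + 1 < w:
                s[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    s[y + 1][x - 1] += err * 3 / 16
                s[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    s[y + 1][x + 1] += err * 1 / 16
    return s
```

On a uniform 50% gray block the output is half black and half white, so the average intensity is preserved exactly.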