IMAGE RESTORATION AND REALISM
MILLIONS OF IMAGES SEMINAR
YUVAL RADO


2

IMAGE REALISM

• What are CG (computer-generated) images?

• How can we tell them apart from real photographs?

3

TODAY’S TOPICS

• Super-resolution

• What is it?

• How is it done?

• The algorithm.

• Results.

• CG2Real

• The idea behind it.

• Cosegmentation.

• Color & texture transfer.

• Results.

4

SUPER-RESOLUTION

• Methods for achieving high-resolution enlargements of pixel-based images.

• The goal is to estimate missing high-resolution detail that is not present in the original image and cannot be recovered by simple sharpening.

5

HOW IS IT DONE?

• Uses a learning-based approach for enlarging images.

• From a training set, the algorithm learns the fine details that correspond to different image regions seen at low resolution, and then uses those learned relationships to predict fine details in other images.

6

TRAINING SET GENERATION

[Diagram: a high-resolution training image is downsampled to a low-resolution image, which is enlarged again via bilinear interpolation; a high-pass filter and contrast normalization are applied to both the high-resolution image and the low-resolution enlargement, giving aligned low-/high-resolution training pairs (see the sketch below).]
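A minimal sketch of this training-pair generation, assuming grayscale images and Pillow/SciPy; the function name, filter sigmas, and patch size are illustrative choices, not the paper's exact settings:

```python
# Hedged sketch of training-pair generation; parameter values are illustrative.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def make_training_pairs(path, zoom=2, patch=5, eps=1e-3):
    """Return (low-res feature patches, high-res detail patches) for one image."""
    hi = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    h, w = (hi.shape[0] // zoom) * zoom, (hi.shape[1] // zoom) * zoom
    hi = hi[:h, :w]

    # Low-resolution version, enlarged back to full size via bilinear interpolation.
    lo_img = Image.fromarray((hi * 255).astype(np.uint8))
    lo = lo_img.resize((w // zoom, h // zoom), Image.BILINEAR).resize((w, h), Image.BILINEAR)
    lo = np.asarray(lo, dtype=np.float32) / 255.0

    # High-pass filtering: isolate the detail the enlargement is missing.
    hi_detail = hi - gaussian_filter(hi, 1.0)   # what we want to predict
    lo_band = lo - gaussian_filter(lo, 1.0)     # what we can observe

    # Contrast normalization so patches from light and dark regions are comparable.
    norm = gaussian_filter(np.abs(lo_band), 2.0) + eps
    hi_detail, lo_band = hi_detail / norm, lo_band / norm

    # Cut both into co-located patches: the (input, output) training pairs.
    xs, ys = [], []
    for i in range(0, h - patch, patch):
        for j in range(0, w - patch, patch):
            xs.append(lo_band[i:i + patch, j:j + patch].ravel())
            ys.append(hi_detail[i:i + patch, j:j + patch].ravel())
    return np.array(xs), np.array(ys)
```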

7

LOW RESOLUTION – HIGH RESOLUTION PROBLEM

[Diagram: an input patch, the closest image patches from the database, and the corresponding high-resolution patches from the database.]

8

HOW CAN WE SOLVE THIS?

• Markov Network

P(X \mid Y) = \frac{1}{Z} \prod_{(i,j)} \Psi_{ij}(x_i, x_j) \prod_i \Phi_i(x_i, y_i), \qquad \Psi_{ij}(x_i, x_j) = \exp\left(-\frac{d_{ij}(x_i, x_j)}{2\sigma^2}\right)

Problem: exact inference takes a very long time to compute and is not practical.

9

THE BELIEF PROPAGATION

• Does not give the exact result of inference on the Markov network, but is much faster!

• Still gives good results.

• Only three or four iterations of the algorithm are enough to get the results we need.

10

THE BELIEF PROPAGATION – CONT.

• Let m_ij be the message from node i to node j.

• The message is a vector whose dimensionality equals the number of candidate high-resolution patches we consider at node j.

• m_ij(x_j^l) is the component of the message that corresponds to high-resolution candidate patch x_j^l.

• The update rule (sketched in code below) is:

m_{ij}(x_j) \leftarrow \max_{x_i} \Psi_{ij}(x_i, x_j)\, \Phi_i(x_i, y_i) \prod_{k \in N(i) \setminus j} m_{ki}(x_i)

• The approximate marginal probability for each high-resolution candidate patch at node j is:

b_j(x_j) \propto \Phi_j(x_j, y_j) \prod_{i \in N(j)} m_{ij}(x_j)
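The update can be written compactly in code. The following is a hedged sketch assuming each node stores a small list of candidate high-resolution patches as NumPy arrays; all names are mine, and whole patches are compared where the paper compares only their overlap regions:

```python
# Hedged sketch of one max-product message update for the network above.
import numpy as np

def psi(xi, xj, sigma=1.0):
    """Pairwise compatibility Psi_ij between two candidate high-res patches."""
    return np.exp(-np.sum((xi - xj) ** 2) / (2 * sigma ** 2))

def phi(xi, yi, sigma=1.0):
    """Local evidence Phi_i: how well candidate xi explains observed low-res patch yi."""
    return np.exp(-np.sum((xi - yi) ** 2) / (2 * sigma ** 2))

def update_message(cands_i, cands_j, y_i, incoming):
    """Message m_ij: one entry per candidate patch at node j.
    `incoming` holds the messages m_ki arriving at node i from neighbours k != j."""
    m = np.zeros(len(cands_j))
    for l, xj in enumerate(cands_j):
        vals = []
        for k, xi in enumerate(cands_i):
            v = psi(xi, xj) * phi(xi, y_i)
            for msg in incoming:
                v *= msg[k]
            vals.append(v)
        m[l] = max(vals)                       # max-product update
    return m / (m.sum() + 1e-12)               # normalise for numerical stability

def belief(cands_j, y_j, messages):
    """Approximate marginal for each candidate at node j (after a few iterations)."""
    b = np.array([phi(xj, y_j) for xj in cands_j])
    for msg in messages:
        b *= msg
    return b / b.sum()
```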

11

FASTEST METHOD – ONE PASS ALGORITHM

• Based on belief propagation, there is an even faster algorithm that computes compatibilities only with the neighboring high-resolution patches that have already been selected, typically the patches above and to the left, processing the image in raster-scan order (see the sketch after this list).

• One-pass super-resolution generates the missing high-frequency content of a zoomed image as a sequence of predictions from local image information.
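A rough sketch of the raster-scan selection, assuming the patch database built in the training step; the brute-force nearest-neighbour search and the compatibility weight `alpha` are simplifications of mine:

```python
# Rough sketch of one-pass patch selection in raster-scan order.
import numpy as np

def one_pass_super_resolve(lo_feats, db_lo, db_hi, n_rows, n_cols, alpha=0.5):
    """lo_feats: (n_rows * n_cols, d) low-res feature patches in raster order.
    db_lo, db_hi: training pairs (low-res features, high-res detail patches).
    Returns the chosen high-res detail patch for every position."""
    k = int(np.sqrt(db_hi.shape[1]))                 # high-res patches are k x k
    chosen = np.zeros((n_rows * n_cols, db_hi.shape[1]))
    for r in range(n_rows):
        for c in range(n_cols):
            idx = r * n_cols + c
            # Distance to every training example in low-res feature space.
            cost = np.sum((db_lo - lo_feats[idx]) ** 2, axis=1)
            # Compatibility with the neighbours already chosen (left and above);
            # here whole patches are compared rather than just their overlap.
            if c > 0:
                cost += alpha * np.sum((db_hi - chosen[idx - 1]) ** 2, axis=1)
            if r > 0:
                cost += alpha * np.sum((db_hi - chosen[idx - n_cols]) ** 2, axis=1)
            chosen[idx] = db_hi[np.argmin(cost)]
    return chosen.reshape(n_rows, n_cols, k, k)
```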

12

ONE PASS ALGORITHM – DIAGRAM

13

RESULTS

• The training set pictures:

14

RESULTS – CONT.

Original image | Cubic spline | One-pass algorithm

15

RESULTS – CONT.

Cubic spline | Original image | One-pass algorithm

16

RESULTS – CONT.

17

RESULTS – TRAINING SET DEPENDENCY

Training set example | Input image | One-pass algorithm

18

RESULTS – FAILURE EXAMPLE

Original image | Cubic spline | One-pass algorithm

19

CG2REAL

• Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.

Computer generated | CG2Real

20

THE IDEA BEHIND IT

• Use a computer-generated (CG) image as the input.

• Search a collection of real photographs for similar images.

• Mark the corresponding regions in the CG image.

• Transfer color and texture from the real images to the CG image.

• Smooth the edges.

21

THE PROCESS

22

FINDING SIMILAR IMAGES

• The images are organized in a pyramid.

• The key of the pyramid is a combination of two features:

• The SIFT features of each image.

• The color associated with each feature.

23

FINDING SIFT FEATURES

1. Scale-space extrema detection (see the sketch after this list):

a) Construct the scale space.

b) Take differences of Gaussians (DoG).

c) Locate DoG extrema.

2. Keypoint localization.

3. Orientation assignment.

4. Build keypoint descriptors.
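A small sketch of steps 1a-1c (Gaussian scale space, difference of Gaussians, and DoG extrema), assuming a grayscale float image; the parameters are illustrative, and real pipelines would normally use an existing SIFT implementation such as OpenCV's cv2.SIFT_create():

```python
# Small sketch of the DoG construction and extrema detection.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, n_scales=5, sigma0=1.6, k=2 ** 0.5):
    """Difference-of-Gaussians images for one octave of the scale space."""
    blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(n_scales)]
    return [blurred[i + 1] - blurred[i] for i in range(n_scales - 1)]

def dog_extrema(dogs, thresh=0.02):
    """Pixels that are extrema over both space and scale (candidate keypoints)."""
    stack = np.stack(dogs)                                  # (scales, H, W)
    pts = []
    for s in range(1, stack.shape[0] - 1):
        for y in range(1, stack.shape[1] - 1):
            for x in range(1, stack.shape[2] - 1):
                v = stack[s, y, x]
                nb = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                if abs(v) > thresh and (v == nb.max() or v == nb.min()):
                    pts.append((s, y, x))
    return pts
```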

24

COSEGMENTATION

• Segmenting the images from the database and the input CG image.

• Matching similar regions in all images.

• All in one step!

25

COSEGMENTATION – CONT.

• For each pixel p we define a feature vector that is the concatenation of:

• The pixel color in L*a*b* space.

• The normalized x and y coordinates of p.

• A binary indicator vector whose k-th entry is 1 when pixel p belongs to image k and 0 otherwise.

26

COSEGMENTATION – CONT.

• The distance between the feature vectors at pixel p in image i and pixel q in image j is a weighted Euclidean distance over these three components (see the sketch after this list):

• The first term is the L*a*b* color distance between pixel p in image i and pixel q in image j.

• The second term is the spatial distance between pixels p and q.

• The delta function encodes the distance between the binary indicator components of the feature vectors (it is zero when both pixels come from the same image).
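A hedged sketch of the feature vector and distance just described; the spatial and indicator weights w_s and w_b are illustrative, not the values used in the paper:

```python
# Hedged sketch of the per-pixel feature vector and the weighted distance.
import numpy as np
import cv2

def pixel_features(img_bgr, image_index, n_images, w_s=0.5, w_b=1.0):
    """One feature per pixel: L*a*b* colour, weighted normalized (x, y), and a
    weighted one-hot indicator of which image the pixel belongs to."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w = lab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    coords = np.dstack([xs / w, ys / h]) * w_s
    onehot = np.zeros((h, w, n_images), dtype=np.float32)
    onehot[:, :, image_index] = w_b
    return np.dstack([lab, coords, onehot]).reshape(-1, 3 + 2 + n_images)

def feature_distance(f_p, f_q):
    """Weighted Euclidean distance: colour term + spatial term + delta term
    (the delta term is zero when both pixels come from the same image)."""
    return np.linalg.norm(f_p - f_q)
```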

27

COSEGMENTATION – RESULTS

28

TEXTURE TRANSFER

• Done locally, guided by the results of the cosegmentation.

• Relies on the similar photographs retrieved from the database to provide a set of textures that help upgrade the realism of the CG image.

• Limitation: the same region cannot be reused many times, because this often leads to visual artifacts in the form of repeated regions.

• The idea: we align multiple shifted copies of each real image to the different regions of the CG image and transfer textures using graph cuts.

29

TEXTURE TRANSFER – CONT.

• For each cosegmented region of the picture, we use cross-correlation of edge maps (gradient magnitudes) to find the real image, and the optimal shift, that best matches the CG image for that particular region.

• We repeat the process in a greedy manner until all regions in the CG image are completely covered.

• To reduce repeated textures, we only allow a fixed number of shifted copies of each image to be used for texture transfer.

• Each pixel now has a set of candidate labels, one per aligned image copy.

30

TEXTURE TRANSFER – CONT.

• For each pixel we use a label assignment function L to choose which label (i.e., which aligned real-image copy) is applied at that pixel.

• The label assignment is chosen by minimizing the cost function C(L) defined on the next slides.

31

TEXTURE TRANSFER – CONT.

• is a data penalty term that measures distance between a patch around pixel in the CG image and a real image.

• is the average distance in L*a*b* space between the patch centered around pixel in the CG image and the patch centered around pixel in the image associated with label .

• is the average distance between the magnitudes of the gradients of the patches.

controls the error of transferring textures between different cosegmentation regions.

• and are normalized weights.

C(L) = \sum_p C_d(p, L(p)) + \sum_{p,q} C_i(p, q, L(p), L(q))

32

TEXTURE TRANSFER – CONT.

• C_i is an interaction term between two neighboring pixels p and q and their labels L(p) and L(q).

• M(p) is close to zero near strong edges in the CG image and close to one in smooth regions, so texture seams are preferred along edges where they are least noticeable.

• The weight on the interaction term affects the amount of texture switching that can occur: for low values the algorithm prefers small patches of texture from many images, and for high values it chooses large blocks of texture from the same image (a sketch of evaluating this cost follows the equation below).

C(L) = \sum_p C_d(p, L(p)) + \sum_{p,q} C_i(p, q, L(p), L(q))
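As an illustration, the sketch below evaluates C(L) for a given label map, with a simplified per-pixel data term standing in for the patch-based terms described above; a graph-cut optimization, as mentioned earlier, would then be used to find the labeling that minimizes this cost:

```python
# Illustrative evaluation of C(L) for a candidate label map.
import numpy as np

def labeling_cost(cg, candidates, labels, M, lam=1.0):
    """cg: (H, W, 3) CG image. candidates: list of (H, W, 3) aligned real-image
    copies, one per label. labels: (H, W) integer label map. M: (H, W) weights
    that are small near strong CG edges and larger in smooth regions."""
    H, W = labels.shape
    stacked = np.stack(candidates)                          # (n_labels, H, W, 3)
    chosen = stacked[labels, np.arange(H)[:, None], np.arange(W)[None, :]]
    # Data term C_d: colour distance between the CG image and the chosen source.
    data = np.sum(np.linalg.norm(cg.astype(np.float32) - chosen, axis=-1))
    # Interaction term C_i: pay lam * M wherever neighbouring pixels switch label.
    switch_x = (labels[:, 1:] != labels[:, :-1]) * M[:, 1:]
    switch_y = (labels[1:, :] != labels[:-1, :]) * M[1:, :]
    return data + lam * (switch_x.sum() + switch_y.sum())
```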

33

TEXTURE TRANSFER – CONT.

• After choosing the label assignment that minimizes the cost function described above, we transfer the textures and blend them smoothly into the CG image via Poisson blending (an illustration follows).
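In practice, this blending step can be approximated with OpenCV's seamless cloning, which solves a similar Poisson problem; the file names below are hypothetical:

```python
# Hedged illustration using OpenCV's seamless cloning as a stand-in for the
# Poisson blending step; all file names are hypothetical.
import cv2
import numpy as np

cg = cv2.imread("cg_image.png")                              # CG target (hypothetical)
tex = cv2.imread("real_texture_aligned.png")                 # aligned real-image texture
mask = cv2.imread("region_mask.png", cv2.IMREAD_GRAYSCALE)   # region to replace

# Place the cloned region at the centroid of the mask; gradients are matched at
# the seam, which hides the boundary much like Poisson blending.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))
blended = cv2.seamlessClone(tex, cg, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("cg2real_texture.png", blended)
```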

34

COLOR TRANSFER

• There are two approaches:

• Color histogram matching.

• Local color transfer.

35

COLOR HISTOGRAM MATCHING

• Works well between real images.

• Typically fails when matching CG images to real images.

• This happens because the histogram of a CG image is very different from the histogram of a real photograph, since CG imagery tends to use fewer colors.

• This leads to instability in global color transfer.

36

COLOR HISTOGRAM MATCHING

CG input | Global histogram matching

37

LOCAL COLOR TRANSFER

• How is it done?

• Downsample the images.

• Compute the color transfer offsets per region from the lower-resolution images.

• Smooth and upsample the offsets using joint bilateral upsampling (see the sketch after this list).
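A sketch of the upsampling step, assuming the per-region offsets have already been computed at low resolution; it uses the joint bilateral filter from opencv-contrib (cv2.ximgproc), and the filter parameters are illustrative:

```python
# Hedged sketch of upsampling the per-region colour offsets with a joint
# bilateral filter (needs opencv-contrib-python); parameters are illustrative.
import cv2
import numpy as np

def upsample_offsets(offsets_lo, guide_full):
    """offsets_lo: (h, w, 3) L*a*b* offsets computed on the downsampled images.
    guide_full: full-resolution CG image used as the joint (guidance) image."""
    H, W = guide_full.shape[:2]
    up = cv2.resize(offsets_lo.astype(np.float32), (W, H),
                    interpolation=cv2.INTER_LINEAR)
    guide = guide_full.astype(np.float32)
    # Smooth the offsets while respecting the edges of the full-resolution image.
    return cv2.ximgproc.jointBilateralFilter(guide, up, d=9,
                                             sigmaColor=25, sigmaSpace=9)
```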

38

LOCAL COLOR TRANSFER - ALGORITHM

• In each subsampled region we match two histograms:

• 1D histogram matching on the L* channel (sketched below).

• 2D histogram matching on the a* and b* channels.

• Good results are obtained after no more than 10 iterations of this algorithm.
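A minimal sketch of the 1D matching on the L* channel (the 2D a*b* case follows the same idea on a joint histogram); array names are mine:

```python
# Minimal 1D histogram matching on the L* channel.
import numpy as np

def match_histogram_1d(source, reference):
    """Remap `source` values so their distribution matches `reference`."""
    s_vals, s_idx, s_counts = np.unique(source, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference, return_counts=True)
    # Empirical CDFs of the two channels.
    s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
    r_cdf = np.cumsum(r_counts).astype(np.float64) / reference.size
    # For each source value, pick the reference value at the same quantile.
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    return matched_vals[s_idx].reshape(source.shape)
```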

39

LOCAL COLOR TRANSFER - RESULTS

CG input | Color model | Local color transfer

40

TONE TRANSFER

• Decompose the luminance channel of the CG image and of one or more real images using a QMF (quadrature mirror filter) pyramid.

• Apply 1D histogram matching to match the subband statistics of the CG image to those of the real images in every region.

41

TONE TRANSFER – CONT.

• We model the effect of the histogram matching as a change in gain (a small sketch follows this list):

\hat{B}_k(p) = g_k(p)\, B_k(p)

• B_k(p) is the level-k subband coefficient at pixel p.

• \hat{B}_k(p) is the corresponding subband coefficient after regional histogram matching.

• g_k(p) is the gain: when it is greater than 1 it amplifies the details in the subband, and when it is less than 1 it diminishes them.

• The gains are constrained so that lower subbands are not amplified beyond higher subbands and so that the gain signals are smooth near zero crossings.
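A small sketch of computing and constraining the gain for one subband; the clipping threshold and smoothing are illustrative stand-ins for the paper's exact constraints:

```python
# Hedged sketch: gain g_k = matched subband / original subband, then bounded
# and smoothed so it behaves near zero crossings. Thresholds are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def subband_gain(band_cg, band_matched, eps=1e-3, g_max=4.0, smooth_sigma=2.0):
    """Return a per-pixel gain map g_k(p) with band_matched ~= g_k * band_cg."""
    denom = np.where(np.abs(band_cg) < eps, eps, band_cg)   # avoid division by ~0
    gain = np.clip(band_matched / denom, 0.0, g_max)        # bound the amplification
    return gaussian_filter(gain, smooth_sigma)              # smooth near zero crossings

def apply_gain(band_cg, gain):
    """Reconstruct the modified subband from the original one and the gain map."""
    return gain * band_cg
```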

42

TONE TRANSFER – RESULTS

CG input | Tone model | Local color and tone transfer

Close-up before | Close-up after

43

CG2REAL – RESULTS

CG Image

CG2REAL Image

44

CG2REAL – RESULTS

CG Image

CG2REAL Image

45

CG2REAL – RESULTS

CG Image

CG2REAL Image

46

CG2REAL – RESULTS

CG Image

CG2REAL Image

47

CG2REAL – RESULTS

CG Image

CG2REAL Image

48

CG2REAL – RESULTS

CG Image

CG2REAL Image

49

CG2REAL – FAILURES

50

CG2REAL – EVALUATION

51

CG2REAL – EVALUATION

52

THANK YOU FOR LISTENING

53

REFERENCES

• William T. Freeman, Thouis R. Jones, and Egon C. Pasztor, "Example-Based Super-Resolution," IEEE Computer Graphics and Applications, 2002.

• Micah K. Johnson, Kevin Dale, Shai Avidan, Hanspeter Pfister, William T. Freeman, and Wojciech Matusik, "CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs," IEEE Transactions on Visualization and Computer Graphics, 2011.

• http://en.wikipedia.org/wiki/Superresolution