Anchored Neighborhood Regression for Fast Example-Based Super-Resolution
Radu Timofte Vincent De Smet Luc Van Gool
We present example-based super-resolution methods that exploit sparsity and neighbor embedding to achieve state-of-the-art quality while significantly improving speed.
Our Global Regression (GR) method uses ridge regression to solve the sparse SR problem, allowing the LR-to-HR mapping to be precomputed offline. This makes the online computation 100x faster than previous methods.
To achieve higher adaptability and quality, our Anchored Neighborhood Regression (ANR) method uses local feature neighborhoods instead of the entire dictionary, achieving a 10x increase in speed over previous methods.
Super-Resolution
[Figure: a Low Resolution (LR) input is mapped to a High Resolution (HR) output via a dictionary of corresponding (LR, HR) patches]
Global regression can be seen as the extreme case of our more general method, called Anchored Neighborhood Regression:
● Offline: For each dictionary atom:
  ● Find its K nearest neighbors
  ● These represent its anchored neighborhood
  ● Calculate a local projection matrix P_j based on this local neighborhood
● Online: For each LR input patch:
  ● Find its nearest neighbor atom
  ● Calculate the HR output patch using that atom's stored projection matrix
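The offline/online split above can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code: the function names, the lambda/K defaults, and the convention that atoms are stored as columns of D_l/D_h are assumptions.

```python
import numpy as np

def train_anr(D_l, D_h, K=40, lam=0.1):
    """Offline phase: for each LR dictionary atom, precompute a local
    projection matrix from its K nearest neighbor atoms via ridge regression.
    D_l: (d_lr, n_atoms) LR feature atoms (assumed L2-normalized columns),
    D_h: (d_hr, n_atoms) corresponding HR patch atoms."""
    n_atoms = D_l.shape[1]
    projections = []
    for j in range(n_atoms):
        # K nearest atoms by correlation (valid since columns are normalized)
        sims = D_l.T @ D_l[:, j]
        nn = np.argsort(-sims)[:K]
        N_l, N_h = D_l[:, nn], D_h[:, nn]   # anchored LR/HR neighborhoods
        # P_j = N_h (N_l^T N_l + lam*I)^{-1} N_l^T  (ridge closed form)
        P_j = N_h @ np.linalg.solve(N_l.T @ N_l + lam * np.eye(len(nn)), N_l.T)
        projections.append(P_j)
    return projections

def apply_anr(y, D_l, projections):
    """Online phase: anchor the LR feature y to its nearest atom and
    apply that atom's stored projection matrix -- one matmul per patch."""
    j = int(np.argmax(D_l.T @ y))
    return projections[j] @ y
```

The online cost per patch is thus a nearest-atom search plus a single matrix-vector product, which is where the speedup over per-patch sparse coding comes from.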
● Least squares with an L2-norm (ridge) constraint:
  min_β ||y − N_l β||² + λ||β||²
● Closed-form solution (Tikhonov regularization / ridge regression):
  β = (N_l^T N_l + λI)^{-1} N_l^T y
● HR patches use the same reconstruction weights: x = N_h β, or x = D_h β if we use the entire dictionary.
● SR then becomes a multiplication with a projection matrix P_G = D_h (D_l^T D_l + λI)^{-1} D_l^T, which can be calculated offline: x = P_G y.
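When the neighborhood is the entire dictionary, everything collapses into one global projection matrix P_G. A minimal NumPy sketch of that closed form (function name and shapes are assumptions; atoms are columns):

```python
import numpy as np

def global_projection(D_l, D_h, lam=0.1):
    """Precompute P_G = D_h (D_l^T D_l + lam*I)^{-1} D_l^T once, offline.
    D_l: (d_lr, n_atoms) LR feature atoms, D_h: (d_hr, n_atoms) HR atoms.
    Online, super-resolving a patch is then just x_hr = P_G @ y_lr."""
    n_atoms = D_l.shape[1]
    return D_h @ np.linalg.solve(D_l.T @ D_l + lam * np.eye(n_atoms), D_l.T)
```

Because P_G is fixed, the per-patch online work is a single (d_hr x d_lr) matrix-vector product, independent of the dictionary size.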
[Figure: dictionary atoms and an input sample — the anchored neighborhood (around the nearest atom) and the standard neighborhood (around the input sample) are similar]
● A trained dictionary needs about 16x fewer atoms than a random patch dictionary
● ANR/GR execute one to two orders of magnitude faster than the compared sparse / neighbor embedding methods
● For a proper dictionary and neighborhood size, all methods show similar quality
● The main advantage of ANR/GR is computation time
● Example-based super-resolution uses a dictionary of corresponding LR/HR image patches to create a plausible HR image
● It exploits the image statistics of small patches
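A toy sketch of harvesting corresponding LR/HR patch pairs from one training image. The patch size, scale factor, and block-averaging downsampling are illustrative assumptions, not the pipeline used in the paper:

```python
import numpy as np

def build_patch_dictionary(hr_image, scale=2, patch=3):
    """Collect corresponding (LR, HR) patch pairs from a grayscale HR image.
    Each LR patch of size patch x patch pairs with the HR patch covering
    the same spatial region (patch*scale x patch*scale)."""
    h, w = hr_image.shape
    h, w = h - h % scale, w - w % scale
    hr = hr_image[:h, :w]
    # Simulate the LR image by averaging scale x scale blocks
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    lr_patches, hr_patches = [], []
    hp = patch * scale
    for i in range(lr.shape[0] - patch + 1):
        for j in range(lr.shape[1] - patch + 1):
            lr_patches.append(lr[i:i + patch, j:j + patch].ravel())
            hr_patches.append(hr[i * scale:i * scale + hp,
                                 j * scale:j * scale + hp].ravel())
    return np.array(lr_patches), np.array(hr_patches)
```

The resulting pair of matrices is the raw material from which a compact dictionary (e.g. via sparse dictionary training) is learned.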
[Figure panels: Global Regression | Anchored Neighborhood Regression]
[Plot: PSNR (dB) vs. speed (1/s) — compared methods: Yang et al., Zeyde et al., NE + NNLS, NE + LLE, NE + LS, GR, ANR]