
Fast Fourier Intrinsic Network – Supplementary Material



Yanlin Qian¹, Miaojing Shi², Joni-Kristian Kämäräinen¹, Jiri Matas³

¹Computing Sciences, Tampere University
²Department of Informatics, King's College London
³Center for Machine Perception, Czech Technical University in Prague

[email protected]

This supplementary material provides 1) the training & testing splits for MPI-Sintel and MIT Intrinsic, and 2) more examples for visualization and comparison.

1. Training & Testing Splits

We use the same training & testing split files as [8] and [1] for MPI-Sintel and MIT Intrinsic, respectively. We publish these files so that follow-up work can use them and make a fair comparison with our method or any previous relevant work. We report the scene split for MPI-Sintel and the object split for MIT Intrinsic below.

MPI-Sintel:
training: alley 1, bamboo 1, bandage 1, cave 2, market 2, market 6, shaman 2, sleeping 1, temple 2
testing: alley 2, bamboo 2, bandage 2, cave 4, market 5, mountain 1, shaman 3, sleeping 2, temple 3

MIT Intrinsic:
training: apple, box, cup1, dinosaur, frog1, panther, paper1, phone, squirrel, teabag2
testing: cup2, deer, frog2, paper2, pear, potato, raccoon, sun, teabag1, turtle
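The splits above can be expressed as a small configuration table for use in a data loader. This is a minimal sketch, not code released with the paper; the underscore-style scene names follow MPI-Sintel's standard directory naming, and the `scenes` helper is a hypothetical accessor, not part of any published split file.

```python
# Training/testing splits reported in this supplementary material.
# MPI-Sintel uses a scene split; MIT Intrinsic uses an object split.
SPLITS = {
    "mpi_sintel": {
        "train": ["alley_1", "bamboo_1", "bandage_1", "cave_2", "market_2",
                  "market_6", "shaman_2", "sleeping_1", "temple_2"],
        "test": ["alley_2", "bamboo_2", "bandage_2", "cave_4", "market_5",
                 "mountain_1", "shaman_3", "sleeping_2", "temple_3"],
    },
    "mit_intrinsic": {
        "train": ["apple", "box", "cup1", "dinosaur", "frog1", "panther",
                  "paper1", "phone", "squirrel", "teabag2"],
        "test": ["cup2", "deer", "frog2", "paper2", "pear", "potato",
                 "raccoon", "sun", "teabag1", "turtle"],
    },
}

def scenes(dataset, split):
    """Return the list of scene/object names for a dataset split."""
    return SPLITS[dataset][split]
```

Because the splits are disjoint at the scene/object level (rather than per-image), no frame of a test scene or object is ever seen during training.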

2. More Examples for Visualization

• Fig. 1 and Fig. 4: visual results on the MPI-Sintel dataset with the scene split and the image split, respectively. We compare our method with [6, 8, 4, 5, 1]. We particularly point the reader to the flattened patches and fine textures (see red arrows) in the images, which show the superiority of our method.

• Fig. 3: more visual results on the MIT Intrinsic dataset. In Fig. 3 we compare our FFI-Net with the different versions in [2]; our results are clearly visually better than those of [2].

• Fig. 2: visual results on the IIW benchmark. We compare our FFI-Net with other representative approaches [3, 9, 7].

Figure 1: Examples on the MPI-Sintel dataset (scene split). (Methods shown: Input, DI [8], Fan [6], FFI, Ground Truth; (A) albedo and (S) shading.)

References

[1] Jonathan T. Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8):1670–1687, 2015.

[2] Anil S. Baslamisli, Hoang-An Le, and Theo Gevers. CNN based learning using reflection and retinex models for intrinsic image decomposition. In CVPR, 2018.

[3] Sean Bell, Kavita Bala, and Noah Snavely. Intrinsic images in the wild. ACM Transactions on Graphics (TOG), 33(4):159, 2014.

[4] Qifeng Chen and Vladlen Koltun. A simple model for intrinsic image decomposition with depth cues. In ICCV, 2013.


Figure 2: Qualitative comparisons of (A)lbedo and (S)hading on IIW. (Columns: Image, Bell [3] (A)/(S), Zhou [9] (A)/(S), CGI [7] (A)/(S), FFI-Net (A)/(S).)

Figure 3: Sample (A)lbedo and (S)hading on MIT Intrinsic. Comparison with different versions in [2]. Results of [2] are downloaded from their project webpage. (Columns: Input, IntriNet w/o imf [2], IntriNet w/ imf [2], RetiNet [2], FFI-Net, Ground Truth.)

[5] Lechao Cheng, Chengyi Zhang, and Zicheng Liao. Intrinsic image transformation via scale space decomposition. In CVPR, 2018.

[6] Qingnan Fan, Jiaolong Yang, Gang Hua, Baoquan Chen, and David Wipf. Revisiting deep intrinsic image decompositions. In CVPR, 2018.

[7] Zhengqi Li and Noah Snavely. CGIntrinsics: Better intrinsic image decomposition through physically-based rendering. In ECCV, 2018.

[8] Takuya Narihira, Michael Maire, and Stella X. Yu. Direct intrinsics: Learning albedo-shading decomposition by convolutional regression. In ICCV, 2015.

[9] Tinghui Zhou, Philipp Krähenbühl, and Alexei A. Efros. Learning data-driven reflectance priors for intrinsic image decomposition. In CVPR, 2015.

Figure 4: More examples of (A)lbedo and (S)hading predictions on MPI-Sintel (image split). (Methods shown: Input, Barron [1], Chen [4], DI [8], Fan [6], Lap [5], FFI, Ground Truth.)