
2010 25th International Conference of Image and Vision Computing New Zealand (IVCNZ), Queenstown, New Zealand, 8-9 November 2010



A Cubic Polynomial Model for Fisheye Camera

Haijiang Zhu, Xiupu Yin
School of Information Science & Technology, Beijing University of Chemical Technology, 100029, Beijing, China.

Email: [email protected]

Abstract

In this paper, we present a cubic polynomial model for fisheye cameras based on the lifting strategy, in which point coordinates in a low-dimensional space are lifted to a vector in a high-dimensional space. In contrast to existing lifting strategies, ours permits the 3D point coordinates to appear in higher-order polynomials. This paper demonstrates that the cubic polynomial model can effectively express fisheye image points as cubic polynomial functions of the world coordinates. This allows a linear algorithm to estimate the nonlinear model, and in particular offers a simple solution to estimating the nonlinear relationship between a 3D point and its corresponding fisheye image point. Experimental results with synthetic and real fisheye images show that the fisheye camera is accurately approximated by the explicit cubic polynomial model.

Keywords: cubic polynomial model, lifting strategy, fisheye camera

1 Introduction
Omnidirectional devices have been studied extensively in recent years because they provide a wide field of view for photography, vision-based surveillance and virtual reality. Omnidirectional cameras usually fall into two types: catadioptric cameras, which combine a pinhole camera with a quadric mirror [1, 2], and dioptric cameras, which combine a pinhole camera with a fisheye lens [4-15].

Many works over more than ten years have addressed the estimation of omnidirectional camera models with lens distortion. Common distortion correction models for omnidirectional cameras are as follows:

The Field of View (FOV) Model [4]. This model achieves excellent distortion correction by exploiting the constraint that the projection of a straight line in space must remain a straight line in the corrected image.

The Polynomial Model. Hartley [5] expanded the radial distortion function as a Taylor series and removed distortion by estimating the coefficients of the polynomial function. Using the angle θ between the optical axis and the incoming ray and the distance r between the image point and the principal point, Kannala [6] proposed a polynomial model between θ and r, and estimated the lens distortion correction parameters from 3D control points and their corresponding 2D fisheye image points. Assuming the image projection function is described by a Taylor series expansion, Scaramuzza [7] presented a calibration technique for catadioptric cameras from a single image, in which the Taylor polynomial coefficients are estimated by solving a two-step least-squares linear minimization problem. Shah et al. [8] used a polynomial transformation between 3D control points in the world coordinate system and their corresponding image plane locations, and the coefficients of the polynomial were determined using Lagrangian estimation. Xiong et al. [9] also presented a polynomial model between θ and r.

The Division Model. Fitzgibbon [10] kept only the first nonlinear even term of the polynomial model, and computed the lens distortion parameter and the epipolar geometry between multiple images simultaneously. Micusik [11] generalized Fitzgibbon's method [10] to omnidirectional cameras with a FOV larger than 180 degrees. Assuming only point correspondences, an appropriate omnidirectional camera model was derived and estimated from epipolar geometry.

The Bicubic Model [12]. This model uses a bicubic relation between the ideal image point and the extracted image point; its higher-order terms do not compensate for radial distortion.

The Rational Function Model. Claus [13] proposed a Rational Function Model and explored the mapping from an image point to the incoming ray. The distortion model is written as a matrix, and the image point is lifted to a six-dimensional space using quadratic terms. For parabolic catadioptric cameras, Geyer [14] and Barreto [15] described the distortion model by lifting image points to a four-dimensional circle space using quadratic terms, called the Circle Space Model. The fundamental matrix and 3D reconstruction were estimated from two or three omnidirectional views. Sturm [16] also used the lifting strategy to obtain models for back-projection of the incoming rays into images for affine, perspective,

978-1-4244-9630-3/10/$26.00 ©2010 IEEE


and para-catadioptric cameras. These models were actually determined by finding the mapping from image points to the corresponding incident rays. X. Ying [17] and Geyer [18] proposed a two-step projection via a quadric surface or a unit sphere and tried to unify the catadioptric and fisheye camera models.

Compared with catadioptric cameras, fisheye camera models are difficult to represent by an exact mathematical analytic function and have not been as well researched because of the severe distortions. In this paper we choose a different lifting strategy to build a cubic polynomial model for fisheye cameras. Our lifting strategy permits the 3D point coordinates to appear in higher-order polynomials.

This paper has three main parts. Section 2 introduces generic camera models and the lifting strategy. Section 3 discusses the cubic polynomial model for the fisheye camera and describes in detail the mapping from a 3D point to its 2D image point. Experimental results are presented in Section 4, and a summary is given in Section 5.

2 Background

Generally, two kinds of camera models are reported in the literature: linear models and nonlinear models. These models actually describe the mapping from 3D world points to 2D image points. For a linear camera, this mapping is represented by

λm = K[R, T]M_w    (1)

where

K = | f  s   u0 |
    | 0  af  v0 |
    | 0  0   1  |

is the intrinsic parameter matrix of the camera, f is the focal length, (u0, v0) is the principal point, a is the aspect ratio and s is the skew. M_w = [X_w, Y_w, Z_w, 1]^T is the homogeneous coordinate of a 3D point in the world coordinate system, m = [u, v, 1]^T is the homogeneous coordinate of a 2D point in the image plane, λ is a non-zero scale factor, R is a rotation matrix and T a translation vector.

For a nonlinear camera, the mapping from a

3D point to an image point consists of two steps [2, 18]: first, a 3D point in the world coordinate system is projected linearly to a point on the unit spherical image (i.e., in the camera coordinate system); then the point on the unit sphere is nonlinearly mapped onto the image. The nonlinear camera is then represented by

λm = g(K[R, T]M_w)    (2)

where g(·) represents the nonlinear projection model of the camera. Recently, the Rational Function Model has been developed to model nonlinear distortion in catadioptric and general cameras [13,14,15,16]. For parabolic catadioptric cameras, an algorithm estimating egomotion and lens geometry by lifting image points to a four-dimensional circle space was described in [14]. A quadratic polynomial is used to describe the function relating a 3D ray to its image point in the omnidirectional camera [13]:

d(u, v) = A X(u, v)

         | a11 a12 a13 a14 a15 a16 |   | u^2 |
       = | a21 a22 a23 a24 a25 a26 | · | uv  |
         | a31 a32 a33 a34 a35 a36 |   | v^2 |
                                       | u   |
                                       | v   |
                                       | 1   |

where d(u, v) represents an incoming ray in R^3 corresponding to the pixel (u, v) in R^2, A is a 3 x 6 matrix, and X(u, v) = [u^2, uv, v^2, u, v, 1]^T is the lifted point of a true image point (u, v).
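In code, the quadratic lifting of [13] can be sketched as follows. The matrix `A` below is a random stand-in for illustration only; a real `A` would come from calibration.

```python
import numpy as np

def lift_quadratic(u, v):
    """Lift an image point (u, v) to the 6-vector X(u, v) of quadratic terms."""
    return np.array([u * u, u * v, v * v, u, v, 1.0])

# A hypothetical 3x6 distortion matrix (the real A must be estimated by calibration).
A = np.random.default_rng(0).normal(size=(3, 6))

# Direction of the incoming ray for pixel (320, 240).
d = A @ lift_quadratic(320.0, 240.0)
```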

This paper also uses the lifting strategy to present a nonlinear model for the fisheye camera. In contrast to existing lifting strategies, ours permits the 3D point coordinates to appear in higher-order polynomials.

3 The Cubic Polynomial Model
In this section, the nonlinear model of the fisheye camera is presented using lifted coordinates of the 3D points; the model is an explicit cubic polynomial model.

3.1 Mathematical Formulation

In the fisheye camera, suppose a 3D point in the world coordinate system is projected onto the spherical image by

M_c = [R, T] M_w    (3)

where M_c = [X_c, Y_c, Z_c, 1]^T is the homogeneous coordinate of the 3D point in the camera coordinate system. We explore the mapping between the 3D point in the camera coordinate system and its corresponding image point by lifting the 3D point M_c to a twenty-dimensional vector

Φ(X_c, Y_c, Z_c) = [X_c^3, Y_c^3, Z_c^3, X_c^2 Y_c, X_c^2 Z_c, Y_c^2 X_c, Y_c^2 Z_c, Z_c^2 X_c, Z_c^2 Y_c, X_c Y_c Z_c, X_c^2, Y_c^2, Z_c^2, X_c Y_c, Y_c Z_c, X_c Z_c, X_c, Y_c, Z_c, 1]^T    (4)
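The lifting of equation (4) is a direct enumeration of all monomials up to degree three; a minimal sketch:

```python
import numpy as np

def lift_cubic(Xc, Yc, Zc):
    """Twenty-dimensional cubic lifting of a camera-frame 3D point, equation (4)."""
    return np.array([
        Xc**3, Yc**3, Zc**3,              # pure cubic terms
        Xc**2 * Yc, Xc**2 * Zc,
        Yc**2 * Xc, Yc**2 * Zc,
        Zc**2 * Xc, Zc**2 * Yc,
        Xc * Yc * Zc,                     # mixed cubic term
        Xc**2, Yc**2, Zc**2,              # quadratic terms
        Xc * Yc, Yc * Zc, Xc * Zc,
        Xc, Yc, Zc, 1.0                   # linear and constant terms
    ])
```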

Page 3: [IEEE 2010 25th International Conference of Image and Vision Computing New Zealand (IVCNZ) - Queenstown, New Zealand (2010.11.8-2010.11.9)] 2010 25th International Conference of Image

From equations (2) and (3), our imaging model is given by

λm = g(K · M_c) = P · Φ(X_c, Y_c, Z_c)    (5)

where m is a fisheye image point and P is a 3 x 20 matrix that is a linear combination of the distortion parameters. The nonlinear relation in equation (2) can then be described by equations (4) and (5). Let

P = | p_11  p_12  ...  p_1,20 |
    | p_21  p_22  ...  p_2,20 |
    | p_31  p_32  ...  p_3,20 |

From equation (5), we have

λ [u, v, 1]^T = P · Φ(X_c, Y_c, Z_c)    (6)

After eliminating the scale λ from equations (4), (5) and (6), we get

u = f_1(X_c, Y_c, Z_c) / f_3(X_c, Y_c, Z_c),
v = f_2(X_c, Y_c, Z_c) / f_3(X_c, Y_c, Z_c)    (7)

where, for i = 1, 2, 3,

f_i(X_c, Y_c, Z_c) = p_i1 X_c^3 + p_i2 Y_c^3 + p_i3 Z_c^3 + p_i4 X_c^2 Y_c + p_i5 X_c^2 Z_c + p_i6 Y_c^2 X_c + p_i7 Y_c^2 Z_c + p_i8 Z_c^2 X_c + p_i9 Z_c^2 Y_c + p_i,10 X_c Y_c Z_c + p_i,11 X_c^2 + p_i,12 Y_c^2 + p_i,13 Z_c^2 + p_i,14 X_c Y_c + p_i,15 Y_c Z_c + p_i,16 X_c Z_c + p_i,17 X_c + p_i,18 Y_c + p_i,19 Z_c + p_i,20
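Given a parameter matrix P, equation (7) projects a camera-frame 3D point to a fisheye image point; a minimal sketch:

```python
import numpy as np

def project(P, Xc, Yc, Zc):
    """Map a camera-frame 3D point to a fisheye image point via equation (7)."""
    # The 20-dimensional lifting of equation (4).
    phi = np.array([Xc**3, Yc**3, Zc**3, Xc**2*Yc, Xc**2*Zc, Yc**2*Xc, Yc**2*Zc,
                    Zc**2*Xc, Zc**2*Yc, Xc*Yc*Zc, Xc**2, Yc**2, Zc**2,
                    Xc*Yc, Yc*Zc, Xc*Zc, Xc, Yc, Zc, 1.0])
    f1, f2, f3 = P @ phi          # the three cubic polynomials f_1, f_2, f_3
    return f1 / f3, f2 / f3       # equation (7)
```

Note that with a P whose only non-zero entries select X_c, Y_c and Z_c, equation (7) reduces to the familiar pinhole ratios (X_c/Z_c, Y_c/Z_c).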

3.2 Estimation of Model Parameters

Given a set of 3D space point to image point correspondences M_c ↔ m, the distortion parameters P, namely the coefficients of the polynomials f_1(X_c, Y_c, Z_c), f_2(X_c, Y_c, Z_c) and f_3(X_c, Y_c, Z_c), can be computed. From equation (7), we get

u · f_3(X_c, Y_c, Z_c) - f_1(X_c, Y_c, Z_c) = 0
v · f_3(X_c, Y_c, Z_c) - f_2(X_c, Y_c, Z_c) = 0    (8)

Equation (8) gives two constraints on the elements of the matrix P. For n correspondences between world points and image points, stacking all constraints yields the system of linear equations

A · C = 0    (9)

where each correspondence (X_ci, Y_ci, Z_ci) ↔ (u_i, v_i) contributes the two rows

[Φ_i^T, 0^T, -u_i Φ_i^T] and [0^T, Φ_i^T, -v_i Φ_i^T]

with Φ_i = Φ(X_ci, Y_ci, Z_ci), and

C = (p_11, p_12, ..., p_1,20, p_21, p_22, ..., p_2,20, p_31, p_32, ..., p_3,20)^T

is a 60-dimensional vector. Because there are 60 unknown parameters in equation (9), at least 30 point correspondences are required to solve it. The coefficients can be found by linear least-squares minimization [5].
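The stacked system of equation (9) can be solved, as in the standard DLT approach, by taking the right singular vector of the smallest singular value; a sketch under that assumption:

```python
import numpy as np

def lift_cubic(Xc, Yc, Zc):
    """The 20-dimensional lifting of equation (4)."""
    return np.array([Xc**3, Yc**3, Zc**3, Xc**2*Yc, Xc**2*Zc, Yc**2*Xc, Yc**2*Zc,
                     Zc**2*Xc, Zc**2*Yc, Xc*Yc*Zc, Xc**2, Yc**2, Zc**2,
                     Xc*Yc, Yc*Zc, Xc*Zc, Xc, Yc, Zc, 1.0])

def estimate_P(points_3d, points_2d):
    """Estimate the 3x20 matrix P from n >= 30 correspondences via equation (9).

    Each correspondence contributes the two stacked constraint rows of (8); the
    60-vector C is recovered up to scale as the null direction of the system.
    """
    rows = []
    zero = np.zeros(20)
    for (Xc, Yc, Zc), (u, v) in zip(points_3d, points_2d):
        phi = lift_cubic(Xc, Yc, Zc)
        rows.append(np.concatenate([phi, zero, -u * phi]))  # u*f3 - f1 = 0
        rows.append(np.concatenate([zero, phi, -v * phi]))  # v*f3 - f2 = 0
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(3, 20)                            # smallest singular value
```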

Once we have an estimate of the coefficients of the cubic polynomial model, the relationship matrix between 3D points and image points can be computed by equation (5). These camera parameters are then refined by minimizing the sum of squared distances between the measured and re-projected points using the Levenberg-Marquardt algorithm. The sum of squared distances is computed by the following steps.

1. Compute the 3 x 20 matrix P from a set of 3D space point to image point correspondences M_c ↔ m and equation (8).

2. Estimate the re-projected image point m' using equation (7).

3. For the N image points in the fisheye image, compute the sum of squared distances by equation (10):

r = Σ_{i=1}^{N} d(m_i, m_i')^2    (10)
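The cost of equation (10) is what the Levenberg-Marquardt refinement minimizes (in practice, e.g. via a nonlinear least-squares routine such as `scipy.optimize.least_squares`; that choice is an assumption, not named in the paper). A minimal sketch of the cost itself:

```python
import numpy as np

def lift_cubic(Xc, Yc, Zc):
    """The 20-dimensional lifting of equation (4)."""
    return np.array([Xc**3, Yc**3, Zc**3, Xc**2*Yc, Xc**2*Zc, Yc**2*Xc, Yc**2*Zc,
                     Zc**2*Xc, Zc**2*Yc, Xc*Yc*Zc, Xc**2, Yc**2, Zc**2,
                     Xc*Yc, Yc*Zc, Xc*Zc, Xc, Yc, Zc, 1.0])

def reprojection_error(P, points_3d, points_2d):
    """Sum of squared distances r of equation (10) between measured points m_i
    and re-projected points m_i' obtained from equation (7)."""
    r = 0.0
    for (Xc, Yc, Zc), (u, v) in zip(points_3d, points_2d):
        f1, f2, f3 = P @ lift_cubic(Xc, Yc, Zc)
        r += (u - f1 / f3) ** 2 + (v - f2 / f3) ** 2
    return r
```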

3.3 Summary

Given the homogeneous coordinates of 3D points in the camera coordinate system, the algorithm for computing the cubic polynomial model is as follows:


1. Take a few fisheye images of the 3D calibration object, which consists of white and black squares.

2. Detect the corner points in the fisheye images.

3. Construct the lifted 3D coordinates from equation (4).

4. Compute the 3 x 20 matrix P to obtain an initial estimate of the coefficients of the cubic polynomial model.

5. Refine these coefficients by minimizing the sum of squared distances of equation (10) between the measured and re-projected points using the Levenberg-Marquardt algorithm.

4 Experimental Results
In these experiments, we choose the equidistance projection as the projection of the fisheye camera. Synthetic experiments are presented first, followed by experiments with real fisheye images.

4.1 Experiments with Synthetic Data

In the synthetic experiments, the approximate projection model of the fisheye camera is chosen as

r ≈ f θ + k_1 θ^2 + k_2 θ^3 + k_3 θ^4 + k_4 θ^5

where f is the focal length, θ is the angle between the incoming ray and the optical axis, and r is the distance from the image point to the principal point. The intrinsic parameters of the simulated fisheye camera are

f = 150.0, a = 1.0, u_0 = 320, v_0 = 240,
k_1 = 15.0, k_2 = -20.0, k_3 = 2.0, k_4 = -10.0.
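The simulated projection can be sketched as below. The radial model r(θ) and the intrinsic values are from the text; placing the point at azimuth ψ with the aspect ratio applied to the horizontal coordinate is a standard assumption not spelled out in the paper.

```python
import numpy as np

def fisheye_project(Pw, R, T, f=150.0, a=1.0, u0=320.0, v0=240.0,
                    k=(15.0, -20.0, 2.0, -10.0)):
    """Project a world point through the simulated fisheye model r(theta)."""
    Xc, Yc, Zc = R @ Pw + T                      # world -> camera frame
    theta = np.arctan2(np.hypot(Xc, Yc), Zc)     # angle to the optical axis
    r = f * theta + sum(ki * theta ** (i + 2) for i, ki in enumerate(k))
    psi = np.arctan2(Yc, Xc)                     # azimuth in the image plane
    return u0 + a * r * np.cos(psi), v0 + r * np.sin(psi)
```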

The extrinsic parameters of the two simulated fisheye images are

R_1 = |  0.9980  0.0092  0.0618 |      T_1 = | -232.3595 |
      | -0.0072  0.9995 -0.0312 |            | -182.8318 |
      | -0.0621  0.0307  0.9976 |            | -492.5258 |

R_2 = |  0.9982  0.0415  0.0442 |      T_2 = |     …     |
      | -0.0388  0.9974 -0.0602 |            |     …     |
      | -0.0466  0.0584  0.9972 |            | -600.6887 |

Simulated 3D space points are shown in Figure 1; 64 points are taken on each of the XOY, YOZ and XOZ planes. These 3D points are linearly projected onto the unit sphere using the extrinsic parameters, and the points on the unit sphere are mapped onto the fisheye image using the intrinsic parameters of the synthetic fisheye camera. The two simulated images are shown in Figure 2, and their sizes are 1000 x 1000 pixels.

Figure 1: The synthetic 3D space points.

Figure 2: Two simulated fisheye images.

We first lift the 3D points (shown in Figure 1) to higher-order coordinates using our lifting strategy, and the 3 x 20 matrix P is estimated from a set of image points in Figure 2 and the lifted higher-dimensional coordinates of the 3D points. After estimating the matrix P, the 3D points are re-projected onto the image plane,


and all re-projected image points are marked by black "+" in Figure 3 (a) and (b). It can be seen that the lines in the synthetic fisheye images have become almost straight. These experimental results indicate that the model is feasible and reliable.

Figure 3: Corrected images of the synthetic fisheye images in Figure 2.

4.2 Experiments with Real Fisheye Images

Multiple fisheye images were taken using a fixed camera with a fisheye lens placed in front of a 3D calibration object. One of these images is shown in Figure 4, and its size is 1024 x 768 pixels. In this experiment, the 3D coordinates of the corner points of the 3D calibration object are given, and the fisheye image points are detected using functions from the OpenCV library. The 3 x 20 matrix P is then estimated using the algorithm of Section 3 from a set of 3D point to 2D image point correspondences. The estimated matrix P is as follows:

P = |  0.0047  0.0008 -0.0026 -0.0227  0.0278  0.0203 -0.0297 -0.0449 -0.0627  0.0750 -0.1032 -0.1663  0.0015 -0.0044  0.0128 -0.2016 -0.0180  0.0002  0.0244  0.1637 |
    | -0.0672  0.2486 -0.4052  0.1839 -0.2482 -0.0247 -0.3611 -0.0605  0.0648  0.2094 -0.2134 -0.1896 -0.2528  0.0016 -0.0048  0.0759  0.0309 -0.2459  0.3483  0.1906 |
    | -0.0008  0.0003  0.0007 -0.0002  0.0023  0.0000 -0.0010 -0.0041 -0.0010  0.0016 -0.0013 -0.0008 -0.0023  0.0000 -0.0001  0.0018  0.0000 -0.0003  0.0008  0.0008 |

After computing the projection matrix, we calculate the projected points in the fisheye image using equation (7); the real fisheye image in Figure 4 is corrected, with the result shown in Figure 5. We can see that the heavily distorted lines around the edge of the fisheye image have become almost straight. These experimental results show that the cubic polynomial function models the fisheye camera well.

Figure 4: One of multiple real fisheye images.

Figure 5: Corrected fisheye image.


In addition, some fisheye images of an indoor lab scene were taken; their sizes are 640 x 480 pixels. One of these fisheye images is shown in Figure 6(a), and the points marked in red are used to calculate the parameter matrix. After estimating the projection matrix, the real fisheye image is corrected, as shown in Figure 6(b). From this result, we see that the heavily distorted lines on the ceiling, floor and door have become almost straight.


Figure 6: (a) One fisheye image of lab indoor scene, and (b) corrected fisheye image.

5 Conclusion
In this paper, we have extended the lifting strategies of [13, 14] from 2D image points to 3D points and described a cubic polynomial model for fisheye cameras. Given a set of 3D point to 2D image point correspondences, our lifting strategy is applied to obtain higher-dimensional coordinates of the 3D points, and a linear combination of the distortion parameters of the fisheye camera is estimated by our method. Experiments with synthetic and real fisheye images show that the cubic polynomial model for fisheye cameras is reliable.

6 Acknowledgements
This work was supported by the National Natural Science Foundation of China under grant No. 60875023 and by the Project sponsored by SRF for ROCS, SEM. We thank Prof. Li Shigang of the Department of Electrical and Electronic Engineering, Tottori University, Japan, for providing some real fisheye images.

References

[1] S. Nayar, "Catadioptric Omnidirectional Camera", IEEE Conference on Computer Vision and Pattern Recognition, Puerto Rico, 1997, pp. 482-488.

[2] C. Geyer and K. Daniilidis, "Catadioptric Projective Geometry", International Journal of Computer Vision, vol. 45, no. 3, pp. 223-243, 2001.

[3] W. Robert, Physical Optics, The Macmillan Company, New York, 1911.

[4] F. Devernay and O. D. Faugeras, "Straight lines have to be straight", Machine Vision and Applications, vol. 13, pp. 14-24, 2001.

[5] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, second edition, Cambridge: Cambridge University Press, 2003.

[6] J. Kannala and S. S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1335-1340, 2006.

[7] D. Scaramuzza, A. Martinelli and R. Siegwart, "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion", Proceedings of the IEEE International Conference on Vision Systems, IEEE Computer Society, New York, January 5-7, 2006.

[8] S. Shah and J. K. Aggarwal, "A Simple Calibration Procedure for Fish-Eye (High-Distortion) Lens Camera", ICRA, 1994, pp. 3422-3427.

[9] Y. Xiong and K. Turkowski, "Creating Image-Based VR Using a Self-Calibrating Fisheye Lens", Proceedings of CVPR, 1997, pp. 237-243.

[10] A. W. Fitzgibbon, "Simultaneous linear estimation of multiple view geometry and lens distortion", Proc. Computer Vision and Pattern Recognition, 2001, pp. 125-132.

[11] B. Micusik and T. Pajdla, "Estimation of omnidirectional camera model from epipolar geometry", Int. Conf. on CVPR, 2003, pp. 485-490.

[12] E. Kilpelä, "Compensation of systematic errors of image and model coordinates", International Archives of Photogrammetry, 1980, pp. 407-427.

[13] D. Claus and A. W. Fitzgibbon, "A rational function lens distortion model for general cameras", Proc. CVPR, 2005, pp. 213-219.

[14] C. Geyer and K. Daniilidis, "Structure and motion from uncalibrated catadioptric views", Proc. CVPR, 2001, pp. 279-286.

[15] J. P. Barreto and K. Daniilidis, "Fundamental matrix for cameras with radial distortion", IEEE International Conference on Computer Vision, 2005, pp. 625-632.

[16] P. Sturm and S. Ramalingam, "A generic concept for camera calibration", Proc. ECCV, 2004, pp. 1-13.

[17] X. Ying and Z. Hu, "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model", Proc. ECCV, 2004, pp. 442-455.

[18] C. Geyer and K. Daniilidis, "A Unifying Theory for Central Panoramic Systems and Practical Implications", Proc. ECCV, 2000, pp. 445-462.