Quantitative Underwater 3-Dimensional Imaging and Mapping
Jeff Ota
Mechanical Engineering PhD Qualifying Exam
Thesis Project Presentation
XX March 2000
The Presentation
• What I’d like to accomplish
• Why the contribution is important
• What makes this problem difficult
• How I've set out to tackle the problem
• What work I've done so far
• Refining the contribution to knowledge
What am I out to accomplish?
• Generate a 3D map from a moving (6 degree-of-freedom) robotic platform without precise knowledge of the camera positions
• Quantify the errors for both intra-mesh and inter-mesh distance measurements
• Investigate the potential for reducing inter-mesh stitching error through a combination of yet-to-be-developed system-level calibration techniques and oversampling of a region
Why is this important?
Marine Archaeology
• Shipwreck 3D image reconstruction
• Analysis of shipwreck by multiple scientists after the mission
• Feature identification and confirmation
Why is this important?
Marine Archaeology
Quantitative information
• Arctic Ocean shipwreck
• Which ship among the thousands that were known to be lost is this one?
• In this environment, 2D capture washed out some of the ridge features
• Shipwreck still unidentified
Why is this important?
Hydrothermal Vent Research
Scientific Exploration
• Analysis of vent features and surrounding biological life is integral to understanding the development of life in extra-terrestrial oceans (Jovian moons and Mars)
• Vent research in extreme environments on Earth
Image courtesy of Hanumant Singh, Woods Hole Oceanographic Institution
Why is this important?
Hydrothermal Vent Research
How does vision-based quantitative 3D help?
• Measure height and overall size of vent and track growth over time
• Measure size of biological creatures surrounding the vent
Why not sonar or laser line scanning?
Why is this important?
Other mapping venues
Airships
Airplanes
Land Rovers
Hand-held digital cameras
What makes this problem difficult?
Visibility: Mars Pathfinder comparison
• Mars Pathfinder generated its map from a stationary position
• Vision environment was excellent
• Imaging platform was tripod-based
What makes this problem difficult?
Visibility: Underwater differences
• Tripod-style imaging platform not optimal
• Difficulty in establishing a stable imaging platform
• Poor lighting and visibility (practically limited to about 10 feet)
• A 6 DOF environment with an inertial positioning system makes precise camera position knowledge difficult
How I’ve set out to tackle the problem
• Define the appropriate underwater 3D mapping methodology
• Prove feasibility of underwater 3D mesh generation
• Confirm that underwater cameras could generate proper inputs to a 3D mesh generation system
• Research and apply as much "in air" computer vision knowledge as possible while ensuring that my research goes beyond just a conversion of known techniques to the underwater domain
• Continuously refine and update the specific contribution that this research will generate for both underwater mapping and computer vision in general
3D Mapping Methodology
[Pipeline block diagram] Image Capture System (left camera, right camera) -> 3D Processing (NASA Ames Stereo Pipeline -> 3D mesh) -> 3D Stitching (position knowledge, stitching algorithm) -> VRML/Open Inventor map viewer with measuring tools
3D Mapping Methodology
[Block diagram: Image Capture System and 3D Processing detail]
Left and right cameras each capture a radially distorted image -> distortion correction algorithm (inputs: L/R lens properties, imaging geometry) -> distortion-free images (L and R) -> NASA Ames Stereo Pipeline (pinhole camera model) -> 3D mesh
• Known mesh vs. camera position
• Quantifiable object measurements with known error
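To make the 3D Processing step concrete: under the pinhole camera model, each matched pixel pair is triangulated into a 3D point. Below is a minimal sketch of that triangulation for an idealized parallel-camera rig; it is illustrative only, since the Stereo Pipeline's actual internals are not shown in these slides.

    import numpy as np

    def triangulate(x_left, y_left, x_right, f, B):
        """Recover a 3D point from a matched pixel pair under the pinhole
        model, with coordinates measured from each image's center pixel in
        the same physical units as f and B (e.g., mm on the CCD)."""
        disparity = x_left - x_right      # parallel cameras: D = xL - xR
        Z = f * B / disparity             # depth from disparity
        X = Z * x_left / f                # back-project through the pinhole
        Y = Z * y_left / f
        return np.array([X, Y, Z])

    # Example: f = 30 mm, baseline B = 100 mm, 0.3 mm disparity on the CCD
    print(triangulate(1.0, 0.5, 0.7, f=30.0, B=100.0))  # Z = 10000 mm = 10 m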
3D Mapping Methodology
[Block diagram: 3D Stitching detail]
Inputs: multiple mesh/position pairs, i.e. 3D meshes (known mesh vs. camera position; quantifiable object measurements with known error) plus vehicle/camera position readings from the inertial positioning system
Camera-position-based and feature-based mesh stitching algorithms -> error quantification algorithm -> error reduction algorithm (Jeff's proposed contribution) -> 3D map with the known error in every possible measurement quantified and optimized
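A minimal sketch of the camera-position-based half of this stage, assuming each camera pose from the inertial positioning system is expressed as a rotation matrix plus translation; the names and shapes are my own, not from the slides.

    import numpy as np

    def mesh_to_world(mesh_points, rotation, translation):
        """Express a mesh captured in camera coordinates in the world frame
        using the (rough) camera pose. mesh_points: (N, 3) array;
        rotation: (3, 3); translation: (3,)."""
        return mesh_points @ rotation.T + translation

    # Two meshes placed into one frame; a stitching algorithm then merges
    # them where they overlap. Pose values here are purely illustrative.
    mesh_a = np.random.rand(100, 3)
    R = np.eye(3)                      # no rotation, for simplicity
    t = np.array([0.5, 0.0, 0.0])     # 0.5 m of vehicle motion
    mesh_a_world = mesh_to_world(mesh_a, R, t)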
Feasibility of Underwater 3D Mesh Generation
Can the Mars Pathfinder "stereo pipeline" algorithm work with underwater images?
3D Mesh Processing
Will the Mars Pathfinder correlation algorithm work underwater?
Resources
• Access to Mars Pathfinder 3D mesh generation source code (also known as the NASA Ames "Stereo Pipeline")
• Already had a working relationship with the MP 3D imaging team
• As a NASA Ames civil servant, I was assigned to work with the 2001 Mars Rover technology development team
• Arctic Ocean research opportunity provided the impetus to test MP 3D imaging technology for underwater mapping
Concerns
• The author of the Stereo Pipeline code and an MP scientist were doubtful that captured underwater images would produce a 3D mesh, but wanted to perform a feasibility test in a real research environment
• Used off-the-shelf, inexpensive black-and-white cameras (Sony XC-75s) for image capture, compared to the near-perfect IMP camera
ftp captured images to SGI O2
3D Mesh Processing
Will the Mars Pathfinder correlation algorithm work underwater?
System Block Diagram
Three-month development time: June 1998 - August 1998
[System block diagram] Stereo cameras (Sony XC-75) mounted on the front of the vehicle -> left and right analog signals sent on the red and green channels up the tether -> Matrox RGB digitizing board -> Mars Pathfinder 3D image processing software (process raw images and "send" them through the stereo pipeline) -> display 3D mesh
Known error sources ignored due to time constraints
• No camera calibration
• Images not dewarped (attempt came up short)
It worked!!!
[Figure] Image from left camera + image from right camera = 3D mesh of starfish
3D Mesh Processing
Findings
• Mars Pathfinder correlation algorithm did work underwater
• Images from inexpensive black-and-white cameras and a flaky video system were satisfactory as inputs to the pipeline
Arctic Mission Results
• Poor camera geometry resulted in distorted 3D images
• Limited knowledge of camera geometry and lack of calibration prevented quantitative analysis of images
Single Camera Calibration
Pinhole camera model
[Diagram: CCD image plane]
Calibration goal: quantify the error in modeling a complex lens system as a pinhole camera
Single Camera Calibration
Pinhole camera model
• Calibration requirement: find distances 'f' and 'h' for this simplification
[Diagram: CCD image plane with pinhole model parameters f and h]
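For reference, a sketch of the similar-triangles relation behind the pinhole simplification. I am assuming here that 'h' acts as an offset of the effective pinhole along the optical axis; the slides' diagram does not pin this down, so treat the role of h as an assumption.

    def pinhole_project(X, Y, Z, f, h=0.0):
        """Project a 3D point onto the image plane under the pinhole model.
        f is the pinhole-to-image-plane distance; h is assumed to offset
        the effective pinhole along the optical axis (my reading of the
        diagram, not confirmed by it)."""
        x = f * X / (Z + h)   # similar triangles
        y = f * Y / (Z + h)
        return x, y

    # A point 1 m away and 10 cm off-axis, with f = 30 mm:
    print(pinhole_project(100.0, 0.0, 1000.0, f=30.0))  # (3.0, 0.0) mm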
Single Camera Calibration
Thin lens example
• Ray tracing technique is a bit complex
[Diagram: thin lens in front of the CCD image plane]
Single Camera Calibration
Real-world problem: underwater structural requirements
[Diagram: CCD image plane inside an underwater camera housing with a spherical glass port]
Single Camera Calibration
Real-world problem: water adds another factor
[Diagram: light path crossing water, the glass port, and air before reaching the CCD image plane]
• Index of refraction for water = 1.33
• Index of refraction for air = 1.00
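The bending at each interface follows Snell's law, n1*sin(theta1) = n2*sin(theta2). A small sketch of the water-to-glass-to-air path; the glass index of 1.50 is a typical assumed value, not one given in the slides.

    import numpy as np

    def snell(theta_deg, n1, n2):
        """Refraction angle across an interface via Snell's law."""
        s = n1 * np.sin(np.radians(theta_deg)) / n2
        return np.degrees(np.arcsin(s))

    # A ray arriving at 20 degrees in water crosses water (1.33) ->
    # glass (1.50, assumed) -> air (1.00) before reaching the lens:
    theta_glass = snell(20.0, 1.33, 1.50)    # ~17.7 degrees
    theta_air = snell(theta_glass, 1.50, 1.00)
    print(theta_air)                          # ~27.1 degrees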
Single Camera Calibration
Calibration fix #1: dewarp knocks out lens distortion
[Diagram: same water/glass/air refraction geometry as above]
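A minimal sketch of what a dewarp does, using the common single-coefficient radial distortion model about the center pixel. This is a generic textbook model standing in for the slides' dewarp, whose exact form is not given.

    def dewarp_point(x_d, y_d, k1, cx, cy):
        """Undo first-order radial distortion about the center pixel
        (cx, cy): x_u = cx + (x_d - cx) * (1 + k1 * r^2). The coefficient
        k1 would come from calibration; this form is an assumption."""
        dx, dy = x_d - cx, y_d - cy
        r2 = dx * dx + dy * dy
        scale = 1.0 + k1 * r2
        return cx + dx * scale, cy + dy * scale

    # Illustrative values only: a pixel near the corner moves outward.
    print(dewarp_point(600.0, 440.0, k1=1e-7, cx=320.0, cy=240.0))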
Single Camera Calibration
Calibration fix #2: underwater data collection compensates for index-of-refraction differences
[Diagram: pinhole model with parameters f and h, imaged entirely through water (n = 1.33)]
Single Camera Calibration
Calibration research currently in progress
• Calibration rig designed and built
• Calibrated MBARI HDTV camera
• Calibrated MBARI Tiburon camera
• Parameters 'f' and 'h' calculated using a least-squares curve fit (sketched after the lists below)
Upcoming improvements
• Spherical distortion correction (dewarp)
• Center pixel determination
• Stereo camera setup
• Optimal target image (grid?)
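A minimal sketch of the least-squares fit for 'f' and 'h', assuming the rig images a target of known height H at several known distances and measures its image height. The functional form and the data below are assumptions for illustration, not the rig's actual model or measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    H = 100.0  # known target height in mm (hypothetical)

    def pinhole_model(Z, f, h):
        """Predicted image height of the target at distance Z under the
        pinhole simplification with parameters f and h (assumed form)."""
        return f * H / (Z + h)

    # Hypothetical rig data: target distance (mm) vs. image height (mm).
    Z_data = np.array([500.0, 750.0, 1000.0, 1500.0, 2000.0])
    y_data = np.array([5.94, 3.97, 2.99, 1.99, 1.50])

    (f_fit, h_fit), _ = curve_fit(pinhole_model, Z_data, y_data, p0=[30.0, 0.0])
    print(f_fit, h_fit)   # least-squares estimates of 'f' and 'h'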
Single Camera Calibration
Other problems that need to be accounted for
• Frame grabbing problems
• Mapping of CCD array to actual grabbed image
• Example: the Sony XC-75 has a CCD of 752(H) × 582(V) pixels with dimensions of 8.4 µm(H) × 9.8 µm(V), while the frame grab is 640 × 480 with a square-pixel display
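The mismatch matters because a "square" pixel in the 640 × 480 grab does not correspond to a square cell on the CCD. A small sketch of the mapping back to physical CCD coordinates, assuming the grab samples the full active area uniformly; that assumption is itself something calibration must verify.

    def grab_to_ccd_mm(col, row):
        """Map a 640x480 grabbed-frame pixel to physical mm on the Sony
        XC-75 CCD (752 x 582 cells of 8.4 um x 9.8 um), measured from the
        CCD corner."""
        ccd_w_mm = 752 * 0.0084   # 6.3168 mm active width
        ccd_h_mm = 582 * 0.0098   # 5.7036 mm active height
        return col / 640.0 * ccd_w_mm, row / 480.0 * ccd_h_mm

    # The frame center lands at (~3.16 mm, ~2.85 mm): not a square mapping.
    print(grab_to_ccd_mm(320, 240))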
Single Camera Calibration
Summary of one-camera calibration
• Removal of spherical distortion (dewarp)
• Center pixel determination
• Thin lens model for underwater multi-lens system
Logistical
• Platform construction
• Gather data from cameras to test equations
Analysis
• Focal point calculation ('f' and 'h')
• Focal point calculation with spherical distortion removed (will complete the pinhole approximation)
3D Mesh Processing: Initial Error Analysis
Stereo Correlation
• How do you know which pixels match?
• Correlation options
• Brightness comparisons
   • Pixel
   • Window (see the sketch below)
   • Glob
• Edge detection
• Combination edge enhancement and brightness comparison
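A minimal sketch of the window option, using sum of absolute differences (SAD) along a scanline. This is a generic textbook correlator for illustration, not the Stereo Pipeline's actual matching algorithm.

    import numpy as np

    def sad_disparity(left, right, row, col, win=5, max_disp=32):
        """Slide a (2*win+1)^2 window from the left image along the same
        scanline of the right image; return the disparity whose window has
        the smallest sum of absolute differences."""
        patch = left[row - win:row + win + 1, col - win:col + win + 1].astype(float)
        best_score, best_d = np.inf, 0
        for d in range(max_disp):
            c = col - d
            if c - win < 0:
                break
            cand = right[row - win:row + win + 1, c - win:c + win + 1].astype(float)
            score = np.abs(patch - cand).sum()
            if score < best_score:
                best_score, best_d = score, d
        return best_d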
Stereo Vision
Geometry behind the process
[Stereo diagram: point p at unknown depth and position; two cameras with focal length f separated by baseline B; image points (xL, yL) and (xR, yR); image center (xC, yC); c = xR - xC]
• Baseline (B) = separation between the centers of the two cameras
Stereo Vision
Problem #1: CCD placement error
[Same stereo diagram, now with an unknown CCD placement offset x]
Stereo Vision
Problem #2: depth accuracy sensitivity
Depth vs. disparity sensitivity: dZ/dD = -Z² / (f·B)
Stereo Vision
Problem #2 example: Z = 1 m = 1000 mm (varies), f = 3 cm = 30 mm, B = 10 cm = 100 mm
|dZ/dD| = 1000² / (30 × 100) = 333
The Sony XC-75 resolves approximately 100 pixels/mm, so ΔZ = ΔD × 333.
For 1 pixel of disparity error: ΔD = 1 pixel × (1 mm / 100 pixels) = 0.01 mm
ΔZ = 0.01 × 333 = 3.33 mm per pixel, for Z = 1 m only!
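The same arithmetic in code, to make the quadratic growth with Z explicit (a direct transcription of the slide's numbers):

    def depth_error_mm_per_pixel(Z_mm, f_mm, B_mm, pixels_per_mm):
        """|dZ/dD| = Z^2 / (f*B), converted from mm of disparity to one
        pixel of disparity on the CCD."""
        return Z_mm ** 2 / (f_mm * B_mm) / pixels_per_mm

    print(depth_error_mm_per_pixel(1000.0, 30.0, 100.0, 100.0))  # 3.33 mm
    # Double the range and the per-pixel depth error quadruples:
    print(depth_error_mm_per_pixel(2000.0, 30.0, 100.0, 100.0))  # 13.33 mm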
Stereo Vision
Error Summary
Two-camera problems
• Inconsistent CCD placement
• Baseline error
• Matched focal points
Calibration fixes
• Find center pixel through spherical distortion calibration
• Dewarp image from calculated center pixel
• Account for potential baseline and focal point error in sensitivity calculation
Stereo Vision
So now what do we have?
• A left and right image
• Dewarped
• Known center pixel
• Known focal point
• Known geometry between the two images
• Ready for the pipeline!
What’s next?
• 3D Mesh building
Proposed Research Contributions and Corresponding Approach
• Develop error quantification algorithm for a 3D map generated from a 6 degree-of-freedom moving platform with rough camera position knowledge
• Account for intra-mesh (camera and image geometry) and inter-mesh (rough camera position knowledge) errors and incorporate in final map parameters for input into analysis packages
• Develop mesh capturing methodology to reduce inter-mesh errors
• Current hypothesis: incorporating multiple overlapping meshes and cross-over paths (Fleischer '00) will reduce the known error of the inter-mesh stitching (see the toy sketch after this list)
• Utilize a combination of camera position knowledge and computer vision mesh “zipping” techniques
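A toy numerical sketch of the oversampling part of this hypothesis, assuming each overlapping mesh contributes an independent estimate of a stitched position with the same error. Real inter-mesh errors are correlated, so this shows the best case, not a prediction.

    import numpy as np

    rng = np.random.default_rng(0)

    def stitch_error_std(n_overlapping, sigma_mm=10.0, trials=10000):
        """Standard deviation of the averaged position estimate when
        n_overlapping independent meshes, each with sigma_mm of error,
        cover the same region: roughly sigma/sqrt(n)."""
        estimates = rng.normal(0.0, sigma_mm, size=(trials, n_overlapping))
        return estimates.mean(axis=1).std()

    for n in (1, 4, 9):
        print(n, round(stitch_error_std(n), 2))  # ~10.0, ~5.0, ~3.3 mm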
3D Mesh Stitching (cont’d)
• Camera Position Knowledge
• Relative positions from a defined initial frame
• Inertial navigation package will output data that will allow the calculation of positioning information for the vehicle and camera
• New Doppler-based navigation (1cm precision for X-Y)
• A feature-based "zippering" algorithm from computer vision will be used to stitch meshes and provide another "opinion" of camera position
• Investigate and characterize the error reducing potential of a system level calibration
• Would characterizing the camera and vehicle as one system instead of quantifying error in separate instruments reduce the error significantly?
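One way to frame that question numerically: with separately calibrated instruments, independent residual errors compound in quadrature, and that combined figure is the baseline a system-level calibration would have to beat. A toy sketch with invented error magnitudes:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical residual errors (mm) after calibrating each instrument
    # separately; the values are invented for illustration.
    sigma_camera, sigma_vehicle = 2.0, 3.0
    combined = rng.normal(0, sigma_camera, 100000) + rng.normal(0, sigma_vehicle, 100000)

    # Independent errors add in quadrature: sqrt(2^2 + 3^2) ~ 3.61 mm.
    print(combined.std(), np.hypot(sigma_camera, sigma_vehicle))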
Tentative Schedule
Single Camera Calibration: Winter - Spring 2000
Stereo Camera Pair Calibration: Spring - Fall 2000
3D Mesh Processing Calibration: Fall 2000 - Winter 2001
3D Mesh Stitching: Winter 2001 - Fall 2001
Acknowledgements
Stanford: Prof. Larry Leifer, Prof. Steve Rock, Prof. Tom Kenny, Prof. Ed Carryer, Prof. Carlo Tomasi, Prof. Marc Levoy, Jason Rife, Chris Kitts, The ARL Kids
NASA Ames: Carol Stoker, Larry Lemke, Eric Zbinden, Ted Blackmon, Kurt Schwehr, Alex Derbes, Hans Thomas, Laurent Nguyen, Dan Christian
Santa Clara University: Jeremy Bates, Aaron Weast, Chad Bulich
Technology Steering Committee
MBARI: Dan Davis, George Matsumoto, Bill Kirkwood
WC&PRURC (NOAA): Geoff Wheat, Ray Highsmith
US Coast Guard: Phil McGillivary
WHOI: Hanumant Singh
Deep Ocean Engineering: Phil Ballou, Dirk Rosen
U Miami: Shahriar Negahdaripour
Referenced Work
Mention all referenced work here? (Papers, etc.)