Next Billion Cameras
SIGGRAPH 2009 Curated Course
Ramesh Raskar (Camera Culture, MIT Media Lab), Alyosha Efros, Steve Seitz
http://raskar.scripts.mit.edu/nextbillioncameras/
A. Introduction (5 minutes)
B. Cameras of the Future (Raskar, 30 minutes): form factors, modalities and interaction; enabling visual social computing
C. Reconstructing the World (Seitz, 30 minutes): Photo Tourism and beyond; image-based modeling and rendering on a massive scale; scene summarization
D. Understanding a Billion Photos (Efros, 30 minutes): What will the photos depict? Photos as visual content for computer graphics; solving computer vision
E. Discussion (10 minutes)
Alexei (Alyosha) Efros [CMU]
Assistant professor at the Robotics Institute and the Computer Science Department at Carnegie Mellon University. His research is in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems which are very hard to model parametrically but where large quantities of data are readily available. Alyosha received his PhD in 2003 from UC Berkeley and spent the following year as a post-doctoral fellow in Oxford, England. Alyosha is a recipient of the NSF CAREER award (2006), the Sloan Fellowship (2008), the Guggenheim Fellowship (2008), and the Okawa Grant (2008).
http://www.cs.cmu.edu/~efros/
Ramesh Raskar [MIT]
Associate Professor at the MIT Media Lab, where he heads the Camera Culture research group. The group focuses on creating a new class of imaging platforms to better capture and share the visual experience. This research involves developing novel cameras with unusual optical elements, programmable illumination, digital wavelength control, and femtosecond analysis of light transport, as well as tools to decompose pixels into perceptually meaningful components. Raskar is a recipient of the Alfred P. Sloan Research Fellowship (2009), the TR100 Award (2004), and the Global Indus Technovator Award (2003). He holds 35 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring, with Jack Tumblin, a book on computational photography.
http://www.media.mit.edu/~raskar
Steve Seitz [U. Washington]
Professor in the Department of Computer Science and Engineering at the University of Washington. He received his Ph.D. in computer sciences from the University of Wisconsin, Madison in 1997. He was twice awarded the David Marr Prize for the best paper at the International Conference on Computer Vision, and has received an NSF CAREER Award, an ONR Young Investigator Award, and an Alfred P. Sloan Fellowship. His work on Photo Tourism (joint with Noah Snavely and Rick Szeliski) formed the basis of Microsoft's Photosynth technology. Professor Seitz is interested in problems in computer vision and computer graphics. His current research focuses on capturing the structure, appearance, and behavior of the real world from digital imagery.
http://www.cs.washington.edu/homes/seitz/
Where are the 'cameras'?
Next Billion Cameras
Next 100 Billion Cameras
http://raskar.info/photo/
Key Message
• Cameras will not look like anything today
  – Emerging optics, illumination, novel sensors
• The visual experience will differ from what the viewfinder shows
  – Photos will be 'computed'
  – Remarkable post-capture control
  – Crowdsourced photo collections
  – Exploiting priors and online collections
• Visual essence will dominate
  – Superior metadata tagging for effective sharing
  – Fusion with non-visual data
Can you look around a corner?
Can you decode a 5-micron feature from 3 meters away with an ordinary camera?
Can you convert an LCD into a big flat camera, beyond multi-touch?
Pantheon
How do we move through a space?
What is ‘interesting’ here?
Record what you 'feel', not what you 'see'.
“Visual Social Computing”
• Social Computing (SoCo): computing by the people, for the people, of the people
• Visual SoCo: participatory, collaborative, with visual semantics
http://raskar.scripts.mit.edu/nextbillioncameras
Crowdsourcing (http://www.wired.com/wired/archive/14.06/crowds.html)
• Object recognition: fakes, template matching
• Amazon Mechanical Turk: the Steve Fossett search
• ReCAPTCHA = OCR
Participatory Urban Sensing (Deborah Estrin et al.)
Static, semi-dynamic, and dynamic data:
A. City maintenance: sidewalks
B. Pollution: sensor networks
C. Diet, offenders: graffiti, bicycles on sidewalks
Future: citizen surveillance, health monitoring, (Erin Brockovich)^n
http://research.cens.ucla.edu/areas/2007/Urban_Sensing/
Community Photo Collections (U of Washington/Microsoft: Photosynth)
Beyond the Visible Spectrum (Cedip, RedShift)
Trust in Images (examples from Hany Farid; LA Times, March 2003)
Cameras in Developing Countries
http://news.bbc.co.uk/2/hi/south_asia/7147796.stm
Community news program run by village women
Vision through the Tongue
http://www.pbs.org/kcet/wiredscience/story/97-mixed_feelings.html
Solutions for the Visually Challenged
http://www.seeingwithsound.com/
New Topics in Imaging Research
• Imaging devices, modern optics and lenses
• Emerging sensor technologies
• Mobile photography
• Visual social computing and citizen journalism
• Imaging beyond the visible spectrum
• Computational imaging in the sciences (medical)
• Trust in visual media
• Solutions for the visually challenged
• Cameras in developing countries
  – Social stability, commerce and governance
• Future products and business models
Traditional Photography
[Diagram: lens → detector → pixels → image]
Mimics the human eye for a single snapshot: single view, single instant, fixed dynamic range and depth of field, for a given illumination, in a static world. (Courtesy: Shree Nayar)
[Diagram: the computational photography pipeline.
Computational Illumination: light sources and modulators produce 4D incident lighting on the scene (an 8D ray modulator).
Computational Camera: the resulting 4D light field passes through generalized optics (a 4D ray bender) into a ray sampler (up to 4D) on a generalized sensor, and processing performs ray reconstruction.
Display: recreate the 4D light field.]
Computational Photography [Raskar and Tumblin]
1. Epsilon Photography
   – Low-level vision: pixels
   – Multiple photos by perturbing camera parameters
   – HDR, panorama, …
   – 'Ultimate camera'
2. Coded Photography
   – Mid-level cues: regions, edges, motion, direct/global
   – Single/few snapshots
   – Reversible encoding of data
   – Additional sensors/optics/illumination
   – 'Scene analysis'
3. Essence Photography
   – High-level understanding
   – Not mimicking the human eye
   – Beyond single view/illumination
   – 'New art form'
Computational photography captures a machine-readable representation of our world to hyper-realistically synthesize the essence of our visual experience.
Goal and Experience
[Chart: the computational photography landscape. Horizontal axis, captured data: raw pixels → angle- and spectrum-aware capture → non-visual data (GPS, metadata, priors) → a comprehensive 8D reflectance field. Vertical axis, goal: low-level → mid-level → high-level, up to hyper-realism. The stages Digital → Epsilon → Coded → Essence advance along both. Plotted examples: camera arrays; HDR, FoV, and focal stacks; decomposition problems (depth, spectrum, light fields); human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; photo tourism.]
Computational Photography aims to make progress on both axes.
2nd International Conference on Computational Photography
Papers due November 2, 2009
http://cameraculture.media.mit.edu/iccp10

Computational Photography book, by Ramesh Raskar and Jack Tumblin
• Publisher: A K Peters; SIGGRAPH 2009 booth #2527, 20% off
• ComputationalPhotography.org
• Meet the authors: Thursday, 2:00–2:30 pm
Computational Photography [Raskar and Tumblin]
1. Epsilon Photography: low-level vision (pixels); multiple photos by perturbing camera parameters; HDR, panorama; 'ultimate camera'
2. Coded Photography: single/few snapshots; reversible encoding of data; additional sensors/optics/illumination; 'scene analysis' (consumer software?)
3. Essence Photography: beyond single view/illumination; not mimicking the human eye; 'new art form'
Epsilon Photography
• Dynamic range: exposure bracketing [Mann-Picard, Debevec]
• Wider FoV: stitching a panorama
• Depth of field: fusion of photos with limited DoF [Agarwala04]
• Noise: flash/no-flash image pairs [Petschnigg04, Eisemann04]
• Frame rate: triggering multiple cameras [Wilburn05, Shechtman02]
Goal: High Dynamic Range
Combine a short exposure and a long exposure to extend the captured dynamic range.
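To make the bracketing idea concrete, here is a minimal sketch of merging registered exposures into a radiance map, assuming a linear sensor response; the hat weighting and function names are illustrative, not the exact Debevec method.

```python
# Minimal HDR merge of bracketed exposures (epsilon photography sketch).
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of registered float32 images in [0, 1], same shape.
    exposure_times: matching exposure times in seconds."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        # Hat weighting: trust mid-tones, discount clipped pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)  # per-frame radiance estimate (linear sensor)
        den += w
    return num / np.maximum(den, 1e-6)
```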
Coded Photography
• 3D: stereo with multiple cameras
• Higher-dimensional light fields: light field capture with lenslet arrays [Adelson92, Ng05], a '3D lens' [Georgiev05], or heterodyne masks [Veeraraghavan07]
• Boundaries and regions: multi-flash camera with shadows [Raskar08]; Fg/Bg matting [Chuang01, Sun06]
• Deblurring with engineered PSFs: motion (flutter shutter [Raskar06], camera motion [Levin08]) and defocus (coded aperture [Veeraraghavan07, Levin07], wavefront coding [Cathey95])
• Global vs. direct illumination: high-frequency illumination [Nayar06]; glare decomposition [Talvala07, Raskar08]
• Coded sensor: gradient camera [Tumblin05]
Digital Refocusing using a Light Field Camera (Marc Levoy)
125 µm square-sided microlenses [Ng et al. 2005]
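A sketch of the shift-and-add refocusing behind such a camera, in the spirit of [Ng05]: each sub-aperture view is translated in proportion to its offset within the aperture, then the views are averaged. The array layout and the `alpha` parameter are assumptions for illustration.

```python
# Shift-and-add light field refocusing (conceptual sketch).
import numpy as np

def refocus(lightfield, alpha):
    """lightfield: array (U, V, H, W) of sub-aperture images.
    alpha: relative refocus depth; 0 keeps the captured focus."""
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its aperture offset.
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += np.roll(lightfield[u, v],
                           (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out / (U * V)
```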
Multi-flash Camera for Detecting Depth Edges
Flashes placed at the left, top, right, and bottom cast shadows that abut depth discontinuities, yielding a depth-edge map (compare with Canny edges).
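The detection idea can be sketched as follows: a shadow cast by each flash lies just beyond a depth discontinuity, so a bright-to-dark step in each flash/max ratio image, traversed along the flash direction, marks a depth edge. The threshold and one-pixel step test below are simplifications of the paper's epipolar traversal.

```python
# Multi-flash depth edge detection (simplified sketch).
import numpy as np

def depth_edges(flash_images, directions, thresh=0.3):
    """flash_images: grayscale float arrays, one per flash position.
    directions: per-image integer (dy, dx) step pointing away from the flash."""
    imax = np.maximum.reduce(flash_images)
    edges = np.zeros(imax.shape, dtype=bool)
    for img, (dy, dx) in zip(flash_images, directions):
        ratio = img / np.maximum(imax, 1e-6)
        # Bring each pixel's neighbour (one step away from the flash)
        # into alignment, then look for a sharp drop into shadow.
        neighbour = np.roll(ratio, (-dy, -dx), axis=(0, 1))
        edges |= (ratio - neighbour) > thresh
    return edges
```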
Flutter Shutter Camera (Raskar, Agrawal, Tumblin) [SIGGRAPH 2006]
An LCD's opacity is switched in a coded sequence during the exposure.
Comparison: traditional vs. coded exposure images of a static object, and the corresponding deblurred images.
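A one-dimensional sketch of why coded exposure helps: the open/close code makes the blur matrix well conditioned, so a plain least-squares solve inverts the motion blur. The short binary code here is illustrative; the paper uses a longer optimized chop sequence.

```python
# Coded-exposure (flutter shutter) deblurring in 1-D.
import numpy as np

code = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1], dtype=float)

def blur_matrix(code, n):
    """Toeplitz matrix mapping a sharp n-vector to its coded motion blur."""
    m = n + len(code) - 1
    A = np.zeros((m, n))
    for i in range(n):
        A[i:i + len(code), i] = code
    return A / code.sum()

n = 64
sharp = np.random.rand(n)
A = blur_matrix(code, n)
blurred = A @ sharp
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
print(np.allclose(recovered, sharp, atol=1e-6))  # coded blur is invertible
```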
Decomposition problems
• High-frequency illumination for global/direct separation [Nayar06]
• Glare decomposition [Talvala07, Raskar08]
"Fast Separation of Direct and Global Components of a Scene using High Frequency Illumination," S.K. Nayar, G. Krishnan, M. D. Grossberg, R. Raskar, ACM Trans. on Graphics (also Proc. of ACM SIGGRAPH), Jul, 2006.
Computational Photography [Raskar and Tumblin]
1. Epsilon Photography: multiple photos by varying camera parameters; HDR, panorama; 'ultimate camera' (photo editor)
2. Coded Photography: single/few snapshots; reversible encoding of data; additional sensors/optics/illumination; 'scene analysis' (next software?)
3. Essence Photography: high-level understanding; not mimicking the human eye; beyond single view/illumination; 'new art form'
Blind Camera (Sascha Pohflepp, University of the Arts, Berlin, 2006)
Capturing the Essence of Visual Experience
• Exploiting online collections: Photo Tourism [Snavely2006]; scene completion [Hays2007]
• Multi-perspective images: multi-linear perspective [Jingyi Yu, McMillan 2004]; Unwrap Mosaics [Rav-Acha et al. 2008]; video texture panoramas [Agrawal et al. 2005]
• Non-photorealistic synthesis: motion magnification [Liu05]
• Image priors: learned features and natural statistics; face swapping [Bitouk et al. 2008]; data-driven enhancement of facial attractiveness [Leyvand et al. 2008]; deblurring [Fergus et al. 2006, and several 2008 and 2009 papers]
Scene Completion Using Millions of Photographs (Hays and Efros, SIGGRAPH 2007)
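The crucial step is retrieving semantically similar scenes from a huge collection. [Hays2007] uses the GIST descriptor; the sketch below swaps in a crude stand-in (a subsampled thumbnail) purely to show the nearest-neighbor machinery.

```python
# Nearest-neighbor scene retrieval, the backbone of scene completion.
import numpy as np

def descriptor(img, size=16):
    """Tiny holistic stand-in descriptor: subsample to size x size."""
    h, w = img.shape[:2]
    ys = (np.arange(size) * h) // size
    xs = (np.arange(size) * w) // size
    return img[ys][:, xs].astype(np.float64).ravel()

def nearest_scenes(query, collection, k=5):
    """Return indices of the k most similar images in the collection."""
    q = descriptor(query)
    dists = [np.linalg.norm(descriptor(c) - q) for c in collection]
    return np.argsort(dists)[:k]
```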
Community Photo Collections (U of Washington/Microsoft: Photosynth)
Can you look around a corner?
Kirmani, Hutchinson, Davis, Raskar 2009 (accepted to ICCV 2009, October 2009, Kyoto)
Impulse Response of a Scene
A femtosecond laser serves as the light source and a picosecond detector array as the camera.
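Some back-of-envelope arithmetic shows why femtosecond illumination and picosecond sensing make this plausible: light covers only about 0.3 mm per picosecond, so time-gating resolves the extra path length of a three-bounce route (laser → wall → hidden object → wall → camera). The numbers below are hypothetical, not measurements.

```python
# Time-of-flight arithmetic for looking around a corner (illustrative).
C = 0.299792458  # speed of light, metres per nanosecond

t_total_ns = 20.0           # hypothetical arrival time of a 3rd-bounce return
d_first, d_last = 2.0, 2.5  # known legs to and from the visible wall (m)

# Remaining path is wall -> hidden object -> wall; halve it for one leg,
# assuming roughly symmetric bounce points on the wall.
d_hidden_roundtrip = C * t_total_ns - (d_first + d_last)
print(d_hidden_roundtrip / 2, "m from wall to hidden object (approx.)")
```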
Coded Aperture Camera
The aperture of a 100 mm lens is modified by inserting a coded mask with a chosen binary pattern; the rest of the camera is unmodified.
From the captured blurred photo, the image can be refocused on the person.
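Refocusing here amounts to deconvolving the photo with the scaled mask pattern as the defocus PSF. A frequency-domain Wiener filter is one standard way to invert it; the `snr` constant and PSF handling below are illustrative assumptions, not the exact pipeline of [Veeraraghavan07].

```python
# Wiener deconvolution with a coded-aperture defocus PSF (sketch).
import numpy as np

def wiener_deblur(blurred, psf, snr=100.0):
    """blurred: 2-D float image. psf: 2-D kernel (the coded mask scaled
    to the blur size); anchoring the PSF at the origin corner means a
    centered PSF merely translates the result."""
    H = np.fft.fft2(psf / psf.sum(), s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * B))
```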
Computational Probes: Long-Distance Barcodes
• Smart barcode size: 3 mm × 3 mm
• Ordinary camera at a distance of 3 meters
Mohan, Woo, Smithwick, Hiura, Raskar (accepted as a SIGGRAPH 2009 paper)
Bokode: imperceptible visual tags for camera-based interaction from a distance
Ankit Mohan, Grace Woo, Shinsaku Hiura, Quinn Smithwick, Ramesh Raskar (Camera Culture Group, MIT Media Lab)
Barcodes are markers that assist machines in understanding the real world.
Defocus blur of a Bokode: the image is greatly magnified (simplified ray diagram).
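The magnification follows from the geometry: the tag's lenslet collimates each pattern point into a direction, and a camera focused at infinity maps directions back to sensor positions, so features are magnified by roughly f_camera / f_bokode, largely independent of distance. The numbers below are hypothetical, not the prototype's specifications.

```python
# Rough Bokode magnification estimate (illustrative numbers).
f_camera = 50.0   # camera focal length, mm
f_bokode = 5.0    # bokode lenslet focal length, mm
feature = 0.005   # 5-micron pattern feature, mm

magnification = f_camera / f_bokode
# 0.05 mm on the sensor spans several pixels, so the pattern is decodable.
print(feature * magnification, "mm on the sensor")
```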
Our prototypes; example application: street-view tagging.
BiDi Screen: converting an LCD into a large camera for 3D interactive HCI and video conferencing
Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar
Beyond multi-touch, on mobile devices and laptops: light-sensing pixels in the LCD.
Display with embedded optical sensors (Sharp Microelectronics optical multi-touch prototype).
Design overview: a display with embedded optical sensors; the LCD displays a mask roughly 2.5 cm in front of the optical sensor array, and objects are sensed up to about 50 cm away.
Beyond Multi-touch: Hover Interaction
• Seamless transition from multi-touch to gesture
• Thin package, using the LCD itself
Design vision: capture and display collocated at the object.
A bare sensor behind a spatial light modulator enables touch plus hover using a depth-sensing LCD.
Overview: sensing depth from an array of virtual cameras in the LCD.
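Conceptually, the mask turns the LCD into an array of virtual pinhole cameras: the bare sensor records one low-resolution view per pinhole, and the disparity between neighbouring views falls off with object distance. The tiling and correlation below are assumptions for illustration, not the shipping algorithm.

```python
# Depth cues from an array of virtual pinhole cameras (conceptual sketch).
import numpy as np

def pinhole_views(sensor, pitch):
    """Cut the sensor image into per-pinhole sub-images of `pitch` pixels."""
    h, w = sensor.shape
    return sensor[:h - h % pitch, :w - w % pitch].reshape(
        h // pitch, pitch, w // pitch, pitch).transpose(0, 2, 1, 3)

def disparity(view_a, view_b, max_shift=4):
    """Best integer shift aligning two neighbouring pinhole views;
    a larger |shift| indicates a nearer object."""
    scores = [np.sum((view_a - np.roll(view_b, s, axis=1)) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(scores)) - max_shift
```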
Camera Culture Group, MIT Media Lab; Ramesh Raskar; http://raskar.info
• Visual Social Computing
• Computational Photography: Digital, Epsilon, Coded, Essence
• Beyond Traditional Imaging: looking around a corner, LCDs as virtual cameras, computational probes (Bokode)
Cameras of the Future
[Recap of the computational photography landscape chart: Digital → Epsilon → Coded → Essence; progress on both axes.]
Computational Photography (http://raskar.info/photo/)
• Capture: overcome the limitations of cameras; capture richer data (multispectral); new classes of visual signals (light fields, depth, direct/global, Fg/Bg separation); close to scientific imaging
• Hyper-realistic synthesis: post-capture control, impossible photos
http://raskar.scripts.mit.edu/nextbillioncameras
Questions
• What will a camera look like in 10 to 20 years?
• How will a billion networked, portable cameras change social culture?
• How will online photo collections transform visual social computing?
• How will movie making and news reporting change? (computational-journalism.com)
[Figure: eye designs in nature as tools for visual computing: shadow-based, refractive, and reflective eyes (Fernald, Science, Sept 2006).]
Cameras and Their Impact
• Beyond traditional imaging: analysis and synthesis
  – Emerging optics, illumination, novel sensors
  – Exploit priors and online collections
• Applications
  – Better scene understanding and analysis
  – Capturing visual essence
  – Superior metadata tagging for effective sharing
  – Fusing non-visual data
• Impact on society
  – Beyond entertainment and productivity
  – Sensors for the disabled, new art forms, crowdsourcing, bridging cultures, social stability
http://raskar.scripts.mit.edu/nextbillioncameras
Next Billion Cameras
A. Cameras of the Future (Raskar, 30 minutes): enabling visual social computing; computational photography; beyond traditional imaging
B. Reconstructing the World (Seitz, 30 minutes): Photo Tourism and beyond; image-based modeling and rendering on a massive scale; scene summarization
C. Understanding a Billion Photos (Efros, 30 minutes): What will the photos depict? Photos as visual content for computer graphics; solving computer vision
Next Billion Cameras: http://raskar.scripts.mit.edu/nextbillioncameras
Course evaluation (prize: a free mug for each course!): http://www.siggraph.org/courses_evaluation
International Conference on Computational Photography, March 2010; papers due November 2, 2009; http://cameraculture.info/iccp10
Book: Computational Photography [Raskar and Tumblin]; A K Peters booth #2527; 20% coupons here; meet the authors Thursday at 2 pm