
INCAPS: An Intelligent Camera-Projection System Framework

Roberto Valenti, Jurjen Pieters

Overview

• The State of the Art

• What are we going to do?

• The Framework

• First Year(s) Project Plan

• Examples

• References

• Equipment Cost Estimation

State of the Art

[Image slides]

The Situation

[Diagram: four separate companies, each with its own hardware, software, algorithms, and databases]

What are we going to do?

• The scene needs a framework:
– An easier way to implement applications
– Interoperability
– New possibilities (more on this later)

The framework will extract information from “the world” and give users the opportunity to select what is relevant.

The Framework

[Architecture diagram of INCAPS, connecting: input devices (camera), input analysis, picture processing, an internal world representation with a database (DB), modules, output devices (projector), and users or companies]
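
As a rough illustration of how these components could interact, the following Python sketch wires a camera frame through analysis into a world representation and out to a display stage. Every name in it (WorldObject, WorldModel, analyse_frame, render_output) is a hypothetical placeholder, not part of INCAPS:

    from dataclasses import dataclass, field

    @dataclass
    class WorldObject:
        label: str               # e.g. "pedestrian", "traffic sign"
        position: tuple          # (x, y, z) in world coordinates
        confidence: float        # detector confidence in [0, 1]

    @dataclass
    class WorldModel:
        objects: list = field(default_factory=list)

        def update(self, detections):
            # Replace the current snapshot; a full system would track
            # objects over time instead of overwriting them.
            self.objects = list(detections)

    def analyse_frame(frame):
        # Stand-in for the input-analysis / picture-processing stages:
        # a real system would run detection on the camera frame here.
        return [WorldObject("pedestrian", (2.0, 0.0, 5.0), 0.9)]

    def render_output(world, relevant_labels):
        # Output stage: hand only what the application deems relevant
        # to the projector (or other output device).
        return [o for o in world.objects if o.label in relevant_labels]

    world = WorldModel()
    world.update(analyse_frame(frame=None))      # camera frame goes here
    print(render_output(world, {"pedestrian"}))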

The world as a source…

• We have to represent the world internally to understand it better

• Single components are not enough

• All the input devices are used to sample different aspects of the world (a lot of information out there!)

• Everything has to run in REAL TIME! (see the loop sketch below)
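
One way to read the real-time requirement: every input device is polled once per frame within a fixed time budget, and late work is dropped rather than queued. A minimal sketch, in which the device names and the frame-rate target are assumptions:

    import time

    def realtime_loop(devices, process, target_fps=25, max_frames=None):
        # Sample every input device once per frame and push the readings
        # through the processing stage, staying inside the frame budget.
        budget = 1.0 / target_fps
        frame = 0
        while max_frames is None or frame < max_frames:
            start = time.monotonic()
            readings = {name: read() for name, read in devices.items()}
            process(readings)
            frame += 1
            # Sleep off whatever remains of the budget; if processing
            # overran, continue immediately rather than queueing frames.
            time.sleep(max(0.0, budget - (time.monotonic() - start)))

    # Hypothetical devices; real ones would be camera/radar/GPS drivers.
    devices = {"camera": lambda: "frame-bytes", "gps": lambda: (52.37, 4.89)}
    realtime_loop(devices, print, target_fps=5, max_frames=3)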

The Project Plan

• First year: Research and testing of algorithms on multiple input devices; setup and tuning.

• Second year: Build an efficient internal world representation.

• Third year: Addition of multiple input/output devices; creation of APIs.

Reality 2 Digital

[Image slides]

3D Information Extraction

• We have an internal world map, so we can specify rules (e.g. pedestrians)

• Companies/users can decide what is important (tumors, broken bones, roads, traffic signs, components)

• Possibility to apply further algorithms to subsets of the world data

• Depending on the application, we can recognize and label objects, matching them against databases and ad-hoc algorithms (see the sketch below)
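
A sketch of what rule-driven selection could look like: each rule pairs a predicate (what is important) with a follow-up algorithm to run on the matching subset of world data. The rule name and object fields here are made up for illustration:

    def apply_rules(world_objects, rules):
        # For each rule, filter the world data with its predicate and
        # run the associated ad-hoc algorithm on the subset.
        return {
            name: algorithm([o for o in world_objects if predicate(o)])
            for name, (predicate, algorithm) in rules.items()
        }

    objects = [
        {"label": "pedestrian", "distance": 12.0},
        {"label": "traffic sign", "distance": 30.0},
        {"label": "pedestrian", "distance": 4.0},
    ]

    rules = {
        # Flag pedestrians closer than 10 m, nearest first; a medical
        # application would swap in predicates for tumors or fractures.
        "near_pedestrians": (
            lambda o: o["label"] == "pedestrian" and o["distance"] < 10.0,
            lambda subset: sorted(subset, key=lambda o: o["distance"]),
        ),
    }

    print(apply_rules(objects, rules))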

Digital 2 Reality

• The extracted information is displayed to the user in the most useful way, depending on the application.
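
For a projector or HUD, displaying information "in the most useful way" ultimately involves mapping a 3D world position onto a 2D point on the output surface. A minimal pinhole-projection sketch; the intrinsics (focal length, principal point) are invented values, not calibrated ones:

    def project_point(x, y, z, focal=800.0, cx=640.0, cy=360.0):
        # Pinhole model: a world point (in camera coordinates, z > 0)
        # lands at pixel (u, v) on the output device's image plane.
        u = cx + focal * x / z
        v = cy + focal * y / z
        return u, v

    # Where to draw a marker for a pedestrian 5 m ahead, 2 m to the right:
    print(project_point(2.0, 0.0, 5.0))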

Digital 2 Reality: Examples

Pedestrian detection

Night Vision

Equipment Cost Estimation

• Input Devices:
– Stereo camera
– Radars
– IR night-viewer
– Database system
– Wireless communication
– GPS

• Output Devices (one or two out of):
– Projector
– VR glasses
– Translucent screen
– HUD

Total: ~30.000 €

A very powerful computer, a car, and …

References

[1] Y. Baillot, D. Brown, and S. Julier. Authoring of Physical Models Using Mobile Computers. Fifth International Symposium on Wearable Computers, pages 39–46, October 7–10, 2001.

[2] J.F. Bartlett. Rock ’n’ Scroll is Here to Stay. IEEE Computer Graphics and Applications, 20(3):40–45, May/June 2000.

[3] U.S. Census Bureau. Topologically Integrated Geographic Encoding and Referencing System, http://www.census.gov/geo/www/tiger. 2002.

[4] A. Cheyer, L. Julia, and J. Martin. A Unified Framework for Constructing Multimodal Applications. Conference on Cooperative Multimodal Communication (CMC98), pages 63–69, January 1998.

[5] W.J. Clinton. Statement by the President Regarding the United States’ Decision to Stop Degrading Global Positioning System Accuracy. Office of the Press Secretary, The White House, May 1, 2000.

[6] P.R. Cohen, M. Johnston, D. McGee, S. Oviatt, J. Pittman, I. Smith, L. Chen, and J. Clow. QuickSet: Multimodal Interaction for Distributed Applications. ACM International Multimedia Conference, pages 31–40, 1997.

[7] R.T. Collins, A.R. Hanson, and E.M. Riseman. Site Model Acquisition under the UMass RADIUS Project. Proceedings of the ARPA Image Understanding Workshop, pages 351–358, 1994.

[8] D. Davis, T.Y. Jiang, W. Ribarsky, and N. Faust. Intent, Perception, and Out-of-Core Visualization Applied to Terrain. IEEE Visualization, pages 455–458, October 1998.

[9] D. Davis, W. Ribarsky, T.Y. Jiang, N. Faust, and S. Ho. Real-Time Visualization of Scalably Large Collections of Heterogeneous Objects. IEEE Visualization, pages 437–440, October 1999.

[10] N. Faust, W. Ribarsky, T.Y. Jiang, and T. Wasilewski. Real-Time Global Data Model for the Digital Earth. IEEE Visualization, March 2000.

[11] T.L. Haithcoat, W. Song, and J.D. Hipple. Building Footprint Extraction and 3-D Reconstruction from LIDAR Data. Remote Sensing and Data Fusion over Urban Areas, IEEE/ISPRS Joint Workshop, pages 74–78, 2001.

[12] K. Hinckley, J.S. Pierce, M. Sinclair, and E. Horvitz. Sensing Techniques for Mobile Interaction. ACM User Interface Software and Technology, pages 91–100, November 5–8, 2000.

[13] J.M. Kahn, R.H. Katz, and K.S.J. Pister. Mobile Networking for Smart Dust. ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 99), pages 116–122, August 17–19, 1999.

[14] D.M. Krum, R. Melby, W. Ribarsky, and L.F. Hodges. Isometric Pointer Interfaces for Wearable 3D Visualization. Submission to ACM CHI Conference on Human Factors in Computing Systems, April 5–10, 2003.

[15] D.M. Krum, O. Omoteso, W. Ribarsky, T. Starner, and L.F. Hodges. Speech and Gesture Control of a Whole Earth 3D Visualization Environment. Joint Eurographics - IEEE TCVG Symposium on Visualization, May 27–29, 2002.

[16] D.M. Krum, O. Omoteso, W. Ribarsky, T. Starner, and L.F. Hodges. Evaluation of a Multimodal Interface for 3D Terrain Visualization. IEEE Visualization, October 27–November 1, 2002.

[17] D.M. Krum, W. Ribarsky, C.D. Shaw, L.F. Hodges, and N. Faust. Situational Visualization. ACM Symposium on Virtual Reality Software and Technology, pages 143–150, November 15–17, 2001.

[18] K. Lyons and T. Starner. Mobile Capture for Wearable Computer Usability Testing. Fifth International Symposium on Wearable Computers.

[19] J. Murray. Wearable Computers in Battle: Recent Advances in the Land Warrior System. Fourth International Symposium on Wearable Computers, pages 169–170, October 18–21, 2000.

[20] S.L. Oviatt. Mutual Disambiguation of Recognition Errors in a Multimodal Architecture. Proceedings of the Conference on Human Factors in Computing Systems (CHI’99), pages 576–583, May 15–20, 1999.

[21] W. Piekarski and B.H. Thomas. Tinmith-Metro: New Outdoor Techniques for Creating City Models with an Augmented Reality Wearable Computer. Fifth International Symposium on Wearable Computers, pages 31–38, October 7–10, 2001.

[22] J. Rekimoto. Tilting Operations for Small Screen Interfaces. ACM User Interface Software and Technology, pages 167–168, November 6–8, 1996.

[23] J.C. Spohrer. Information in Places. IBM Systems Journal, 38(4):602–628, 1999.

[24] J. Vandekerckhove, D. Frere, T. Moons, and L. Van Gool. Semi-Automatic Modelling of Urban Buildings from High Resolution Aerial Imagery. Computer Graphics International Proceedings, pages 588–596, 1998.

[25] T. Wasilewski, N. Faust, and W. Ribarsky. Semi-Automated and Interactive Construction of 3D Urban Terrains. Proceedings of the SPIE Aerospace, Defense Sensing, Simulation and Controls Symposium, 3694A, 1999.

[26] S. You, U. Neumann, and R. Azuma. Orientation Tracking for Outdoor Augmented Reality Registration. IEEE Computer Graphics and Applications, 19(6):36–42, Nov/Dec 1999.

Questions?