
See & Hear, Show & Tell

In February 1994, we built the world's first LINUX PC cluster supercomputer in order to test a new communications concept: the AGGREGATE FUNCTION NETWORK (AFN). This hardware provides a wide range of "global state" operations with just a few microseconds total latency, and also can synchronize nodes to within a fraction of a microsecond... but how could we demonstrate this? Video walls, and even more so audio walls, depend on tight coordination, so we created software for them... but it was hard to find sufficiently impressive images, so we also began working on techniques for improving digital image quality. This paper briefly summarizes some of the sensing & display research that, for us, began as cluster-supercomputing spin-offs. Much of this research is done in close collaboration with members of the UNIVERSITY OF KENTUCKY'S CENTER FOR VISUALIZATION & VIRTUAL ENVIRONMENTS (CVVE).
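To make the flavor of these operations concrete, here is a minimal sketch of a single aggregate-function operation: a global AND that doubles as a barrier. The afn_and() name and signature are hypothetical illustrations, not the actual interface of the AFN libraries from Aggregate.Org; the function is stubbed so the example compiles and runs on a single node.

```c
/* Hypothetical sketch of an aggregate-function "global AND": every node
 * contributes a value and all nodes receive the AND of all contributions.
 * afn_and() is an illustration, not the real AFN API; it is stubbed here
 * for a single node so the example is self-contained. */
#include <stdio.h>

static unsigned afn_and(unsigned my_bits)
{
    /* Real AFN hardware would combine my_bits with every other node's
       contribution in a few microseconds; one node ANDs with itself. */
    return my_bits;
}

int main(void)
{
    unsigned ready = 1;        /* this node's local state           */
    if (afn_and(ready))        /* global AND doubles as a barrier   */
        puts("all nodes ready -- proceed in lockstep");
    return 0;
}
```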

Video Walls. Our first LINUX cluster video walls were built using 386/486 systems with dumb video cards, but by 1996 we had a cluster able to perform non-trivial computations while maintaining a 6,400 x 4,800 pixel video wall. Some of our older computational video walls are shown above. VWLIB, freely available at Aggregate.Org, implements a simple but very powerful virtual frame buffer model. For example, at SC99, we were able to use a reference MPEG decoder to play videos on a wall simply by using VWLIB's image file mapping. Multiple dynamically repartitionable walls are supported, and generalized subpixel rendering enhances the resolution of single-panel LCDs.
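The general idea behind such a virtual frame buffer is easy to sketch: pixels live in an ordinary file, any program that writes that file is effectively rendering, and each wall node displays its own tile. The following is an illustration of the concept only, assuming a 32-bit RGBA pixel layout and a hypothetical file name; it is not VWLIB's actual interface.

```c
/* Concept sketch of a file-mapped virtual frame buffer: an application
 * writes pixels into an mmap()ed file, and each wall node independently
 * displays its own tile of that file.  Illustrative only, not VWLIB. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define WALL_W 6400
#define WALL_H 4800

int main(void)
{
    size_t bytes = (size_t)WALL_W * WALL_H * 4;   /* 32-bit RGBA pixels */
    int fd = open("wall.fb", O_RDWR | O_CREAT, 0666);
    if (fd < 0) return 1;
    if (ftruncate(fd, (off_t)bytes) != 0) return 1;

    uint32_t *fb = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) return 1;

    /* An unmodified renderer just stores pixels; here, a red diagonal. */
    for (int i = 0; i < WALL_H; i++)
        fb[(size_t)i * WALL_W + i] = 0xFF0000FFu;

    munmap(fb, bytes);
    close(fd);
    return 0;
}
```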

Audio Walls. Our first LINUX cluster audio wall was demonstrated at SC94, playing multi-voice music. Over the past few years, we have focused on audio input and sound source localization. The above figure shows simultaneous tracking of two sound sources (people) using 8 microphones.
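One building block of such localization is estimating the time difference of arrival (TDOA) between a pair of microphones; intersecting pairwise delays across an array yields a position. The sketch below estimates an integer-sample delay by brute-force cross-correlation on synthetic signals, assuming 48 kHz sampling; it illustrates the generic technique, not our specific tracker.

```c
/* TDOA estimation by brute-force cross-correlation of two microphone
 * signals.  The signals are synthetic: the same burst reaches mic b
 * 23 samples after mic a.  Compile with -lm. */
#include <math.h>
#include <stdio.h>

#define N       4800   /* 0.1 s at 48 kHz        */
#define MAX_LAG 64     /* search window, samples */

static double burst(int i)    /* a short sinusoidal burst */
{
    return (i >= 1000 && i < 1100) ? sin(0.3 * i) : 0.0;
}

int main(void)
{
    static double a[N], b[N];
    for (int i = 0; i < N; i++) {
        a[i] = burst(i);
        b[i] = burst(i - 23);        /* mic b hears it 23 samples later */
    }

    int best_lag = 0;
    double best = -1e300;
    for (int lag = -MAX_LAG; lag <= MAX_LAG; lag++) {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            int j = i + lag;
            if (j >= 0 && j < N) sum += a[i] * b[j];
        }
        if (sum > best) { best = sum; best_lag = lag; }
    }

    /* At 48 kHz and ~343 m/s, each sample of delay is ~7.1 mm of path
       difference; pairwise delays constrain the source position. */
    printf("estimated delay: %d samples (expected 23)\n", best_lag);
    return 0;
}
```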

Multispectral Imaging. In the process of trying to remove chromatic aberrations, we found a way to extract multispectral data from a single raw capture using an unmodified digital camera (a CANON G1 for the above). We have continued to develop low-cost multispectral imaging techniques.

3D Capture. CVVE has significant expertise in structured light 3D capture. Combining that with our work in high-quality digital imaging, we have been working on full-handprint flash 3D capture for DHS. The above image is a small crop from such a handprint; the left side is a conventional optical image, the right side is a 2D rendering of the captured 3D ridge details. (A sketch of structured-light decoding appears after this section.)

Senscape.Org. A SENSCAPE is an integrated presentation of multidimensional sensory data that allows a human to understand and use properties of the environment that might otherwise be beyond human perception. Our target applications range from AVA (the grid-like AMBIENT VIRTUAL ASSISTANT, integrating hundreds of cameras, microphones, other sensors, projectors, and speakers) to FIRESCAPE (a helmet-mounted multi-sensor device to help firefighters navigate burning buildings). We are developing, integrating, and freely disseminating the systems technologies needed to make such applications highly effective and easily affordable.
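As promised above, here is a sketch of one standard structured-light technique, Gray-code stripe decoding: the projector displays log2(W) stripe patterns, and each camera pixel's on/off sequence across those frames identifies the projector column that lit it, from which depth follows by triangulation. This illustrates the generic method only, not the specific flash handprint system described above.

```c
/* Gray-code structured-light decoding for one camera pixel.  The pixel's
 * thresholded observations across the BITS projected stripe patterns form
 * a Gray-code word; converting it to binary yields the projector column. */
#include <stdio.h>

#define BITS 10                 /* 2^10 = 1024 projector columns */

/* Convert a Gray-code word back to a binary column index. */
static unsigned gray_to_binary(unsigned gray)
{
    unsigned bin = gray;
    for (unsigned shift = 1; shift < BITS; shift <<= 1)
        bin ^= bin >> shift;
    return bin;
}

int main(void)
{
    /* One pixel's thresholded observations, MSB first
       (1 = the stripe was lit at this pixel in that frame). */
    int seen[BITS] = {1,0,1,1,0,0,1,0,1,1};

    unsigned gray = 0;
    for (int i = 0; i < BITS; i++)
        gray = (gray << 1) | (unsigned)seen[i];

    /* The column index plus camera calibration gives a depth by
       triangulation of the camera ray and the projector plane. */
    printf("projector column = %u of %u\n",
           gray_to_binary(gray), 1u << BITS);
    return 0;
}
```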

This document should be cited as:

@techreport{sc08shst,
  author      = {Henry Dietz},
  title       = {See and Hear, Show and Tell},
  institution = {University of Kentucky},
  address     = {http://aggregate.org/WHITE/sc08shst.pdf},
  month       = {Nov},
  year        = {2008}
}