STEFANO CARRINO
http://home.hefr.ch/carrinos/
PhD Student
2008-2011

Technologies Evaluation & State of the Art

This document details technologies for gesture interpretation and analysis and proposes some parameters for classifying them.
Contents

Introduction
Our vision, in brief
Technologies Study
State of the Art: papers
Gesture recognition by computer vision
Gesture Recognition by Accelerometers
Technology
Technology Evaluation
Evaluation Criteria
Technology Comparison
Parameters weight
Comparison
Conclusions and Remarks
Accelerometers, gloves and cameras
Proposition
Miscellaneous
Observation
Some common features for gesture recognition by image analysis
Gesture recognition or classification methods
"Gorilla arm"
References
Attached
    Introduction
In the following sections we present the state of the art in data-acquisition technologies for gesture recognition. We then introduce some parameters for evaluating these approaches, motivating the weight of each parameter according to our vision. In the last section we highlight the conclusions of this survey of the state of the art in the field.
    Our vision, in brief
The AVATAR system will be composed of two elements:
    The Smart Portable Device (SPD).
    The Smart Environmental Device (SED).
The SPD must provide gesture interpretation for all applications whose data acquisition is environment independent (i.e. the cause-and-effect actions, inputs, computation and output are all contained in the SPD itself).
The SED offers gesture recognition where the SPD does not perform well. In addition, it can provide a layer for connecting multiple SPDs and, thanks to its computing power, the possibility of faster processing.
In this first step of our work we focus on the SPD, while keeping future developments in mind. A minimal sketch of this two-device split follows.
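As an illustration only, the split can be expressed as two components sharing a common recognition interface. Every class and method name below is an assumption made for exposition, not part of an actual AVATAR implementation:

```python
# Illustrative sketch of the SPD/SED split described above.
# All names are assumptions for exposition only.

from abc import ABC, abstractmethod


class GestureRecognizer(ABC):
    """Common interface: raw sensor samples in, gesture label out."""

    @abstractmethod
    def recognize(self, samples: list) -> str: ...


class SmartPortableDevice(GestureRecognizer):
    """SPD: self-contained recognition -- sensing, computation and
    output all live on the device itself."""

    def recognize(self, samples: list) -> str:
        return "local-gesture"  # placeholder for the on-device classifier


class SmartEnvironmentalDevice(GestureRecognizer):
    """SED: takes over where the SPD performs poorly, connects several
    SPDs together and offers extra computing power."""

    def __init__(self) -> None:
        self.connected_spds: list[SmartPortableDevice] = []

    def register(self, spd: SmartPortableDevice) -> None:
        self.connected_spds.append(spd)

    def recognize(self, samples: list) -> str:
        return "environment-gesture"  # placeholder for the heavier classifier
```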
    Technologies Study
The choice of input technologies for gesture interpretation is very important for achieving good recognition results. In recent years the evolution of technology and materials has pushed forward the feasibility and robustness of this kind of system; more complex algorithms are also now practical for these applications (increased computing speed, on mobile devices too, makes real-time approaches a reality).
    State of the Art: papers
Below is a short list of the articles we have read; a brief description follows each title.
    Gesture recognition by computer vision
Arm-pointing Gesture Interface Using Surrounded Stereo Cameras System [1]
    - 2004
    - Surrounding Stereo Cameras (four stereo cameras in four corners of the ceiling)
    - Arm pointing
    - Setting: 12 frame/s
    - Recognition rate: 97.4% standing
    - Recognition rate: 94% sitting posture
    - The lighting environment had a slight influence
Improving Continuous Gesture Recognition with Spoken Prosody [2]
    - 2003
    - Cameras and microphone
    - HMM - Bayesian Network
    - Gesture and Speech Synchronization
    - 72.4% of 1876 gestures were classified correctly
Pointing Gesture Recognition based on 3D-Tracking of Face, Hands and Head Orientation [3]
    - 2003
    - Stereo Camera (1)
    - HMM
    - 65% / 83% (without / with head orientation)
    - 90% after user specific training
Real-time Gesture Recognition with Minimal Training Requirements and On-Line Learning [4]
    - 2007
    - (SNM) HMMs modified for reduced training requirement
    - Viterbi inference
    - Optical, pressure, mouse/pen
    - Result: ???
Recognition of Arm Gestures Using Multiple Orientation Sensors: gesture classification [5]
    - 2004
    - IS-300 Pro Precision Motion Tracker by InterSense
    - Results
Vision-Based Interfaces for Mobility [6]
    - 2004
    - Head-worn camera
    - AdaBoost
- Detection (for objects larger than 30x20 pixels) runs at 10 frames per second on a 640x480 video stream on a 3 GHz desktop computer
    - Interesting references
    - 93.76% postures were classified correctly
GestureVR: Vision-Based 3D Hand Interface for Spatial Interaction [7]
    - 1998
    - 2 cameras 60Hz 3D space
    - 3 gestures
    - Finite state classification
    Gesture Recognition by Accelerometers
Accelerometer Based Gesture Recognition for Real Time Applications [8]
    - Input: Accelerometer Bluetooth
    - HMM
    - Gesture Recognized Correctly 96%
    - Reaction Time: 300ms
Accelerometer Based Real-Time Gesture Recognition [10]
    - Input: Sony-Ericsson W910i (3 axial accel.)
    - 97.4% and 96% accuracy on a personalized gesture set
    - HMM & SVM (Support Vector Machine)
    - HMM (My algorithm was based on a recent Nokia Research Center paper [11] with some modifications. I have used the freely available JAHMM library for implementation.)
    - Runtime was tested on a new generation MacBook computer with a dual core 2 GHz processor and 1 GB memory.
    - Recognition time was independent from the number of teaching examples and averaged at 3.7ms for HMM and 0.4ms for SVM.
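The accelerometer entries above rely on hidden Markov models (the work above uses the freely available JAHMM library). As a rough sketch of the general technique, not of any paper's exact model, a per-gesture Gaussian HMM classifier could look like this in Python with the hmmlearn package; the data layout and gesture names are invented for illustration:

```python
# Sketch of HMM-based accelerometer gesture recognition: one Gaussian
# HMM per gesture class, classification by maximum log-likelihood.

import numpy as np
from hmmlearn import hmm  # pip install hmmlearn


def train_models(training_data):
    """training_data maps gesture name -> list of recordings, each an
    (n_samples, 3) array of x/y/z accelerations (invented layout)."""
    models = {}
    for name, recordings in training_data.items():
        X = np.concatenate(recordings)          # stack all samples
        lengths = [len(r) for r in recordings]  # per-recording lengths
        model = hmm.GaussianHMM(n_components=5, covariance_type="diag",
                                n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models


def classify(models, recording):
    """Return the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(recording))
```

Classification scores a new recording under every trained model and keeps the best, which is consistent with the observation above that recognition time does not depend on the number of training examples.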
Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer [11]
    - 2008
    - Input: Three-dimensional MEMS accelerometer and a Single Chip Microcontroller
    - 94% Arabic number recognition
Gesture-recognition with Non-referenced Tracking [12]
    - 2005-2006 (?)
    - Accelerometer Bluetooth (MEMS) + gyroscopes
    - 3motion
    - Particular algorithm for gesture recognition
    - No numerical results
Real time gesture recognition using Continuous Time Recurrent Neural Networks [13]
    - 2007
    - Accelerometers
    - Continuous Time Recurrent Neural Networks (CTRNN)
- Neuro-fuzzy system (in a previous project)
    - Isolated gesture: 98% was obtained for the training set and 94% for the testing set
    - Realistic environment: 80.5% and 63.6 %
- The neuro-fuzzy system cannot cope with dynamic (realistic) situations
    - G. Bailador, G. Trivino, and S. Guadarrama. Gesture recognition using a neuro-fuzzy predictor. In International Conference of Artificial Intelligence and Soft Computing. Acta press, 2006.
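The paper's actual network is not reproduced here, so the following only sketches the textbook CTRNN state update, tau * dy/dt = -y + W @ sigmoid(y + bias) + input, integrated with the Euler method; all parameters are placeholders:

```python
# Generic continuous-time recurrent neural network (CTRNN) update,
# integrated with a forward Euler step. Placeholder parameters only.

import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def ctrnn_step(y, W, bias, tau, ext_input, dt=0.01):
    """Advance the neuron states y by one Euler step of size dt."""
    dydt = (-y + W @ sigmoid(y + bias) + ext_input) / tau
    return y + dt * dydt
```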
ADL Classification Using Triaxial Accelerometers and RFID [14]
    - >2004
    - ADL = Activities of Daily living
    - 2 wireless (Zigbee homemade) accelerometers for 5 body states
    - Glove type RFID reader
    - 90% over 12 ADLs
    Technology
The input devices used in recent years are:
    Accelerometers
    Wireless
    Non wireless
Camera [17]:
Depth-aware cameras. Using specialized cameras one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of what is being seen. These can be effective for detecting hand gestures thanks to their short-range capabilities.
Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from their output. This method uses more traditional cameras and thus does not suffer the same distance issues as current depth-aware cameras. To obtain the cameras' relations, one can use a positioning reference such as a lexian-stripe (?) or infrared emitters. (A minimal depth-from-disparity sketch follows this list.)
Single camera. A normal camera can be used for gesture recognition where the resources or environment would not be convenient for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera allows greater accessibility to a wider audience.
Angle Shape Sensor [18]:
Exploiting the reflection of light inside optical fibres, we are able to rebuild a 3D model of the hand(s).
Although wireless (Bluetooth) versions are also available, the present solutions (gloves) have to be connected to a host computer.
    Infrared technology.
    Ultrasound / UWB (Ultra WideBand)
    RFID
    Gyroscopes (two angular-velocity sensors)
    Controller-based gestures. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand, as is the Wii Remote, which can study changes in acceleration over time to represent gestures.
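To make the stereo-camera approach above concrete, here is a minimal depth-from-disparity sketch using OpenCV's block matcher. It assumes an already rectified (calibrated) image pair; the file names and matcher parameters are placeholder choices:

```python
# Minimal stereo depth-map sketch with OpenCV block matching.
# Assumes a calibrated, rectified image pair; file names are placeholders.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right) / 16.0

# With focal length f (pixels) and baseline B (metres), depth at each
# pixel with a valid disparity d is f * B / d: nearer objects have
# larger disparities.
```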
    Technology Evaluation
    Evaluation Criteria
Below is the list of evaluation parameters for the technologies presented in the previous section.
    Resolution: in relative amounts, resolution describes the degree to which a change can be detected. It is expressed as a fraction of an amount to which you can easily relate. For example, printer manufacturers often describe resolution as dots per inch, which is easier to relate to than dots per page.
    Accuracy: accuracy describes the amount of uncertainty that exists in a measurement with respect to the relevant absolute standard. It can be defined in several different ways and is dependent on the specification philosophy of the supplier as well as product design. Most accuracy specifications include a gain and an offset parameter.
Latency: the waiting time until the system first responds.
    Range of motion.
    User Comfort.
    Cost. In economic terms.
    Technology Comparison
    Parameters weight
In this section we explain how the weights in the comparison table below were chosen to reflect our own priorities.
First) Cost: we are in a research context, so valuing the cost of our system from a marketing standpoint is not so important. But we agree with the idea attributed to H. Ford: "True progress is made only when the advantages of a new technology are within reach of everyone." For this reason cost also appears as a parameter in the table: a concept with no possible future practical application is useless (gloves for hand modelling costing $5000 or more are quite hard to imagine in a cheaper form in the future).
Second) User comfort: a technology completely invisible to the user would be ideal. From this perspective it is not easy to decide how to interface the user with the system. For example, implementing gesture recognition without any burden on the final user (gloves, cameras, sensors) is not a dream; on the other hand, the output and the feedback still have to be presented to the user. From this viewpoint a head-mounted display (we are considering applications in the context of augmented reality) looks like the first natural solution. At this point, adding a camera to this device does not worsen the situation, and brings a huge advantage (and future possibilities):
    Possible uncoupling from the environment (if enough computational power is provided to the user): all the technology is on the user.
    In any case, if we need it, we can establish a network with other systems to gain more information and enrich our system.
We are able to enter the domain of wearable/mobile systems. It is a challenge, but it makes our system richer and more valuable.
Third) Range of Motion: this follows directly from the previous point. With a wearable technology we can get rid of this problem; the range of motion is strictly related to the context and does not depend on our system. With other choices (e.g. cameras and sensors in the environment) the system will work only in a specific environment and can lose generality.
Fourth) Latency: dealing with this problem at this stage is rather premature. Latency depends on the technology used and on the algorithms applied for gesture recognition and tracking, but potentially also on other parameters such as the distance between the input system, the processing system and the output/feedback system. (For example, if the information is carried by sound, the time of flight may not be negligible in a real-time system: at roughly 343 m/s, a 3 m path already adds about 9 ms.)
Fifth) Accuracy & Resolution: first of all the system has to be reliable, so these parameters are truly meaningful for our application. As far as we are concerned, we would like a tracking system able to discern correctly a small vocabulary of gestures and to make realistic interactions with three-dimensional virtual objects in a three-dimensional mixed world possible.
    Comparison
Analyzing the input approaches, we have noticed two things:
Some of the devices presented here are the direct evolution of previous ones;
Nowadays some technologies are (in this domain, at least) evidently inferior to others.
According to the first point, we discard wired accelerometers from further analysis; they have no advantages over the equivalent wireless solution.
Based on the second, we can exclude RFID in favour of UWB.
In the previous section we listed gyroscopes as a possible technology; this is not completely correct. In reality, this kind of technology is really applicable only when integrated with accelerometers or other sensors.
Technologies                 | Resolution-Accuracy | Latency | Range of motion | User Comfort | Cost     | RESULTS
Accelerometers - wireless    | 3                   | 4       | 5               | 2            | 5        | 55
Camera - single camera       | 2                   | 4       | 5               | 4            | 4        | 53
Camera - stereo cameras      | 3                   | 2       | ?               | 3 (?)        | 3        | 26+3*?
Camera - depth-aware cameras | 4                   | 4 (?)   | 5               | 3            | 3        | 60
Angle shape sensor (gloves)  | 4                   | 4       | 5               | 2            | 1 (-100) | 54
Infrared technology          | 4                   | 4       | 5               | 4            | 4        | 63
Ultrasound                   | 2                   | ?       | ?               | ?            | ?        | 10+X
Weight                       | 5                   | 4       | 3               | 2            | 1        |
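The RESULTS column appears to be the weighted sum of each row's scores, using the weights in the table's last row (the stereo and glove rows contain uncertain cells in the source). A quick check against the wireless-accelerometer row:

```python
# RESULTS as weight-by-score sums, checked against the table.

weights = [5, 4, 3, 2, 1]         # resolution-accuracy, latency, range, comfort, cost
accel_wireless = [3, 4, 5, 2, 5]  # scores from the first row

result = sum(w * s for w, s in zip(weights, accel_wireless))
print(result)  # 55, matching the RESULTS column
```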
From this table, two approaches emerge as the most interesting:
    The infrared technology
    The depth-aware camera.
In reality these two technologies are not uncorrelated. Indeed, depth-aware cameras are often equipped with infrared emitters and receivers to calculate the position in space of the objects in the camera's field of view [19].
    Conclusions and Remarks
Choosing a technology for our future work was not easy at all! Above all, the validity of a technology is strictly linked to its use. For example, the results obtained using a camera for gesture interpretation are strictly connected to the algorithms used to recognise the gestures, so it is impracticable to say THIS IS THE technology to use. Moreover, there are other factors (such as technical evolution) that we have to take into account.
Computer vision offers the user a less cumbersome interface, requiring only that they remain within the field of view of the camera or cameras. By deducing features and movement in real time from the captured images, gestures and postures can be recognised. However, computer vision typically requires good lighting conditions, and the occlusion issue makes this solution application dependent.
Generally, there are two principal ways to tackle the issues tied to gesture recognition:
    - Computer Vision;
    - Accelerometers (often coupled with gyroscopes or other sensors).
Each approach has advantages and disadvantages. In general, studies report gesture recognition rates above 80% (often above 90%) within a restricted vocabulary.
However, the evolution of new technology keeps pushing these results toward higher levels.
    Accelerometers, gloves and cameras
The scenarios we have in mind are in the context of augmented reality; it is therefore natural to consider a head-mounted display, and adding a lightweight camera will not drastically change user comfort;
Wireless technology provides fairly uncumbersome sensors, but their integration on the human body is still somewhat intrusive;
Gloves are another simple and not overly intrusive device (in our opinion), but achieving a reliable mapping in 3D space nowadays comes at a non-negligible cost [18];
However, considering generalized scenarios and the most varied types of gesture (body, arms, hands), we do not discard the idea of bringing together several kinds of sensors.
    Proposition
What we propose for the next step is to address scientific problems such as user identification and multiuser management, context dependence (tracking), definition of a model/language of gesture, and gesture recognition (acquisition and analysis).
All this while fixing two goals for the future applications:
    Usability.
    That is:
    Robustness;
    Reliability.
That does not (at this moment) mean:
    Easy to wear (weight).
    Augmented / virtual reality applicability:
    Mobility;
    3D gesture recognition capability;
    Dynamic (and static?) gesture recognition.
    As next steps I will define the following:
    Work environment;
    Definition of a framework for gesture modelling (???);
    Acquisition technology selection;
    Delve into state of the art for what concerns:
    Gesture vocabulary definition
    Action theory
    Framework for gesture modelling
The choice of gesture model will be made with the following step in mind: extending gesture interpretation to the environment. In this perspective we will also need a strategy for adding a tracking system that determines the user's position together with head position and orientation. This will be necessary if we want to be independent of visual markers or similar solutions.
Miscellaneous
    Observation [13]:
Hidden Markov models, dynamic programming and neural networks have been investigated for gesture recognition, with hidden Markov models being nowadays one of the predominant approaches for classifying sporadic gestures (e.g. classification of intentional gestures). Expert fuzzy systems have also been investigated for gesture recognition, based on analyzing complex features of the signal such as the Doppler spectrum. The disadvantage of these methods is that the classification is based on the separability of the features; therefore two different gestures with similar values for these features may be difficult to classify.
Some common features for gesture recognition by image analysis [6] (a minimal Motion History Image sketch follows the list):
    Image moments.
    Skin tone Blobs.
    Coloured Markers.
    Geometric Features.
    Multiscale shape characterization.
    Motion History Images and Motion Energy Images.
    Shape Signatures.
    Polygonal approximation-based Shape Descriptor.
    Shape descriptors based upon regions and graphs.
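As one concrete example from this list, a Motion History Image can be maintained with a few lines of NumPy, following the standard Bobick & Davis formulation; the difference threshold and decay duration below are arbitrary choices:

```python
# Minimal Motion History Image (MHI) update: moving pixels are stamped
# with the current time, stale entries decay to zero.

import numpy as np


def update_mhi(mhi, prev_frame, frame, timestamp, duration=1.0, thresh=30):
    """mhi, prev_frame, frame: 2-D arrays of the same shape (grayscale)."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    mhi = np.where(motion, float(timestamp), mhi)  # stamp moving pixels
    mhi[mhi < timestamp - duration] = 0.0          # forget stale motion
    return mhi
```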
Gesture recognition or classification methods [16]
The following classification methods have been proposed in the literature so far (a minimal dynamic time warping sketch follows the list):
    Hidden Markov Model (HMM).
    Time Delay Neural Network (TDNN).
    Elman Network.
    Dynamic Time Warping (DTW).
    Dynamic Programming.
    Bayesian Classifier.
Multi-layer Perceptrons.
    Genetic Algorithm.
    Fuzzy Inference Engine.
    Template Matching.
    Condensation Algorithm.
    Radial Basis Functions.
    Self-Organizing Map.
    Binary Associative Machines.
    Syntactic Pattern Recognition.
    Decision Tree.
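As one concrete example from this list, the following is a textbook dynamic time warping distance between two 1-D sequences: a minimal reference implementation without a windowing constraint, not tied to any cited paper:

```python
# Textbook dynamic time warping (DTW) distance between two sequences.

import numpy as np


def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```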
    " Gorilla arm"
    " Gorilla arm"REF _Ref216868255 h [21] was a side-effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s.
    Designers of touch-menu systems failed to notice that humans aren't designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized -- the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale to human-factors designers; " Remember the gorilla arm!"is shorthand for " How is this going to fly in real use?"
    Gorilla arm is not a problem for specialist short-term-use uses, since they only involve brief interactions which do not last long enough to cause gorilla arm.
    References
[1] Yamamoto, Y.; Yoda, I.; Sakaue, K.: Arm-pointing gesture interface using surrounded stereo cameras system. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 4, 23-26 Aug. 2004, pp. 965-970.
[2] Kettebekov, S.; Yeasin, M.; Sharma, R.: Improving continuous gesture recognition with spoken prosody. Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Vol. 1, 18-20 June 2003, pp. I-565 - I-570.
[3] Nickel, K.; Stiefelhagen, R.: Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. Proceedings of the 5th International Conference on Multimodal Interfaces, Vancouver, British Columbia, Canada, November 5-7, 2003.
[4] Rajko, S.; Gang Qian; Ingalls, T.; James, J.: Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning. IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), 17-22 June 2007, pp. 1-8.
[5] Lementec, J.-C.; Bajcsy, P.: Recognition of arm gestures using multiple orientation sensors: gesture classification. Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, 3-6 Oct. 2004, pp. 965-970.
[6] Kölsch, M.; Turk, M.; Höllerer, T.: Vision-based interfaces for mobility. The First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MOBIQUITOUS 2004), 22-26 Aug. 2004, pp. 86-94.
[7] Segen, J.; Kumar, S.: GestureVR: vision-based 3D hand interface for spatial interaction. Proceedings of the sixth ACM International Conference on Multimedia, Bristol, United Kingdom, September 13-16, 1998, pp. 455-464.
[8] Beedkar, K.; Shah, D.: Accelerometer Based Gesture Recognition for Real Time Applications. Real Time Systems project description, MS CS, Georgia Institute of Technology.
[9] Prekopcsák, Z.; Halácsy, P.; Gáspár-Papanek, C.: Design and development of an everyday hand gesture interface. MobileHCI '08: Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, the Netherlands, September 2008.
[10] Prekopcsák, Z.: Accelerometer Based Real-Time Gesture Recognition. POSTER 2008: Proceedings of the 12th International Student Conference on Electrical Engineering, Prague, Czech Republic, May 2008.
[11] Zhang, Shiqi; Yuan, Chun; Zhang, Yan: Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer. Fourth International Conference on Natural Computation (ICNC '08), Vol. 4, 18-20 Oct. 2008, pp. 237-241.
[12] Keir, P.; Payne, J.; Elgoyhen, J.; Horner, M.; Naef, M.; Anderson, P.: Gesture-recognition with Non-referenced Tracking. IEEE Symposium on 3D User Interfaces (3DUI 2006), 25-29 March 2006, pp. 151-158.
[13] Bailador, G.; Roggen, D.; Tröster, G.; Triviño, G.: Real time gesture recognition using Continuous Time Recurrent Neural Networks. 2nd International Conference on Body Area Networks (BodyNets), 2007.
[14] Im, Saemi; Kim, Ig-Jae; Ahn, Sang Chul; Kim, Hyoung-Gon: Automatic ADL classification using 3-axial accelerometers and RFID sensor. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), 20-22 Aug. 2008, pp. 697-702.
[15] Mitra, S.; Acharya, T.: Gesture Recognition: A Survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007.
[16] Habib, Hafiz Adnan: Gesture Recognition Based Intelligent Algorithms for Virtual Keyboard Development. PhD thesis.
[17] http://en.wikipedia.org/wiki/Gesture_recognition
[18] http://www.5dt.com/ - see the attached documentation.
[19] http://www.3dvsystems.com/ - see the attached documentation.
[20] http://en.wikipedia.org/wiki/Touchscreen
Attached
5DT Data Glove 5 Ultra

Product Description
The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern Motion Capture and Animation Professionals. It offers comfort, ease of use, a small form factor and multiple application drivers. The high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation.
The 5DT Data Glove 5 Ultra measures finger flexure (1 sensor per finger) of the user's hand. The system interfaces with the computer via a USB cable. A Serial Port (RS 232, platform independent) option is available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth technology (up to 20 m distance) for high-speed connectivity, for up to 8 hours on a single battery. Right- and left-handed models are available. One size fits many (stretch lycra).

Features
- Advanced sensor technology
- Wide application support
- Affordable quality
- Extreme comfort
- One size fits many
- Automatic calibration, minimum 8-bit flexure resolution
- Platform independent: USB or Serial interface (RS 232)
- Cross-platform SDK
- Bundled software
- High update rate
- On-board processor
- Low crosstalk between fingers
- Wireless version available (5DT Ultra Wireless Kit)
- Quick "hot release" connection

Related Products
- 5DT Data Glove 14 Ultra
- 5DT Data Glove 5 MRI (for Magnetic Resonance Imaging applications)
- 5DT Data Glove 16 MRI (for Magnetic Resonance Imaging applications)
- 5DT Wireless Kit Ultra
- 5DT Serial Interface Kit

Data Sheets and Manuals
- 5DT Data Glove Series data sheet: 5DTDataGloveUltraDatasheet.pdf (124 KB)
- 5DT Data Glove 5 manual: 5DT Data Glove Ultra - Manual.pdf (2,168 KB)

Glove SDK
Windows and Linux SDK (free): the current version of the Windows SDK is 2.0 and of the Linux SDK 1.04a. The driver works for all versions of the 5DT Data Glove Series; please refer to the driver manual for instructions on how to install and use it.
- Windows 95/98/NT/2000 SDK: GloveSDK_2.0.zip (212 KB)
- Linux SDK: 5DTDataGloveDriver1_04a.zip (89.0 KB)
The following files contain all the SDKs, manuals, glove software and data sheets for the 5DT Data Glove Series:
- Windows 95/98/NT/2000: GloveSetup_Win2.2.exe (13.4 MB)
- Linux: 5DTDataGloveSeriesLinux1_02.zip (1.21 MB)
Unix driver: the 5DT Data Glove Ultra Driver for Unix provides access to the 5DT range of data gloves at an intermediate level. The driver functionality includes multiple instances, easy initialization and shutdown, basic (raw) sensor values, scaled (auto-calibrated) sensor values, calibration functions, basic gesture recognition and a cross-platform Application Programming Interface (API). The driver utilizes Posix threads. Pricing for this driver is shown below.
Go to our Downloads page for more drivers, data sheets, software and manuals.

Pricing
PRODUCT NAME                   | PRODUCT DESCRIPTION                            | PRICE
5DT Glove 5 Ultra Right-handed | 5 Sensor Data Glove: Right-handed              | US$995
5DT Glove 5 Ultra Left-handed  | 5 Sensor Data Glove: Left-handed               | US$995
Accessories
5DT Ultra Wireless Kit         | Kit allows for 2 gloves in one compact package | US$1,495
5DT Data Glove Serial Kit      | Serial Interface Kit                           | US$195
Drivers & Software
Alias|Kaydara MOCAP Driver     |                                                | US$495
3D Studio Max 6.0 Driver       |                                                | US$295
Maya Driver                    |                                                | US$295
SoftImage XSI Driver           |                                                | US$295
UNIX SDK                       | * Please note: serial only (no USB drivers)    | US$495
ZCam™ 3D video cameras by 3DV

Since it was established, 3DV Systems has developed four generations of depth cameras. Its primary focus in developing new products throughout the years has been to reduce their cost and size, so that the unique state-of-the-art technology will be affordable and meet the needs of consumers as well as those of multiple industries.

In recent years 3DV has been developing DeepC™, a chipset that embodies the company's core depth-sensing technology. This chipset can be fitted to work in any camera for any application, so that partners (e.g. OEMs) can use their own know-how, market reach and supply chain in the design and manufacturing of the overall camera capabilities. The chipset will be available for sale soon.

The new ZCam™ (previously Z-Sense), 3DV's most recently completed prototype camera, is based on DeepC™ and is the company's smallest and most cost-effective 3D camera. At the size of a standard webcam and at affordable cost, it provides very accurate depth information at high speed (60 frames per second) and high depth resolution (1-2 cm). At the same time, it provides synchronized and synthesized quality colour (RGB) video (at 1.3 M-pixel). With these specifications, the new ZCam™ is ideal for PC-based gaming and for background replacement in web-conferencing. Game developers, web-conferencing service providers and gaming enthusiasts interested in the new ZCam™ are invited to contact us.

As previously mentioned, the new ZCam™ and DeepC™ are the latest achievements backed by a tradition of providing high-quality depth-sensing products. Z-Cam™, the first depth video camera, was released in 2000 and was targeted primarily at broadcasting organizations. Z-Mini™ and DMC-100™ followed, each representing another leap forward in reducing cost and size.