
    A System to Study 3D Perception for Diagnosing Schizophrenia and Assessing Treatment Results

    Jordan T. Ash, James M. Hughes, Thomas V. Papathomas

    Department of Biomedical Engineering and Laboratory of Vision Research

    Rutgers University, New Brunswick, New Jersey, 08854

    Abstract: Individuals with schizophrenia (SZ) perceive various visual stimuli differently than healthy controls. This is especially true of the hollow-face illusion. We have created a virtual rendering system that exploits this well-known fact. We also explain the addition of a head-tracking feature that provides a more realistic spatio-temporal interaction for our virtual-reality apparatus. Our system is capable of quantifying the amount of time a person perceives the illusion and affords a comparison between SZ patients and controls, thus offering a diagnostic tool for SZ.

    I. INTRODUCTION

    Previous studies [1, 2] have shown that individuals with SZ rely on data-driven processes for perception. Consequently, they under-utilize concept-driven processes that rely on past experiences. This is particularly clear when people with SZ observe hollow-face illusion (HFI) stimuli [3]. In this study we discuss a virtual-reality (VR) system that we developed to create experiments that are impossible to achieve with real-world (RW) stimuli. We show that healthy controls respond similarly to VR and RW stimuli, even when relative motion cues are accounted for. We used the concept of depth reversal [3, 4] to measure the time observers perceive the HFI in a pilot study. We offer a proposal to use virtual HFI stimuli to diagnose SZ, and to assess the efficacy of treatment methods.

    II. METHODS

    Eight naïve subjects participated in the pilot VR study. Ten naïve subjects participated in the pilot RW study. None of the participants were known to have any cognitive abnormalities.

    The VR stimulus was a mask created with the FaceGen [5] tool for human head object file generation. The head was then modified in 3D Studio Max [6] to create a hollow mask, mapped with lifelike features on both sides using three high-resolution image files: one of the face (1024x1024 pixels), and two images of the left and right eye (256x256 pixels each). It was presented on a 15-inch LCD monitor supplied with a laptop computer. This mask was loaded as an object file into our virtual environment using the MOGL features of Psychtoolbox [7, 8, 9]. In our pilot study, we varied the distance between the nose-tip and the vertical axis of rotation, as shown in Fig. 1.

    Figure 1. Left panel: All masks (a-e) face left in this top view. The vertical rotation axis is at the circle's center. Mask d rotates around an axis located at the tip of the nose. Right panel: Diagram specifying the notation. The origin is at the circle's center.
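    As a minimal sketch, not the authors' released code, the mask could be loaded into the MOGL/Psychtoolbox environment as below; the file name hollow_mask.obj and the window setup are assumptions.

        % Load the OBJ mask and register it for fast per-frame redrawing.
        global GL;
        InitializeMatlabOpenGL;                 % expose low-level OpenGL (MOGL)
        win = Screen('OpenWindow', max(Screen('Screens')));
        Screen('BeginOpenGL', win);
        glEnable(GL.DEPTH_TEST);                % correct occlusion for 3D geometry
        objs = LoadOBJFile('hollow_mask.obj');  % hypothetical path to the mask
        moglmorpher('addMesh', objs{1});        % register the mesh for rendering
        Screen('EndOpenGL', win);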

    Our software employed the standard cylindrical parametrization:

    <x, y, z> = <R cos(θ), R sin(θ), 0>    (1)

    This parametrization has many advantages, most notably the ability to easily switch the mask between facing inwards and facing outwards by simply changing the sign of R. The mask was rotated by incrementing θ by dθ in each iteration; the sign and magnitude of dθ allowed the rotation direction and angular speed to be changed. The amount of time spent in the illusion was tracked by having the observer press the right or left arrow key, corresponding to whether he or she perceived the mask as rotating clockwise or counterclockwise as seen from above (same as in the RW experiment).
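    The loop below sketches this logic under stated assumptions: the values of R and dθ are illustrative, drawMask is a hypothetical rendering helper, the sign convention for clockwise rotation is assumed, and an iteration counts toward the illusion when the reported direction is opposite the physical one (i.e., depth reversal [3, 4]).

        % Rotation/response loop; D = 25 units as in the RW setup.
        R = 2; dtheta = 0.2*pi;              % axis offset and angular step
        theta = 0; nIllusion = 0; nTotal = 0;
        left = KbName('LeftArrow'); right = KbName('RightArrow');
        for iter = 1:40                      % four revolutions, 10 steps each
            theta = theta + dtheta;          % sign of dtheta sets direction
            maskPos = [R*cos(theta), R*sin(theta), 0];  % parametrization (1)
            drawMask(maskPos, theta);        % hypothetical rendering helper
            [~, ~, keyCode] = KbCheck;       % poll the observer's report
            if keyCode(left) || keyCode(right)
                reportedCW = keyCode(right); % right arrow = clockwise from above
                actualCW = (dtheta < 0);     % assumed sign convention
                nIllusion = nIllusion + (reportedCW ~= actualCW);
            end
            nTotal = nTotal + 1;
        end
        J = nIllusion / nTotal;              % numerator of the J/M measure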

    We assessed the strength of the illusion by the ratio J/M, where J and M are themselves ratios. J is the ratio of iterations spent in the illusion to total iterations. M is the length of the time interval during which it was possible to experience the illusion (i.e., when the concave side was facing the observer) divided by the duration of the entire trial. M is given by (2), where R is the distance from the mask to the center of rotation and D is the distance from the center of rotation to the observer.

    M = 1/2 ± (1/π) arcsin(R/D)    (2)

    A sum (+) corresponds to a trial in which the mask is facing away from the axis of rotation, while a difference (−) corresponds to the mask facing towards the axis of rotation.
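    A sketch of the geometry behind (2), assuming the observer sits on the x-axis at distance D from the rotation axis:

        % Mask center P = (R cos θ, R sin θ), observer O = (D, 0), and, for an
        % outward-facing mask, surface normal n = (cos θ, sin θ). The concave
        % side faces the observer iff n · (O − P) = D cos θ − R < 0.
        \[
          M = 1 - \frac{\arccos(R/D)}{\pi}
            = \frac{1}{2} + \frac{1}{\pi}\arcsin\!\left(\frac{R}{D}\right)
        \]
        % For an inward-facing mask the inequality reverses, which flips the
        % sign of the arcsine term.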

    Our RW system mirrored the VR system closely, and involved a similarly varied rotation axis. The same formulas were applied for statistical analysis. A painted plastic mask, affixed to a motorized turntable, was used as the stimulus, and a MATLAB [10] program was used to record the intervals during which observers witnessed the illusion. Each observer saw the stimulus in all possible combinations of rotation radius, mask direction, and rotation direction. The order of these conditions was arranged pseudo-randomly; a sketch of this schedule appears below. dθ was kept at 0.2π radians per iteration, resulting in 10 iterations per full rotation. The viewing distance was kept at 25 units, to allow the mask to fit comfortably on the screen during rotation. In each trial, the mask rotated a total of four revolutions (8π radians), justifying the use of (2), which applies only for integer multiples of π radians.
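    The condition schedule might be generated as in the following sketch; the factor levels are hypothetical, and only the factorial-plus-shuffle structure is taken from the text.

        % Enumerate all combinations of the three factors, then shuffle them.
        radii     = [0 1 2 3 4];    % hypothetical axis offsets (a-e of Fig. 1)
        facing    = [ 1 -1];        % sign of R in (1): outward or inward
        direction = [ 1 -1];        % sign of dθ: counterclockwise or clockwise
        [G1, G2, G3] = ndgrid(radii, facing, direction);
        conditions = [G1(:), G2(:), G3(:)];                   % one row per condition
        order = conditions(randperm(size(conditions, 1)), :); % pseudo-random order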

    Although not utilized in the pilot study, our software is also capable of real-time head tracking (Fig. 2). Such functionality allows the combination of observer self-motion and object motion in future studies. A Nintendo Wii remote control [11, 12] senses the positions of battery-powered infrared light-emitting diodes (LEDs) that are mounted on a headband worn by our observers.

    Figure 2. Top view of the rear-projection head-tracking system (not drawn to scale).
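    The sensing geometry can be sketched as follows, assuming a hypothetical getIRDots() wrapper around the WiiLAB interface [12] that returns the two LED images in camera pixels; the Wiimote's IR camera reports dot positions on a 1024x768 grid over roughly a 45-degree horizontal field of view.

        % Triangulate the head position from the two headband LEDs.
        ledSpacing  = 150;                 % assumed LED separation on headband, mm
        radPerPixel = (45*pi/180) / 1024;  % approximate angular size of one pixel
        [p1, p2] = getIRDots();            % hypothetical WiiLAB wrapper: [x y] pairs
        sep   = norm(p1 - p2);             % LED separation in image pixels
        dist  = (ledSpacing/2) / tan(sep*radPerPixel/2);  % head-to-camera distance
        mid   = (p1 + p2)/2;               % image midpoint between the two LEDs
        headX = dist * sin((mid(1) - 512)*radPerPixel);   % lateral head offset
        % dist and headX are then used to move the virtual viewpoint.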

    We render the virtual scene in real time by changing the viewing point according to the sensed observer's position. To further enhance realism in the VR system, a rear-screen projection system was installed to allow for larger images. This created a less restrictive environment for studies than the laptop setup. Calibration of this system was necessary to accurately redraw the virtual environment for every change in the participant's position. This gain calibration variable was assessed using an object in the RW that was also constructed in the VR world. We marked locations on the laboratory floor where similar features of the RW and VR objects looked identical. This enabled us to express the amount of rotation for the VR object as a linear function of the viewer's displacement, a necessary step to increase the realism of the VR stimulus view that changed as a result of the observer's self-motion.
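    A sketch of this calibration step, with hypothetical numbers: each marked floor location pairs a viewer displacement with the VR rotation that made the RW and VR features coincide, and a linear fit recovers the gain.

        % Fit the gain relating viewer displacement to VR object rotation.
        displacement = [-60 -30 0 30 60];  % hypothetical marked positions, cm
        vrRotation   = [-14  -7 0  7 14];  % hypothetical matching rotations, deg
        p    = polyfit(displacement, vrRotation, 1);  % rotation = gain*x + offset
        gain = p(1);                       % degrees of VR rotation per cm
        % At run time, the scene is redrawn with rotation gain*headX for the
        % sensed lateral head position headX.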

    III. RESULTS

    The rationale for varying the position of the rotation axis was to vary the strength of the motion parallax (MP) signals. The perceived feature-to-observer distance varies inversely with MP. Thus, mask d of Fig. 1 should produce a weak illusion, because the nose-tip, having zero MP, should be seen as further away than features with large MP, such as the cheeks. On the contrary, our data from both the VR and RW experiments show that the illusion strength did not vary significantly with the position of the axis of rotation. This indicates a stronger dependence on top-down, concept-driven processes, such as prior experience with convex faces, than on data-driven MP signals.

    There was no significant difference between the responses of observers in the RW and VR experiments, confirming that virtual stimuli can replace real-world stimuli for more flexible and quantifiable measurements in the future. It was clear that all observers saw the illusion strongly. However, many observers were hesitant to respond promptly, either for fear of an incorrect answer or because they had a poor understanding of their task. We are currently developing better techniques for training observers, based on our observations from the pilot studies. Nevertheless, the main finding remains that the illusion is obtained across all conditions.

    IV. FUTURE WORK

    Future experiments will utilize the head-tracking apparatus described above to assess the observer's position in real space and time. We already have a working head-tracking system, as shown in Fig. 2. We have successfully developed software that affords a wide spectrum of interesting studies on the perception of 3D objects, because our software is not limited to mask stimuli; on the contrary, it can handle any 3D object model. These resources enable us to design experiments to compare 3D object perception between patients with SZ and healthy controls. Since the hollow-mask illusion correlates with the severity of SZ, similar experiments can be performed on SZ patients who are undergoing treatment, to assess the efficacy of the treatment method.

    V. ACKNOWLEDGMENTS

    We thank Chris Kourtev for technical support.

    REFERENCES

    [1] Dima, D., et al. (2009). Understanding why patients with schizophrenia do not perceive the hollow-mask illusion using dynamic causal modelling. NeuroImage, doi:10.1016/j.neuroimage.2009.03.033.

    [2] Silverstein, S. M., Hatashita-Wong, M. H., Schenkel, L. S., Kovács, I., Feher, A., Smith, T. E., Goicochea, C., & Uhlhaas, P. (2006). Reduced top-down influences in contour detection in schizophrenia. Cognitive Neuropsychiatry, 11, 112-132.

    [3] Papathomas, T. V. and Bono, L. (2004). Experiments with a hollow mask and a reverspective: Top-down influences in the inversion effect for 3-D stimuli. Perception, 33, 1129-1138.

    [4] Papathomas, T. V. (2007). Art pieces that move in our minds: an explanation of illusory motion based on depth reversal. Spatial Vision, 21, 79-95.

    [5] Singular Inversions. FaceGen 3D Human Faces, Version 3.5. Available: http://www.facegen.com

    [6] Autodesk. 3ds Max 2011. Available: Autodesk.com

    [7] Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433-436.

    [8] Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437-442.

    [9] Kleiner, M., Brainard, D., & Pelli, D. (2007). What's new in Psychtoolbox-3? Perception, 36, ECVP Abstract Supplement.

    [10] MATLAB version 2010. Natick, Massachusetts: The MathWorks Inc., 2010.

    [11] Lee, J. C. (2008). Hacking the Nintendo Wii Remote. Pervasive Computing, July-September 2008.

    [12] Peek, Brian. WiiLAB version 1.1. Available: http://netscale.cse.nd.edu/twiki/bin/view/Edu/WiiMote