


Statement on Research Objectives and Accomplishments

Robert E. Bodenheimer, Jr.

Introduction

My research involves the design and evaluation of computer animation and graphics algorithms that enhance and promote learning. I believe that the design of animation algorithms can be productively shaped for use in learning applications. I also seek to understand how visual imagery and motion can be incorporated into environments suitable for learning. Computer graphics and animation are powerful technologies for communicating and motivating. They will increasingly be present in learning environments. A major obstacle to using computer graphics and animation in the development of learning environments is the difficulty in easily authoring visually compelling environments and motions that support learning. This document summarizes the ways my research attacks this obstacle. An important aspect of my work is the use of formal evaluation as a key driver for informing the design process for computer animation and graphics.

The unifying theme of my work is the development of robust mathematical methods informed by perceptual evaluation to create images and motion as a communications medium. My background in systems and control theory has provided a strong mathematical tool set for the design of animation algorithms. This tool set is based on classical measures of performance, which often cannot provide adequate criteria to assess which algorithms perform well and which do not. As this problem became apparent, the quest for competent performance measures led me into psychophysics and learning, both of which inform the scientific and engineering design process, and also provide methods of validating it. As the only faculty member studying computer graphics and animation at Vanderbilt, I have developed a research program that designs, builds, and then tests these methods in different application domains. My research is thus highly interdisciplinary, and I have established collaborations with researchers in robotics, image processing, computational neuroscience, and artificial intelligence in my department; in the Psychology, Psychology and Human Development, Teaching and Learning, and Audiology departments at Vanderbilt; and nationally with collaborators at Brown, Northwestern, and Washington Universities.

In the remainder of this document I give a synopsis of my research accomplishments, contributions, and directions. My primary accomplishments as a researcher so far are:

Publications: I have over 50 refereed publications (13 journal articles, 38 conference papers) that have been cited over 280 times¹. My journal articles appear in highly rated journals, such as the ACM Transactions on Applied Perception, IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Robotics, IEEE Transactions on Automatic Control, and the ACM Multi-Media Systems Journal. My conference papers have appeared in some of the most selective venues, e.g., SIGGRAPH (≤ 21% acceptance rate), Intelligent User Interfaces (≤ 20% acceptance rate), and the Symposium on Computer Animation (≈ 30% acceptance rate). These are prestigious venues for the publication and dissemination of results within the fields of Computer Graphics, Computer Animation, and Computer Science. Software written by me or my graduate students has been downloaded over 380,000 times².

Funding: I have demonstrated a continued ability to fund my research. I received a National Science Foundation CAREER award, and have been PI or co-PI on grants totaling more than $4.35 million. A number of these grants involve collaboration with researchers from other disciplines, such as Psychology.

Advising and Training: Two Ph.D. students have completed their degrees under me (Jing Wang, Christina de Juan), and one is scheduled to complete her degree in Spring 2007 (Betsy Williams). I am advising one Master's student in a thesis program, Elizabeth Seward, who is scheduled to graduate in December 2006. I have advised three non-thesis Master's students, Jingjing Meng, Chen Zhou, and Paul Bielaczyc. Some of my students have won best paper awards at conferences. Most of my graduate students are from populations traditionally underrepresented in Computer Science (six of the seven are women). Finally, I have mentored undergraduates in research activities, developing programs with them that have allowed them to publish in refereed venues.

¹ [Citeseer, 2006] and [ISI Web of Knowledge, 2006].
² Vanderbilt download count and [Netlib, 2006].

Professional Service: I am a co-founder of the Midwest Graphics Workshop, and have served on the program committee of the Symposium on Computer Animation, the Computer Animation and Simulation Workshop, and the International Conference on Web 3D Technology. I have been a session chair or conference co-chair of these conferences as well. I have served on review panels for the National Science Foundation and the National Endowment for the Humanities. As outreach, I have deployed attractive applications of computer science techniques in K-12 classrooms.

Animation: Synthesis, Use, Reuse

Background. The most fundamental issue in using animation in a learning system is the problem of authoring the animation. Authoring visually compelling animation is a difficult and time-consuming task. People are able to perceive subtle movements and often attribute style, emotion, and intent to these movements. Thus, the bar for quality animation is high. Most animation techniques are ad hoc and employ heuristic techniques to achieve acceptable results. Authoring methods for animation can be broadly classified into three categories: motion capture, dynamic simulation, and keyframing. Motion capture is a popular process for generating human animation, in which sensors are attached to a performer and the motion of the performer is recorded for later playback in a graphical character. Dynamic simulation uses the physics of the graphical character to constrain the motion to be physically correct. Keyframing is the traditional Disney-like type of animation. My research has made contributions to all three of these techniques.

Motion capture is the dominant method of animation in video games and is increasingly used in other venues. The main difficulty in using motion capture is that the recorded motion is never precisely correct. Much of my work in this domain has developed methods to ensure that the recording process is done as faithfully as possible [1, 18]³. This collaborative work unlocked motion capture data for use in synthesis algorithms by developing techniques to robustly estimate skeleton size and process similar motions into compatible sets. These results led to methods for controllable synthesis of stylistically similar but novel motions [20].

Impact: My principal contribution to motion capture has been the development of practical, robust methods to transform performance data into compelling animation.

Synthesis from motion capture was a step forward from using motion capture simply as pre-recorded animation, or from editing it to make a different pre-recorded animation. The key idea for synthesizing controllable, stylistically similar motion is to cast the problem as a scattered data interpolation problem (also developed by [Wiley and Hahn, 1997]). The algorithm and framework have since been developed and used in both animation [Mukai and Kuriyama, 2005, Urtasun et al., 2004, Kovar and Gleicher, 2003] and robotics [4] [Jenkins and Mataric, 2004, Breazeal et al., 2003]. The important problem not addressed by this work, however, is that interpolation of motion data is not guaranteed to be visually compelling. This problem has driven much of my work since then, both in developing techniques for the evaluation of types of synthesized motion [26] and in developing techniques that directly lead to pleasing motion [27].

Impact: My research provides methods to use motion capture data to synthesize new motion.
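The scattered data interpolation idea can be illustrated with a minimal radial-basis-function sketch. This is only an illustration of the general technique, not the algorithm of [20]; the function name, the Gaussian kernel choice, and the toy data are all assumptions.

```python
import numpy as np

def rbf_interpolate(examples, params, query, eps=1.0):
    """Blend example motion parameters (e.g. joint angles) with
    Gaussian radial basis functions over a control-parameter space.

    examples: (n, d) control parameters of each example motion
    params:   (n, m) motion data associated with each example
    query:    (d,)   control parameters of the motion to synthesize
    """
    # Distance from the query point to every example in control space.
    dist = np.linalg.norm(examples - query, axis=1)
    w = np.exp(-(eps * dist) ** 2)   # Gaussian RBF weights
    w /= w.sum()                     # normalize to a partition of unity
    return w @ params                # weighted blend of the example motions

# Two example "motions" parameterized by walking speed (1-D control space).
examples = np.array([[1.0], [2.0]])  # speeds of the captured clips
params = np.array([[10.0], [20.0]])  # stand-in for per-clip motion data
blended = rbf_interpolate(examples, params, np.array([1.5]))
```

Querying halfway between the two example speeds weights them equally, so the blended motion data lands halfway between the examples.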

In contrast, when using dynamic simulation for animation, the authoring problem manifests itself as one of building controllers that produce visually compelling motion. Animation generated by dynamic simulation is sometimes criticized as appearing "robotic," and part of the reason for this criticism can be traced to a lack of variability in the motion, i.e., a repetitive motion is a boring motion. Adding variability to the control structure in a way that still results in stable, yet pleasing motion is not trivial. My work represents one of the few efforts to adjust the control laws in a perceptually validated but easily authored way [2] (related work, e.g., [O'Sullivan et al., 2003, Reitsma and Pollard, 2003, Harrison et al., 2004], evaluated changes to intrinsic parameters such as momentum, velocity, and size).

Impact: My work provides a perceptually validated method for tuning control algorithms in dynamic simulation to improve the quality of the resulting motion.

³ In this document, work by me is cited numerically. Work by others is cited in author-year format.

Sometimes, however, the realism of motion capture or dynamic simulation is not wanted. One can imagine learning environments, particularly for children, in which traditionally animated characters with caricatured or non-human features serve as a better representation for an agent. Authoring, editing, and re-using traditional animation are some of the most difficult and time-consuming animation tasks. My Ph.D. student Christina de Juan and I developed ways of generating novel traditional animation from existing traditional animation [8, 9]. Our method is model-free, i.e., no a priori knowledge of the drawing or character is required. This work treats a set of traditional animation as a low-dimensional manifold embedded in a high-dimensional space. By recovering the manifold structure of the set, we can generate novel animation. Additional processing is needed to improve its appearance, however, and by treating the problem as a scattered data interpolation problem we have successfully generated novel sequences. This work has been used in animation systems to produce, for example, perpetual animations [Van Haevre et al., 2005]. My student Elizabeth Seward and I have explored these ideas for three-dimensional animation as well [23]. The results, while preliminary, show promise in allowing us to treat animation as a problem of geometry rather than its standard treatment as a set of signals. These ideas are also being extended to the domain of robotics [11], as discussed later.

Impact: My research provides methods for the reuse of traditional animation along with the synthesis of new cartoon animation.
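The embed-then-reconstruct pattern behind this manifold treatment can be sketched with a linear stand-in (PCA via the SVD). The actual work uses a nonlinear embedding of drawn frames, so this sketch only illustrates the pattern; the function names and toy data are assumptions.

```python
import numpy as np

def embed_frames(frames, dim=2):
    """Project animation frames (flattened images or poses) onto their
    top principal directions -- a linear stand-in for the nonlinear
    manifold embedding used in the actual work."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # The SVD gives the principal directions of the frame set.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:dim].T   # low-dimensional coordinates
    return coords, vt[:dim], mean

def reconstruct(coords, basis, mean):
    """Map low-dimensional coordinates back to frame space; novel
    coordinates between existing ones yield in-between frames."""
    return coords @ basis + mean

# A toy "animation": five frames that vary along a single direction.
frames = np.outer(np.arange(5.0), np.array([1.0, 2.0, 3.0]))
coords, basis, mean = embed_frames(frames, dim=1)
recovered = reconstruct(coords, basis, mean)
```

Because the toy frames lie exactly on a one-dimensional subspace, a one-dimensional embedding reconstructs them exactly; real drawings need a nonlinear embedding plus the post-processing described above.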

Better Motion Transitions

Background. Central to all forms of motion synthesis is the generation of a motion transition. Motion transitions are segues between two sequences of animation. They often present difficulties for synthesis methods, yet are important components for generating compelling animation streams in computer games, virtual environments, and interactive learning environments. My work has studied using spacetime constraints to generate transitions [21], and also linear blending [20, 26, 27, 28, 29]. Methods involving linear blending are more studied because of their efficiency, computational speed, and widespread use.

To determine good blending, two things must be selected: the intervals over which blending will occur, and the duration of these intervals. My Ph.D. student Jing Wang and I have studied both of these issues [26, 27], and have developed methods for calculating the visually optimal blend length, also called the transition duration. We have developed algorithms to calculate visually pleasing transition durations for two categories of motion that encompass most typical activities. These algorithms are computationally efficient and could be incorporated into interactive applications with little overhead.

Impact: My principal contribution to motion synthesis is the introduction of robust algorithms for the calculation of transition durations, the only such algorithms known.
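A linear blend over a chosen transition duration can be sketched as follows. This is a generic illustration of blending, not the duration-selection algorithms of [26, 27]; the smoothstep weight and the array layout are my assumptions.

```python
import numpy as np

def blend_transition(clip_a, clip_b, duration):
    """Blend the last `duration` frames of clip_a into the first
    `duration` frames of clip_b using a smoothstep weight.

    clip_a, clip_b: (frames, dofs) arrays of joint values.
    Returns only the frames of the transition segment.
    """
    t = np.linspace(0.0, 1.0, duration)
    w = 3 * t**2 - 2 * t**3              # ease-in/ease-out (smoothstep)
    tail = clip_a[-duration:]
    head = clip_b[:duration]
    # Weight rises from 0 to 1, handing control from clip_a to clip_b.
    return (1 - w)[:, None] * tail + w[:, None] * head

a = np.zeros((10, 1))                    # clip holding joint value 0
b = np.ones((10, 1))                     # clip holding joint value 1
seg = blend_transition(a, b, duration=5)
```

The segment starts at clip_a's value, ends at clip_b's, and passes through the midpoint halfway along; the `duration` argument is exactly the quantity whose visually optimal value the algorithms above compute.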

To evaluate these methods, we conducted formal psychophysical studies to determine two things: (1) whether our algorithms were better than a fixed blend length (the default prior to our work), and (2) how noticeable a change in the duration of the transition was. We have demonstrated that a suitable choice of transition duration can significantly improve the appearance of the motion through the transition. The impact of this work on the community has been to demonstrate that transition duration is an important variable in the synthesis of complete motions, as noted by, e.g., [Mezger et al., 2005, Zordan et al., 2005, Safonova and Hodgins, 2005].

Impact: My work represents the only formal perceptual evaluation of the importance of choosing the correct transition duration.

As mentioned above, the other component of a good transition is the selection of the points or intervals in the motion where blending will occur. A variety of metrics are available in the literature for selecting these points, but what distinguishes our work is the use of constrained optimization to find a perceptually-based metric. We have used this metric on a large variety of motions to determine transition points, and have validated its success, again using psychophysical testing [26]. This work has been used, for example, in systems for the real-time animation of crowds of pedestrians [Srinivasan et al., 2005].

Impact: My research provides a formal perceptual evaluation of measures for determining the appropriateness of motion transitions.
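A transition metric of this general flavor can be sketched as a weighted frame distance. This is hypothetical and generic: in the actual work the weights are derived perceptually via constrained optimization, whereas here they are free parameters supplied by the caller.

```python
import numpy as np

def transition_cost(motion_a, motion_b, i, j, joint_weights, window=3):
    """Cost of transitioning from frame i of motion_a to frame j of
    motion_b: a weighted squared joint-value distance summed over a
    small window. Lower cost suggests a better transition point."""
    cost = 0.0
    for k in range(window):
        diff = motion_a[i + k] - motion_b[j + k]
        cost += np.sum(joint_weights * diff**2)
    return cost

# Toy motions with two degrees of freedom each.
a = np.zeros((10, 2))
b = np.ones((10, 2))
weights = np.array([1.0, 1.0])           # perceptual weights would go here
c = transition_cost(a, b, i=0, j=0, joint_weights=weights, window=3)
```

In practice one would evaluate this cost over all candidate (i, j) pairs and blend at the minimum; the contribution above is finding weights that make the minimum agree with what viewers actually prefer.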

Better Learning Systems

Background. In contrast to the low-level motion algorithms that form components of learning systems, I have also worked on higher-level design issues of learning environments, the main thrust of my NSF CAREER award. My work so far can be categorized into desktop learning systems and immersive virtual environments. The important point of this work is to build systems that allow people to learn in meaningful contexts and situations.

For immersive virtual environments, my research develops design principles for immersion that allow environments where people can actively explore and guide their own learning. This work has been done in collaboration with Profs. John Rieser and Timothy McNamara. Most important applications of virtual environments assume that what people learn from exploring virtual simulations of physical environments is functionally similar to what they learn from exploring the physical environments themselves. Understanding these similarities or differences is important: when constructing learning environments, we want either to correct differences or exploit them. My Ph.D. student Betsy Williams and I have demonstrated such functional similarities in subjects' access to spatial knowledge between real and virtual environments [34]. My student Jingjing Meng and I have demonstrated functional dissimilarities in the perception of distance between real and virtual environments [15] using a strictly perceptual measure (in contrast to other work, e.g., [Thompson et al., 2004, Willemsen and Gooch, 2002, Plumert et al., 2005], that employs a perception-action or perception-imagination measure). These results will inform our future work in the design of virtual environments.

Impact: My research assessed functional similarities in the perception of the real world and the virtual environment.

Currently, immersive virtual environments are expensive, but head-mounted display technology may become widely affordable in the near future. A significant problem for the widespread adoption of virtual environments that enable people to actively explore them will then be the problem of space. Having large areas suitable for active exploration of a large virtual environment will not be practical, and commodity-level virtual environment equipment will need to be placed in small rooms. The important issue my work has addressed is how subjects can explore large environments on foot when physical space is constrained. The qualification "on foot" is critical, as we have shown that using a joystick is inferior to foot exploration [32], and devices such as treadmills are expensive. The solution my student Betsy Williams and I have developed is to manipulate the translational and rotational gain of movement so that the virtual environment affords cognitively-friendly exploration [32, 33].

Impact: My principal contribution to the construction of immersive learning environments is the exploitation of perceptual affordances to improve such environments.
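The gain manipulation can be sketched with a simplified, hypothetical model. A real system operates on continuously tracked head pose; here the function and parameter names are illustrative only.

```python
import numpy as np

def apply_gains(prev_virtual_pos, prev_virtual_yaw,
                physical_step, physical_turn,
                translation_gain=1.0, rotation_gain=1.0):
    """Map a physically tracked step (meters) and turn (radians) into
    virtual motion, scaled by translation and rotation gains.
    Gains > 1 let a small physical room map onto a larger virtual space."""
    yaw = prev_virtual_yaw + rotation_gain * physical_turn
    heading = np.array([np.cos(yaw), np.sin(yaw)])
    pos = prev_virtual_pos + translation_gain * physical_step * heading
    return pos, yaw

# Walking 1 m forward under a translation gain of 2 moves the
# virtual viewpoint 2 m along the current heading.
pos, yaw = apply_gains(np.zeros(2), 0.0, physical_step=1.0,
                       physical_turn=0.0, translation_gain=2.0)
```

The research question is then perceptual: how far the gains can be pushed before users notice, which is what makes exploration of a large virtual space from a small room "cognitively friendly."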


For desktop environments, my work has focused on the incorporation of compelling animation into a learning system called the teachable agent system [7] [Biswas et al., 2001]. My work examines the means and utility of using animated agents to promote positive learning experiences, and much of it has been done in collaboration with Prof. Gautam Biswas. Initially, my work focused on developing and improving a user interface that would support an animated agent, and evaluating these changes on the teachable agent's target audience, K-12 students [7, 24, 25]. My group and I animated the agent, using a videographic representation and using the manifold learning techniques described previously for traditional animation. We chose these methods to enable an easily customizable, easily authored agent. The effects of agent appearance, gender, expressions, and style of motion in promoting learning are still under investigation. We have demonstrated, however, that both the improved interface and the presence of an animated agent promote positive learning experiences for our target audience [30, 3].

Impact: My research demonstrates a system that facilitates easy authoring of animated learning environments. Further, my work demonstrates that an animated teachable agent promotes a better learning outcome than one without animation.

Collaborative Projects

Background. In addition to the collaborators mentioned previously, I have been privileged to work with other tremendous colleagues, both from Vanderbilt University and nationally. My contributions all have learning aspects associated with them, but the aspects are particular to each project.

Lossy Transmission of Meshes. The continual improvement in computer performance, together with the prevalence of high-speed network connections with high throughput and moderate latencies, enables the deployment of multimedia applications, such as collaborative virtual environments, over wide area networks. Many of these systems use 3D graphics for display and may be required to distribute geometric models on demand between participants. This work, by my student Zhihua Chen in collaboration with Prof. Fritz Barnes, modified progressive mesh models [Hoppe, 1996] to allow reconstruction in the event of packet loss. We used these modifications in two transmission schemes: a hybrid transmission that uses TCP and UDP to send packets, and a forward error-correcting transmission scheme that uses redundancy to decode the information sent. We formally assessed the performance of these transmission schemes when deployed on network testbeds that simulate wide-area and wireless characteristics [5].

Impact: My research provides methods for wide-area distribution of geometric models on demand in collaborative learning environments.
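The redundancy idea behind forward error correction can be illustrated with single-parity XOR coding. This is a generic sketch, not the actual scheme of [5]; the packet contents and group size are invented for illustration.

```python
def xor_packets(packets):
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

# Sender: append one parity packet to every group of data packets.
data = [b"mesh-pk1", b"mesh-pk2", b"mesh-pk3"]
parity = xor_packets(data)

# Receiver: suppose packet index 1 was lost in transit. XORing the
# surviving packets with the parity reconstructs it, so the mesh can
# be rebuilt without waiting for a retransmission round trip.
lost_index = 1
survivors = [p for i, p in enumerate(data) if i != lost_index]
recovered = xor_packets(survivors + [parity])
```

One parity packet per group tolerates one loss in that group; real FEC schemes trade more redundancy for tolerance of burstier loss, which is what the testbed evaluation above measures.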

3-D Visualization of Proteomic Information. This work, in collaboration with Profs. Benoit Dawant and Richard Caprioli, and supported by the NIH, developed methods to pre-process and visualize matrix-assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) data aligned with optically determinable tissue structures in three dimensions. In this procedure, optical images obtained from serial coronal sections of a mouse brain are first aligned to each other, and the result is used to build a 3-D implicit surface model of the corpus callosum from segmented contours of the aligned images. The MALDI IMS data are then baseline corrected using an algorithm developed by my student Betsy Williams and co-registered to the optical images. The proteomic data are then mapped into the 3-D model to provide a full visualization of proteomic data correlated with anatomical structure in a way never before seen. Correlating proteomic data with anatomical structures provides a better understanding of healthy and pathological brain functions, and may be useful in more complex anatomical arrangements [6, 31].

Impact: My work provides three-dimensional visualization methods for learning about and understanding the proteomic structure of the brain.


Learning and Generation of Robotic Behaviors. The relationship between humanoid robotics and human-figure animation is synergistic, as each discipline provides tools and techniques of use to the other. Some of these have been mentioned previously. This work, in collaboration with Profs. Alan Peters, Chad Jenkins (Brown University), and Kaz Kawamura, and supported by DARPA and NASA, has demonstrated that fundamental motion exemplars for a robot can be generated using a gain-scheduled variant of the scattered data interpolation algorithms used for motion synthesis [20, 13]. Viewing robots as primitive learners, our results show that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination [4]. My contribution was the idea to apply this technology to the robots, and I oversaw its implementation. Additionally, we are employing manifold learning strategies, in a manner similar to my use of them in animation, to uncover the robot's underlying sensorimotor state structures and thus to develop better learning and control strategies [11, 12]. We have conducted several sets of experiments on Robonaut, NASA's humanoid robot, to validate our approach. I also established the use of the Sensory Ego-Sphere (SES) for sensor fusion [10, 19]. The SES is currently used on Robonaut to enhance its performance. This work is significant because Robonaut with the SES, as part of the "Centaur" robot, may be deployed on NASA's Moon mission.

Impact: My work leverages the close connection between animation and robotics and enables robots to learn in uncertain environments.

Visualization of Computational Models of Cognition. Computational models of cognition often exhibit rich, complex dynamics that are difficult to discern without the use of visualization tools. This work [14], in collaboration with Prof. David Noelle, addressed a deficiency in prior visualization work for such models, as prior visualization tools typically provided insight only to the modeling expert. In addition, prior tools provided limited functionality for communicating model dynamics to the non-expert, as is needed during scientific presentations and in educational settings. We developed NAV, the Node Activity Visualizer, to remedy these deficiencies. NAV provides a sketch-based interface for constructing a graphical neural model and displaying node activation levels, and can generate animations of simulation results. It is an easy-to-use and portable software tool that interactively transforms the output of cognitive modeling simulators into presentation-quality animations of model performance.

Impact: My work provides methods for understanding and visualizing cognitive models.

Computational Photography - Relighting. In collaboration with Profs. Jack Tumblin (Northwestern University) and Cindy Grimm (Washington University), I have applied constrained optimization techniques, similar to those mentioned above for transition metric formulation, to the problem of lighting real-world objects [17, 16]. Lighting has long been recognized as a difficult problem in the field of computer graphics. We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. These methods were developed into a re-lighting system by my student Jon Waite. The system uses interactive sketching as a target and solves for a combination of images that best reproduces the sketched lighting. This system is suitable for a photographer wanting to achieve quality lighting without a studio, and has applications such as allowing museums to document their collections efficiently.

Impact: My work provides a simple, light-weight system to obtain quality lighting of objects.
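The core computation, finding the combination of single-light basis photographs that best matches a lighting target, can be sketched as a least-squares problem. This is an illustration only: the real system starts from a user's sketch and adds practical constraints, and the tiny images here are invented.

```python
import numpy as np

def relight_weights(basis_images, target):
    """Solve for per-light weights w minimizing ||sum_i w_i * I_i - target||
    in a least-squares sense, treating each image as a pixel vector."""
    A = np.stack([img.ravel() for img in basis_images], axis=1)  # (pixels, n)
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w

# Two single-light basis photos of a tiny 2x2 "object".
img1 = np.array([[1.0, 0.0], [0.0, 0.0]])
img2 = np.array([[0.0, 1.0], [0.0, 0.0]])
# A target lit 30% by the first light and 70% by the second.
target = 0.3 * img1 + 0.7 * img2
w = relight_weights([img1, img2], target)
```

Because light is additive, the recovered weights tell the photographer how strongly to set each physical light (or how to blend the captured photos) to reproduce the sketched look.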

Outreach and Future Plans

One of my goals at Vanderbilt was to create a research and education program in computer graphics and animation, a field of study in which Vanderbilt has never had a presence. I have largely succeeded in this goal. This program is interdisciplinary, and I have established collaborations with a variety of researchers both within Vanderbilt and nationally. I built the infrastructure for such a program, and I have graduated two Ph.D. students, with a third on track to graduate in May 2007. This infrastructure required the creation of courses to prepare students for cutting-edge research, and the building of computational resources to enable the conduct of such research. I have trained students from populations traditionally underrepresented in Computer Science. I have introduced undergraduates to research activities: all of my summer undergraduate students have published refereed conference papers or journal articles. Finally, I have deployed attractive applications of computer science techniques in K-12 classrooms, which might encourage young learners to consider computer science as a career option.


The groundwork laid in creating this program provides infrastructure for research challenges over the next several years. These research challenges will require a cross-disciplinary approach. Modern graphics processing units have reached a stage where the constraint of efficient rendering is less significant, and, as a result, we are in an era of animation research where perception will form a critical component of our animation systems. My research plan is to build upon my successes in this area toward a thorough understanding of how people judge animation. The next logical step is then to develop and revise synthesis techniques based on this knowledge. People's judgments of animation are probably colored by the context of the animation, however, and so I will likely focus on the learning systems described previously. In keeping with the goals of my CAREER award, the next couple of years will see additional formal evaluations of the impact of animation attributes on learning outcomes, a process that should lead to better design principles. In the longer term, the synthesis of compelling visual motion is likely constrained by the requirements of physical plausibility, but within the realm of plausibility are many properties of movement, such as style and personality, that most physically based systems cannot yet achieve. To deal with this issue, I am actively engaged in efforts to treat animation in a geometric or topological framework. This work has the potential to address problems currently considered difficult or intractable, such as the automatic design of controllers for dynamically simulated behaviors. That goal, the topic of a good Ph.D. dissertation, would then lead to a principled study of control parameters and their effect on the style of motion within the set of physically plausible motions. I have recently submitted a grant proposal to the NSF to fund this work. This work applies directly to robotics as well, and I am currently collaborating on this effort with Prof. Peters. We have been informed that a grant to NASA to fund the robotic portion of this effort has been approved, pending approval of a budget for NASA.

Virtual environments have been similarly constrained, but with modern hardware an extensive look at how perception in virtual environments provides affordances for the construction of better learning environments is possible. My collaboration with the colleagues mentioned previously leads naturally to the next steps of research involving the construction of self-representations in virtual environments; currently, most virtual environment systems do not allow users to see a self-representation of themselves. The incorporation of, for example, a real-time motion capture facility into a virtual environment system poses an interesting engineering question as well as interesting questions of self-perception and calibration. These questions lead naturally to challenging questions of skill acquisition and learning, and, in turn, to questions of collaboration and interaction in virtual environments. I am actively taking steps toward answering these questions, and am PI on a recently submitted NSF grant proposal to fund this research with co-PIs Profs. Rieser, McNamara, and Carr.

New Opportunities. Two examples follow of how the infrastructure I have created is leading to expanded and exciting research. A recent collaborative effort with Prof. Dan Ashmead [22] investigates how people perceive approaching traffic in virtual environments. The goal of this study is to improve pedestrian traffic crossings, an important problem given that there were over 4500 pedestrian fatalities in the United States in 2003. A five-year plan for exploring this topic was recently submitted as an NIH grant; while it is still pending, the grant received a priority score of 154 and we believe it will be funded (NIH has requested a revised budget but cannot make an official award until its fiscal year budget is approved by Congress). A by-product of this work that will impact the broader computer graphics community is our preliminary work on the perception of motion from different camera angles, used to mimic different street-crossing scenarios. Also, in a new collaboration with Prof. Dawant, we are attempting to apply motion analysis techniques, some of which were described above, to the pre- and post-operative assessment of deep brain stimulation as a treatment for Parkinson's disease and Essential Tremor, disorders that affect over 4.5 million individuals in the United States. An outgrowth of this project may be the robust reconstruction and recognition of motion from video. In summary, I believe that the analysis techniques we will develop as part of this process will inform the future design of synthesis algorithms for visually compelling motion.

Selected Publications of Mine

[1] BODENHEIMER, B., ROSE, C., ROSENTHAL, S., AND PELLA, J. The process of motion capture: Dealing with the data. In Computer Animation and Simulation '97 (Sept. 1997), D. Thalmann and M. van de Panne, Eds., Springer NY, pp. 3–18. Eurographics Animation Workshop.

[2] BODENHEIMER, B., SHLEYFMAN, A. V., AND HODGINS, J. K. The effects of noise on the perception of animated human running. In Computer Animation and Simulation '99 (Sept. 1999), N. Magnenat-Thalmann and D. Thalmann, Eds., Springer-Verlag, Wien, pp. 53–63. Eurographics Animation Workshop.

[3] BODENHEIMER, B., WILLIAMS, B., VISWANATH, K., BALACHANDRAN, R., BELYNNE, K., AND BISWAS, G. Construction and evaluation of animated teachable agents. Educational Technology and Society (2006). Submitted.

[4] CAMPBELL, C. L., PETERS, R. A., BODENHEIMER, R. E., BLUETHMANN, W. J., HUBER, E., AND AMBROSE, R. O. Superposition of learned behaviors through teleoperation. IEEE Transactions on Robotics 22, 1 (Feb. 2006), 79–91.

[5] CHEN, Z., BARNES, J. F., AND BODENHEIMER, B. Hybrid and forward error correction transmission techniques for unreliable transport of 3D geometry. Multimedia Systems Journal 10, 3 (Mar. 2005), 230–244.

[6] CRECELIUS, A., CORNETT, D. S., CAPRIOLI, R. M., WILLIAMS, B., DAWANT, B., AND BODENHEIMER, B. Three-dimensional visualization of protein expression in mouse brain structures using imaging mass spectrometry. Journal of the American Society of Mass Spectrometry 16 (June 2005), 1093–1099.

[7] DAVIS, J., LEELAWONG, K., BELYNNE, K., BODENHEIMER, B., BISWAS, G., VYE, N., AND BRANSFORD, J. Intelligent user interface design for teachable agent systems. In Proceedings of Intelligent User Interfaces 2003 (Jan. 2003), pp. 26–33.

[8] DE JUAN, C., AND BODENHEIMER, B. Cartoon textures. In 2004 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (July 2004), pp. 267–276.

[9] DE JUAN, C. N., AND BODENHEIMER, B. Re-using traditional animation: Methods for semi-automatic segmentation and inbetweening. In Eurographics/SIGGRAPH Symposium on Computer Animation (Vienna, Austria, Sept. 2006). To appear.

[10] FLEMING, K. A., PETERS II, R. A., AND BODENHEIMER, B. Image mapping and visual attention on a sensory ego-sphere. In Proceedings IEEE/RSJ Conference on Intelligent Robots and Systems (IROS) (Oct. 2006). To appear.

[11] JENKINS, O. C., BODENHEIMER, B., AND PETERS II, R. A. Manipulation manifolds: Explorations into uncovering manifolds in sensory-motor spaces. In Fifth International Conference on Development and Learning (Bloomington, IN, June 2006). To appear.

[12] JENKINS, O. C., BODENHEIMER, R. E., AND PETERS II, R. A. Uncovering manifold structure in robot sensorimotor data. IEEE Transactions on Robotics (2006). Submitted.

[13] KAWAMURA, K., PETERS II, R. A., BODENHEIMER, R. E., SARKAR, N., PARK, J., CLIFTON, C. A., SPRATLEY, A. W., AND HAMBUCHEN, K. Distributed cognitive control systems for a humanoid robot. International Journal of Humanoid Robotics 1, 1 (Mar. 2004), 65–95.

[14] KRIETE, T., HOUSE, M., BODENHEIMER, B., AND NOELLE, D. C. NAV: A tool for producing presentation-quality animations of graphical cognitive model dynamics. Behavior Research Methods, Instruments, and Computers 37, 2 (2005), 335–339.

[15] MENG, J., RIESER, J. J., AND BODENHEIMER, B. Distance estimation in virtual environments using bisection. In APGV Posters (July 2006), p. 146.

[16] MOHAN, A., BAILEY, R., WAITE, J., TUMBLIN, J., GRIMM, C., AND BODENHEIMER, B. Table-top computed lighting for practical digital photography. IEEE Transactions on Visualization and Computer Graphics (2006). In press.

[17] MOHAN, A., TUMBLIN, J., BODENHEIMER, B., GRIMM, C., AND BAILEY, R. Table-top computed lighting for practical digital photography. In Eurographics Symposium on Rendering (Konstanz, Germany, June 2005), pp. 165–172.

[18] O'BRIEN, J. F., BODENHEIMER, R., BROSTOW, G. J., AND HODGINS, J. K. Automatic joint parameter estimation from magnetic motion capture data. In Graphics Interface (May 2000), pp. 53–60.


[19] PETERS II, R. A., HAMBUCHEN, K., AND BODENHEIMER, B. The sensory ego-sphere: A mediating interface between sensors and cognition. Robotics and Automation (2006). Accepted.

[20] ROSE, C., COHEN, M., AND BODENHEIMER, B. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications 18, 5 (1998), 32–40.

[21] ROSE, C. F., GUENTER, B., BODENHEIMER, B., AND COHEN, M. F. Efficient generation of motion transitions using spacetime constraints. In Proceedings of SIGGRAPH 96 (New Orleans, Louisiana, Aug. 1996), Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH / Addison Wesley, pp. 147–154. ISBN 0-201-94800-1.

[22] SEWARD, A. E., ASHMEAD, D. H., AND BODENHEIMER, B. Discrimination and estimation of time-to-contact for approaching traffic using a desktop environment. In APGV 2006 (July 2006), pp. 29–32.

[23] SEWARD, A. E., AND BODENHEIMER, B. Using nonlinear dimensionality reduction in 3D figure animation. In Proc. 43rd ACM Southeast Regional Conference (Kennesaw, GA, Mar. 2005), V. Clincy, Ed., vol. 2, pp. 388–393.

[24] VISWANATH, K., BALACHANDRAN, R., DAVIS, J., AND BODENHEIMER, B. Effective user interface design for teachable agent systems. In Proceedings of the ACM Southeast Conference (ACMSE03) (Savannah, GA, Mar. 2003), M. Burge, Ed., pp. 138–144.

[25] VISWANATH, K., BALACHANDRAN, R., KRAMER, M. R., AND BODENHEIMER, B. Interface design issues for teachable agent systems. In Proceedings of ED-MEDIA 2004 (Lugano, Switzerland, June 2004), pp. 4197–4204.

[26] WANG, J., AND BODENHEIMER, B. An evaluation of a cost metric for selecting transitions between motion segments. In 2003 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (Aug. 2003), pp. 232–238.

[27] WANG, J., AND BODENHEIMER, B. Computing the duration of motion transitions: An empirical approach. In 2004 ACM SIGGRAPH / Eurographics Symposium on Computer Animation (July 2004), pp. 335–344.

[28] WANG, J., AND BODENHEIMER, B. The just noticeable difference of transition durations. SIGGRAPH 2005 Poster, Aug. 2005. Los Angeles, CA.

[29] WANG, J., AND BODENHEIMER, B. Synthesis and evaluation of linear motion transitions. ACM Transactions on Graphics (2006). Submitted and under review.

[30] WILLIAMS, B., BELYNNE, K., AND BODENHEIMER, B. An evaluation of animation in a pedagogical agent, Aug. 2005. Los Angeles, CA.

[31] WILLIAMS, B., CRECELIUS, A., CORNETT, D. S., CAPRIOLI, R. M., DAWANT, B., AND BODENHEIMER, B. Pre-processing of mass spectra for imaging MALDI-MS. International Journal of Imaging Systems and Technology (2006). Submitted.

[32] WILLIAMS, B., NARASIMHAM, G., MCNAMARA, T., CARR, T., RIESER, J., AND BODENHEIMER, B. Updating orientation in large virtual environments using scaled translational gain. In APGV 2006 (July 2006), pp. 21–28.

[33] WILLIAMS, B., NARASIMHAM, G., RUMP, B., MCNAMARA, T., CARR, T., RIESER, J. J., AND BODENHEIMER, B. Exploring large virtual environments with an HMD on foot. In APGV Posters (July 2006), p. 148.

[34] WILLIAMS, B., NARASIMHAM, G., WESTERMAN, C., RIESER, J., AND BODENHEIMER, B. Functional similarities in spatial representations between real and virtual environments. ACM Transactions on Applied Perception (2006). To appear.

