Journal of Nonverbal Behavior 25(3), Fall 2001. © 2001 Human Sciences Press, Inc.
COMPUTER ANIMATED MOVEMENT AND PERSON PERCEPTION: METHODOLOGICAL ADVANCES IN NONVERBAL BEHAVIOR RESEARCH
Gary Bente, Nicole C. Kramer, Anita Petersen, and Jan Peter de Ruiter
ABSTRACT: Impression effects of videotaped dyadic interactions were compared with 3D computer animations based on movement transcripts of the same interactions to determine whether similar effects could be obtained. One-minute sequences of movement behavior taken from three different dyadic interactions were transcribed using the Bernese Coding System (BCS). Descriptive data were converted into animation scripts for professional animation software. Original video documents and computer animations were shown to separate groups of observers, and their socio-emotional impressions were assessed on a standard adjective checklist. Only marginal differences were found between the two presentation modes. On the contrary, the data point to remarkable similarities in the impression ratings in both conditions, indicating that most of the relevant social information available to observers in the video recordings was also conveyed by the computer animations. Overall, the data suggest that the systematic use of computer animation techniques in nonverbal research deserves further scientific attention.
KEY WORDS: artificial behavior; computer animation; nonverbal behavior; person perception; new methodology.
Gary Bente, Nicole C. Kramer, Anita Petersen, and Jan Peter de Ruiter, University of Cologne, Germany.

This research was supported by grant BE 1745/2-1 from the Deutsche Forschungsgemeinschaft (DFG, German Research Association). We also thank two anonymous reviewers for helpful comments on earlier versions of this article.

Address correspondence to Prof. Dr. Gary Bente, Department of Psychology, University of Cologne, Bernhard-Feilchenfeld-Strasse 11, 50969 Cologne, Germany; e-mail: bente@uni-koeln.de.

Although gestures, movements, and postures are supposed to add a
meaningful dimension to human communication (Argyle, 1972; Davis,
1972; Mehrabian, 1969), much remains to be learned about the implicit
information and the psychological impact of this ‘silent language’ (Hall,
1959). For many years, a recurrent problem of movement analysis in the
context of nonverbal communication research has been the lack of ade-
quate description methods (see, e.g., Duncan & Fiske, 1979; Duncan,
Kanki, Makros, & Fiske, 1984; Ellsworth & Ludwig, 1972; Grammer, Fil-
ova, & Fieder, 1997). Some research from the last two decades has signifi-
cantly advanced methodological knowledge in this area (Donaghy, 1989).
Based on standard video technology a series of transcription procedures
and coding strategies have been developed that provide detailed and accu-
rate protocols for both facial behavior and body movement (Ekman &
Friesen, 1978; Frey, Hirsbrunner, Florin, Daw, & Crawford, 1983). More-
over, powerful tools for automatic data acquisition have been introduced
to the field recently, using sophisticated measurement devices such as mo-
tion-capture devices, infra-red and ultra-sonic movement sensors, or video
based pattern recognition techniques (Altorfer, Jossen, & Wurmle, 1997;
Bers, 1996; Cassell et al., 1999; Essa, 1995; Essa & Pentland, 1995; Gram-
mer, Filova, & Fieder, 1997; Thorisson, 1996).
The strength of this purely descriptive methodology, i.e., the exclusion
of observers’ evaluations from the measurement process, becomes a weak-
ness when the analysis of interpersonal effects and the psychological
meaning of nonverbal behavior are the focus of attention. When the as-
signment of meaning is not included in the coding process, it must be
reconstructed either from structural aspects inherent in the behavior proto-
cols or by adding a semantic dimension to the data base. The first ap-
proach was suggested by Grammer, Filova, and Fieder (1997) following a
“radical empiricism,” which relies on the identification of recurrent patterns
in the stream of nonverbal behavior (Grammer, Kruck, & Magnusson, 1998).
A similar idea was introduced by Altorfer (1988, 1989), namely reducing
the question of meaning to the problem of isolating behaviors preceding
systematic changes in the course of the interaction, e.g., the content of
any verbal message that follows. Examination of these procedures reveals
that meaningful findings cannot be complete without a verbal translation of
descriptive data into psychological categories, which carry a semantic sur-
plus that may have little or no empirical foundation.
On the other hand, it has also been difficult to study the meanings and
interpersonal effects of nonverbal behavior using a systematic experimental
approach, especially the control of the independent variables. Such experi-
ments, i.e. the manipulation of particular nonverbal behaviors, caused se-
vere problems because of the many interactions between different nonver-
bal channels and between features of appearance (e.g., physical
attractiveness) and those nonverbal channels. For example, Lewis, Derlega,
Shankar, Cochard, and Finkel (1997) could show that the experimental
variation of touch behavior was confounded by simultaneous variations in
other nonverbal channels. They concluded that “in spite of specific instruc-
tions to keep nonverbal behavior consistent, confederates in the touch
versus no touch condition displayed different behaviors. Confederates who
touched used more nervous gestures and fewer expressive hand gestures
compared to those who did not touch” (Lewis et al., 1997, p. 821). Other
investigators tried to solve such problems by using images or puppets that
could be controlled more easily and precisely than actors. For example,
Frey, Hirsbrunner, Florin, Daw, and Crawford (1983) systematically varied
head positions by means of photo-retouch techniques to investigate the
socio-emotional impact of lateral head tilt, a subtle cue used in art paint-
ings. Similarly, Trautner (1991) manipulated wooden manikins to study the
perception of sex-stereotyped postures from a developmental perspective.
Schouwstra and Hoogstraten (1995) used hand drawings in cartoon style to
study the impression effects of different positions of head and body. De-
spite some seemingly encouraging results, all these studies have been re-
stricted to the investigation of static and easily manipulated features of
nonverbal behavior such as postures or positions of specific body parts.
One of the first attempts to study the perception of dynamic features of
bodily behavior was pursued by Johansson (1973, 1976), who introduced
the so-called ‘point light display’ technique. Berry, Kean, Misovich, and
Baron (1991) pointed out that this method: “. . . is a potentially valuable
tool for researchers in the area of nonverbal behavior, as it permits the
study of human movement in the absence of potentially contaminating
cues such as physical appearance” (p. 82). However, the point light display
method has the inherent disadvantage that light sources or light-reflecting
materials must be applied to the subjects, which can obstruct movements
and also increase self-awareness and evaluation apprehension.
Against this background, Berry et al. (1991) proposed an alternative
method that can be used with common video documents. Based on a tech-
nique referred to as ‘quantization’ (see Harmon, 1973; Morrone, Burr, &
Ross, 1983), standard videotapes are electronically distorted in a post-pro-
duction process to eliminate the recognition of most features of physical
appearance. With this procedure the stimulus person can be filmed under
natural, non-restrictive conditions and, when required, even without
awareness of the video recording. While this method is to some degree
useful in analyzing dynamic aspects of movement behavior independently
of an actor’s physical appearance, it still lacks the important dimension of
systematic experimental stimulus control. As Berry (1990) already pointed
out, in order to solve this problem: “. . . the precise nature of the stimulus
information communicating a given characteristic needs to be described.
And that stimulus quality of interest needs to be manipulated in some fash-
ion in order to demonstrate that this is indeed the effective stimulus under-
lying a given percept” (p. 150). The proper way to describe and manipulate
the particular ‘stimulus quality of interest’ remained an open question for
future research. Earlier, Cutting and Proffitt (1981) suggested computer-
simulation as a possible solution to the problem. However, deficiencies in
affordable computer hardware and software did not permit its systematic
use in nonverbal research until recently (see Berry, 1990). A first software
implementation for the experimental computer animation based on so-
called position time-series protocols of movement was described by Bente
(1989). The computer program allows for the animation of a simple wire-
frame model of a human head that could be attached to static images of
various body models. This method has been evaluated in a study on sex-
stereotyped person perception (Bente, Feist, & Elder, 1996). Another com-
puter program for the 3D-animation of body movement was introduced by
Kempter (in press). Although allowing for full body animation, the program
is restricted to the use of low resolution 3D-models. Further developments
in technology have facilitated the application of more sophisticated com-
puter animations in this research area. Based on these advancements, a
new research tool was introduced recently, linking position time series pro-
tocols to a professional 3D-computer animation platform, thus allowing for
the interactive editing of the movement data and the generation of smooth
animations performed by realistic 3D-models (see Bente, Petersen, &
Kramer, 1999; Bente, 2000).
The major concern of the present approach was to determine whether
such computer animations are capable of producing realistic person per-
ception effects in neutral observers. This test is crucial for any future appli-
cation of this methodology in person perception research: before system-
atic variations of computer animations can be applied in experimental
research into specific nonverbal behaviors, it has to be demonstrated that
animated movement evokes socio-emotional effects similar to those of
real-life or videotaped behavior and is not merely perceived as ‘artificial’
or ‘strange.’
Although we were interested in demonstrating (near-)similarity, we fol-
lowed the traditional statistical approach of rejecting the null hypothesis.
Indeed, the assumption that differences will be found bears some plausibility,
as the available computer model still lacks some details of nonver-
bal behavior. In particular the virtual actors perform neither facial activity
nor finger movements. Also, the physical appearance of the computer ac-
tors is standardized and thus different from all the actors on the video (see
figure 1). The anatomical differences between stimulus persons and the
computer model (e.g., length of arms and legs) could also lead to slight
differences in the angles or the positions of the extremities. Thus, if crucial
information gets lost on the way from the video to the computer animations,
the stimulus should provoke different judgements or at least more
ambiguous ones when compared to the video judgements. As a lack of
significance cannot be interpreted as a direct proof of similarity we used
further analyses such as correlation-based comparisons of judgement pro-
files. The problems of demonstrating similarity that become salient in this
context will be discussed below.
Method
Stimulus Material
Stimulus material was selected from three dyadic interaction se-
quences between six male students involved in casual chats with no spe-
cific topic. One minute of movement behavior in the initial phase of the
conversation was transcribed for each stimulus person using the ‘Bernese
Coding System’ (BCS; Frey, Hirsbrunner, Florin, Daw, & Crawford, 1983;
Hirsbrunner, Frey, & Crawford, 1987). The BCS is based on the principle of
so-called position-time-series-notation. Assigning numerical codes to the
spatial deviations of the various body parts from predefined base positions,
the BCS allows for a detailed and reliable transcription of video-recorded
movement behavior into high-resolution data protocols. As has been shown,
these phenotypical codes can be converted into generic rotation angles
(Euler angles) that can be used as input for 3D-animation programs (Bente,
1989). A special software tool was developed for the current study to per-
form this conversion in real time (Bente, 2000; Leuschner, 1999). The program
SoftImage3D® was used as the front-end animation tool because it comes
with an elaborate human body model, including a skeleton that could
easily be adapted to our needs, and also provides a special interface for
external animation data. Combining SoftImage3D® with our own con-
verter module, we were able to produce accurate computer animated rep-
lications of video-recorded movement and at the same time retain full ac-
cess to descriptive BCS-data for the purpose of analysis and experimental
variation. Restrictions of the computer animation concern finger move-
ments, facial activity and the display of realistic clothes. Thus, parts of the
computer models were colored in black producing the impression of a
pantomime’s suit (see figure 1). Video transcription was done with a
temporal resolution of 2 Hz. Movement protocols were then interpolated
to 25 Hz and rendered by means of SoftImage3D®. Original video and
computer animations were converted to digital video (AVI) and stored on
hard disk for experimental presentation. Figure 1 shows screen representations
of both experimental conditions.

Figure 1. Presentation modes.
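The pipeline just described can be sketched in a few lines: phenotypic position codes are mapped to rotation angles, and the 2 Hz protocol is upsampled to the 25 Hz frame rate. The code-to-degree table and the plain linear interpolation are invented for illustration; the actual converter (Bente, 2000; Leuschner, 1999) and the BCS code tables are considerably more elaborate.

```python
import numpy as np

# Hypothetical mapping from BCS head-tilt category codes to degrees;
# the real BCS tables differ (illustration only).
CODE_TO_DEG = {0: 0.0, 1: 10.0, 2: 20.0, 3: 30.0}

def upsample(angles, src_hz=2, dst_hz=25):
    """Linearly interpolate a position-time series to the target frame rate."""
    t_src = np.arange(len(angles)) / src_hz
    t_dst = np.arange(0, t_src[-1] + 1e-9, 1 / dst_hz)
    return np.interp(t_dst, t_src, angles)

codes = [0, 1, 2, 2, 1, 0]                       # 3 s of 2 Hz head-tilt codes
deg = np.array([CODE_TO_DEG[c] for c in codes])  # phenotypic code -> Euler angle
frames = upsample(deg)                           # 25 Hz animation track
```

The resulting 25 Hz track is what a front-end animation package would consume as a rotation channel for one joint.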
Participants
Students from different disciplines (economics, computer science,
chemistry, art and psychology) at the University of Cologne participated in
the study (N = 104). The age of the participants ranged from 19 to 57
years with a mean age of 26.11 (SD = 5.75) years. Matching for gender,
the participants were assigned to one of the experimental groups. In the
experiment 53 observers (26 male/27 female) were shown the video re-
cordings and 51 observers (25 male/26 female) saw the animations. Devia-
tions from the intended equal distribution over the three conditions were
caused by participant dropouts. The participants were paid for their assis-
tance.
Dependent Measures
The Positive and Negative Affect Schedule (PANAS; Krohne, Egloff, Kohlmann,
& Tausch, 1996; Watson, Clark, & Tellegen, 1988) was used to mea-
sure the observers’ attributions of emotional states and interpersonal
attitudes towards both actors on the screen. Although developed for self-
assessment of emotional states, the system provides an item list that can be
used for the judgement of other social stimuli as well. The following
PANAS items were used with a five-point scale (1 = not at all; 5 = extremely):
active, interested, excited, strong, inspired, proud, enthusiastic,
alert, determined, attentive, distressed, upset, guilty, scared, hostile, irrita-
ble, ashamed, nervous, jittery and afraid. Additionally, we included seven
questionnaire items asking for a direct comparison of the actors on the
screen. Three of these items are related to the basic socio-emotional di-
mensions ‘evaluation,’ ‘activity’ and ‘potency’ (Which person do you con-
sider more sympathetic? . . . more active? . . . more dominant?) (see Mehra-
bian, 1969, 1970). Four further items were used to assess the personal
preferences of the observer for either one of the actors (Which person did
you pay more attention to? Which person’s perspective could you take
more easily? Which person did you identify with? Which person would
you prefer to talk to?).
Procedure
Groups of up to 12 participants were seated in a small cinema-like
presentation room and asked to watch three different sequences of silent
one-minute interactions presented via a large-screen LCD projector.
groups were shown the stimulus material in either one of the presentation
modes (video or computer animation). Each interaction sequence was
shown twice, and after each presentation the participants were asked to
write down their PANAS rating for one of the interactors. Potential memory
effects were counterbalanced by exchanging the order of judgement for left
and right person on the screen for half of the observers in each experimen-
tal group. To prevent serial effects, the order of the three different interac-
tions was rotated systematically for all groups. Also, questionnaire items for
comparative evaluation of the actors were alternatingly combined with the
judgment of either one of the actors.
Results
Because the aim of our study is to investigate whether computer animations
evoke responses from subjects similar to those evoked by the original
video, we want to make the argument that there are few differences
between the two modes. This means that we are trying to show that two
sets of PANAS profiles are effectively similar. Therefore, we need to take
care that those items of the PANAS that are unreliably scored in the video
mode are removed from the analysis. Items that are unreliably scored in
the video mode will probably also be unreliably scored in the computer animation,
which will lead to nonsignificant differences in the comparison, not
because these items are scored similarly, but because the variance in both
data sets will be large and unsystematic. Therefore, we computed the inter-
rater reliability (intraclass correlation) between the different subjects for all
items of the video PANAS data. We set the rejection criterion at .6, since
reliability values higher than that are generally considered to be
‘substantial’ (cf. Landis & Koch, 1977; Fleiss, 1981). There were four items
that did not reach the criterion (the items hostile, irritable, nervous and
proud) and were therefore excluded from the subsequent analysis.
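As a rough sketch of this screening step, the following computes a one-way random-effects intraclass correlation over a targets-by-raters matrix and applies the .6 criterion. The ratings and the specific ICC variant are assumptions made for illustration; they do not reproduce the study's data.

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC; rows = rated targets, columns = raters."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between targets
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented ratings: 6 stimulus persons x 10 raters (not the study's data).
true_scores = np.array([1.0, 2.0, 3.0, 4.0, 4.5, 2.5])
rater_bias = np.linspace(-0.2, 0.2, 10)
reliable_item = true_scores[:, None] + rater_bias[None, :]    # raters agree
unreliable_item = np.tile(np.linspace(1.0, 5.0, 10), (6, 1))  # no differentiation

keep = icc1(reliable_item) >= .6    # retained for the analysis
drop = icc1(unreliable_item) >= .6  # excluded, like 'hostile' or 'proud'
```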
T-tests were used on the remaining 16 PANAS items to detect signifi-
cant judgement discrepancies between the two presentation modes. Differ-
ences in dichotomous questionnaire data were analyzed by means of chi-
square-tests. The alpha level for all tests was set to .01 (two-tailed).
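The fractional degrees of freedom reported in Table 1 (e.g., df = 83.34) suggest that a Welch-type t-test was applied where group variances differed. A minimal sketch of that statistic, with invented ratings, might look like this:

```python
from statistics import mean, stdev
import math

def welch_t(a, b):
    """Welch's two-sample t statistic with Welch-Satterthwaite df."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Invented 1-5 ratings for one PANAS item under the two modes.
video = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]
animation = [3, 4, 3, 5, 4, 3, 4, 2, 3, 4]
t, df = welch_t(video, animation)
```

With equal group variances, as in this toy example, the Welch df collapses to the pooled df (n1 + n2 − 2); it drops below that when the variances diverge, as in the table.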
Impression Formation Based on Video vs. Computer Animated Stimuli
Significant differences could be found for a small number of PANAS
items in the comparison between video and computer animated stimuli.
Table 1 summarizes the significant results for the 6 stimulus persons. As
can be seen, the results are unsystematic and thus difficult to interpret as
specific effects of the experimental variation: overall, 8 different items of
the PANAS reveal significant differences.
TABLE 1
Significant Judgement Differences (p < .01) in the PANAS Items for 6 Different Stimulus Persons

                     Video          Animation        t-test
Items / ID(a)       M      SD      M      SD       t       df      p
Excited / 1        2.34   1.02    1.59   1.00     3.79    102     .000
Ashamed / 2        1.30    .61    1.76    .97    -2.90     83.34  .005
Ashamed / 3        1.47    .98    2.20   1.30    -3.17     92.97  .002
Afraid / 3         1.55   1.00    2.44   1.28    -3.85     92.48  .000
Determined / 4     2.35   1.17    2.96   1.09    -2.75    101     .007
Upset / 5          1.91    .97    1.32    .59     3.74     86.59  .000
Alert / 6          2.34    .94    3.08    .89    -4.11    102     .000
Attentive / 6      2.70    .85    3.27    .90    -3.38    102     .001
Interested / 6     2.36    .96    3.10    .85    -4.14    102     .000

(a) ID of stimulus person: dyad 1: persons 1 and 2; dyad 2: persons 3 and 4; dyad 3: persons 5 and 6.
The number of significant differences is quite low for all stimulus per-
sons but varies between one (persons 1, 2, 4, 5), two (person 3) and three
differences (person 6) per person. As an illustration, the judgement profiles
for the third dyad are represented in figure 2. These were selected since
they include the judgement for the person showing the largest number (per-
son 6: right person in dyad 3) and the person showing the smallest number
of significant differences (person 5: left person in dyad 3), respectively. The
figure demonstrates that despite the named discrepancies the profiles show
remarkable overall correspondence.
Figure 2. Impression rating profiles for stimulus person 1 and 2 in dyad 3.

Interestingly, the small number of significant differences between the
video and the animation data are interpretable with the help of the
dimension ‘activity’ of Mehrabian (1969, 1970). The items upset and excited are
rated higher in the video data (which corresponds to a high degree of
Mehrabian’s concept ‘activated’), whereas the items ashamed and afraid
are rated higher in the animation data (corresponding to a low level of
Mehrabian’s ‘activated’ dimension). Possibly, this is due to the general
lack of facial activity in the computer animations. Two explanations seem
to be worth considering. First, the animation models’ lack of facial dy-
namics could lead to weaker perceptual effects with respect to the acti-
vated states upset and excited, relative to the video data. Second, the fact
that the facial expressions of the animation models are static could also be a
discriminative socio-emotional cue in itself, which could be perceived as a
‘frozen face’ and thus lead to higher ratings on the PANAS scales ashamed
and afraid.
Another difference between the two conditions concerns the data from
person 6, who is consistently rated as more alert, attentive and interested
when presented as an animated character. This may be due to the lack of
eye movements in the animation condition. For example, a person who is
rotated towards but not precisely looking at the interaction partner could
be perceived as more attentive when presented as an animated character
because in the animation the eyes are consistently pointing in the same
direction as the head.
From the fact that the majority of the t-tests performed over PANAS
items are not significant, it can of course not be concluded that there are
no differences between the groups. Therefore, it is important to inspect the
statistical power of the experiment (e.g., Cohen, 1988, 1992; Faul &
Erdfelder, 1992). Given the number of subjects for each comparison
(n = 51 and 53), an alpha level of .01 (two-tailed), and the relatively low
variance (SD approx. 1), the power for detecting an effect size of
δ = 0.75 is 1 − β = 0.88, which is larger than the conventional lower limit
of 0.8. The specified effect size of δ = 0.75 corresponds to an average
scoring difference of less than one point on the five-point scale. The anal-
ysis of the statistical power of this experiment thus indicates that the low
number of significant differences is not due to the insensitivity of the statis-
tical test that was used.
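The reported power can be approximated with a short sketch that substitutes a normal approximation for the exact noncentral t distribution; it yields about .89 rather than the reported .88, the small gap being due to the approximation.

```python
from statistics import NormalDist
import math

def approx_power(n1, n2, d, alpha=.01):
    """Normal approximation to the power of a two-sided two-sample t-test."""
    nd = NormalDist()
    ncp = d * math.sqrt(n1 * n2 / (n1 + n2))  # noncentrality of the test statistic
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

power = approx_power(51, 53, 0.75)  # approx. .89 vs. the reported .88
```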
Unfortunately, performing t-tests on individual items of the PANAS
scale does not allow us to evaluate the similarity of the profiles as a whole.
For this purpose, we computed Hofstätter’s Q value (Hofstätter, 1959), es-
sentially a correlation coefficient based on Pearson’s product-moment cor-
relation measure, indicating the correlation between two profiles as a
whole. The correlations are based on N = 16 (number of items taken into
account) and range from Q = .75 (for stimulus person 3) to Q = .97 (for
stimulus person 5), all significant at the .001 level.
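Since Hofstätter's Q reduces here to a Pearson product-moment correlation computed over the 16 item means, the profile comparison can be sketched as follows; the two profiles below are invented for illustration and are not the study's data:

```python
import math

def profile_q(x, y):
    """Pearson product-moment correlation of two judgement profiles,
    used here in the sense of Hofstätter's Q."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

# Invented mean ratings on the 16 retained PANAS items (video vs. animation).
video     = [2.3, 1.6, 3.1, 2.0, 2.8, 1.4, 2.2, 3.0,
             1.9, 2.5, 1.7, 2.9, 2.1, 1.5, 2.6, 2.4]
animation = [2.1, 1.8, 3.0, 2.2, 2.6, 1.5, 2.0, 3.2,
             2.0, 2.4, 1.9, 2.7, 2.3, 1.6, 2.5, 2.2]
q = profile_q(video, animation)  # a high Q indicates near-identical profiles
```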
These high correlations indicate a very high degree of similarity be-
tween the profiles obtained with video presentation on the one hand, and
computer animation on the other. Because Q coefficients are based on the
average response of all subjects, we still need to investigate whether
switching from video to computer animation results in a loss of
interrater reliability. This was done by comparing the average interrater
reliability over all 16 remaining PANAS items for video (r = .815), computer
animation (r = .751), and video and computer animation taken
together (r = .850). (The fact that it is higher for the two data sets together is
related to the fact that the number of data points is twice as large in that
dataset). Overall, this analysis shows that using computer animations in-
stead of video does not result in a loss of interrater reliability. Moreover,
the fact that the interrater-reliability does not decrease if judgements based
on computer animation are included in the analysis again shows that the
evaluations in the two groups are highly similar.
Comparative Evaluation of Stimulus Persons in Video and Computer
Animation Sequences
Chi-square tests were run for the comparative evaluation items to test
differences in right person/left person preferences between video and com-
puter animation mode. Preference choices in the video condition were
treated as expected frequencies and the preference choices in the com-
puter animation mode were treated as observed frequencies. Expected fre-
quencies were corrected for the number of observers in the experimental
group. In only one of 21 tests (7 items per dyad) was a significant difference in
preference choices detected: the video-based rating of ‘dominance’ in
dyad 2 (f_left = 39; f_right = 12) is inverted in the computer animation
condition (f_left = 22; f_right = 29; χ²(1, N = 102) = 7.41; p = .006). This dif-
ference could also be due to the fact that gaze direction, which is strongly
correlated with dominance (see Dovidio, Ellyson, Keating, Heltman, &
Brown, 1988), is not recognizable in the computer animation condition.
Therefore, the left person, who is perceived as more dominant in the video
condition, is perhaps no longer perceived to be dominant in the animation
condition since he is displaying ‘power’ mainly by his gaze behavior. Be-
yond this particular difference, however, the inspection of the frequency
distributions reveals a remarkable correspondence in the direction of social
preferences when comparing computer animations to video.
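The test logic described above, with video-mode choices rescaled to the animation group size and treated as expected frequencies, can be sketched as follows; the counts are invented and do not reproduce the study's data:

```python
def chi_square(observed, expected):
    """Goodness-of-fit chi-square statistic (df = len(observed) - 1)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented left/right preference counts for one comparison item.
video_counts = [31, 22]                    # choices of the 53 video observers
n_anim = 51                                # size of the animation group
expected = [c * n_anim / sum(video_counts) for c in video_counts]
observed = [24, 27]                        # animation-mode choices

chi2 = chi_square(observed, expected)
significant = chi2 > 6.63                  # critical value for df = 1, alpha = .01
```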
Discussion
A method for the 3D-computer animation of body movement has been
introduced as a new tool for nonverbal communication research. In contrast
to previous approaches, which were limited with respect to either the
range of simulated behavior or the realism of the computer
animations, the new instrument allows for sophisticated 3D-animations of
whole body movements using realistic 3D-polygon-models with skeleton
and skin-like envelope. By means of a special converter program it has
been made possible to translate position time-series protocols of video-
recorded movement behavior into animation-scripts of a professional com-
puter animation platform. The new method was evaluated in an experi-
ment comparing person perception effects of video recorded nonverbal
interactions with a computer animation of the same behavior to find out
whether similar impression effects could be obtained. The results indicate a
remarkable correspondence between impression ratings based on standard
video recordings and those based on computer animation—as can be
shown by the lack of significant differences on the judgement item level,
the overall correlations of judgement profiles and the correspondence in
direction of social preferences when comparing computer animations to
video. Even though some aspects of nonverbal behavior such as facial ac-
tivity or finger movements could not be displayed by the available
computer models, the animated movements led to nearly the same judge-
ments as in the video condition.
This again indicates (see Bente, 1989; Bente, Feist & Elder, 1996) that
the detailed behavior protocols coded by means of the Bernese System
capture movements faithfully and are especially valuable for describing
them. Although the coding procedure is time consuming,
its employment will remain necessary in the future. The alternative of using
motion capture devices seriously decreases the ecological validity of the
experimental situation due to its intrusiveness. Moreover observers com-
paring video with animation would also be disturbed by the devices the
stimulus persons are wearing. Video-based analysis techniques, on the other
hand, cannot be used for this purpose in the near future, since the methods
currently available are not sufficiently detailed. In particular, their spatial
resolution is not sufficient to determine the exact positions of the rele-
vant body parts. As mentioned earlier, another important advantage of the
Bernese Coding System is the availability of phenotypical codes that can
be edited by hand to create variations of the nonverbal behavior of interest
(see Bente, 1989).
The remarkable correspondence in person perception effects of video
and computer animations not only provides a strong argument for the
capacity of our simulations to accurately convey the relevant movement
cues, but also leads to the assumption that a considerable part of impres-
sion variance is explained by the observed movement behavior. This is
consistent with the findings of Kempter (in press) who also found that the
absence of facial cues makes no difference in interpersonal evaluations.
Since the lack of facial activity does not lead to different evaluations we
suggest that body movements and positions are more crucial for impression
formation and person perception processes than has been supposed before
(e.g., Ekman & Friesen, 1969). Whether body movement, postures and ges-
tures are the predominant or even sole channels for person perception or
whether the relevant information is to some degree redundantly coded in
facial and bodily activities, however, can only be answered on the basis of
experiments specifically designed for this purpose.
Against this background, our data make a clear case for the systematic
use of this new technology in future experimental nonverbal research. As
the data used for 3D-animation can be analyzed quantitatively and also
modified according to specific hypotheses, this platform creates unique
possibilities for both data-driven and theory-driven research strategies. For
example, in data driven approaches differences in the impression ratings
based on specific simulations can be traced back to structural differences
in the behavior protocols. Specific aspects of behavior, like postures,
movements of head, body or hands or general movement qualities such as
speed, acceleration or complexity (see Bente, Donaghy, & Suwelack, 1998)
can be systematically controlled by modifying the relevant numerical entries
in the raw data. This can be done by adding a certain constant value to a
specific column in the BCS data matrix in order to change the position of
the head without changing the original dynamics. Another possibility is
multiplying a column by a certain (small) factor in order to intensify
movements. In this way, subtle changes in otherwise natural behavior can be
produced. Their specific impression effects can then be tested in subse-
quent experiments. As our knowledge grows, algorithms for behavior
generation can be formulated within theory-driven approaches. These
algorithms can then be used for the rule-based generation of behavior, or of
certain aspects of behavior, which can also be evaluated in person perception
experiments.
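The two editing operations just described, a constant offset to shift a position and a small gain to intensify movement, can be sketched on a toy matrix; the column layout and units are invented stand-ins for the real BCS format:

```python
import numpy as np

# Toy stand-in for a BCS raw-data matrix: rows = 2 Hz time frames,
# columns = movement dimensions; column 0 is taken as lateral head
# tilt in degrees (the real BCS layout and coding differ).
bcs = np.array([[ 0.0, 1.0],
                [ 5.0, 1.0],
                [10.0, 2.0],
                [ 5.0, 2.0]])

shifted = bcs.copy()
shifted[:, 0] += 15.0     # new base head position, original dynamics retained

intensified = bcs.copy()
intensified[:, 0] *= 1.2  # slightly amplified head movement

# Frame-to-frame changes are untouched by the constant shift:
same_dynamics = np.allclose(np.diff(shifted[:, 0]), np.diff(bcs[:, 0]))
```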
The application of our data driven methodology, however, is not re-
stricted to fundamental research. It is also relevant in applied research.
Especially the field of ‘anthropomorphic interfaces’ or ‘embodied com-
puter agents’ (i.e., autonomously directed, virtual agents of human-like ap-
pearance) can profit from findings about the effects of specific nonverbal
cues when shown by animated characters. Seen in this light, graphical
computer animation of nonverbal behavior will not only be a fascinating
experimental tool, but also one of the most fruitful applied areas of
nonverbal research.
References
Altorfer, A. (1989). Verbale und nichtverbale Verhaltensweisen von Depression als "aktives Verhalten" zur Interaktionssteuerung [Depressive patients' verbal and nonverbal behavior as means of actively controlling interaction]. Schweizerische Zeitschrift für Psychologie, 48(2), 99–111.
Altorfer, A. (1988). Eine Methode zur Untersuchung der interaktiven Bedeutung von nichtverbalen Verhaltensweisen [A method for analysing interactional meaning of nonverbal behavior]. Sprache und Kognition, 7(2), 99–112.
Altorfer, A., Jossen, S., & Würmle, O. (1997). Eine Methode zur zeitgenauen Aufnahme und Analyse des Bewegungsverhaltens [A methodological approach to measure and analyze movement behavior]. Zeitschrift für Psychologie, 205(1), 83–117.
Argyle, M. (1972). Non-verbal communication in human social interaction. In R. A. Hinde(Ed.), Nonverbal communication. London: Methuen.
Bente, G. (2000). Digital representation and experimental simulation of nonverbal behavior. In L. P. J. J. Noldus (Ed.), Proceedings of the 3rd International Conference on Methods and Techniques in Behavioral Research (pp. 16–17). Nijmegen: Katholieke Universiteit Nijmegen.
Bente, G. (1989). Facilities for the graphical computer simulation of head and body movements. Behavior Research Methods, Instruments, & Computers, 21(4), 455–462.
Bente, G., Donaghy, W. F., & Suwelack, D. (1998). Sex differences in body movement and visual attention: An integrated analysis of movement and gaze in mixed-sex dyads. Journal of Nonverbal Behavior, 22(1), 31–58.
Bente, G., Feist, A., & Elder, S. (1996). Person perception effects of computer-simulated maleand female head movement. Journal of Nonverbal Behavior, 20 (4), 213–228.
Bente, G., Petersen, A., & Kramer, N. C. (1999). Virtuelle Realität und parasoziale Interaktion: Entwicklung eines Verfahrens zur Untersuchung sozio-emotionaler Effekte computersimulierten Verhaltens [Virtual reality and parasocial interaction: A tool for analyzing socio-emotional effects of computer-simulated behavior]. Research report, University of Cologne.
Berry, D. S. (1990). The perceiver as naive scientist or the scientist as naive perceiver? An ecological view of social knowledge acquisition. Contemporary Social Psychology, 14(3), 145–153.
Berry, D. S., Kean, K. J., Misovich, S. D., & Baron, R. M. (1991). Quantized displays of human movement: A methodological alternative to the point-light display. Journal of Nonverbal Behavior, 15(2), 81–97.
Bers, J. (1996). A body model server for human motion capture and representation. Presence: Teleoperators and Virtual Environments, 5(3), 381–392.
Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H., & Yan, H. (1999). Embodiment in conversational interfaces: Rea. CHI '99 Conference Proceedings (pp. 520–527). Association for Computing Machinery.
Cassell, J., Steedman, M., Badler, N., Pelachaud, C., Stone, M., Douville, B., Prevost, S., & Achorn, B. (1994). Modeling the interaction between speech and gesture. In A. Ram & K. Eiselt (Eds.), Proceedings of the sixteenth annual conference of the Cognitive Science Society (pp. 153–158). LEA.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (Second Edition). Hillsdale, NJ: LEA.
Cutting, J. E., & Proffitt, D. R. (1981). Gait perception as an example of how we perceive events. In R. D. Walk & H. L. Pick (Eds.), Intersensory perception and sensory integration (pp. 249–273). New York: Plenum.
Davis, M. (1972). Understanding body movement. An annotated bibliography. New York:Arno Press.
Donaghy, W. C. (1989). Nonverbal communication measurement. In P. Emmert & L. Barker (Eds.), Measurement of communication behavior (pp. 296–332). White Plains, NY: Longman.
Dovidio, J. F., Ellyson, S. L., Keating, C. F., Heltman, K., & Brown, C. E. (1988). The relationship of social power to visual displays of dominance between men and women. Journal of Personality and Social Psychology, 54, 233–242.
Duncan, S., Jr., & Fiske, D. W. (1979). Dynamic patterning in conversation. American Scientist, 67, 90–98.
Duncan, S., Jr., Kanki, B. G., Mokros, H., & Fiske, D. W. (1984). Pseudounilaterality, simple-rate variables, and other ills to which interaction research is heir. Journal of Personality and Social Psychology, 46(6), 1335–1348.
Ekman, P., & Friesen, W. V. (1978). The facial action coding system. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1, 49–98.
Ellsworth, P. C., & Ludwig, L. M. (1972). Visual behavior in social interaction. Journal of Communication, 22, 375–403.
Essa, I. A. (1995). Analysis, interpretation and synthesis of facial expressions. PhD thesis, MIT.
Essa, I. A., & Pentland, A. (1995). Facial expression recognition using visually extracted facial action parameters. In M. Bichsel (Ed.), International workshop on automatic face- and gesture-recognition (pp. 34–40). Zürich: MultiMedia Laboratory.
Faul, F., & Erdfelder, E. (1992). GPower: A priori, post-hoc, and compromise power analyses for MS-DOS [Computer program]. Bonn, FRG: Bonn University, Department of Psychology.
Fleiss, J. L. (1981). Statistical Methods for Rates and Proportions (Second Edition). New York:John Wiley & Sons.
Frey, S., Hirsbrunner, H. P., Florin, A., Daw, W., & Crawford, R. (1983). A unified approach to the investigation of nonverbal and verbal behavior in communication research. In W. Doise & S. Moscovici (Eds.), Current issues in European social psychology (pp. 143–199). Cambridge: Cambridge University Press.
Grammer, K., Filova, V., & Fieder, M. (1997). The communication paradox and a possible solution: Toward a radical empiricism. In A. Schmitt, K. Atzwanger, K. Grammer, & K. Schäfer (Eds.), New aspects of human ethology (pp. 91–120). New York: Plenum.
Grammer, K., Kruck, K. B., & Magnusson, M. S. (1998). The courtship dance: Patterns of nonverbal synchronization in opposite-sex encounters. Journal of Nonverbal Behavior, 22(1), 2–29.
Hall, E. T. (1959). The silent language. Garden City: Doubleday.
Harmon, L. D. (1973). The recognition of faces. Scientific American, 229, 71–82.
Hirsbrunner, H. P., Frey, S., & Crawford, R. (1987). Movement in human interaction: Parameter formation and analysis. In A. W. Siegmann & S. Feldstein (Eds.), Nonverbal behavior and communication (pp. 99–140). Hillsdale, NJ: LEA.
Hofstätter, P. R. (1959). Zur Problematik der Profilmethode [On the problems of the profile method]. Diagnostica, 5, 19–24.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14, 201–211.
Johansson, G. (1976). Spatio-temporal differentiation and integration in visual motion perception. Psychological Research, 38, 379–393.
Kempter, G. (in press). Trait attribution to reanimated gestural movements. Behavior Research Methods, Instruments, & Computers.
Krohne, H. W., Egloff, B., Kohlmann, C. W., & Tausch, A. (1996). Untersuchungen mit einer deutschen Version der "Positive and Negative Affect Schedule" (PANAS) [Evaluation of the German version of the 'Positive and Negative Affect Schedule' (PANAS)]. Diagnostica, 42(2), 139–156.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
Leuschner, H. (1999). Vom Videotranskript zur Computeranimation nonverbalen Verhaltens: Methodendokumentation [From video transcript to computer animation of nonverbal behavior: Method documentation]. Research report, University of Cologne.
Lewis, R. J., Derlega, V. J., Shankar, A., Cochard, E., & Finkel, L. (1997). Nonverbal correlates of confederates' touch: Confounds in touch research. Journal of Social Behavior and Personality, 12(3), 821–830.
Mehrabian, A. (1969). Significance of posture and position in the communication of attitude and status relationships. Psychological Bulletin, 71(5), 359–372.
Mehrabian, A. (1970). A semantic space for nonverbal behavior. Journal of Consulting and Clinical Psychology, 35(2), 248–257.
Morrone, M. C., Burr, D. C., & Ross, J. (1983). Added noise restores recognizability of coarse quantized images. Nature, 305, 226–228.
Schouwstra, S. J., & Hoogstraten, J. (1995). Head position and spinal position as determinants of perceived emotional state. Perceptual and Motor Skills, 81(2), 673–674.
Thorisson, K. R. (1996). Communicative humanoids: A computational model of psychosocial dialogue skills. PhD thesis, MIT.
Trautner, H. M. (1991). Children's and adults' awareness of sex-stereotyped postures. Paper presented at the Eleventh Biennial Meeting of the ISSBD, Minneapolis.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070.