Styling and Animating Human Hair
Keisuke Kishi and Shigeo Morishima
Faculty of Engineering, Seikei University, Musashino, 180-8633 Japan
SUMMARY
Synthesizing facial images by computer graphics
(CG) has attracted attention in connection with the current
trends toward synthesizing virtual faces and realizing com-
munication systems in cyberspace. In this paper, a method
for representing human hair, which is known to be difficult
to synthesize in computer graphics, is presented. In spite of
the fact that hair is visually important in human facial
imaging, it has frequently been replaced by simple curved
surfaces or a part of the background. Although the methods
of representing hair by mapping techniques have achieved
results, such methods are inappropriate in representing
motions of hair. Thus, spatial curves are used to represent
hair, without using textures or polygons. In addition, hair
style design is simplified by modeling hair in units of tufts,
which are partially concentrated areas of hair. This paper
describes the collision decisions and motion repre-
sentations in this new hair style design system, the model-
ing of tufts, the rendering method, and the four-branch
(quadtree) method. In addition, hair design using this hair
style design system and the animation of wind-blown hair
are illustrated. © 2002 Scripta Technica, Syst Comp Jpn,
33(3): 31–40, 2002; DOI 10.1002/scj.1111
Key words: Computer graphics; hair style; tuft
model; animation; GUI.
1. Introduction
Image syntheses of human faces by CG are currently
attracting attention in various fields. In the area of human
communications, for example, human-simulating agents
are being realized and the generation of real human facial
images is required. However, currently hair is represented
by simple curves or replaced by a part of the background,
even though hair is visually important in imaging a human
face. This is due to the fact that hair is difficult to represent
by CG, since the number of hair strands is very large and
their shapes are complicated. However, various attempts
have been made to represent hair by CG, although methods
which are decisively superior from the points of view of
realism, ease of modeling, memory capacity, computing
time, and so on are nonexistent.
In this paper, a hair style design system which repre-
sents and generates an arbitrary hair style in tuft units is
proposed. In addition, a scheme for thinning the ends of hair
using α blending and a scheme for avoiding occlusion of
hair by the head via an accurate collision decision scheme
that includes a collision decision buffer for realizing more
realistic hair animation are presented.
2. Modeling of Hair
The methods for representing hair by CG are broadly
divided into two categories. One consists of methods of
representing hair by anisotropic reflection models [1, 2] or
"texels" [3] by taking the hair as a surface texture, and the
other consists of methods of modeling the hair strand by
strand via shape models formed of triangular pyramidal
primitives, for example [4]. Representing wind-blown hair
requires that the motion of hair be represented strand by
strand. Although representing hair by texture can yield
high-quality images, it is not appropriate for representing
motions and the like, since hair is treated as an object.
Although representing hair by shape models is appropriate
© 2002 Scripta Technica
Systems and Computers in Japan, Vol. 33, No. 3, 2002. Translated from Denshi Joho Tsushin Gakkai Ronbunshi, Vol. J83-D-II, No. 12, December 2000, pp. 2716–2724.
for representing motions, if modeling of complicated and
abundant hair is done using polygons, problems associated
with memory capacity and computing arise, since a huge
memory capacity is required to model an entire head of hair.
Considering such problems, methods of approximat-
ing the hair strand by strand via "spatial curves" [5, 6] have
been used. The shapes of the spatial curves are determined
by a number of points in the space, called "shape control
points." Thus, all of the hair can be modeled with a small
memory capacity by storing only the shape control points
as shape data. The spatial curves used in this paper are third-
order B-spline curves and their shapes are shown in Fig. 1.
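To make the representation concrete, evaluating such a curve from its shape control points can be sketched as below. This is a minimal illustration, not the system's actual code; the function names are hypothetical, and only the standard uniform cubic B-spline basis is assumed.

```python
import numpy as np

def bspline_point(ctrl, t):
    """Evaluate one uniform cubic B-spline segment at parameter t in [0, 1).

    ctrl: four consecutive shape control points, shape (4, 3).
    Returns the interpolated 3D point on the hair curve.
    """
    # Standard uniform cubic B-spline basis matrix (divided by 6).
    m = np.array([[-1,  3, -3, 1],
                  [ 3, -6,  3, 0],
                  [-3,  0,  3, 0],
                  [ 1,  4,  1, 0]]) / 6.0
    tv = np.array([t ** 3, t ** 2, t, 1.0])
    return tv @ m @ np.asarray(ctrl, float)

def hair_curve(points, samples_per_seg=10):
    """Sample a whole strand from its list of shape control points."""
    pts = np.asarray(points, float)
    out = []
    for i in range(len(pts) - 3):        # one segment per 4 consecutive points
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            out.append(bspline_point(pts[i:i + 4], t))
    return np.array(out)
```

Storing only the control points and regenerating curve samples on demand is what keeps the memory cost of a full head of hair small.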
3. Hair Style Design System
The design system presented in this paper constructs
a head model with about 3400 triangular polygons and
constructs the hair using spatial curves on their surfaces.
The generated hair area consists of about 1300 polygons,
and a strand of representative hair is generated for one
polygon on the modeling tool. Since this system depends
on the head model to a certain degree, a model having a
polygon number of this magnitude is required. In addition,
it is necessary to make the meshes of the generated hair area
fine when using other models.
In modeling hair using shape control points, a huge
number of shape control points must be appropriately
placed. For this purpose, a scheme [7] for determining the
arrangement of the shape control points using an appropri-
ate function and a method of constructing the shape of hair
automatically from the bending state of the hairs [9] have
been proposed. However, a desired hair style cannot be
easily constructed by either method.
It has previously been confirmed that the motion of
rustling hair blown by the wind can be represented [8] by
modeling the hair strand by strand independently for mo-
tion simulation and applying this dynamic model. However,
since hair consists of a huge number of strands, determining
the hair style by editing spatial curves strand by strand is
impossible in reality. Thus, in this paper, an attempt to make
the design work efficient by treating hair consisting of
multiple strands simultaneously as tuft models for the pur-
pose of designing the hair style and performing the render-
ing and the motion representation separately strand by
strand has been made.
3.1. The tuft model
The elements required in determining the hair style
include the shape of the hair and the correlations between
the head and the hair, as well as between the hairs them-
selves. These elements can be simply treated and the hair
can be approximated to real human hair in a short time by
modeling multiple hair strands simultaneously.
A method of representing hair by tuft models and a
tool for editing have been proposed by Chen and col-
leagues [11]. Their tuft model assigns positions in basic units of
shape, rendering, and motion representation, and has been
shown to reduce the computational load. The tuft model
presented in this paper assigns positions by treating multi-
ple strands simultaneously, and the size of a tuft, the cross-
sectional shape, and the assigned positions on the head can
be designated freely on a GUI, depending on the hair style
to be edited, and thus increases the representability and
flexibility of the method. In addition, rendering and motion
modeling are performed strand by strand as discussed later.
Since third-order B-spline curves are curves approxi-
mating more than seven control points, more than seven
rectangles are defined as tuft models as shown in Fig. 2. The
rectangle is a set of shape control points, and the tuft model
Fig. 1. 3D B-spline curve.
Fig. 2. Tuft model.
32
is edited by rotating, shifting, and transforming the shape
control rectangles. The shape control points are obtained on
the basis of the shape control rectangle to generate spatial
curves.
3.2. Construction of editing system
In this system, hair is constructed by repeating three
procedures: designating the tuft model generation area,
editing the tuft model, and pasting the tuft model onto the
head.
(1) Designation of area
An arbitrary polygon (a pentagon in Fig. 3) is desig-
nated by the mouse and the area for generating the tuft
model is designated. Simultaneously, the following pa-
rameters required for pasting and editing the tuft model are
determined.
(a) Determination of local coordinate system
The local coordinate system of the area is determined,
with the sum [Fig. 3(a)] of the normal vectors of the
designated polygons taken as the Z axis of the local coordi-
nate system.
(b) Shape of shape control rectangle
The initial size of the shape control rectangle is
determined from the maximum and minimum values of the
x and y coordinates of the frame constructed by connecting
the centers of gravity of the designated polygons on the
X–Y plane [Fig. 3(b)] of the local coordinate system. The
initial shape of the shape control rectangle is a long rectan-
gle.
(c) Determination of area
Similarly, the relative coordinate values of the hair
generated by the polygon for the shape control rectangle
determined by (b) are determined by projecting the polygon
information of the head model onto the X–Y plane of the
local coordinate system, determining whether the center of
gravity of the projected polygon is contained inside the
frame on the X–Y plane, and taking this polygon as the area
of the tuft model if it is contained. This shape control
rectangle and the control point coordinates on the space are
obtained by obtaining the relative coordinate values with
the center of the shape control rectangle as the origin of the
local coordinate system.
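Step (c) above, deciding which head polygons belong to the tuft area, can be sketched as follows. Function and parameter names are hypothetical, and the frame is taken as an axis-aligned rectangle on the local X–Y plane.

```python
import numpy as np

def select_tuft_area(polygons, z_axis, frame_min, frame_max):
    """A head polygon joins the tuft area when its center of gravity,
    projected onto the local X-Y plane, falls inside the shape control
    rectangle's frame.

    polygons : list of (n, 3) vertex arrays in head coordinates
    z_axis   : local Z axis (summed polygon normals)
    frame_min, frame_max : 2D corners of the frame on the local X-Y plane
    """
    z = np.asarray(z_axis, float)
    z = z / np.linalg.norm(z)
    # Build any orthonormal X, Y spanning the plane perpendicular to Z.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(z @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    selected = []
    for i, poly in enumerate(polygons):
        cog = np.asarray(poly, float).mean(axis=0)   # center of gravity
        u, v = cog @ x, cog @ y                      # project onto X-Y plane
        if frame_min[0] <= u <= frame_max[0] and frame_min[1] <= v <= frame_max[1]:
            selected.append(i)
    return selected
```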
(2) Editing of tuft model
The tuft model is edited by rotating, shifting, and
transforming the shape control rectangle of the tuft model.
More complicated hair styles can be represented by increas-
ing the number of control points or the number of shape
control rectangles. An example of editing the tuft model is
shown in Fig. 4. Since a twisted shape can be represented
by using rotations many times, as shown in Fig. 4(b), and a
wavy shape can be represented by using shifts many times,
as shown in Fig. 4(c), a permanent wave hair style can be
modeled.
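The twist edit of Fig. 4(b) amounts to giving each successive shape control rectangle a little more rotation about the tuft's local Z axis. A sketch under that reading; the names and the rectangle-as-corner-list representation are illustrative assumptions.

```python
import math

def twist_rectangles(rects, angle_step):
    """Rotate each successive shape control rectangle a little further
    about the tuft's local Z axis, so the stacked rectangles trace a
    twisted tuft. Each rectangle is a list of (x, y, z) corners;
    angle_step is the extra rotation (radians) per level.
    """
    out = []
    for level, rect in enumerate(rects):
        a = level * angle_step
        ca, sa = math.cos(a), math.sin(a)
        out.append([(x * ca - y * sa, x * sa + y * ca, z)
                    for (x, y, z) in rect])
    return out
```

Repeated shifts instead of rotations would produce the wavy shape of Fig. 4(c) in the same fashion.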
Fig. 3. Selected region. Fig. 4. Examples of tuft editing.
33
(3) Pasting
To automate the process of matching the head with a
tuft model, pasting is performed by combining the tuft
model with the local coordinate system of an area. Rota-
tions and shifts are made such that the normal vector of the
first shape control rectangle of the tuft model and the Z axis
of the area coincide. However, since a tuft model is con-
structed from a plane, if a tuft model is pasted unmodified
as shown in Fig. 4, gaps can occur between the tuft models,
resulting in an unnatural hair style, since the head is curved.
Thus, rotations and transformations are performed on the
hair strand by strand, taking the state of bending of the head
as a cylinder as shown in Fig. 5.
3.3. Replication method
Approximately 50 to 60 tuft models per hair style are
constructed and about 1300 strands of hair, one strand per
polygon, are generated. A hair image is created by a render-
ing module by increasing the number of strands of hair if
this number is too small. In addition, the width of repre-
sentation is broadened by using two different replicating
methods.
In representing a hair style with small collections of
tufts as hair with permanent waves, a representative shape
is replicated, increasing the number of strands of hair, with
the head polygon as one unit. In contrast, in representing a
uniform hair style such as straight hair, a new shape is
obtained from a shape control rectangle, increasing the
number of strands, with a tuft model as one unit. A hair style
having smooth highlights can be generated by this ap-
proach. In addition, variations are imparted to the hair ends
by using representative shapes determined by the modeling
tools and random numbers in both replication methods.
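The first replication method, copying a representative shape while varying the ends with random numbers, might be sketched as follows. The offset and jitter magnitudes are illustrative assumptions, not the paper's values.

```python
import random

def replicate_strand(rep_points, copies, spread=0.5, tip_jitter=0.3, seed=0):
    """Replicate one representative strand into several strands: each copy
    gets a constant lateral offset, plus a random perturbation that grows
    toward the tip so the hair ends vary.

    rep_points: list of (x, y, z) shape control points of the representative.
    Returns a list of `copies` point lists.
    """
    rng = random.Random(seed)
    strands = []
    n = len(rep_points)
    for _ in range(copies):
        # Constant lateral offset for the whole copy.
        dx, dy = rng.uniform(-spread, spread), rng.uniform(-spread, spread)
        strand = []
        for i, (x, y, z) in enumerate(rep_points):
            w = i / (n - 1)                 # 0 at the root, 1 at the tip
            strand.append((x + dx + w * rng.uniform(-tip_jitter, tip_jitter),
                           y + dy + w * rng.uniform(-tip_jitter, tip_jitter),
                           z))
        strands.append(strand)
    return strands
```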
Although about 100,000 strands of hair exist on the
human head, about 50,000 strands are considered in order
to obtain sufficient quality without the head surface show-
ing, considering the processing time in real rendering.
However, about 90,000 strands are considered in animation.
3.4. Interface
The screen for hair style construction is constructed
in a GUI and all stages can be controlled by the mouse. This
interface screen, shown in Fig. 6, is composed of four
components: an entire screen (lower left) on which the head
model and hair are shown, a tuft model editing screen
(lower right), a shape control rectangle editing screen (up-
per right), and a control panel (upper left).
Since a hair style is constructed from tuft models and
a tuft model is constructed from a shape control rectangle
in this system, editing can be done for each constructing
element. Specifically, the editing of the hair style is done
on the whole screen, the editing of the tuft model is done
on the tuft model editing screen, and the editing of the shape
control rectangle is done on the shape control rectangle
editing screen. Since individual editing results are fed back
onto all editing screens instantaneously, interactive editing
is possible. In addition, the hair generated on the head is the
representative hair.
4. Rendering
4.1. Cylindrical pipe model
Fig. 5. Pasting tuft model.
Fig. 6. Interface.
In reproducing the real texture of hair by rendering,
a normal vector must be determined at all points on the
spatial curves. However, the values obtained by the spatial
curves mentioned earlier are only coordinate values in
space, and local structures are not defined. In this paper,
rendering of spatial curves is made possible [5] by comput-
ing a normal vector for an arbitrary point on the spatial
curves, assuming that the spatial curve line is a very thin
tube as shown in Fig. 7. In this method, the spatial curve is
divided evenly into 100 parts, the luminance at 101 points
is obtained, and color compensation is performed.
Lambert's model is used in computing the diffuse
reflection components and Phong's model is used in com-
puting the specular reflection components.
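A minimal sketch of this thin-tube shading, assuming the tube normal at a curve point is taken as the component of the light direction perpendicular to the curve tangent; the reflection coefficients are illustrative, not the paper's values.

```python
import numpy as np

def shade_hair_point(tangent, light_dir, view_dir, kd=0.7, ks=0.3, shininess=32):
    """Lambert diffuse + Phong specular at one point of a thin tube.

    The tube normal is the light direction minus its component along
    the curve tangent, i.e. the normal of the tube surface that faces
    the light.
    """
    t = np.asarray(tangent, float); t = t / np.linalg.norm(t)
    l = np.asarray(light_dir, float); l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float); v = v / np.linalg.norm(v)
    n = l - (l @ t) * t                 # tube normal facing the light
    nn = np.linalg.norm(n)
    if nn < 1e-12:
        return 0.0                      # light parallel to the hair
    n = n / nn
    diffuse = kd * max(0.0, n @ l)      # Lambert term
    r = 2.0 * (n @ l) * n - l           # mirror reflection of the light about n
    specular = ks * max(0.0, r @ v) ** shininess
    return diffuse + specular
```

In the system this evaluation is repeated at the 101 sample points of each curve.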
4.2. Antialiasing
Since a very thin object such as hair is treated in this
paper, aliasing occurs very frequently. Thus, eliminating
aliasing is very important. Since a hidden surface is elimi-
nated by using a Z buffer algorithm, using an antialiasing
method suitable for the Z buffer method is important.
Thus, in this paper, a Z buffer and a color buffer having
a resolution several times that of the actual CRT are prepared,
a high-resolution image is synthesized using these buffers,
and then a low-resolution image is generated by averaging
the high-resolution pixels corresponding to each output pixel.
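This procedure is supersampling: render into buffers k times the display resolution, then average each k-by-k block into one output pixel. A sketch, with a NumPy array standing in for the high-resolution color buffer:

```python
import numpy as np

def downsample(image, k):
    """Average each k-by-k block of a high-resolution render into one
    output pixel.

    image: (H*k, W*k) or (H*k, W*k, C) array.
    """
    img = np.asarray(image, float)
    h, w = img.shape[0] // k, img.shape[1] // k
    if img.ndim == 2:
        return img[:h * k, :w * k].reshape(h, k, w, k).mean(axis=(1, 3))
    c = img.shape[2]
    return img[:h * k, :w * k].reshape(h, k, w, k, c).mean(axis=(1, 3))
```

Averaging blends sub-pixel hair coverage into fractional intensities, which is what suppresses the stair-step aliasing on strands thinner than a display pixel.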
4.3. Thinning of hair ends
The thickness of a strand of hair is not constant; real
hair thins toward the ends. Moreover, the thinner an observed
object is, the harder it is to recognize, since it blends with
objects behind it or with the background. Considering this
phenomenon, thinning of hair ends approximated by spatial
curves without three-dimensional structures is achieved by
using α blending, which produces a gradation of the α value
over the sample points of the 100-part division near the tip,
with α = 0 at the hair end, as shown in Fig. 8.
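A sketch of such an α ramp along the sampled curve; the fraction of the strand over which the fade occurs is an assumed parameter, not a value from the paper.

```python
def tip_alpha(num_samples, fade_fraction=0.3):
    """Opacity per curve sample: alpha is 1 along most of the strand,
    then fades linearly to 0 at the hair end over the last fraction
    of the samples.
    """
    fade_start = int(num_samples * (1.0 - fade_fraction))
    denom = max(1, num_samples - 1 - fade_start)
    alphas = []
    for i in range(num_samples):
        if i < fade_start:
            alphas.append(1.0)
        else:
            # Linear gradation down to alpha = 0 at the very tip.
            alphas.append(1.0 - (i - fade_start) / denom)
    return alphas
```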
5. Hair Motion Control
The motion of a curve is controlled by updating the
positions of its shape control points. The contraction or
elongation of hair by external
forces can be ignored compared with the overall shape
changes. Thus, in this paper, the motions of segments are
simulated by connecting the shape control points estab-
lished in a space by a rod-shaped rigid body and obtaining
numerical solutions of an equation of motion that takes
account of the external forces acting on each rigid body and
the restoring forces acting between neighboring rigid bod-
ies. Figure 9 shows the shape control points connected by
a rigid body rod.
Each rod is expressed in a polar coordinate system
and its motion can be considered as consisting of rotational
motion in two directions, θ and φ. Letting the sum of all the
forces applied to a segment, such as the wind force and
gravitational force, be F and the positional vector of the
Fig. 7. Cylindrical pipe.
Fig. 8. Thinned down hair tip. Fig. 9. Segment model.
35
center of gravity of the segment be r, the rotational moment
is represented as N = F × r, so that the equation of motion
of an arbitrary segment becomes

    \frac{1}{3} m l^2 \frac{d\omega}{dt} = N

where m, l, and ω are the mass, length, and angular velocity
of a segment. The end point of Seg_n becomes the beginning
point of Seg_{n+1} and the computation is repeated from the
root to the tip of hair.
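A sketch of one simulation step under this model, simplified to a single rotation angle per rod rather than the paper's two (θ, φ) directions; the restoring and damping constants are illustrative assumptions.

```python
import math

def step_strand(thetas, omegas, m, l, force, dt, k_restore=5.0, damping=0.5):
    """Advance one strand by one explicit Euler step.

    Each rod hangs from the end of the previous one; the external force
    produces a torque about the rod's base, a spring-like restoring torque
    acts between neighboring rods, and a damping term stabilizes the motion.

    thetas, omegas: per-segment angle and angular velocity (radians);
    theta = 0 means hanging straight down.
    force: (fx, fy) external force (wind + gravity) on each segment.
    """
    inertia = m * l * l / 3.0                  # uniform rod about its base
    new_t, new_o = [], []
    for i, (th, om) in enumerate(zip(thetas, omegas)):
        # Center of gravity of the rod, relative to its base.
        rx, ry = (l / 2.0) * math.sin(th), -(l / 2.0) * math.cos(th)
        torque = rx * force[1] - ry * force[0]   # z component of r x F
        # Restoring torque toward the neighboring rod's direction.
        rest = thetas[i - 1] if i > 0 else 0.0
        torque += k_restore * (rest - th) - damping * om
        om = om + dt * torque / inertia
        th = th + dt * om
        new_t.append(th); new_o.append(om)
    return new_t, new_o
```

Node positions are then rebuilt by chaining the rods from the root, so each segment's end point becomes the next segment's base.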
6. Collision Decision and Avoidance of
Occlusion
In modeling hair, the shapes of the spatial curves are
determined in such a way that hair is not present inside the
head. However, if simulation of motion is done while
ignoring the existence of the head, hair ends up being buried
inside the head. To avoid this problem, a scheme using
simulated external forces [9] and a scheme using a collision
decision buffer in a cylindrical coordinate system [10] have
been proposed. The former scheme prevents hair from
being stuck inside the head by the simulated external forces
by establishing a simulated external force area that sur-
rounds the head. Although the former scheme increases the
computing rate, since no special collision processing is
required, rigorous collision decisions may not be made
by it. In addition, the simulated external
forces must be empirically established so as to prevent hair
from getting stuck inside the head. On the other hand, in the
latter scheme, collisions are detected by comparing the
shape control points transformed into a cylindrical coordi-
nate system with a table storing the distances from the
center of an object to the object surface in a cylindrical
coordinate system which has been constructed in advance.
However, since a collision decision buffer is expressed in a
cylindrical coordinate system in this scheme, a buffer must
be used for each object and the node coordinate conversions
must be computed for each object. The computing time is
expected to increase with this scheme in an environment in
which a number of objects coexist, as in the case of arms,
for example. A scheme using collision decision buffers
similar to [12] is adopted in this paper; however, instead of
detecting interference between whole object groups, collisions
of individual points with an object are detected by the
scheme described here.
6.1. Collision decision
In this paper, high-speed decision based only on
comparison operations is performed by computing the in-
tersection points between the polygon and the vertical line
in advance, as discussed below. First, as shown in Fig. 10,
a virtual plane not intersecting all objects, which has a size
allowing all objects associated with collision to be pro-
jected in parallel, is established. Here, a plane parallel with
the x–y plane is considered. Next, a two-dimensional array
corresponding to this plane is used. In the element
[xn, yn] of the array, the coordinates of the points
intersected by a vertical line dropped from the point
(xn, yn) on the plane are stored in intersection order, as
shown in Fig. 10. Here, the array established on the virtual
plane is made sparse where objects do not exist by using a
quadtree.
As shown in Fig. 11, the virtual plane is divided into four
regions, and the areas are further divided into four regions
if an object of collision exists, but are not divided if such
an object does not exist.
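The sparse layout can be sketched as a recursive subdivision that stops in cells containing no projected object points; the names and the point-based occupancy test are simplifying assumptions.

```python
def build_quadtree(bounds, centroids, depth, max_depth=4):
    """Subdivide a cell of the virtual plane into four only while it
    contains projected object points; empty cells stay leaves, so regions
    without objects use no further storage.

    bounds: (xmin, ymin, xmax, ymax); centroids: list of (x, y) points.
    Returns a nested dict: {'bounds', 'points', 'children' (list or None)}.
    """
    xmin, ymin, xmax, ymax = bounds
    inside = [(x, y) for (x, y) in centroids
              if xmin <= x < xmax and ymin <= y < ymax]
    node = {'bounds': bounds, 'points': inside, 'children': None}
    if inside and depth < max_depth:
        mx, my = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
        node['children'] = [
            build_quadtree((xmin, ymin, mx, my), inside, depth + 1, max_depth),
            build_quadtree((mx, ymin, xmax, my), inside, depth + 1, max_depth),
            build_quadtree((xmin, my, mx, ymax), inside, depth + 1, max_depth),
            build_quadtree((mx, my, xmax, ymax), inside, depth + 1, max_depth),
        ]
    return node
```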
Letting the coordinates of the control point deciding
the shape of the hair be represented by (xp, yp, zp) in colli-
sion decision, the element closest to (xp, yp) is referenced
from the array constructed in advance. Since the virtual
plane is initially established so as to contain all objects,
when the coordinates of the control point exceed the range
of the array, no objects exist at that point. Since the
coordinate values z1, z2, z3, . . . of all of the
points of intersection with the polygon at the same xp, yp are
stored in this element, if it is assumed that every object is
closed and that the coordinate value z_p of the control point
is in the range

    z_p < z_1, \quad z_{2k} < z_p < z_{2k+1} \; (k = 1, 2, \ldots), \quad \text{or} \quad z_p > z_N \qquad (1)

where z_N is the last stored intersection, then this control
point necessarily exists outside all objects.
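Since the crossings of a closed surface along the vertical line come in enter/exit pairs, the outside test reduces to counting how many stored intersections lie below z_p. A sketch, assuming each buffer element keeps its intersection depths sorted:

```python
import bisect

def outside_all_objects(z_list, zp):
    """Collision decision from the precomputed buffer.

    z_list holds, in ascending order, the z coordinates where the vertical
    line through (xp, yp) pierces object surfaces. For closed objects the
    crossings pair up as enter/exit, so a point lies outside every object
    exactly when an even number of crossings is below it. Only comparison
    operations are needed at simulation time.
    """
    return bisect.bisect_left(z_list, zp) % 2 == 0
```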
6.2. Avoiding occlusion
Occlusion of hair inside the head cannot be avoided
by collision decision alone. In avoiding the occlusion of
hair, a certain procedure must be performed on the hair at
Fig. 10. Collision decision buffer.
the time of collision to prevent it from sticking inside the
head.
Observing the actual movement, if hair collides with
the head, the hair is considered to flow along the head
surface as it comes in contact with the head. Here this
phenomenon is simulated by the following method in order
to represent more natural hair movement. As shown in Fig.
12, the velocity and acceleration of node P on the object
surface can be divided into the components normal and
tangential to the object surface. The tangential component
is considered to be unrelated to occlusion of hair inside the
head. Thus, by preserving only the tangential components
of the velocity and the acceleration of this node, the node
can continue a flowing movement on the object surface and
occlusion inside the object can be prevented. In Fig. 12, N
is the normal vector to the object surface. This vector is
obtained by preparing one right-angled parallelepiped for
each polygon constituting an object in advance and decid-
ing with which right-angled parallelepiped, and hence which
polygon, the colliding node has collided. In addition, if the node
belongs to multiple right-angled parallelepipeds, the poly-
gon that minimizes the distance between the node and the
center of gravity of the polygon is taken as the colliding
polygon.
Although the tangential direction vectors of the ob-
ject exist in an infinite number and form tangential planes,
letting the velocity vector of node P be f, the tangential
component of f has the direction of the line of intersection
of the plane formed by the normal vector N and f and its
tangential plane. Thus, the tangential direction component
vector of f is expressed as

    \mathbf{t} = \mathbf{f}_N - (\mathbf{f}_N \cdot \mathbf{N}) \mathbf{N} \qquad (2)

where f_N is the unit vector of f and can be obtained as
f_N = f / |f|. Thus, the tangential component of f is given
by

    \mathbf{f}_t = |\mathbf{f}| \, \mathbf{t} = \mathbf{f} - (\mathbf{f} \cdot \mathbf{N}) \mathbf{N} \qquad (3)
Similarly, the tangential direction of the acceleration of
node P can be obtained. These components realize the
natural movement of hair by being specified as the new
velocity and acceleration of node P.
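Numerically, the projection described above just removes the normal component from the node's velocity and acceleration. A sketch:

```python
import numpy as np

def slide_along_surface(velocity, accel, normal):
    """Keep only the tangential parts of a colliding node's velocity and
    acceleration, so the node flows along the object surface instead of
    penetrating it.
    """
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    v = np.asarray(velocity, float)
    a = np.asarray(accel, float)
    v_t = v - (v @ n) * n          # tangential component of velocity
    a_t = a - (a @ n) * n          # tangential component of acceleration
    return v_t, a_t
```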
Fig. 11. Collision decision buffer by quadtree.
Fig. 12. Collision detect.
Fig. 13. Loose waves.
7. Hair Style Image Synthesis
A permanent wave hair style called "loose waves"
obtained by the scheme proposed in this paper is shown in
Fig. 13. The hair image created by the hair style design
system using a real model shown in Fig. 14(a) is shown in
Fig. 14(b). A hair style called a doll bob is created and part
of an animation sequence is shown in Fig. 15. The respec-
tive numbers of hair strands and of tuft models were 55,616
and 58 in the image shown in Fig. 13, and 78,834 and 49 in
the images in Fig. 14; the number of hair strands was
89,724, the number of tuft models was 61, and the wind
force was sinusoidal with an amplitude value of 1.3 in the
images in Fig. 15. The highlights lie naturally along the head
in Fig. 14(b). In addition, a permanent wave hair style, which
had been difficult to image up to now, has been made
possible as shown in Fig. 13. Comparing Figs. 14(a) and
14(b) reveals that a very close impression could be realized.
Fig. 14. Comparison with real image.
Fig. 15. Example of animation.
Hair naturally blown by wind could be represented by
animation as shown in Fig. 15.
About 7 minutes was required for the construction of
one frame, using a Silicon Graphics O2 (R5000, 180
MHz).
8. Conclusions
In this paper, a scheme for modeling hair in tuft units
is proposed and a hair style design system which can
construct the shape of a complicated hair style graphically
and by dialogs is proposed. A hair image can be generated
by representing a large number of strands with a small
amount of shape data by assuming a spatial curve to be a
cylindrical pipe. The thinness and smoothness of hair,
which could not be reproduced in the past, can be realized
by rendering with an image buffer whose resolution is
several times that of the display, or by α blending.
In addition, an algorithm for performing accurate judgment
of collisions between the human head and the hair is pro-
posed, together with a scheme for avoiding occlusion of
hair inside the head that preserves realistic motion.
The realism of animation and imaging of hair will be
further improved in the future by improving the perform-
ance of the motion control algorithm and the rendering
algorithm. The algorithms must be made simple and effi-
cient to make real-time syntheses possible, and optimal
trade-offs between quality and synthesis time must be
found.
In designing hair having a realistic look, a system
allowing hair styles rich in variations is required. In order
to develop such a system, modeling tools considering hair
decorations, and the like are being developed, seeking
realistic representations of loose hair, stray hair, and so on.
Tools for adjusting hair styles to represent an existing hair
style cut by scissors or partially curled are also essential.
REFERENCES
1. Yamana T, Suenaga Y. Hair representation using an-
isotropic reflection models. Tech Rep IEICE
1989;PRU87-3.
2. Tojo H, Miyahara M, Murakami K, Hirota K. Repre-
sentation of the texture of hair by computer graph-
ics: Applications of anisotropic reflection models
and normal mappings. Shingaku Giho, IE 89-34,
1989.
3. Kajiya JT, Kay TL. Rendering fur with three dimen-
sional textures. Computer Graphics (Proc SIG-
GRAPH 89) 1989;23:271–280.
4. Watanabe H, Suenaga Y. Hair generation using trian-
gular prisms and tuft models. Joho Shori Gakkai
Zentai, 5K-10, p 715–716, 1989.
5. Kobayashi S, Morishima S, Harashima H. Motion
models of filamentous objects and simulation by CG.
6th NICOGRAPH Theses Contest, p 29–36, 1990.
6. Kobayashi S, Morishima S, Harashima H. Motion
models of filamentous objects and simulation by CG.
Tech Rep IEICE 1991;PRU90-127.
7. Sugano Y, Kobayashi S, Morishima S, Harashima H.
Hair style automatic generation for hair designing.
Shingaku Shunki Zentai, D-662, 1991.
8. Saegusa F, Morishima S. Motion representation of
hair based on dynamics modeling. Informatic Graph-
ics and CAD Review, p 25–30, 1997.
9. Anjyo K, Usami H, Kurihara T. Hair representation by
three-dimensional computer graphics. Informatic
Graphics and CAD Symposium, p 127–134, 1991.
10. Thalmann N, Kurihara T, Thalmann D. An integrated
system for modeling, animating and rendering hair.
EUROGRAPHICS '93, p 211–221, 1993.
11. Chen L-H, Saeyor S, Dohi H, Ishizuka M. A system
of 3D hair style synthesis based on wisp model. The
Visual Computer 1998;15:159–170.
12. Shinya M, Forgue M-C. Interference detection
through rasterization. J Visualization and Computer
Animation 1991;2:132–134.
AUTHORS (from left to right)
Keisuke Kishi received his B.E. degree from the Department of Electrical and Electronic Engineering of Seikei University
in 1998, and is currently in the master's program. His research interests include representation of hair by computer graphics.
Shigeo Morishima (member) received his B.E., M.E., and D.Eng. degrees from the Department of Electronic Engineering
of the University of Tokyo in 1982, 1984, and 1987. He is now an associate professor in the Department of Electrical and
Electronic Engineering of Seikei University. He was a visiting research fellow at the University of Toronto in 1994–95. His
research interests include computer graphics, computer vision, and multimodal interfaces. He received a 1992 IEICE
Achievement Award, and is a member of IEEE, ACM, the Acoustic Society of Japan, the Television Society, and others.