
Capturing Facial Details by Space-time Shape-from-shading

Yung-Sheng Lo*, I-Chen Lin*, Wen-Xing Zhang*, Wen-Chih Tai†, Shian-Jun Chiou†

CAIG Lab, Dept. of CS, National Chiao Tung University*

Chunghwa Picture Tubes, LTD.†


Outline

- Introduction
- Acquisition of facial motion
- Space-time shape-from-shading
- Experiment and results
- Conclusion


Introduction

The performance-driven method is one of the most straightforward approaches to facial animation.

Expression details, e.g., wrinkles and dimples, are key factors of an expression but are difficult to acquire with motion capture.

Figures: original captured images; deformation without details.


Introduction (cont.)

Physics-based simulation and blend-shape methods try to mimic these details.

However, the synthesized details are not the performer's exact expressions.

Figures: muscle-based simulation [E. Sifakis et al. 2005]; blend shape [Z. Deng et al. 2006].


Introduction (cont.)

Our goal is to enhance existing motion capture techniques and capture facial details.

With the captured images and directional lighting, our optimization-based shape-from-shading (SFS) can estimate details from the shading in video.

Figures: captured video; result with facial details.


The proposed method

Our method combines the benefits of motion capture and shape-from-shading.

Motion capture and stereo reconstruction:
- accurate at feature points and for the overall geometry;
- correspondence matching is unreliable in textureless regions.

Shape-from-shading:
- needs no detailed point correspondences in textureless regions;
- estimates relative undulation;
- is sensitive to noise.

Therefore: motion capture + space-time shape-from-shading.


The proposed method


Approximate geometry by Mocap

Markers are tracked by block matching and reconstructed in 3D by stereo reconstruction.

A generic face model is deformed to the tracked markers with radial-basis functions (RBF), as sketched below.
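A minimal sketch (Python/NumPy) of the RBF deformation step, assuming the tracked markers are given as 3D points on the generic model and in the current frame; the kernel choice and function names are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_deform(src_markers, dst_markers, vertices, eps=1e-8):
    """Deform mesh vertices so src_markers map onto dst_markers, using a
    radial-basis interpolant with the biharmonic kernel phi(r) = r.
    src_markers, dst_markers: (n, 3); vertices: (V, 3)."""
    n = len(src_markers)
    # Kernel matrix of pairwise distances between source markers.
    phi = np.linalg.norm(src_markers[:, None, :] - src_markers[None, :, :], axis=-1)
    # Solve phi @ W = marker displacements (one column of weights per axis).
    disp = dst_markers - src_markers
    W = np.linalg.solve(phi + eps * np.eye(n), disp)
    # Evaluate the interpolant at every mesh vertex and displace it.
    dv = np.linalg.norm(vertices[:, None, :] - src_markers[None, :, :], axis=-1)
    return vertices + dv @ W
```

A full RBF/thin-plate warp usually adds an affine term as well; this sketch only shows the pure interpolation step.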


Facial details by SFS

Time-varying details are estimated by iteratively approximating the shape V and the reflectance R so that the synthesized shading matches the input images:

O(V, R) = \sum_{t=1}^{Num_T} \sum_{p=1}^{Num_P} \left( Syn_{t,p} - I_{t,p} \right)^2

where I_{t,p} is the input image intensity at pixel p of frame t, and Syn_{t,p} is the intensity synthesized from the current shape and reflectance estimates.
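A minimal sketch of this shading term, with a diffuse-only (Lambertian) synthesizer standing in for the Phong model the authors actually use; the depth-to-normal conversion assumes unit pixel spacing, and all names are illustrative.

```python
import numpy as np

def normals_from_depth(z):
    """Per-pixel unit normals of a depth map z (H, W), unit pixel spacing."""
    dzdy, dzdx = np.gradient(z)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def synthesize(z, albedo, light_dir):
    """Syn_{t,p}: diffuse shading from the current shape and reflectance."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    return albedo * np.clip(normals_from_depth(z) @ l, 0.0, None)

def shading_term(z_seq, albedo, light_dir, images):
    """Sum over frames t and pixels p of (Syn_{t,p} - I_{t,p})^2."""
    return sum(np.sum((synthesize(z, albedo, light_dir) - img) ** 2)
               for z, img in zip(z_seq, images))
```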


Space-time constraints

SFS alone is not enough. For more reliable detailed motions, we propose space-time constraints.

Figures: without constraints the result is highly sensitive to noise; after applying our spatial constraints.


Spatial constraints

A facial surface is mostly continuous, so it has high spatial coherence.

CS_{t,p} = k_{CS} \sum_{j \in Neighbor(p)} w_j \left( z_{t,p} - z_{t,j} \right)^2

- Neighbor(p) denotes the 8-neighbor pixel set of pixel p.
- w_j is an adaptive weight.
- k_{CS} is the weight for the spatial constraints.

This constraint reduces the noise.

Figure: a noisy depth value z_{t,p} compared with its neighbor depths z_{t,j}; the spatial term suppresses the noise.
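A minimal sketch of the spatial term for one frame. The adaptive weights are left as inputs because the slides only say they are adaptive; `np.roll` is used for brevity and wraps at the image border, which a real implementation would handle explicitly.

```python
import numpy as np

def spatial_term(z, weights, k_cs):
    """CS summed over all pixels p of one depth map z (H, W):
    k_cs * sum_p sum_{j in 8-neighborhood of p} w_j * (z_p - z_j)^2.
    `weights` is a sequence of 8 adaptive weight maps (or scalars),
    one per neighbor offset."""
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    total = 0.0
    for (dy, dx), w in zip(offsets, weights):
        zj = np.roll(np.roll(z, dy, axis=0), dx, axis=1)  # neighbor depths z_{t,j}
        total += np.sum(w * (z - zj) ** 2)
    return k_cs * total
```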


Temporal constraints

The result still flickers over time. According to biomechanical properties, a human facial surface should transition gradually between expressions, so we add temporal constraints.

Figure: a video image sequence (frames T0, T1, T2) in which the reconstructed surface flickers between frames.

CT_{t,p} = k_{CT} \sum_{i \in [-3, 3]} w_i \left( z_{t,p} - z_{(t+i),p} \right)^2

where z_{(t+i),p} is the depth of pixel p in frame t+i, w_i is a weight for temporal offset i, and k_{CT} is the weight for the temporal constraints.
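A minimal sketch of the temporal term for one frame, assuming `w` maps a temporal offset i in [-3, 3] to its weight (e.g., a dict); offsets that fall outside the sequence are skipped.

```python
import numpy as np

def temporal_term(z_seq, t, w, k_ct, radius=3):
    """CT for frame t: k_ct * sum_{i in [-radius, radius]} w[i] *
    sum_p (z_{t,p} - z_{t+i,p})^2, over a list of depth maps z_seq."""
    total = 0.0
    for i in range(-radius, radius + 1):
        if i == 0 or not 0 <= t + i < len(z_seq):
            continue
        total += w[i] * np.sum((z_seq[t] - z_seq[t + i]) ** 2)
    return k_ct * total
```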


Space-time shape-from-shading

Finally, our objective function becomes


O = \sum_{t=1}^{Num_T} \sum_{p=1}^{Num_P} \left[ \left( Syn_{t,p} - I_{t,p} \right)^2 + CS_{t,p} + CT_{t,p} \right]

spatial constraints + temporal constraints + shading constraints = space-time shape-from-shading
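Putting the three terms together, a sketch of the full space-time objective, reusing the hypothetical `synthesize`, `spatial_term`, and `temporal_term` helpers from the earlier sketches.

```python
import numpy as np

def spacetime_objective(z_seq, albedo, light_dir, images,
                        sp_weights, w_t, k_cs, k_ct):
    """O = sum_t sum_p [(Syn_{t,p} - I_{t,p})^2 + CS_{t,p} + CT_{t,p}]."""
    total = 0.0
    for t, (z, img) in enumerate(zip(z_seq, images)):
        total += np.sum((synthesize(z, albedo, light_dir) - img) ** 2)  # shading
        total += spatial_term(z, sp_weights, k_cs)                      # spatial
        total += temporal_term(z_seq, t, w_t, k_ct)                     # temporal
    return total
```

In practice this objective would be minimized iteratively over the depth values (and the reflectance parameters) with a generic gradient-based optimizer, matching the iterative approximation of V and R described earlier.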


Performance issue

Applying our optimization to the whole face would make the number of degrees of freedom too large.

We therefore assign several small windows, preferring areas with more wrinkles and creases (see the windowing sketch below).

D.O.F. = N × M (pixels) × i (frames)

Figure: an N × M window tracked through the video image sequence F1, F2, F3, ..., Fi.
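A small illustrative helper for the windowing idea; the window position and size here are hypothetical, chosen by hand around a wrinkle region.

```python
def crop_windows(z_seq, top, left, n, m):
    """Extract the same N x M window from every frame; the optimization then
    runs on n * m * num_frames unknowns instead of the whole face."""
    return [z[top:top + n, left:left + m].copy() for z in z_seq]

# E.g., a 40 x 60 forehead window over 30 frames gives 40 * 60 * 30 = 72,000
# unknowns, versus 1280 * 720 * 30 (about 27.6 million) for the full frames.
```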


Experiment

- Illumination-controlled setting (a single light source).
- Two video streams (HDV, 1280×720, 30 fps).
- 25 to 30 markers were pasted on the subject's face.

Figure: the two-camera capture setup, cameras {C1} and {C2}.


Facial detail results and comparison


Result of synthesis

Generic model: 6,078 vertices, 6,315 polygons.

Pipeline: deformation (RBF) → subdivision → per-pixel normal mapping (see the sketch below).
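A minimal sketch of how the refined detail could be baked into a per-pixel normal map, reusing the hypothetical `normals_from_depth` helper from the shading sketch.

```python
import numpy as np

def normal_map_from_depth(z):
    """Pack per-pixel normals of a refined depth window into an 8-bit RGB
    normal map (components remapped from [-1, 1] to [0, 255])."""
    n = normals_from_depth(z)
    return np.round((n * 0.5 + 0.5) * 255.0).astype(np.uint8)
```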


Result of animation



Conclusion & Future work

We propose capturing detailed facial motion with conventional Mocap and an advanced shape-from-shading method. It needs no additional devices or painted pigments, and it does not restrict the shape of wrinkles.

With spatial and temporal constraints, our optimization-based shape-from-shading is more reliable. Reflectance parameters are also estimated.



Conclusion & Future work (cont.)

In addition to the Phong model, we will extend the concept to other reflectance models (e.g., the Cook-Torrance BRDF, BSSRDF, etc.).

Currently, SFS is applied only to designated segments. A more efficient SFS for the whole face would make our animation more realistic.



Thank you for your attention!


Figures: forehead details, between the eyebrows, smile.