Multimedia Tools and Applications, An International Journal, ISSN 1380-7501
Multimed Tools Appl, DOI 10.1007/s11042-017-4483-6

Cropping-resilient 3D mesh watermarking based on consistent segmentation and mesh steganalysis

Han-Ul Jang1, Hak-Yeol Choi1, Jeongho Son1, Dongkyu Kim1, Jong-Uk Hou1, Sunghee Choi1, Heung-Kyu Lee1

Received: 28 July 2016 / Revised: 22 December 2016 / Accepted: 6 February 2017
© Springer Science+Business Media New York 2017

Abstract This paper presents a new approach to 3D mesh watermarking using consistent segmentation and mesh steganalysis. The method is blind, statistical, and highly robust to cropping attack. The primary watermarking domain is calculated by the shape diameter function, and the outliers of segments are eliminated by computing the confidence interval of the vertex norms. In the watermark embedding process, the mesh is divided into several segments and the same watermark is inserted into each segment. In the watermark extraction process, the final watermark among the watermark candidates extracted from multiple segments is determined through watermark trace analysis, which is a kind of mesh steganalysis. We analyze the watermark trace energy of the multiple segments of a mesh and detect the final watermark in the segment with the highest watermark trace energy. To analyze the watermark trace energy, we employ nonlinear least-squares fitting. The experimental results show that the proposed method not only achieves significantly high robustness against cropping attack, but also resists common signal processing attacks such as additive noise, quantization, smoothing, and simplification.

* Heung-Kyu Lee (corresponding author), [email protected]

Han-Ul Jang, [email protected]
Hak-Yeol Choi, [email protected]
Jeongho Son, [email protected]
Dongkyu Kim, [email protected]
Jong-Uk Hou, [email protected]
Sunghee Choi, [email protected]

1 School of Computing, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea

Keywords: 3D mesh watermarking · Segmentation · Steganalysis · Cropping

1 Introduction

With the growing prosperity of the 3D printer market and the convenience of 3D model editing tools, a large number of 3D models are being shared via various 3D printing communities such as Thingiverse [31]. The distributed 3D models are formed as 3D meshes, and the majority of them are not protected by any security technology. 3D mesh watermarking is a technique for protecting the copyright of 3D meshes, and it can be classified into non-blind and blind approaches [32]. Non-blind watermarking methods [13, 23, 24, 26, 35, 38, 39] show high robustness against various attacks and have good imperceptibility. However, they need the original mesh for registration during the watermark extraction process, so their utilization is limited. In contrast, blind mesh watermarking methods [2, 3, 5–7, 14, 18–20, 22, 27, 34, 42, 43] are practical techniques that do not require the original mesh, and they are being actively researched. Nevertheless, most blind mesh watermarking approaches are vulnerable to cropping because it breaks the synchronization of the watermarks. Even though a few watermarking methods [2, 19, 22, 42] considered cropping, they either need side information or only endure cropping of a small portion of a 3D mesh.

Cropping generally occurs during the 3D printing and editing process, and it includes 'tear off', 'cut off & paste', and 'partition', as shown in Fig. 1. For example, a unicorn model without a horn, a synthesized model, or a partition of a model can be shared without infringement of copyright because of the watermark desynchronization caused by cropping. Robustness against cropping is therefore essential for effective copyright protection of a 3D model.

The objective of this article is to propose a robust and blind 3D mesh watermarking system against cropping attack. The proposed method employs consistent segmentation [29] based on the shape diameter function (SDF) for robustness against cropping, and it supports watermark synchronization during the watermark extraction procedure. We also employ a steganalysis technique [17, 36, 37], which analyzes steganography techniques [1, 11, 12, 21, 25], to improve the detection rate of watermark extraction. While the purpose of existing steganalysis methods is to determine the presence of secret messages, the proposed method uses a steganalytic algorithm to find the area with high watermark trace energy (WTE) and to detect the final watermark in that area. Therefore, when cropping occurs, the final watermark is detected in a mesh segment that is not cropped. The method consists of segmentation, watermark insertion considering the roughness of the 3D mesh for visual masking, and watermark extraction using watermark trace analysis (WTA) to determine the final watermark.

To begin, we review relevant 3D mesh watermarking techniques in Section 2, taking special care to highlight their limitations. Then, the consistent segmentation technique is described in Section 3. The main idea of the proposed method and the proposed cropping-resilient watermarking system are explained in Sections 4 and 5, respectively. The performance of the proposed system is reported in Section 6.


2 Related work

The 3D mesh watermarking methods can be categorized as spectral-based [19, 23, 42], radial-based [5, 7, 20, 27], and segmentation-based [2, 22].

Fig. 1 Examples of cropped models: (a-1) Original model, (a-2) Cut model, (b-1) Original models, (b-2) Synthesized model, (c-1) Original model, and (c-2) Partitioned model

Ohbuchi et al. [23] proposed a non-blind watermarking method using the mesh spectral domain. The technique is robust to cropping, but it has the limitation that the original mesh is required in the watermark extraction process. Liu et al. [19] introduced a blind spectral watermarking technique using the Dirichlet manifold harmonic transform. The system needs registration to orient the parametric mesh, and the parametric mesh is partitioned into a grid of embedding regions. This method tolerates cropping attacks that remove 2% of the boundary of a mesh. Zaid et al. [42] proposed a wavelet-based watermarking technique. However, the norm of the wavelet coefficients used as the watermarking domain is not invariant to uniform scaling, which is a kind of similarity transform, and the method is only robust against cropping of a small region of a mesh.

Cho et al. [7] proposed a radial-based approach that modifies the distribution of the Euclidean distances between the vertices and the center of mass. This watermarking system employs histogram mapping functions (HMF) to revise the distribution and does not require registration for watermark extraction. Luo and Bors [5, 20] designed surface-preserving watermarking methods using the distribution of vertex norms. However, these watermarking systems are very vulnerable to cropping attacks, which disturb synchronization during the watermark extraction process. Rolland et al. [27] proposed a blind optimization-based watermarking framework. Among HMF-based watermarking techniques, their technique is relatively robust against common signal processing attacks such as noise addition, quantization, and smoothing. However, inserting a single watermark into the whole mesh has the weakness that the synchronization of the watermark detection region is damaged when a part of the mesh is cropped.

To improve the robustness against cropping attacks, segmentation-based watermarking methods have been explored. Alface et al. [2] designed a mesh watermarking algorithm to be robust to cropping attack. Their watermarking system employs geodesic distances on the mesh to detect feature points that are less sensitive to cropping. The mesh is decomposed into multiple segments, each segment being a set of neighbors of a feature point, and the system repeatedly embeds the same watermark into each segment using the statistical watermarking technique of [7]. However, the robustness relies on correct synchronization between the partitions of the embedding process and those of the extraction process. Handling this synchronization issue remains an open challenge, and the algorithm needs improvement to provide greater robustness. Mun et al. [22] proposed a mesh watermarking method using the shape diameter function. Their method resists cropping of a large area only if the segments of the cropped mesh are almost the same as those of the original mesh, because the watermarking system determines the final watermark using a majority voting system that requires almost identical segments. Table 1 summarizes the strengths and weaknesses of the main methods mentioned in this paper.

In this paper, we propose a cropping-resilient watermarking method using consistent segmentation and mesh steganalysis to handle the watermark synchronization issue. The same watermark is inserted into each segment of the 3D mesh, and the final watermark is determined among the watermarks detected in the multiple segments during watermark detection. The final watermark is the watermark detected in the segment with the highest WTE, which is the most reliable segment. The watermark trace energy is calculated by WTA using nonlinear least-squares fitting.

3 Segmentation using shape diameter function

Mesh segmentation has become an important technique in various geometric modelling and computer graphics tasks and applications. Segmentation assists parameterization, texture mapping, shape matching, morphing, multi-resolution modelling, mesh editing, compression, animation, and more [28]. Segmentation may be classified into surface-type segmentation and part-type segmentation. First, surface-type segmentation employs surface geometric properties of the mesh, such as planarity or curvature, to create surface patches. This segmentation is not intuitive for human sight because it considers geometrically meaningful patches, such as regions of minimum curvature, rather than general parts such as the head of a Horse. Second, part-type segmentation partitions the object into semantic components, creating general parts such as the legs of a Horse, as shown in Fig. 2. This segmentation is rooted in the study of human perception and is appropriate for modelling by assembling parts of objects.

Part-type segmentation is appropriate for 3D mesh watermarking because the segmentation remains consistent irrespective of cropping. The consistency is preserved by using local information rather than the whole range of the 3D mesh. The proposed watermarking method employs the consistent mesh partitioning scheme of [29], which provides both higher consistency of segment areas and faster computation than other partitioning schemes. The consistent segmentation is based on the SDF (a shape property) and distinguishes 3D object parts consistently. The target SDF value is computed by averaging the SDF values within a certain range of the target face. The segmentation remains consistent even when a large amount of the mesh is cropped.

Fig. 2 The SDF-based segmentation of a Horse model

Table 1 Strengths and weaknesses of the 3D mesh watermarking algorithms

Algorithm   Blindness   Invariant to similarity transformation   Robustness vs. common attacks   Robustness vs. cropping
[23]        no          yes                                      ++                              ++
[19]        yes         yes                                      ++                              -
[42]        yes         no                                       ++                              -
[7]         yes         yes                                      ++                              -
[20]        yes         yes                                      ++                              -
[5]         yes         yes                                      ++                              -
[27]        yes         yes                                      ++                              -
[2]         yes         yes                                      +                               +
[22]        yes         yes                                      -                               +
Proposed    yes         yes                                      +                               ++


This is because the SDF values within a local range that is not cropped are preserved. The consistent segmentation process includes three steps.

(1) Calculation of SDF values of the mesh:

Several rays are sent from each face in the inward-normal direction (the opposite side of its face normal), as shown in Fig. 3. The lengths of the rays from the face to the opposite faces are then averaged, and the average value is the SDF value of the face.
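The ray-casting step can be sketched as follows. This is a hedged illustration, not the implementation of [29] or of this paper: it assumes the trimesh library for ray casting, approximates the sampling cone with jittered directions, and rejects ray lengths far from the median, as suggested by Fig. 3.

```python
import numpy as np
import trimesh  # assumed available; any mesh library with ray casting would do

def face_sdf(mesh: trimesh.Trimesh, n_rays: int = 25, cone: float = 0.3) -> np.ndarray:
    """Per-face SDF: average length of rays cast into the mesh interior.

    Rays start at each face centroid and point opposite to the face normal,
    jittered inside a crude cone; lengths far from the median are rejected."""
    centroids = mesh.triangles_center
    normals = mesh.face_normals
    sdf = np.zeros(len(mesh.faces))
    rng = np.random.default_rng(0)
    for i, (c, n) in enumerate(zip(centroids, normals)):
        dirs = -n + cone * rng.normal(size=(n_rays, 3))        # jittered inward rays
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        origins = np.tile(c - 1e-4 * n, (n_rays, 1))           # offset to avoid self-hit
        hits, ray_idx, _ = mesh.ray.intersects_location(origins, dirs)
        if len(ray_idx) == 0:
            continue
        lengths = np.linalg.norm(hits - origins[ray_idx], axis=1)
        med, std = np.median(lengths), np.std(lengths)
        kept = lengths[np.abs(lengths - med) <= std]           # reject outlier rays
        sdf[i] = kept.mean() if len(kept) else med
    return sdf
```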

(2) Soft-clustering based on SDF values:

This action performs soft-clustering of the faces into k clusters using their SDF values. The distribution of the SDF values is fitted to a Gaussian mixture model (GMM) by performing an expectation-maximization (EM) algorithm.
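A minimal sketch of this step, assuming scikit-learn's GaussianMixture (which fits the GMM with EM); the number of clusters k and the use of posterior probabilities as the soft assignment follow the description above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # EM fitting of the GMM

def soft_cluster_sdf(sdf_values: np.ndarray, k: int = 4) -> np.ndarray:
    """Soft-cluster per-face SDF values into k clusters; returns an
    (n_faces, k) matrix of posterior probabilities, used later as the
    data term of the k-way graph cut."""
    gmm = GaussianMixture(n_components=k, random_state=0)
    gmm.fit(sdf_values.reshape(-1, 1))
    return gmm.predict_proba(sdf_values.reshape(-1, 1))
```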

(3) Actual partitioning using k-way graph-cut:

This action determines the final segmentation of the mesh. To smooth the boundaries of the segments, an alpha-expansion graph-cut algorithm [40] is employed. The k-way graph cut assigns a segment number to each face, considering both the probability vector from the EM step and the quality of the boundaries.
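The labeling itself is solved with the alpha-expansion algorithm of [40], which is not reproduced here. The sketch below only assembles plausible energy terms (a unary data term from the GMM posteriors and a dihedral-angle smoothness term; the exact cost design is an assumption) and uses a per-face argmax as a stand-in for the solver.

```python
import numpy as np

def graph_cut_terms(probs: np.ndarray, dihedral_angles: np.ndarray,
                    smooth_weight: float = 0.7):
    """Illustrative energy terms of the k-way face labeling.

    probs           : (n_faces, k) GMM posteriors from the EM step
    dihedral_angles : (n_edges,) angle between adjacent face normals, radians
    """
    unary = -np.log(probs + 1e-12)                       # data term per face/label
    # edges where adjacent face normals differ strongly are cheap to cut,
    # which encourages smooth segment boundaries
    pairwise = -smooth_weight * np.log(dihedral_angles / np.pi + 1e-12)
    labels = probs.argmax(axis=1)                        # stand-in for alpha expansion
    return unary, pairwise, labels
```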

4 Main idea of the proposed watermarking method

Being both blind and cropping-resilient is an open problem in the 3D mesh watermarking field because cropping damages the relations of the 3D shape descriptors that are usually used as the blind mesh watermarking domain. A 3D shape descriptor itself can change under operations such as rotation, scaling, translation, and vertex reordering. Hence, most blind methods need the base of the 3D mesh and the relations between the base and the 3D shape descriptors for successful watermarking.

Fig. 3 Examples of rays sent to the opposite side of the mesh, where blue rays are within the accepted range of the median while red rays are rejected outliers


The embedded watermark of the 3D mesh can be extracted without any additional information if the relations of the 3D shape descriptors are preserved. However, cropping breaks these relations over the whole region of the mesh and, as a result, desynchronization of the watermark occurs.

Cropping is categorized as a local deformation, which varies only part of a mesh rather than the whole mesh. The domain of most blind watermarking methods relies on the relationships among all parts of a mesh. For example, radial-based watermarking methods need the center of the mesh and the histogram information for watermark synchronization; however, cropping changes both the center and the histogram, as shown in Fig. 4 and Fig. 5, respectively.

To be both blind and cropping-resilient, we propose a watermarking method based on part-type segmentation and mesh steganalysis. We insert the same watermark into each segment of a mesh and detect the final watermark in the most reliable segment. The proposed method involves two main ideas. First, the proposed method employs a consistent watermarking domain. The primary watermarking domain is calculated by SDF-based segmentation [22]. However, the segmentation itself does not guarantee synchronization of the watermarking domain because the boundaries of segments fluctuate a great deal. To resolve the boundary fluctuation, the boundary vertices of segments are excluded from the watermarking domain using the confidence interval of the distribution of vertex norms. Second, the proposed method extracts the final watermark based on WTA. Segmentation-based watermarking methods can extract several watermarks when the segments are damaged by attacks such as cropping, additive noise, and so on. The false positive rate increases significantly if we simply try to match all extracted watermarks with the correct watermark; hence, determining the final watermark among the several potential watermarks is very important. The final watermark is determined through the mesh steganalysis method. The HMF used for watermarking is a type of nonlinear function; we analyze the characteristic of the HMF and calculate the WTE of each segment. The final watermark is the watermark of the segment with the largest WTE.

The proposed method consists of segmentation, watermark embedding (including roughness-based visual masking), and watermark extraction (using WTA) to determine the final watermark. The proposed method is mainly focused on providing robustness against cropping attack, but it is also robust against general attacks such as additive noise, quantization, smoothing, and simplification.

Fig. 4 Example of center change caused by cropping, where the red circle indicates the center of a mesh: (a) an Elephant model, (b) the cropped model


5 Cropping-resilient watermarking

The proposed scheme consists of embedding and extraction procedures, as shown in Fig. 6. For the embedding process, consistent segmentation is performed based on the calculation of SDF and roughness values. Then, the same watermark is inserted into each segment: a watermark bit is inserted in each bin of each segment using the improved HMF, with consideration of the roughness of the surface.

Fig. 5 Example of histogram change caused by cropping, where the red line indicates the boundary of a bin: (a) histogram of the Elephant model, (b) histogram of the cropped model

Fig. 6 Block diagrams of (a) Watermark embedding and (b) Watermark extraction


This HMF has been used for statistical watermarking in a number of methods [5, 7, 20, 41]. For the proposed extraction process, a watermarked mesh is segmented and the vertices of each segment are divided into bins. Then, the mean of the vertex norms is calculated, and a watermark bit is extracted from each bin of each segment. Finally, the WTE of each segment is calculated, and the extracted bit-sequence of the segment with the largest WTE is determined as the final watermark. The proposed method can extract the final watermark even if a large amount of the mesh has been cropped.

5.1 Watermark embedding procedure

The watermark embedding procedure is illustrated in Fig. 6(a). The watermark bit-sequence is repeatedly inserted into each segment so that it survives cropping attacks. The specific embedding algorithm is as follows.

1) Step 1. Calculation of roughness values:

Roughness is a local measure of geometric noise on the surface, and it can be employed to optimize the trade-off between robustness and imperceptibility [15]. The watermarking system embeds the watermark with consideration of the surface roughness of the 3D mesh. The roughness value r of each vertex is computed using a local roughness measure based on curvature analysis of local windows of the 3D mesh. The roughness values are employed for visual masking in Step 5.

Fig. 7 Distribution of vertex norms obtained from a segment of the Elephant model, where red dashed lines indicate $\varepsilon_{i,low}$ and $\varepsilon_{i,upper}$ of the segment
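A minimal roughness stand-in is sketched below. It is not the local-window curvature measure of [15]: as an assumption, roughness is approximated by the mean angular deviation between a vertex normal and the normals of its 1-ring neighbours (trimesh is assumed for connectivity), normalized to [0, 1] so that it can act as r in Step 5.

```python
import numpy as np
import trimesh

def vertex_roughness(mesh: trimesh.Trimesh) -> np.ndarray:
    """Simplified roughness proxy: mean angle between each vertex normal and
    the normals of its 1-ring neighbours, scaled to [0, 1]."""
    vn = mesh.vertex_normals
    r = np.zeros(len(mesh.vertices))
    for i, nbrs in enumerate(mesh.vertex_neighbors):
        if not nbrs:
            continue
        cos = np.clip(vn[nbrs] @ vn[i], -1.0, 1.0)
        r[i] = np.arccos(cos).mean()
    return r / (r.max() + 1e-12)
```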

2) Step 2. Segmentation based on SDF:

The SDF values of the mesh are computed by calculating the distances from the faces to the opposite faces and averaging them. Then, the distribution of the SDF values is fitted to a Gaussian mixture model (GMM) by performing an expectation-maximization (EM) algorithm [4]. The actual partition is decided using the k-way graph cut. The input mesh O is decomposed into L segments such that each segment S_i (i = 1, ..., L) is used for embedding a watermark bit-sequence.

3) Step 3. Bin division:

The vertex set V_i of the segment S_i is divided into M bins, and a watermark bit is inserted into each bin. The range of bins to be embedded should be properly selected to avoid the outliers that disturb the synchronization between watermark embedding and watermark extraction. As an example, Fig. 7 shows the distribution of a segment of an Elephant model. The distribution of a segment is close to a normal distribution because the boundary of the segment is cut and only a few vertices lie on the boundary. The synchronization is effectively preserved by removing the outliers using the confidence interval. We denote the center of gravity of S_i and the k-th vertex of S_i as $c_i = (x_{i,g}, y_{i,g}, z_{i,g})$ and $v_{i,k} = (x_{i,k}, y_{i,k}, z_{i,k})$, respectively. The Cartesian coordinates of a vertex $v_{i,k} \in V_i$ are converted into spherical coordinates $(\rho_{i,k}, \theta_{i,k}, \phi_{i,k})$ as follows:

$$\rho_{i,k} = \left\| c_i - v_{i,k} \right\|, \qquad
\theta_{i,k} = \tan^{-1}\!\frac{y_{i,k} - y_{i,g}}{x_{i,k} - x_{i,g}}, \qquad
\phi_{i,k} = \cos^{-1}\!\frac{z_{i,k} - z_{i,g}}{\rho_{i,k}} \qquad (1)$$
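A small sketch of Eq. (1); `vertices` and `center` are hypothetical per-segment arrays, and numpy's arctan2 is used as the numerically robust form of the arctangent of the ratio.

```python
import numpy as np

def to_spherical(vertices: np.ndarray, center: np.ndarray):
    """Eq. (1): spherical coordinates of segment vertices about the segment
    centre of gravity."""
    d = vertices - center
    rho = np.linalg.norm(d, axis=1)
    theta = np.arctan2(d[:, 1], d[:, 0])            # tan^-1 of (y - y_g)/(x - x_g)
    phi = np.arccos(np.clip(d[:, 2] / np.maximum(rho, 1e-12), -1.0, 1.0))
    return rho, theta, phi
```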

The vertex norms of V_i are grouped into M bins according to their distances from the center of the segment. The j-th bin of S_i is defined as:

$$B_{i,j} = \left\{ \rho_{i,k} \in S_i \;\middle|\;
\varepsilon_{i,low} + (j + M\vartheta - 1)\,\frac{\varepsilon_{i,upper} - \varepsilon_{i,low}}{M}
\le \rho_{i,k} <
\varepsilon_{i,low} + (j - M\vartheta)\,\frac{\varepsilon_{i,upper} - \varepsilon_{i,low}}{M} \right\} \qquad (2)$$

where i = 1, ..., L, j = 1, ..., M, and k = 1, ..., K_{i,j}. B_{i,j} is the j-th bin of S_i and it contains K_{i,j} vertex norms. $\varepsilon_{i,low}$ and $\varepsilon_{i,upper}$ are the lower and upper endpoints, respectively, of the distribution of $\rho_i$. Here, $\vartheta \in [0, \vartheta_{max}]$ is a trimming ratio [5] that prevents crypto-attackers from guessing the watermark code and breaking the watermark. The value $\vartheta$ is generated by a secret key.


To remove the outliers that disturb the watermark synchronization, $\varepsilon_{i,low}$ and $\varepsilon_{i,upper}$ are computed as:

$$\varepsilon_{i,low} = \mu_i - Z_{1-\alpha_c/2}\,\frac{\sigma_i}{\sqrt{n_i}}, \qquad
\varepsilon_{i,upper} = \mu_i + Z_{1-\alpha_c/2}\,\frac{\sigma_i}{\sqrt{n_i}} \qquad (3)$$

where $\alpha_c$ is the significance level and the confidence interval is stated as $100(1-\alpha_c)\%$. $Z_{1-\alpha_c/2}$ is the $1-\alpha_c/2$ critical value of the standard normal distribution, and $\mu_i$ and $\sigma_i/\sqrt{n_i}$ are the mean and the standard error of the vertex norms $\rho_i$ of $S_i$, respectively. The proposed method employs a 90% confidence interval for determining the bin range, and $\alpha_c$ is set to 0.1.
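The bin construction of Eqs. (2)-(3) can be sketched as follows (scipy is assumed for the normal critical value); `theta_trim` plays the role of the secret trimming ratio $\vartheta$.

```python
import numpy as np
from scipy.stats import norm

def bin_vertices(rho: np.ndarray, M: int, alpha_c: float = 0.1,
                 theta_trim: float = 0.0):
    """Assign the vertex norms of one segment to M bins, Eqs. (2)-(3)."""
    z = norm.ppf(1.0 - alpha_c / 2.0)                          # Z_{1 - alpha_c/2}
    mu, se = rho.mean(), rho.std(ddof=1) / np.sqrt(len(rho))   # mean, standard error
    eps_low, eps_upper = mu - z * se, mu + z * se              # Eq. (3)
    width = (eps_upper - eps_low) / M
    bins = []
    for j in range(1, M + 1):                                  # Eq. (2)
        lo = eps_low + (j + M * theta_trim - 1) * width
        hi = eps_low + (j - M * theta_trim) * width
        bins.append(np.where((rho >= lo) & (rho < hi))[0])     # vertex indices of bin j
    return bins, eps_low, eps_upper
```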

4) Step 4. Vertex normalization:

The vertex norms are normalized to the range [0, 1] using:

$$\tilde{\rho}_{i,j,k} = \frac{\rho_{i,j,k} - \rho_{i,j,min}}{\rho_{i,j,max} - \rho_{i,j,min}} \qquad (4)$$

where $\tilde{\rho}_{i,j,k}$ is the normalized vertex norm, and $\rho_{i,j,min}$ and $\rho_{i,j,max}$ are the minimum and maximum values of the vertex norms in $B_{i,j}$.

5) Step 5. Mean modification reflecting the roughness values:

A watermark bit $w_{i,j}$ is inserted into each bin by modifying the mean value of the bin as:

$$\tilde{\mu}'_{i,j} = \begin{cases} 1/2 + \alpha & \text{if } w_{i,j} = +1 \\ 1/2 - \alpha & \text{if } w_{i,j} = -1 \end{cases} \qquad (5)$$

where $\alpha$ is the watermark strength, which affects the robustness and the imperceptibility of the watermark: a higher $\alpha$ provides more robustness but less imperceptibility. $\tilde{\mu}'_{i,j}$ is the watermarked mean value of the bin $B_{i,j}$.

To modify the mean of the bin, we propose using an improved HMF that reflects the roughness values for visual masking:

$$\tilde{\rho}'_{i,j,k} = \left(\tilde{\rho}_{i,j,k}\right)^{\chi_{i,j,k}}, \qquad
\chi_{i,j,k} \leftarrow \begin{cases} \chi_{i,j,k} - \Phi & \text{if } w_{i,j} = +1 \\ \chi_{i,j,k} + \Phi & \text{if } w_{i,j} = -1 \end{cases}, \qquad
\Phi = \Delta\chi \cdot (1 - \gamma) + \gamma \cdot r_{i,j,k} \qquad (6)$$

where $\rho'_{i,j,k}$, $\chi_{i,j,k}$, and $r_{i,j,k}$ are the watermarked vertex norm, the modification parameter, and the roughness value of $v_{i,j,k}$, respectively. When $w_{i,j} = +1$, $\chi_{i,j,k}$ decreases and the mean value $\mu_{i,j}$ increases. Conversely, when $w_{i,j} = -1$, $\chi_{i,j,k}$ increases and the mean value $\mu_{i,j}$ decreases. $\Delta\chi$ is the step size by which $\chi$ changes in one application of the improved HMF, and $\gamma$ is the strength of visual masking.


If $\Delta\chi$ is set too high, the distortion of the watermarked model increases noticeably. Also, if $\gamma$ is set too high, the distortion in rough areas of the mesh increases dramatically while the distortion in smooth areas decreases sharply. Therefore, it is necessary to use appropriate values of $\Delta\chi$ and $\gamma$, which are determined experimentally. The modification parameter $\chi_{i,j,k}$ depends on the roughness value $r_{i,j,k}$ of the vertex $v_{i,j,k}$. In other words, we insert a watermark bit more strongly in a rough area that is not easily perceived by the human eye, and relatively more weakly in a smooth area that is easily perceived. The HMF is applied repeatedly until the mean-modification condition is satisfied.
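A hedged sketch of Steps 4-5 on a single bin follows; `norm_rho` and `roughness` are hypothetical per-bin arrays, and the stopping rule (stop once the bin mean passes the target of Eq. (5)) is one plausible reading of the mean-modification condition.

```python
import numpy as np

def embed_bit_in_bin(norm_rho: np.ndarray, bit: int, roughness: np.ndarray,
                     alpha: float = 0.1, d_chi: float = 0.001, gamma: float = 0.2,
                     max_iter: int = 100000) -> np.ndarray:
    """Iterative improved HMF, Eqs. (5)-(6), applied to one bin.

    norm_rho  : vertex norms of the bin normalized to [0, 1] by Eq. (4)
    roughness : roughness values of the same vertices
    Returns the watermarked normalized vertex norms."""
    x = np.clip(norm_rho, 1e-12, 1.0)
    target = 0.5 + alpha if bit == +1 else 0.5 - alpha          # Eq. (5)
    chi = np.ones_like(x)
    phi = d_chi * (1.0 - gamma) + gamma * roughness             # Eq. (6)
    for _ in range(max_iter):
        y = np.power(x, chi)
        if (bit == +1 and y.mean() >= target) or (bit == -1 and y.mean() <= target):
            return y
        chi += -phi if bit == +1 else phi                       # a smaller chi raises the mean
    return np.power(x, chi)
```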

6) Step 6. Vertex reconstruction:

The watermarked vertex norms are obtained by the inverse normalization function as:

$$\rho'_{i,j,k} = \tilde{\rho}'_{i,j,k}\left(\rho_{i,j,max} - \rho_{i,j,min}\right) + \rho_{i,j,min} \qquad (7)$$

Then, the spherical coordinates of the watermarked vertices are converted to the Cartesian coordinate system to reconstruct the watermarked mesh by:

$$x'_{i,k} = \rho'_{i,k}\cos\theta_{i,k}\sin\phi_{i,k} + x_{i,g}, \qquad
y'_{i,k} = \rho'_{i,k}\sin\theta_{i,k}\sin\phi_{i,k} + y_{i,g}, \qquad
z'_{i,k} = \rho'_{i,k}\cos\phi_{i,k} + z_{i,g} \qquad (8)$$
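Eqs. (7)-(8) in code; `y` is the output of the embedding sketch above, and `theta`, `phi`, and `center` come from the conversion of Eq. (1).

```python
import numpy as np

def reconstruct_vertices(y: np.ndarray, rho_min: float, rho_max: float,
                         theta: np.ndarray, phi: np.ndarray,
                         center: np.ndarray) -> np.ndarray:
    """De-normalize the watermarked norms (Eq. (7)) and convert back to
    Cartesian coordinates about the segment centre (Eq. (8))."""
    rho_w = y * (rho_max - rho_min) + rho_min                   # Eq. (7)
    xs = rho_w * np.cos(theta) * np.sin(phi) + center[0]        # Eq. (8)
    ys = rho_w * np.sin(theta) * np.sin(phi) + center[1]
    zs = rho_w * np.cos(phi) + center[2]
    return np.column_stack([xs, ys, zs])
```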

5.2 Watermark extraction procedure

The watermark extraction procedure is illustrated in Fig. 6(b). The system extracts the watermark bit-sequence from each segment and then determines the final watermark using WTA. The specific extraction algorithm is as follows.

1) Step 1. Bin synchronization:

The same process as in the watermark embedding procedure is conducted (Steps 1 to 4 of the embedding procedure).

2) Step 2. Mean calculation:

The watermark bit is extracted by calculating the mean of the bin in each segment:

$$w''_{i,j} = \begin{cases} +1 & \text{if } \tilde{\mu}''_{i,j} > 1/2 \\ -1 & \text{if } \tilde{\mu}''_{i,j} < 1/2 \end{cases} \qquad (9)$$

where $w''_{i,j}$ and $\tilde{\mu}''_{i,j}$ are the extracted watermark bit and the mean of the j-th bin of the i-th segment, respectively. The watermark bit-sequence of each segment is obtained by concatenating the extracted watermark bits of the bins of the segment.
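A minimal sketch of the per-segment bit extraction of Eq. (9); `bins` is a hypothetical list of per-bin vertex-norm arrays produced by the same bin synchronization as in embedding.

```python
import numpy as np

def extract_segment_bits(bins: list) -> np.ndarray:
    """Eq. (9): one bit per bin from the mean of the normalized vertex norms."""
    bits = []
    for rho_bin in bins:
        x = (rho_bin - rho_bin.min()) / (rho_bin.max() - rho_bin.min() + 1e-12)
        bits.append(+1 if x.mean() > 0.5 else -1)
    return np.array(bits)
```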


3) Step 3. Final watermark decision:

This step determines the final watermark, that is, the most reliable watermark among the watermark candidates extracted from the multiple segments of the stego mesh. The final watermark of the stego mesh is determined through analysis of the watermark trace based on nonlinear least-squares fitting. Nonlinear least squares is a form of least-squares analysis used to find the coefficients of a fitted curve [9]. The nonlinear least-squares curve fitting problem has the form:

$$\min_x \|f(x)\|_2^2 = \min_x \left( f_1(x)^2 + f_2(x)^2 + \cdots + f_n(x)^2 \right) \qquad (10)$$

The improved HMF used in the watermark embedding procedure changes the data curve of a bin from a linear function to an exponential function (a kind of nonlinear function). The exponential function estimated by nonlinear curve fitting, which has the same form as the improved HMF of Eq. (6), is:

$$Y = X^{u} \qquad (11)$$

where X, Y, and u are the normalized vertex norms in the bin that is not watermarked, the normalized vertex norms in the watermarked bin, and the modification parameter of the bin estimated by nonlinear curve fitting, respectively.

The HMF modifies the mean of the bin to a target value above 1/2 when the watermark bit is +1. In this case, the exponent value u of the fitted curve of the bin decreases. On the other hand, u increases when the watermark bit is -1. Therefore, the WTE is calculated from the u distance between the exponent value u of the fitted curve of each bin and its initial value (u distance = |u - 1|). The fitted curves for a bin of an original mesh and of a watermarked mesh are shown in Fig. 8. If the u distance of a bin is larger than the watermark trace threshold τ, it increases the WTE of the segment. Fig. 9 shows that the u distances of a watermarked mesh are clearly distinguishable from those of an original mesh. The algorithm that calculates the WTE of each segment is shown in Algorithm 1. WTE_i, X_{i,j}, and Y_{i,j} are the WTE of segment S_i, the normalized vertex norms in the bin B_{i,j} that is not watermarked, and the normalized vertex norms in the watermarked bin B'_{i,j}, respectively. The proposed method only has the information of Y_{i,j} because it is blind watermarking. We assume that X_{i,j} is uniformly distributed over the unit range [0, 1] because the normalized vertex norms in a bin of the original mesh are close to uniformly distributed random variables.
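Algorithm 1 itself is not reproduced in this text; the sketch below gives one possible reading of it, assuming scipy's curve_fit for the nonlinear least-squares fit of Y = X^u, uniform quantiles as the blind stand-in for X, and accumulation of the u distance of bins exceeding τ as the WTE.

```python
import numpy as np
from scipy.optimize import curve_fit

def segment_wte(bins: list, tau: float = 0.15) -> float:
    """Watermark trace energy of one segment: fit Y = X^u per bin (Eq. (11))
    and accumulate the u distance |u - 1| of bins above the threshold tau."""
    wte = 0.0
    for rho_bin in bins:
        y = np.sort((rho_bin - rho_bin.min()) /
                    (rho_bin.max() - rho_bin.min() + 1e-12))
        x = (np.arange(1, len(y) + 1) - 0.5) / len(y)          # uniform quantiles for X
        (u,), _ = curve_fit(lambda t, p: np.power(t, p), x, y, p0=[1.0])
        if abs(u - 1.0) > tau:                                 # watermark trace found
            wte += abs(u - 1.0)
    return wte

# Eq. (12), given below, then reduces to an argmax over segments, e.g.
#   final = candidates[int(np.argmax([segment_wte(b) for b in segment_bins]))]
```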


The final watermark W_f is determined from the watermark bit-sequence of the segment with the largest WTE among all extracted watermarks W of the segments. The equation for determining the final watermark is:

$$W_f = \underset{W_i \in W}{\arg\max}\; WTE_i, \qquad 1 \le i \le L \qquad (12)$$

The watermark bit-sequence of the segment with the largest WTE is set as the final watermark W_f.

Fig. 8 Fitted curve: (a) original mesh and (b) watermarked mesh, where the blue line indicates a curve fitted to a bin of a segment of the Horse model and the black circles indicate the normalized vertex norms of the bin

Fig. 9 u distances of the fitted curves: (a) original mesh and (b) watermarked mesh, where the blue line indicates the u distances of the bins of a segment of the Horse model and the red line indicates τ, which separates bins of an original mesh from bins of a watermarked mesh


6 Experimental results

In the following we present the experimental results of the proposed watermarking method. The proposed method has been tested on six 3D mesh models: Armadillo, Elephant, Horse, Stormtrooper, Tyrannosaurus, and Unicorn, as shown in Fig. 10. Armadillo, Elephant, and Horse have mainly been used for benchmarking [33], while Stormtrooper, Tyrannosaurus, and Unicorn are publicly shared models for 3D printing [31]. Table 2 describes the characteristics of the six models. The geometric quality of the 3D mesh is measured by Metro [8] in terms of the maximum root mean square error (MRMS). The perceptual distance between an original model and a watermarked model is evaluated by the mesh structural distortion measure (MSDM) proposed in [16].

In our experiments, the number of rays, the number of clusters, and the smoothing factor are set to 25, 4, and 0.7, respectively. The watermark embedding strength α is set to 0.1, the step size for the parameter χ is initialized to 0.001, the strength of visual masking γ is set to 0.2, the watermark payload M is 16 bits, and the trimming ratio ϑ is bounded by ϑ_max = 0.15, balancing robustness, imperceptibility, and security. We employ a 90% confidence interval for determining the bin range, and α_c is set to 0.1. The watermark trace threshold τ is set to 0.15, which yields a very low false positive rate P_fp for the watermark extraction algorithm (P_fp ≤ 10^-5).

The comparison watermarking method of Mun et al. (MunWM) [22] uses the same watermarking framework as the proposed technique: MunWM divides a mesh into multiple segments and inserts a watermark in each segment. Hence, its watermarking parameters are set to be the same as those of the proposed method.

Fig. 10 Original 3D models used in the experiments: (a) Armadillo, (b) Elephant, (c) Horse, (d) Stormtrooper, (e) Tyrannosaurus, and (f) Unicorn


For MunWM, the watermark embedding strength α is set to 0.1, the step size for the parameter χ is initialized to 0.001, and the watermark payload M is set to 16 bits. The watermarking method of Rolland et al. (RollandWM) [27] inserts a single watermark into the entire mesh, which differs from the proposed technique. When its watermark strength parameter α is set to 0.1, as in the proposed method, the average MSDM of the watermarked meshes is 0.44. In this case, the mesh is distorted severely enough to be easily recognized by the human visual system, so the imperceptibility requirement is not satisfied. Therefore, the degree of distortion introduced by RollandWM is matched to that of the proposed method before the robustness tests: we set the watermark strength parameter α of RollandWM to 0.03 so that the average MSDM of its watermarked meshes is 0.22. Table 3 reports the baseline imperceptibility of the proposed method. We assess the robustness against cropping and against signal processing attacks including additive random noise, quantization, smoothing, and simplification.

6.1 Evaluation of robustness against cropping

The average robustness against cropping is evaluated by calculating the mean of the BERs over five attack scenarios. The watermarked models are cropped with various vertex-reduction ratios (from 10% to 90%). The comparison of the evaluation results against cropping is summarized in Fig. 11. The proposed method shows very high robustness against cropping attacks with less than 70% vertex-reduction on average. In particular, for the Armadillo and Stormtrooper models, the robustness remains significantly high for less than 80% vertex-reduction. The comparison method, MunWM, shows lower robustness and more frequent fluctuations than the proposed method. These fluctuations occur because MunWM determines the final watermark using all watermark candidates of the segments in the mesh. If the cropped region spans several segments, the segmentation of the mesh can change; the watermark candidates of the inaccurately divided segments can then affect the final watermark, and a high BER can occur.

Table 2 Characteristics of the 3D models used in experiments

Model          Number of vertices   Number of faces
Armadillo      26,002               52,000
Elephant       24,955               49,918
Horse          19,851               39,698
Stormtrooper   19,600               39,108
Tyrannosaurus  49,999               99,994
Unicorn        49,999               99,994

Table 3 MRMS and MSDM of the proposed watermarking technique

Model          MRMS (10^-2)   MSDM
Armadillo      7.23           0.21
Elephant       0.05           0.20
Horse          0.15           0.25
Stormtrooper   0.17           0.24
Tyrannosaurus  3.53           0.23
Unicorn        4.67           0.30


Fig. 11 Robustness against cropping: (a) Armadillo, (b) Elephant, (c) Horse, (d) Stormtrooper, (e) Tyrannosaurus, (f) Unicorn


The other comparison technique, RollandWM, is observed to be very vulnerable to cropping. For RollandWM, the basis position of the mesh changes even with relatively small cropping of less than 10% of the vertices. The results demonstrate that the proposed method provides very high robustness against cropping with low fluctuation of the BER.

6.2 Evaluation of robustness against signal processing attacks

The average robustness against each signal processing attack is evaluated by calculating the mean BER over five attack scenarios on the three models used for watermark benchmarking [33]. The proposed watermark is experimentally invariant to content-preserving attacks, including vertex/face reordering in the mesh and similarity transformations (i.e., rotation, scaling, translation (RST), and their combinations), because we employ vertex norms, which are RST-invariant features.

Four signal processing attacks are evaluated in our experiments. The first attack is additive noise, where the noise amplitude is uniformly distributed in the interval [-A, A]; A, the maximum amplitude of the random additive noise, is defined relative to the average distance from the vertices to the mesh center. Second, the vertex coordinates are uniformly quantized according to a quantization bit depth: 8-bit quantization means that each coordinate is rounded to one of 256 possible levels. Third, in a smoothing attack, the mesh is processed by two smoothing steps with different iteration numbers while fixing the deformation factor λ at 0.5 [30]. Lastly, for surface simplification, we use Garland and Heckbert's quadric-error-metric-based method [10] with various vertex-reduction ratios. Close-ups of the attacked models are shown in Fig. 12.
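For reference, two of these attacks can be reproduced with a few lines of numpy; the helpers below are illustrative assumptions, not the benchmark code of [33].

```python
import numpy as np

def add_noise(vertices: np.ndarray, amp_ratio: float = 0.002, seed: int = 0) -> np.ndarray:
    """Additive uniform noise in [-A, A], with A relative to the mean distance
    from the vertices to the mesh centre."""
    rng = np.random.default_rng(seed)
    center = vertices.mean(axis=0)
    A = amp_ratio * np.linalg.norm(vertices - center, axis=1).mean()
    return vertices + rng.uniform(-A, A, size=vertices.shape)

def quantize(vertices: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform quantization of each coordinate to 2**bits levels."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    levels = 2 ** bits - 1
    q = np.round((vertices - lo) / (hi - lo + 1e-12) * levels)
    return q / levels * (hi - lo) + lo
```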

Fig. 12 Visual impact on the watermarked Armadillo model after signal processing attacks: (a) noise A = 0.2%, (b) 8-bit quantization, (c) smoothing with λ = 0.5 and 10 iterations, (d) simplification with 10% vertex-reduction


The performance of the proposed method against additive noise is shown in Fig. 13. The maximum amplitude of the noise is set to 0.2%, and the proposed scheme shows the highest robustness against additive noise; MunWM shows the lowest robustness.

Fig. 14 reports the robustness evaluation results against quantization. The watermarked models are quantized with various bit depths from 14 to 8, and the proposed method shows the highest robustness against quantization. RollandWM shows higher robustness than MunWM.

Fig. 15 presents the robustness evaluation results against smoothing. Smoothing with various iteration numbers is applied to the watermarked models, and the results show that our method is very robust below 5 iterations. However, the proposed scheme shows low robustness for smoothing with more than 5 iterations. MunWM shows higher robustness than the proposed method, but it is still vulnerable to smoothing. RollandWM shows the highest robustness; its BER increases with the number of smoothing iterations, but it remains robust against smoothing.

Fig. 16 demonstrates the robustness evaluation results for surface simplification. We test the robustness against simplification with vertex-reduction ratios from 1% to 10%. Although the proposed scheme has high robustness to simplification, it shows relatively lower robustness than RollandWM. MunWM shows the lowest robustness.

6.3 Discussion and comparison with other watermarking methods

In this subsection, we discuss the strengths and limitations of the proposed watermarking technique, and we compare it with RollandWM and MunWM.

Fig. 13 Average robustness against additive noise


Fig. 14 Average robustness against quantization

Fig. 15 Average robustness against smoothing


RollandWM is the most robust blind mesh watermarking algorithm proposed so far, and MunWM is the blind mesh watermarking technique with the highest robustness against cropping. Our method is more robust against cropping than the other watermarking methods, and it shows high robustness for cropping with 50% vertex-reduction or more. Under signal processing attacks, the proposed method is robust within a low degree of distortion that is scarcely recognizable by the human visual system. Severe distortion can affect the watermark detection because it changes the histogram of the shape diameter function values that is used for mesh segmentation. Therefore, our watermarking method is suitable for protecting 3D printing content, which can undergo a wide range of cropping without other severe distortions.

A limitation of our watermarking technique is that its payload is lower than that of other techniques. The proposed method divides the mesh into multiple segments and repeatedly inserts the watermark in each segment; therefore, its payload is lower than that of RollandWM, which inserts a single watermark into the entire mesh. However, for a dense mesh the payload can be increased, because there are enough vertices per segment to insert a watermark.

6.4 Imperceptibility assessment

In the following experiments we evaluate the imperceptibility performance on the Horse model. Fig. 17 illustrates the evaluation results when varying the watermark strength α from 0.05 to 0.3. It can be observed that the distortion of the model surface increases with increasing α.

Fig. 16 Average robustness against simplification


Fig. 18 presents the visual distortion of the model when varying the watermark capacity from 16 to 96 bits. The distortion of the model with high watermark capacity is less than that with low capacity. With a higher watermark capacity, the proposed method requires more bins to embed a watermark, so the number of vertices in each bin decreases; the mean value of a bin with few vertices can be changed with less distortion. In other words, a higher watermark capacity provides higher fidelity but less robustness against distortion attacks.

Fig. 17 Distortion with various watermark strengths α

Fig. 18 Distortion with various watermark payloads


7 Conclusions and future work

In this paper, we have presented a blind 3D mesh watermarking method that is robust against cropping attack. The synchronization of the watermark is achieved using segmentation based on the SDF, which is a highly consistent part-type segmentation technique. We employ a steganalysis technique to determine the final watermark: we analyze the WTE of each mesh segment using nonlinear curve fitting and determine the final watermark as the watermark detected in the segment with the highest WTE, which is the most reliable segment. As shown in the experiments, the proposed method achieves very high robustness against cropping compared with other watermarking methods. Moreover, the watermark is fairly robust against various signal processing attacks. However, the proposed method has several drawbacks. It is not applicable to very small models because the watermarking system needs a sufficient number of vertices to divide the segments effectively. It is also vulnerable to severe distortions that change the histogram of the shape diameter function values. In future work, we will investigate solutions to the problems caused by small models and severe distortions.

Acknowledgements This research project was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Copyright Commission in 2016.

References

1. Abu-Marie W, Gutub A, Abu-Mansour H (2010) Image based steganography using truth table based and determinate array on RGB indicator. International Journal of Signal and Image Processing (IJSIP) 1(3):196–204
2. Alface PR, Macq B, Cayre F (2007) Blind and robust watermarking of 3D models: how to withstand the cropping attack? IEEE International Conference on Image Processing, pp 465–468
3. Awad AI, Hassanien AE, Baba K (2013) A blind robust 3D-watermarking scheme based on progressive mesh and self organization maps. In: Communications in Computer and Information Science, vol 381. Springer Berlin Heidelberg, Berlin
4. Bilmes JA (1998) A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. ReCALL 4(510):126
5. Bors AG, Luo M (2013) Optimized 3D watermarking for minimal surface distortion. IEEE Trans Image Process 22(5):1822–1835
6. Chen HK, Chen WS (2016) GPU-accelerated blind and robust 3D mesh watermarking by geometry image. Multimedia Tools and Applications 75(16):10077–10096
7. Cho JW, Prost R, Jung HY (2007) An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms. IEEE Trans Signal Process 55(1):142–155
8. Cignoni P, Rocchini C (1998) Metro: measuring error on simplified surfaces. Comput Graph 17(2):167–174
9. Coleman TF, Li Y (1996) An interior, trust region approach for nonlinear minimization subject to bounds. SIAM J Optim 6(2):418–445
10. Garland M, Heckbert PS (1997) Surface simplification using quadric error metrics. SIGGRAPH, pp 209–216
11. Gutub A (2010) Pixel indicator technique for RGB image steganography. Journal of Emerging Technologies in Web Intelligence (JETWI) 2(1):56–64
12. Gutub A, Al-Qahtani A, Tabakh A (2009) Triple-A: secure RGB image steganography based on randomization. AICCSA 2009 - The 7th ACS/IEEE International Conference on Computer Systems and Applications 1(3):400–403
13. Kanai S, Date H, Kishinami T (1998) Digital watermarking for 3D polygons using multiresolution wavelet decomposition. Proc Sixth IFIP WG 5:296–307
14. Konstantinides J, Mademlis A, Daras P, Mitkas P, Strintzis M (2009) Blind robust 3-D mesh watermarking based on oblate spheroidal harmonics. IEEE Trans Multimedia 11(1):23–38
15. Lavoué G (2009) A local roughness measure for 3D meshes and its application to visual masking. ACM Transactions on Applied Perception 5(4):1–23
16. Lavoué G, Drelie Gelasca E, Dupont F, Baskurt A, Ebrahimi T (2006) Perceptually driven 3D distance metrics with application to watermarking. Proceedings SPIE 6312, Applications of Digital Image Processing XXIX, 63120L. doi:10.1117/12.686964
17. Li Z, Bors AG (2016) 3D mesh steganalysis using local shape features. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp 2144–2148
18. Liu Y, Prabhakaran B, Guo X (2008) A robust spectral approach for blind watermarking of manifold surfaces. Proceedings of the 10th ACM Workshop on Multimedia and Security - MM&Sec '08, p 43
19. Liu Y, Prabhakaran B, Guo X (2012) Spectral watermarking for parameterized surfaces. IEEE Trans Inf Forensics Secur 7(5):1459–1471
20. Luo M, Bors AG (2011) Surface-preserving robust watermarking of 3-D shapes. IEEE Trans Image Process 20(10):2813–2826
21. Marvel LM, Boncelet CG, Retter CT (1999) Spread spectrum image steganography. IEEE Trans Image Process 8(8):1075–1083
22. Mun SM, Jang HU, Kim DG, Choi SH, Lee HK (2015) A robust 3D mesh watermarking scheme against cropping. In: International Conference on 3D Imaging, pp 1–6
23. Ohbuchi R, Takahashi S, Miyazawa T, Mukaiyama A (2001) Watermarking 3D polygonal meshes in the mesh spectral domain. Graphics Interface, pp 9–17
24. Ohbuchi R, Mukaiyama A, Takahashi S (2002) A frequency-domain approach to watermarking 3D shapes. Comput Graphics Forum 21(3):373–382
25. Parvez MT, Gutub A (2008) RGB intensity based variable-bits image steganography. APSCC 2008 - Proceedings of the 3rd IEEE Asia-Pacific Services Computing Conference 1(3):196–204
26. Praun E, Hoppe H, Finkelstein A (1999) Robust mesh watermarking. SIGGRAPH, pp 49–56
27. Rolland-Nevière X, Doërr G, Alliez P (2014) Triangle surface mesh watermarking based on a constrained optimization framework. IEEE Trans Inf Forensics Secur 9(9):1491–1501
28. Shamir A (2008) A survey on mesh segmentation techniques. Comput Graphics Forum 27(6):1539–1556
29. Shapira L, Shamir A, Cohen-Or D (2008) Consistent mesh partitioning and skeletonisation using the shape diameter function. Vis Comput 24(4):249–259
30. Taubin G (1995) A signal processing approach to fair surface design. SIGGRAPH, pp 351–358
31. Thingiverse (2016). Retrieved July 26, 2016, from https://www.thingiverse.com
32. Wang K, Lavoué G, Denis F, Baskurt A (2008) A comprehensive survey on three-dimensional mesh watermarking. IEEE Trans Multimedia 10(8):1513–1527
33. Wang K, Lavoué G, Denis F, Baskurt A (2010) A benchmark for 3D mesh watermarking. Shape Modeling International Conference (SMI) 2010:231–235
34. Wang K, Lavoué G, Denis F, Baskurt A (2011) Robust and blind mesh watermarking based on volume moments. Comput Graph 35(1):1–19
35. Wu J, Kobbelt L (2005) Efficient spectral watermarking of large meshes with orthogonal basis functions. Vis Comput 21(8–10):848–857
36. Yang Y, Pintus R, Rushmeier H, Ivrissimtzis I (2014) A steganalytic algorithm for 3D polygonal meshes. In: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, pp 4782–4786
37. Yang Y, Pintus R, Rushmeier H, Ivrissimtzis I (2016) A 3D steganalytic algorithm and steganalysis-resistant watermarking. IEEE Trans Vis Comput Graph 23(2):1002–1013
38. Yin K, Pan Z, Shi J, Zhang D (2001) Robust mesh watermarking based on multiresolution processing. Comput Graph 25(3):409–420
39. Yu Z, Ip HH, Kwok L (2003) A robust watermarking scheme for 3D triangular mesh models. Pattern Recogn 36(11):2603–2614
40. Zabih R, Kolmogorov V (2004) Spatially coherent clustering using graph cuts. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol 2. CVPR, pp 437–444
41. Zafeiriou S, Tefas A, Pitas I (2005) Blind robust watermarking schemes for copyright protection of 3D mesh objects. IEEE Trans Vis Comput Graph 11(5):596–607
42. Zaid AO, Hachani M, Puech W (2015) Wavelet-based high-capacity watermarking of 3-D irregular meshes. Multimedia Tools and Applications 74(15):5897–5915
43. Zhan YZ, Li YT, Wang XY, Qian Y (2014) A blind watermarking algorithm for 3D mesh models based on vertex curvature. Journal of Zhejiang University SCIENCE C 15(5):351–362


Han-Ul Jang received the B.E. degree in information and computer engineering from Ajou University, Korea, and the M.S. degree in computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2012 and 2014, respectively. He is currently pursuing the Ph.D. degree at KAIST. His research interests include computer vision, machine learning, digital watermarking, and multimedia security.

Hak-Yeol Choi received his B.S. in computer science from Yonsei University, Korea, and his M.S. degree in computer science from Korea Advanced Institute of Science and Technology, Korea, in 2011 and 2013, respectively. He is currently working toward his Ph.D. degree in the Multimedia Computing Lab., School of Computing, KAIST. His research interests include 3D mesh watermarking, multimedia forensics, stereoscopic display, and signal processing.


Jeongho Son received his B.S. (2011) and M.S. (2013) degrees in computer science from KAIST, Korea. He is currently a Ph.D. candidate working at the Geometric Computing Lab. (GCLAB) at KAIST. His research area is concerned with computer graphics and computational geometry, including mesh watermarking, 3D printing optimization, and developable surfaces.

Dongkyu Kim received his B.S. degree in electronic engineering from Hanyang University, Seoul, Korea, in 2013, and his M.S. degree in electrical engineering from KAIST, Daejeon, Korea, in 2015. He is currently working toward his Ph.D. degree in the Multimedia Computing Lab., School of Computing, KAIST. His research interests include digital forensics and multimedia content watermarking.


Jong-Uk Hou received his B.S. degree in information and computer engineering from Ajou University, Korea, in 2012, and his M.S. degree in Web Science and Technology from Korea Advanced Institute of Science and Technology, Korea, in 2014. He is currently working toward his Ph.D. degree in the Multimedia Computing Lab., School of Computing, KAIST. He was awarded a Global Ph.D. Fellowship from the National Research Foundation of Korea in 2015. His major interests include various aspects of information hiding, multimedia signal processing, and computer vision.

Sunghee Choi received the B.S. degree in computer engineering from Seoul National University in 1995 and the M.S. and Ph.D. degrees in computer science from the University of Texas at Austin in 1997 and 2003, respectively. She is currently an associate professor in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST). Her research interests include computational geometry, geometric modeling, computer graphics, and visualization.


Heung-Kyu Lee received a B.S. degree in electronics engineering from Seoul National University, Seoul, Korea, in 1978, and M.S. and Ph.D. degrees in computer science from Korea Advanced Institute of Science and Technology, Korea, in 1981 and 1984, respectively. Since 1986 he has been a professor in the Department of Computer Science, KAIST. He has authored or coauthored over 100 international journal and conference papers. He has been a reviewer for many international journals, including Journal of Electronic Imaging, Real-Time Imaging, and IEEE Trans. on Circuits and Systems for Video Technology. His major interests are information hiding, digital watermarking, and multimedia forensics.
