
Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xian, 15-17 July, 2012

3D MODEL ESTIMATION USING A SINGLE RGB-D IMAGE

RICKY JUN-BO LI, RONG-HUA LUO, HUA-QING MIN

Intelligent Software and Robot Lab, School of Computer Science and Engineering, South China University of Technology, Guangzhou, China

E-MAIL: [email protected], rhluo@scut.edu.cn

Abstract: Model estimation is important for robotic grasping: in order to decide how to grasp an object, the robot should know where the target is and what it looks like. In this paper we propose a method for 3D model estimation using a single RGB-D image. The target object is segmented from the background using RANSAC and a convex hull algorithm, and a clustering method is designed to separate different objects. After recognizing the types of the objects, the parameters of the object models are estimated from the partially observed data and the symmetry of the objects. Although the parameter estimation process may vary for different kinds of models, the core of the method is RANSAC combined with Ordinary Least Squares (OLS). Experimental results show that our method is effective in model analysis.

Keywords: Model estimation; RGB-D image; Object segmentation; Model prediction

1. Introduction

Grasping is a basic task for a home service robot. Before grasping an object, the robot should estimate its 3D pose and model, so as to decide how to grasp it. For this purpose, our work focuses on estimating the 3D model of the object from a single RGB-D image.

Multiple kinds of sensors are used for object recognition and grasping, including cameras [1], stereo cameras [2], 3D sensors [3], and their combinations [4]. Richtsfeld proposed a vision-based grasping method that models objects with geometric shapes [5]. Collet uses SIFT features [6] to match a model in a database so as to estimate the pose of the object for grasping [7]. Since 3D sensors obtain depth information directly, reconstructing 3D models with them is more straightforward [8, 9]. In [10, 11], sample consensus is used to reconstruct a full object model or to smooth the target surface by fitting basic models. The method in [12] does not estimate the parameters of the object, but uses a shape-surface representation instead.

Our method uses RANSAC and OLS (short for Ordinary Least Squares) to estimate the 3D model of an object from the 3D point cloud obtained from a Kinect. The proposed method has been used for object grasping with a robot in our lab, as shown in Figure 1.

The remainder of this paper is organized as follows. We first explain how to segment objects from an RGB-D image in Section 2; the method for model estimation is then presented in detail in Section 3. Finally, we report our experiments.

Figure 1. Object grasping with a robot in our lab

2. Object Segmentation

Figure 2 depicts the necessary steps for segmenting the objects from the raw point cloud data acquired by the Kinect camera on our robot. Since the Kinect has been calibrated, the raw point cloud can be generated from the depth frame using a transform matrix. The task for our robot is to recognize the objects on a plane such as a table and then grasp them. The processes in the pipeline are explained step by step below.

Figure 2. The pipeline of object segmentation: preprocess → plane estimation → get the convex hull → remove outliers → distinguish different objects


2.1 Preprocessing of Point Cloud Data

The raw point cloud contains too many data points, which increases the computational cost and makes the procedure inefficient. First, we create a voxel grid over the input data; in each voxel, all the points present are approximated by their centroid. This removes many redundant data points and makes the subsequent statistics more reliable.

After that, we limit the range of distances. The depth captured by the Kinect ranges from about 1 m to 10 m, but we are only interested in objects close to the robot, so it is necessary to set a maximum distance. Here the distance is limited to 3 m; the value of this parameter depends on the length of the robot arm and on how much environment information is adequate.
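As an illustration, these two preprocessing operations map onto standard point-cloud filters. The sketch below assumes the PCL library (used later for MLS); the 1 cm leaf size and the choice of the camera z-axis as the depth direction are assumptions, while the 3 m cut-off is the value given above.

```cpp
// Minimal preprocessing sketch (assumed PCL API, not the exact implementation).
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
preprocess(const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw)
{
    // 1. Voxel-grid downsampling: each voxel is replaced by the centroid
    //    of the points it contains (1 cm leaf size is an assumption).
    pcl::PointCloud<pcl::PointXYZ>::Ptr down(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> vg;
    vg.setInputCloud(raw);
    vg.setLeafSize(0.01f, 0.01f, 0.01f);
    vg.filter(*down);

    // 2. Range limit: keep only points within 3 m along the depth axis
    //    (assumed here to be "z" in the camera frame).
    pcl::PointCloud<pcl::PointXYZ>::Ptr clipped(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(down);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.0, 3.0);
    pass.filter(*clipped);
    return clipped;
}
```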

2.2 Plane Estimation

We find the supporting plane in two steps, in a coarse-to-fine manner. First, we use the RANSAC algorithm to locate the plane and compute approximate parameters. Note that the plane we are looking for should be nearly horizontal, which means that in our coordinate system the normal of the plane should be parallel to the Y-axis; this constraint is taken into account when selecting the optimal plane. In the second step, the points belonging to the plane are used to refine the plane parameters, and here OLS is adopted. Compared with PLS (Partial Least Squares), it is easier to implement with no performance reduction in our experiments. Because RANSAC and OLS are used again later, their details are explained in the following sections.
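A minimal sketch of the coarse RANSAC step with PCL's segmentation API is given below; the perpendicular-plane model expresses the near-horizontal constraint. The 10° angular tolerance and the 1 cm distance threshold are illustrative values, not values reported in the paper.

```cpp
// Coarse table-plane search with RANSAC (assumed PCL API).
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <cmath>

pcl::ModelCoefficients::Ptr findTablePlane(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,  // preprocessed cloud
    pcl::PointIndices::Ptr inliers)
{
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE); // plane normal parallel to axis
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setAxis(Eigen::Vector3f(0.0f, 1.0f, 0.0f));      // table normal ~ Y-axis
    seg.setEpsAngle(10.0 * M_PI / 180.0);                 // angular tolerance (assumed)
    seg.setDistanceThreshold(0.01);                       // 1 cm inlier band (assumed)
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coeffs);   // coarse plane; inliers feed the OLS refinement
    return coeffs;
}
```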

2.3 Get the convex hull

What we are really interested in is the objects placed on the table, but the points of interest around the objects cannot be extracted using the plane parameters alone. In this paper the convex hull of the table is therefore used. The points are projected onto the plane of the table; if a projected point lies outside the region of the table it is removed, because all the target objects are placed on the table. We also set a safety margin to remove noise near the boundary of the table: points that are projected inside the convex hull are checked again according to the distance between the projected point and the edges of the convex hull. If this distance is smaller than a preset threshold (here 10 cm), the point is considered unreliable and is excluded, as shown in Figure 3.
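The hull-based filtering can be sketched as follows, assuming PCL's ConvexHull and ExtractPolygonalPrismData classes; the variable names and the prism height limits are illustrative, and the 10 cm safety margin from the text would still need an additional distance test against the hull edges.

```cpp
// Keep only points above the table's convex hull (assumed PCL API).
#include <pcl/point_types.h>
#include <pcl/surface/convex_hull.h>
#include <pcl/segmentation/extract_polygonal_prism_data.h>

pcl::PointIndices::Ptr pointsAboveTable(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& scene,        // preprocessed cloud
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& plane_cloud)  // table-plane inliers
{
    // Build the 2D convex hull of the table inliers.
    pcl::PointCloud<pcl::PointXYZ>::Ptr hull_pts(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ConvexHull<pcl::PointXYZ> hull;
    hull.setInputCloud(plane_cloud);
    hull.setDimension(2);
    hull.reconstruct(*hull_pts);

    // Keep scene points whose projection falls inside the hull and that lie
    // in a height band above the table (limits are assumptions).
    pcl::PointIndices::Ptr on_table(new pcl::PointIndices);
    pcl::ExtractPolygonalPrismData<pcl::PointXYZ> prism;
    prism.setInputCloud(scene);
    prism.setInputPlanarHull(hull_pts);
    prism.setHeightLimits(0.01, 0.5);
    prism.segment(*on_table);
    // The 10 cm margin near the hull boundary is not handled here.
    return on_table;
}
```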

Figure 3. (Top left and right) Plane extraction and convex hull analysis; (bottom left) points inside the convex hull (red), on the border (green), and outside (blue); (bottom right) the judgment regions (orange for inside, pale green for border, purple for outside).

2.4 Remove outliers

Using the method described above, we have removed most of the irrelevant points. However, the Kinect produces some erroneous measurements in confusing regions, and every device has its own accuracy limits. In our acquired point cloud data, some points that belong to the table escape from the table region and are consequently treated as object points. For this reason, we use a statistical method to remove them according to their isolation. As mentioned in [14], we compute the mean μ and standard deviation σ of the nearest-neighbor distances, and trim the points that fall outside μ ± α·σ. The value of α depends on the size of the analyzed neighborhood k. In our experiment, we set k = 50 and α = 0.2. An example of outlier removal is shown in Figure 4.
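A sketch of this filter using PCL's StatisticalOutlierRemoval is shown below; k = 50 and α = 0.2 follow the text, while the variable names are illustrative.

```cpp
// Statistical outlier removal based on nearest-neighbor distance statistics
// (assumed PCL API).
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr removeOutliers(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& objects_cloud)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr denoised(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(objects_cloud);
    sor.setMeanK(50);              // k = 50: size of the analyzed neighborhood
    sor.setStddevMulThresh(0.2);   // alpha = 0.2: keep points within mu ± 0.2*sigma
    sor.filter(*denoised);
    return denoised;
}
```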

2.5 Distinguish Different Objects

Since several objects may be placed on the table, before estimating their models we should segment them from each other. We divide the data points into clusters using Euclidean Cluster Extraction [15]. A point cluster O_i = {p_i ∈ P} is distinct from another point cluster O_j = {p_j ∈ P} if min ||p_i − p_j|| ≥ d_th, where d_th is a maximum imposed distance threshold. This clustering method can be sped up using a Kd-tree.


a) create a Kd-tree for the input point cloud dataset P;
b) set up an empty list of clusters C, and a queue Q of the points that need to be checked;
c) then for every point p_i ∈ P, perform the following steps:
   • add p_i to the current queue Q;
   • for every point p_i ∈ Q do:
     – search for the set P_i^k of point neighbors of p_i in a sphere with radius r < d_th;
     – for every neighbor p_i^k ∈ P_i^k, check if the point has already been processed, and if not add it to Q;
   • when the list of all points in Q has been processed, add Q to the list of clusters C, and reset Q to an empty list;
d) the algorithm terminates when all points p_i ∈ P have been processed and are now part of the list of point clusters C.

Figure 5. Algorithm of Euclidean cluster extraction

The algorithm for object segmentation is shown in Figure 5. In our experiment, the points are densely distributed on the surfaces of the objects, and setting d_th = 0.02 m (2 cm) and a minimum cluster size S_min = 100 proved effective. An example of object segmentation is shown in Figure 6.
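For reference, the same clustering is available in PCL as EuclideanClusterExtraction; the sketch below uses the parameter values from the text and assumes `denoised` is the cloud remaining after outlier removal.

```cpp
// Euclidean cluster extraction accelerated with a Kd-tree (assumed PCL API).
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <vector>

std::vector<pcl::PointIndices> clusterObjects(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& denoised)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(denoised);

    std::vector<pcl::PointIndices> clusters;   // one entry per object
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);   // d_th = 2 cm
    ec.setMinClusterSize(100);      // S_min = 100
    ec.setSearchMethod(tree);
    ec.setInputCloud(denoised);
    ec.extract(clusters);
    return clusters;
}
```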

Figure 6. Object segmentation

3. Model Estimation

After segmentation, the point clouds from different objects have been separated, which means that they can be analyzed individually. Since no extra information is available except the object point clouds and the plane parameters, we attempt to derive more valuable data. The normal vector tops the list, as it is crucial in 3D analysis. Here we use a method called MLS (moving least squares, see [16], [17]); it provides the computation of normal vectors in PCL as well as a way to smooth the object surface.
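A sketch of the MLS step with PCL's MovingLeastSquares class is shown below; the polynomial order and the 3 cm search radius are illustrative choices, not values reported in the paper.

```cpp
// MLS surface smoothing with normal estimation (assumed PCL API).
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

pcl::PointCloud<pcl::PointNormal>::Ptr smoothAndEstimateNormals(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& object_cloud)
{
    pcl::PointCloud<pcl::PointNormal>::Ptr with_normals(
        new pcl::PointCloud<pcl::PointNormal>);
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);

    pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
    mls.setInputCloud(object_cloud);   // one segmented object
    mls.setSearchMethod(tree);
    mls.setComputeNormals(true);       // normals are needed for model prediction
    mls.setPolynomialOrder(2);         // local quadric fit (assumed)
    mls.setSearchRadius(0.03);         // 3 cm neighborhood (assumed)
    mls.process(*with_normals);        // smoothed points plus normal vectors
    return with_normals;
}
```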

3.1 Model prediction using normal vector

To make our method adaptive, we design a predictor to guess which type of model the object belongs to. If the prediction is right, the analysis complexity can be reduced from O(n) to O(1) (n is the number of model types); otherwise a subsequent detection is executed to find a suitable correspondence.

For each point, we find its k nearest neighbors using a Kd-tree; here k = 3. The difference is computed using the formula below:

d(p_0) = Σ_{i=1..k} ||v_i − v_0||_2   (1)

where p_0 is the point under examination, p_i (i = 1, 2, ..., k) are the nearest neighbors of p_0, and v_i is the normal vector corresponding to p_i. The difference is accumulated from the L2 norms of the normal-vector differences. This value becomes unreliable when the point lies on an edge of the object, because the normal vector there is unstable and some neighbors lie on an adjacent face. To avoid this, we sort all the differences in ascending order, keep a representative section, and take the mean value of that section. According to our statistics, the mean value for a cylinder is higher than that for a cube. We set a decision boundary with the threshold μ_threshold = 0.07: if μ > μ_threshold, we consider the object a cylinder; otherwise it is a cube.
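The predictor can be sketched as follows, assuming the MLS output above (points with normals). Eq. (1) is evaluated with k = 3 neighbors; taking the "representative section" as the middle half of the sorted differences is our reading of the text, and only the 0.07 threshold is a value given in the paper.

```cpp
// Cylinder-vs-cube predictor based on the normal-difference statistic of Eq. (1).
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <algorithm>
#include <vector>

bool looksLikeCylinder(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud)
{
    pcl::KdTreeFLANN<pcl::PointNormal> kd;
    kd.setInputCloud(cloud);

    const int k = 3;
    std::vector<float> diffs;
    for (const auto& p0 : cloud->points) {
        std::vector<int> idx(k + 1);
        std::vector<float> dist(k + 1);
        kd.nearestKSearch(p0, k + 1, idx, dist);          // idx[0] is p0 itself
        Eigen::Vector3f n0 = p0.getNormalVector3fMap();
        float d = 0.0f;
        for (int i = 1; i <= k; ++i)                       // Eq. (1): sum of L2 norms
            d += (cloud->points[idx[i]].getNormalVector3fMap() - n0).norm();
        diffs.push_back(d);
    }
    if (diffs.empty()) return false;

    // Drop the extreme values (edge points with unstable normals) and
    // average a representative middle section (assumed: the middle half).
    std::sort(diffs.begin(), diffs.end());
    std::size_t lo = diffs.size() / 4, hi = 3 * diffs.size() / 4;
    float mean = 0.0f;
    for (std::size_t i = lo; i < hi; ++i) mean += diffs[i];
    mean /= static_cast<float>(hi - lo);
    return mean > 0.07f;   // above the threshold: predict a cylinder
}
```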

3.2 Cube estimation

The estimation method is partially similar to the plane detection described above. First, we randomly select three points to construct a plane. In general, the plane equation can be written as:

Ax + By + Cz + D = 0   (3)

There are two cases: if D = 0, the equation degenerates to Ax + By + Cz = 0; otherwise, a scale factor −1/D is applied and the plane equation becomes Ax + By + Cz − 1 = 0.

Then the distance d between the candidate plane and the points belonging to the object is calculated. The distance threshold is set to 0.007 m, so points farther than 0.7 cm are trimmed. However, it is not guaranteed that all the points within this range lie on the same plane; therefore, the angular distance α between the normal vectors of the compared points is also checked. Ideally, the angular distance is π/2 between two points that belong to adjacent faces. To increase robustness, we set the angular threshold α_threshold = π/5. The number of points that satisfy both conditions is counted, and a possible plane should contain at least 10% of the total points. All these possible planes are put into a priority queue ordered by the mean angular distance.
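The two inlier tests can be sketched with plain Eigen as below. The plane is parameterized by a unit normal n and offset d (an assumed convention), and reading the angular condition as a deviation of at most π/5 from the candidate plane normal is our interpretation of the text.

```cpp
// Count the points supporting a candidate cube face: distance below 0.7 cm
// and normal deviation below pi/5 (thresholds from the text).
#include <Eigen/Dense>
#include <cmath>
#include <vector>

int countSupport(const std::vector<Eigen::Vector3f>& pts,
                 const std::vector<Eigen::Vector3f>& normals,  // unit normals
                 const Eigen::Vector3f& n, float d)            // plane: n·x + d = 0
{
    const float kPi      = 3.14159265358979f;
    const float dist_th  = 0.007f;        // 0.7 cm
    const float angle_th = kPi / 5.0f;    // angular threshold (our interpretation)
    int support = 0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        float dist  = std::abs(n.dot(pts[i]) + d);                      // point-plane distance
        float angle = std::acos(std::min(1.0f, std::abs(n.dot(normals[i]))));  // sign-agnostic
        if (dist < dist_th && angle < angle_th) ++support;
    }
    return support;   // a valid face should hold at least 10% of all points
}
```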

We obtain various planes from RANSAC, but most of them are homologous: some planes differ from each other only by a small angle, and we should keep only one of them. The judgment is based on the mean angular distance described above. The queue pops the plane parameters sequentially; if a popped plane conflicts with the already selected planes, it is discarded (here the minimum angular distance between distinct planes is set to π/6). Finally we expect to retain three planes; redundant or missing planes cause the analysis to fail. Figure 7 shows the planes estimated by RANSAC.

Figure 7. RANSAC for plane estimation: points in red and yellow satisfy the distance condition, but only the yellow points also satisfy the direction condition.

Next, Ordinary Least Squares (OLS) is applied to adjust the plane parameters. The optimal plane parameters should minimize the following function:

f_plane = min Σ_i f_plane(p_i) = min Σ_i (A·x_i + B·y_i + C·z_i + D)²   (4)

In general, we assume D ≠ 0, which means the origin of the coordinate system does not lie on the plane (for the special case D = 0 the analysis is the same, but the coefficient matrix has to be changed). We then use a scale conversion to make D = −1. The first derivatives of f_plane should equal 0 when the parameters are optimal, and they can be written as:

∂f_plane/∂A = Σ_i 2(A·x_i + B·y_i + C·z_i − 1)·x_i   (5)

∂f_plane/∂B = Σ_i 2(A·x_i + B·y_i + C·z_i − 1)·y_i   (6)

∂f_plane/∂C = Σ_i 2(A·x_i + B·y_i + C·z_i − 1)·z_i   (7)

Here the calculation is quite simple using Cramer's Rule; the matrix form is shown below:

[ Σx_i²     Σx_i·y_i   Σx_i·z_i ]   [A]   [ Σx_i ]
[ Σx_i·y_i  Σy_i²      Σy_i·z_i ] · [B] = [ Σy_i ]   (8)
[ Σx_i·z_i  Σy_i·z_i   Σz_i²    ]   [C]   [ Σz_i ]
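Since the normal equations in (8) form a 3×3 linear system, the OLS refinement reduces to a single small solve. A sketch with Eigen, assuming D ≠ 0 and the plane written as Ax + By + Cz = 1, is shown below.

```cpp
// OLS plane refinement: solve the normal equations of Eq. (8) with Eigen.
#include <Eigen/Dense>
#include <vector>

Eigen::Vector3d fitPlaneOLS(const std::vector<Eigen::Vector3d>& pts)
{
    Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
    Eigen::Vector3d b = Eigen::Vector3d::Zero();
    for (const auto& p : pts) {
        M += p * p.transpose();   // accumulates sums of x*x, x*y, x*z, ...
        b += p;                   // accumulates sums of x, y, z
    }
    return M.colPivHouseholderQr().solve(b);   // (A, B, C) with Ax + By + Cz = 1
}
```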

After this refinement, the supporting points of each plane are used to analyze the boundaries. They are projected onto the cross line; the maximum distance between the projected points is then recorded as the length of the edge, and its two extreme points as vertexes.

The number p_n of detected planes is pivotal in our analysis. Of course, p_n = 3 is what we expect. When p_n = 1, we do not obtain enough information to estimate the three different edges of the cube, and p_n > 3 leads to a confusion that we treat as a failure. However, a special situation should be noted when p_n = 2: affected by the viewpoint or by the reflection properties of different materials, it is common to find only two planes. In this case, the third plane is constructed from the two detected planes and their cross line, and its normal is obtained by orthogonalization:

n_3 = n_1 × n_2 = det | i    j    k   |
                      | x_1  y_1  z_1 |   (9)
                      | x_2  y_2  z_2 |

where n_1 = (x_1, y_1, z_1) and n_2 = (x_2, y_2, z_2) are the normals of the two detected planes. A vertex on the cross line is then used to obtain the parameters of the third plane, and the remaining cross lines and vertexes are computed. Finally, the hidden vertexes are estimated by symmetry using simple vector addition, as shown in Figure 8.

Figure 8. The vertexes obtained by direct computation (blue) and those inferred by symmetry (red).

3.3 Cylinder estimation

The method for cylinder estimation is also based on OLS, but this time the fitted model is a circle rather than a plane, because the cylinder is assumed to stand vertically on the table plane: after projection onto that plane, the cylinder is compressed into a circle and circle OLS can be used for the estimation. With the help of the plane parameters, the 3D data points can be turned into 2D points for further analysis. In Section 2, the table plane parameters were recorded as A_p·x + B_p·y + C_p·z − 1 = 0 (assuming D_p ≠ 0), from which its normal vector n_p can also be calculated.

With the information above, a transform matrix from the current 3D world coordinate frame to the 2D plane coordinate frame can be derived from a translation matrix T and rotation matrices R_x, R_y. We combine them into the final transform matrix M = R_y·R_x·T, which aligns n_p with the Z-axis. Thus a 3D point p_3D = (x_0, y_0, z_0, 1)^T is transformed into a 2D point p_2D = (x, y, 0, 1)^T using p_2D = M·p_3D.

With the 2D points available, it is convenient to analyze them with the circle equation below:

x² − 2Ax + y² − 2By − C = 0   (10)

With this form, the circle center (A, B) and the radius R = sqrt(A² + B² + C) can be read off directly. We define the error term β as:

β = x² − 2Ax + y² − 2By − C   (11)

For simplification, Z = x² + y² is defined here; the circle OLS objective can then be written as:

Q = Σ_i β_i² = Σ_i (Z_i − 2A·x_i − 2B·y_i − C)²   (12)


Also, the first derivatives are computed:

∂Q/∂A = Σ_i 2(Z_i − 2A·x_i − 2B·y_i − C)·(−2x_i) = 0   (13)

∂Q/∂B = Σ_i 2(Z_i − 2A·x_i − 2B·y_i − C)·(−2y_i) = 0   (14)

∂Q/∂C = Σ_i 2(Z_i − 2A·x_i − 2B·y_i − C)·(−1) = 0   (15)

Their matrix form, combined with Cramer's Rule:

[ Σ2x_i²     Σ2x_i·y_i   Σx_i ]   [A]   [ Σx_i·Z_i ]
[ Σ2x_i·y_i  Σ2y_i²      Σy_i ] · [B] = [ Σy_i·Z_i ]   (16)
[ Σ2x_i      Σ2y_i       Σ1   ]   [C]   [ Σ Z_i    ]
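Equations (13)-(16) again form a 3×3 linear system, so the circle fit is a single small solve. A sketch with Eigen is given below; the input is assumed to be the points already projected onto the table plane, and only their x and y coordinates are used.

```cpp
// Circle OLS fit from Eqs. (12)-(16), solved as a 3x3 linear system.
#include <Eigen/Dense>
#include <cmath>
#include <vector>

struct Circle { double cx, cy, r; };

Circle fitCircleOLS(const std::vector<Eigen::Vector2d>& pts)
{
    Eigen::Matrix3d M   = Eigen::Matrix3d::Zero();
    Eigen::Vector3d rhs = Eigen::Vector3d::Zero();
    for (const auto& p : pts) {
        double x = p.x(), y = p.y(), z = x * x + y * y;   // Z = x^2 + y^2
        M(0,0) += 2*x*x; M(0,1) += 2*x*y; M(0,2) += x;
        M(1,0) += 2*x*y; M(1,1) += 2*y*y; M(1,2) += y;
        M(2,0) += 2*x;   M(2,1) += 2*y;   M(2,2) += 1.0;
        rhs(0) += x * z; rhs(1) += y * z; rhs(2) += z;
    }
    Eigen::Vector3d abc = M.colPivHouseholderQr().solve(rhs);   // (A, B, C)
    double A = abc(0), B = abc(1), C = abc(2);
    return { A, B, std::sqrt(A*A + B*B + C) };   // center (A, B), radius R
}
// The cylinder height is then taken as the maximum distance of the original
// 3D points to the table plane, as described in the text.
```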

Therefore, the radius and location of the projected circle have been estimated. The last step is to determine the height of the cylinder, which is done by checking the distances of all data points to the table plane. Owing to the earlier removal of outliers, the maximum distance can be taken as the height of the cylinder.

Figure 9. Points on the object are projected onto the XY-plane

4. Experimental results

We selected several objects with different model types and sizes, as shown in Figure 10. The comparison is made between the estimated parameters and the ground truth, which was measured by hand.

Figure 10. The four testing objects (from left to right: milk tea box, teapot box, stainless-steel mug, plastic beaker)

Table 1. Our four testing objects with different properties

Object | Name                | Type           | Parameters (meter)
1      | Milk tea box        | cube           | 0.15 × 0.095 × 0.05
2      | Teapot box          | cube           | 0.175 × 0.12 × 0.1
3      | Stainless-steel mug | cylinder       | height: 0.16, radius: 0.0415
4      | Plastic beaker      | truncated cone | height: 0.18, radius: 0.03–0.0375

The precision measure is based on the relative error:

e_x = |x_ob − x_truth| / x_truth × 100%   (17)

A collection of objects in various poses was analyzed; some of the results are shown in Figure 11, and the statistics are summarized in the tables below.

Figure 11. Some experimental results

According to the experimental statistics, the difference between observations and ground truth is displayed in Figure 12.

Figure 12. Stock chart of volume differences: maximum estimated volume, minimum estimated volume, and ground truth for objects 1–4

Table 2. Relative error on cubes

Relative error | length | width  | height | volume
object 1       | 2.07%  | 3.78%  | 13.89% | 15.72%
object 2       | 4.40%  | 4.62%  | 4.96%  | 11.82%


Table 3. Relative error on cylinders

Relative error | radius | height | volume
object 3       | 24.14% | 3.83%  | 40.22%
object 4       | 10.00% | 5.27%  | 15.22%

The error originates from three aspects. First, near-infrared light is used to measure depth, and its error grows with the distance between the device and the objects; since we work from a single RGB-D image, no continuous information is available to reduce this absolute error. Second, the size of an object is a pivotal factor in the relative error according to the error formula, so when comparing the three edges of a cube the relative error of the smaller dimensions is higher (we assume length > width > height to unify the measurements). The third factor is the material of the objects: object 3 has an unacceptably large radius error (24.14%), because a transparent surface cannot be detected by near-infrared light; the light passes through the surface and the estimated radius shrinks.

5. Conclusions

In this paper, we proposed a method to estimate 3D models using a single RGB-D image. Many utility algorithms are employed in the whole procedure, and the two cores are RANSAC and OLS, which are quite effective for modeling. Prediction is performed before estimation to make modeling better and faster. However, currently we can only estimate cubes and cylinders. In future work, we will estimate models with higher accuracy and try to detect more basic models as well as combinations of them.

6. Acknowledgements

This work is partially supported by the NNSF of China (No. 61005061), the Fundamental Research Funds for the Central Universities (No. 2012ZZ0067), the Foundation for Distinguished Young Talents in Higher Education of Guangdong (No. LYM08015), and the Natural Science Foundation of Guangdong Province of China (No. 9251064101000010). The authors also gratefully acknowledge the reviewers' helpful comments and suggestions, which have improved the presentation.

7. References

[1] M. Ulrich, C. Wiedemann, and C. Steger, "Cad-based recognition of 3d objects in monocular images", in International Conference on Robotics and Automation, 2009, pp. 1191-1198.

[2] U. Hillenbrand, "Pose clustering from stereo data", in Proceedings VISAPP International Workshop on Robotic Perception - RoboPerc, 2008.

[3] B. Steder, G. Grisetti, M. Van Loock, and W. Burgard,

"Robust online model-based object detection from range images", IROS, St. Louis, MO, USA, October 2009.

[4] Z.-C. Marton, D. Pangercic, N. Blodow, J. Kleinehellefort, and M. Beetz, "General 3D Modelling of Novel Objects from a Single View", in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Taiwan, 2010.

[5] M. Richtsfeld and M. Vincze, "Grasping of Unknown Objects from a Table Top", in Workshop on Vision in Action: Efficient strategies for cognitive agents in complex environments, 2008.

[6] D. G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, 2004.

[7] A. Collet, D. Berenson, S. S. Srinivasa, and D. Ferguson, "Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation", ICRA, Kobe, Japan, 2009.

[8] Y. Zhang, A. Koschan, and M. Abidi, "Superquadric Representation of Automotive Parts Applying Part Decomposition", Journal of Electronic Imaging, Special Issue on Quality Control by Artificial Vision, Vol. 13, No. 3, pp. 411-417, 2004.

[9] M. Ruhnke, B. Steder, G. Grisetti, and W. Burgard, "Unsupervised learning of 3d object models from partial views", ICRA, Kobe, Japan, 2009.

[10] N. Blodow, R. B. Rusu, Z. C. Marton, and M. Beetz, "Partial View Modeling and Validation in 3D Laser Scans for Grasping", in 9th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Paris, France, December 7-10 2009.

[11] R. Schnabel, R. Wahl, and R. Klein, "Efficient RANSAC for Point-Cloud Shape Detection", Computer Graphics Forum, vol. 26, no. 2, pp. 214-226, June 2007.

[12] R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, "Close-range Scene Segmentation and Reconstruction of 3D Point Cloud Maps for Mobile Manipulation in Human Environments", IROS, St. Louis, MO, USA, October 11-15 2009.

[13] A. E. Johnson, "Spin-images: A representation for 3-d surface matching", Tech. Rep., 1997.

[14] Z. C. Marton, R. B. Rusu, and M. Beetz, "On Fast Surface Reconstruction Methods for Large and Noisy Datasets", ICRA, Kobe, Japan, May 12-17, 2009.

[15] R. B. Rusu, "Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments", PhD Thesis , 2009.

[16] D. Levin, "The approximation power of moving least-squares", Math. Comp., Vol. 67, 1998, pp. 1517-1531.

[17] P. Lancaster and K. Salkauskas, "Surfaces generated by moving least squares methods", Mathematics of Computation, vol. 37, pp. 141-158, 1981.
