
Octree-based, automatic building façade generation from LiDAR data



Computer-Aided Design 53 (2014) 46–61


Linh Truong-Hong, Debra F. Laefer ∗

Urban Modelling Group (UMG), School of Civil, Structural and Environmental Engineering (SCSEE), University College Dublin (UCD), Newstead G27, Belfield, Dublin 4, Ireland

Highlights

• Introducing a new automatic method to detect boundary points of façade features.
• Boundary points were extracted based on a local score of data points.
• The algorithm automatically detected all openings and filled non-openings.
• This was achieved without any supplemental datasets or user knowledge.
• Building models were generally more accurate and faster than previous works.

Article info

Article history:
Received 15 August 2012
Accepted 1 March 2014

Keywords:
Light Detection and Ranging (LiDAR)
Terrestrial laser scanning
Masonry buildings
Geometric modelling
Computational modelling
Finite element modelling

Abstract

This paper introduces a new, octree-based algorithm to assist in the automated conversion of laser scanning point cloud data into solid models appropriate for computational analysis. The focus of the work is for typical, urban, vernacular structures to assist in better damage prediction prior to tunnelling. The proposed FaçadeVoxel algorithm automatically detects boundaries of building façades and their openings. Next, it checks and automatically fills unintentional occlusions. The proposed method produced robust and efficient reconstructions of building models from various data densities. When compared to measured drawings, the reconstructed building models were in good agreement, with only 1% relative errors in overall dimensions and 3% errors in openings. In addition, the proposed algorithm was significantly faster than other automatic approaches without compromising accuracy.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Laser scanning, also known as Light Detection and Ranging (LiDAR), rapidly and accurately acquires the topography of object surfaces and generates a set of data known as a point cloud. Laser scanning is gaining popularity in reverse engineering [1–3] and visualization [4,5], yet exploiting the data for structural analysis remains a topic for development [6,7]. Developing automatic, robust, and efficient methods to reconstruct geometric building models for computational modelling has the potential to save money and time for projects needing city-scale modelling.

Computational models are especially important in structural engineering when assessing the status of existing buildings or any risks related to adjacent construction works. For tunnelling projects this can be highly problematic, because the measured drawings used as the basis for the solid models needed for Finite

∗ Corresponding author.
E-mail addresses: [email protected] (L. Truong-Hong), [email protected] (D.F. Laefer).

http://dx.doi.org/10.1016/j.cad.2014.03.001
0010-4485/© 2014 Elsevier Ltd. All rights reserved.

Element Modelling (FEM) are rarely available for the vast majority of structures that comprise the architectural fabric of many historic cities. Overcoming this data gap has traditionally required the manual surveying of each structure. Along a single tunnel route, this may include hundreds of structures for each kilometre of tunnelling, thereby making the financial and temporal requirements of such surveying cost-prohibitive. For example, a study of the area along the first kilometre of the upcoming Dublin Metro identified 449 buildings potentially at risk [8]; mostly small (2–4 storey), slender, load-bearing, rectangular masonry buildings with regular features and little, if any, ornamentation [9]. Of the 449 buildings, measured drawings of some type (not necessarily complete) were available for only 27.6% of the buildings. These types of buildings are extremely vulnerable to tunnel-induced subsidence damage. During Dublin's last tunnelling project, 1 in 8 buildings along the route were damaged at a cost of nearly €5 million [10]. As such, the aim of this paper is to introduce an efficient and reliable approach for reconstructing accurate geometry from LiDAR data of load-bearing masonry buildings in a form that is compatible with FEM analysis. The proposed method can automatically


eliminate unrealistic holes due to incomplete data or occlusions. The derived building models were benchmarked against a selection of other currently available solutions.

Reconstruction often involves two main steps: segmentation and feature reconstruction. Segmentation extracts point clouds on planar features or of the façade objects in order to remove irrelevant points. The remaining portion of the point cloud is then used to reconstruct building models. In this paper, the segmentation step was conducted by employing software associated with the LiDAR scanner, whereas the building reconstruction with the necessary feature detection is the focus of the contribution herein.

2. Related works

To date, several approaches have been developed to semi-automatically (e.g. [11]) or automatically (e.g. [12,13]) reconstruct building geometry from LiDAR data (both airborne (ALS) and terrestrial (TLS)), sometimes combined with photogrammetry. The processes can be divided into either a model-driven or a data-driven approach. In the former, geometric primitives are initially described and building features are subsequently fitted based on point cloud data [14,15]. This technique has limited applicability to complex buildings and results in relatively low geometric accuracy, thereby preventing easy usage in computational models. With the data-driven technique, building boundaries and features are generated directly from the point cloud [13,16,17]. Such methods can be applied to arbitrary shapes by fitting polygons or surfaces, as is often done in reconstructing building roofs [11,18]; however, this approach is more sensitive to noise in the input data than the model-driven approach. Additionally, there are some interactive tools to support users in the fast reconstruction of building models (e.g. [19,20]).

Amongst data-driven techniques, outlines of buildings and their features (e.g. doors and windows) are commonly generated from points lying on a dataset's boundaries. To this end, Pu and Vosselman [21] identified points on boundaries of building features as the end points of triangles in triangulated irregular networks (TINs) whose lengths exceed a specified threshold. The boundary points of each feature are categorized into upper, lower, left and right groups, and a minimum-bounding rectangle was subsequently fitted to each feature. However, for the overall façade, an upper boundary line was generated from upper contour points by least-squares fitting, and the extreme vertices of the upper line were projected onto the ground plane to create two vertical boundary lines [11]. In related research, Boulaassal et al. [22] segmented points on planes by the RANdom Sample Consensus (RANSAC) algorithm and extracted contour boundary points of openings from a two-dimensional (2D) Delaunay triangulation similar to the work by Pu and Vosselman [11]. Those boundary points were transformed into parametric objects [22]. While these efforts successfully extracted sufficient boundary points to generate outline polygons of major features, they were highly sensitive to user-defined length thresholds and generated varying levels of geometric accuracy.

The quality of fitting polygons to building features has been improved by combining various data sources. For example, Becker and Haala [12] generated boundary lines derived from a photograph using a Sobel filter algorithm and from TLS data using a half-disc approach. These boundary lines were then combined to improve accuracy, while airborne LiDAR and 2D ground plans provided the building's outlines. The façade details were improved by integrating boundary lines of openings into the building models [12,23]. In that approach, reconstructing building models involved integrating the boundary lines for cell decomposition. Opening details were subsequently refined by integrating image data. Similarly, Pu and Vosselman [11] used a Canny extractor for detecting

Fig. 1. Octree representation.

feature edges from a photograph and Delaunay triangulation for extracting boundary points from the TLS data. The quality of the resulting models depended on the accuracy in mapping the boundary lines and the availability of such data. Despite these significant advances, a fully automatic approach for reconstructing highly detailed and accurate building models strictly from LiDAR data still remains a challenge [1].

Alternatively, volumetric methods have been employed to reconstruct object surfaces. In such approaches, the object's surfaces can be determined from a signed distance function (i.e. the distance from an arbitrary point to its oriented target plane [24]), an oriented charge (e.g. the linear local distance field and a normal field between a point and its neighbours [25]), or a radial basis function, which approximates multivariable functions by a linear combination of terms based on a single, univariate function [26].

Using an altogether different method, Curless and Levoy [27] employed a volumetric method to reconstruct surfaces from range images by applying a cumulative, weighted, signed distance function. This method successfully filled gaps by carving space but could not, however, generate surfaces for arbitrary objects and required a voxel size smaller than any anticipated feature for reliable detection. Consequently, the method was rather computationally expensive. To overcome this, Pulli et al. [28] used an octree representation in which the whole input data set became the initial cubical volume and was recursively subdivided into eight smaller cubes until reaching a predefined sub-division depth (Fig. 1). In that approach, each voxel was classified as ''inside'', ''boundary'', or ''outside'' by comparing the voxel's location to the sensor and range data. Consequently, the mesh of the object's surface was created as the interface shared between outside cubes and other ones. Notably, the octree has been used for the storage and compression of huge TLS point clouds [29], as well as for visualization [30].

In another approach to compute an approximate distance field, Wang et al. [25] used oriented charges for each voxel of an octree representation, which were determined by points within neighbouring voxels. However, surface features smaller than the smallest voxel size risked going undetected. To help overcome this, Dalmasso and Nerino [31] used compact radial support functions to estimate a surface based on a multi-scale, volumetric approach. Child voxels were classified as surface, inner, or outer voxels based strictly on the voxel's position with respect to the sensor. Surface voxels were continuously divided until reaching a specific scale or until the voxel contained only one data point. In an example of using octree representations in building reconstruction, Wang et al. [32] extracted sample points on opening boundaries by examining adjoining voxels of a selected voxel along the vertical and horizontal directions. Data points within the selected voxel were classified as boundary points if at least one adjoining voxel was empty. This method was successful in consistently detecting all openings but with relatively low geometric accuracy.

In summary, although many approaches have been developedto generate geometric models of building façades from LiDAR data,


Fig. 2. Workflow of the façade algorithm.

most are of limited use for computational modelling because of one or more of the following: (1) their primary aim is visualization; (2) they are largely manual or semi-automatic, and thereby rely on user experience and/or supplemental data; (3) they need extensive user interaction before being fully useable within a commercial FEM package; or (4) they lack the geometric accuracy for meaningful engineering applications [33]. In structural analysis, allowable error depends on various factors including structure type, analysis purpose, and cost. While the geometry for a structural analysis model is traditionally presumed to be completely accurate, a 5% deviation is generally considered acceptable [34]. Therefore, a viable automated approach must be able to discern object boundaries, locate major features of structural interest, and overcome localized data occlusions. The FaçadeVoxel (FV) algorithm is proposed by the authors for such a purpose and benchmarked herein for load-bearing, unreinforced masonry building façades.

3. The proposed FaçadeVoxel algorithm

The FV algorithm has three main steps: (1) use of a hierarchical data structure to build a volumetric representation of an object by employing an octree representation; (2) extraction of boundary points from data within the voxels on boundaries of holes and the façade, along with the generation of their boundary lines; and (3) re-determination of voxel properties by comparing a voxel's position to a set of derived boundary lines (Fig. 2). The geometric building model generated from this algorithm is subsequently imported directly into a finite element package for structural analysis. To achieve this, both the topology and geometry of the building model are converted into suitable data formats by a Boundary-Representation (B-Rep) [6].

3.1. Octree representation (step 1)

Sample points are defined as a set of points, Pi (i = 1, . . . , n), which have three coordinates (xi, yi, zi) ∈ R3 in a Cartesian coordinate system. Step one uses an octree representation as a volume initially bounding all sample points of the façade, where scanned data are projected onto a vertical fitting plane (Fig. 2). As a terrestrial laser scanner cannot collect the façade thickness from a single scan of a building's exterior, a preselected voxel dimension along the depth direction of the façade is currently assigned. Thus, the subsequent subdivision mechanism of the octree representation is analogous to that of a quadtree [35], where a voxel is subdivided into four smaller voxels along the length and height directions of the façade. However, as the long-term goal of this work is three-dimensional (3D) building reconstruction and its application to highly articulated buildings, as well as those of limited decoration, the structure is herein described as an octree, even though the current appearance is that of a quadtree.

Each voxel is described by its geometry, address, and status. Geometry is defined by the coordinates of each voxel's two opposite corners; for example, the corners O and O′ at the octree depth of 0 (Fig. 1). Similar to Tang [36], each node in the octree structure in the proposed algorithm has an address that reflects the relationship between adjacent nodes and the path from a current node to the root, which is represented by a string of integers (Fig. 1). Finally, the voxel's status is determined based on the number of sample points contained within the considered voxel. The voxel is classified as ''full'' if the voxel contains at least one sample point. All others are classified as ''empty''.
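The quadtree-style subdivision in the façade plane, with the full/empty status rule above, can be sketched as follows. This is a minimal illustration, not the authors' implementation: function and variable names are hypothetical, and the termination uses the minimum-voxel-size criterion described for the FV algorithm.

```python
# Sketch of quadtree-style subdivision in the facade plane (hypothetical
# names). A voxel is "full" if it contains at least one sample point,
# otherwise "empty"; full voxels are subdivided until the shortest side
# drops below the minimum voxel size.

def subdivide(points, x0, y0, x1, y1, min_voxel_size, leaves):
    """Recursively subdivide [x0,x1)x[y0,y1); append leaf voxels as
    ((x0, y0, x1, y1), status) tuples to `leaves`."""
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    status = "full" if inside else "empty"
    w, h = x1 - x0, y1 - y0
    # Stop when the voxel is empty (it directly represents hole space)
    # or when the shortest side reaches the minimum voxel size.
    if status == "empty" or min(w, h) <= min_voxel_size:
        leaves.append(((x0, y0, x1, y1), status))
        return
    xm, ym = x0 + w / 2.0, y0 + h / 2.0
    for (ax, ay, bx, by) in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                             (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        subdivide(inside, ax, ay, bx, by, min_voxel_size, leaves)
```

An opening then shows up as a cluster of ''empty'' leaf voxels surrounded by ''full'' ones, which is exactly the configuration the later boundary-detection steps look for.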

Selecting an appropriate termination criterion for an octree is not a trivial task in automatic building reconstruction. Previously, several approaches have been employed. Ayala et al. [37] selected termination based on a minimal voxel size, while Pulli et al. [28] used a predefined maximum tree depth, and Wang et al. [25] adopted a maximum threshold of sample points contained within a voxel. In the proposed FV algorithm, the minimum voxel size is considered as the termination condition for recursively sub-dividing the octree representation, where the shortest side (either horizontal or vertical) of the voxel is less than half of the expected minimum opening size. The criterion is based on the assumption that the inside of an opening is represented either by an empty


(a) Expected opening in TLS data shown in green lines.

(b) Octree with a voxel dimension of more than half of the expected minimum opening dimension.

(c) Octree with a voxel dimension of less than half of the expected minimum opening dimension.

Fig. 3. Octree representation with various defined voxel sizes.

voxel or by a cluster of empty voxels. In the example presented in Fig. 3, when a voxel's dimension is defined as greater than half of the minimum opening size of 0.4 m [38], no empty voxels appear, thereby incorrectly implying the presence of no openings (Fig. 3(b)). By applying the criterion, four empty voxels appear, thereby clearly indicating a potential opening (Fig. 3(c)). Thus, based on these premises, the required octree depth along the x- and y-directions can be expressed as Eqs. (1) and (2).

depthx = ⌈log2((xmax − xmin)/MinVoxelSize)⌉ (1)

depthy = ⌈log2((ymax − ymin)/MinVoxelSize)⌉ (2)

where xmax, xmin, ymax, ymin are the minimum and maximum coordinates of the sample input points. The term MinVoxelSize is set equal to half of the smallest façade feature expected to be detected, which in this case is a window; the sign ⌈ ⌉ means that the value is rounded up to the nearest integer. In these equations, (xmax − xmin) and (ymax − ymin) are, respectively, the length and height of the bounding box. The maximum octree depth is defined as being equal to the smaller value of depthx or depthy.
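Eqs. (1) and (2) reduce to two ceiled logarithms. A minimal sketch (the function name is hypothetical; MinVoxelSize is taken as half of a 0.4 m minimum opening, as in the text):

```python
import math

def octree_depths(xmin, xmax, ymin, ymax, min_opening_size=0.4):
    """Eqs. (1)-(2): required subdivision depths along x and y, plus the
    maximum octree depth (the smaller of the two, per the text)."""
    min_voxel_size = min_opening_size / 2.0  # half the smallest feature
    depth_x = math.ceil(math.log2((xmax - xmin) / min_voxel_size))
    depth_y = math.ceil(math.log2((ymax - ymin) / min_voxel_size))
    return depth_x, depth_y, min(depth_x, depth_y)
```

For example, a 10 m x 6 m façade with a 0.2 m minimum voxel size requires depths of ⌈log2(50)⌉ = 6 and ⌈log2(30)⌉ = 5, giving a maximum octree depth of 5.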

3.2. Extract boundary points (step 2)

Glass or highly reflective materials tend not to return a LiDARsignal. Thus, it was hypothesized that boundary points on openingscould be extracted from data points within the point cloud thatlie on the perimeter of any empty voxel and that boundary pointsof the building are on the outer perimeter of the given dataset.Specific boundary point detection strategies are presented below.

3.2.1. Boundary point detection for openings (steps 2.1a–2.3a)

The portion of the FV algorithm for detecting boundary points of realistic openings is comprised of three subparts: (1) extract candidate points, (2) determine boundary points, and (3) remove unreal openings (Fig. 2). For extracting candidate points (step 2.1a), from point clouds (Fig. 4(a)), the voxelization model was generated, and then the full voxels along the entire perimeter of the hole represented by an empty voxel cluster are detected by a flood-filling technique [39] (Fig. 4(b)). The sample points within these full voxels on the proximity boundaries of openings are identified as candidate points. However, if only a few candidate points lie on the voxel's surfaces (for example, see the dashed black line segments representing the surfaces projected onto a 2D plane in Fig. 4(b)), boundary points may be missed. To overcome this problem, other full voxels adjoining the full voxels on the opening's boundaries are added to the set of full voxels containing candidate points (dark grey rectangles in Fig. 4(b)). The candidate points are then segregated from the set of full voxels (Fig. 4(c)).
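Step 2.1a's hole detection can be sketched as a standard flood fill over the voxel grid. The paper cites a flood-filling technique [39] without giving code, so the names and the boolean-grid representation below are assumptions:

```python
from collections import deque

def empty_cluster_and_rim(grid, seed):
    """Flood-fill an empty-voxel cluster (4-connectivity) from a seed
    cell, and return (cluster, rim): the empty voxels forming the hole
    and the full voxels on its perimeter, whose sample points become the
    candidate boundary points. `grid[r][c]` is truthy for a full voxel."""
    rows, cols = len(grid), len(grid[0])
    cluster, rim, queue = set(), set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in cluster or not (0 <= r < rows and 0 <= c < cols):
            continue
        if grid[r][c]:           # full voxel: part of the rim, not the hole
            rim.add((r, c))
            continue
        cluster.add((r, c))      # empty voxel: grow the hole
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return cluster, rim
```

In the algorithm proper, the full voxels adjoining the rim would then be appended to this set before the candidate points are segregated, as described above.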

After the set of candidate points is determined, in step 2.2a, boundary points are extracted one-by-one (result shown in Fig. 4(d)). In this process, the centre of the empty voxel group (which is the average of the centres of all voxels in the group) is considered as its rotational centre, called P0. From the candidate points, the closest point to P0 in the Euclidean metric is defined as the initial reference point, P1 (Fig. 5(a)). A point in the neighbouring points {P2} of the reference point P1 is the target point if it has the smallest score resulting from combining the angle at P1 between the two vectors P1P0 and P1P2i and the perimeter of the triangle P0P1P2i (Fig. 5(b)), as determined by comparison against the other points in the neighbourhood. Details of searching for boundary points are described below.

To determine a target point, the neighbouring points of the reference point P1 were extracted from the full voxels around the hole (Fig. 4(b)) by a k-nearest neighbour (kNN) technique [40]. The vector P0P1 rotates about the point P0 to search for the boundary points around the hole. Thus, only sample points within the neighbouring points lying on a hemisphere (the grey area with filled points, Fig. 5(b)), which are called the sub-candidate points P2i ∈ {P2}, were investigated (as expressed by Eq. (3)). As the target point was determined from the sub-candidate points of the reference point, the searching process required at least 3 points, thereby making 3 triangles (P0P1P2i). This implies that with an arbitrary location of P0 and a uniform, regular distribution within the point cloud, the neighbourhood size should be 8 kNNs. However, as a uniform distribution cannot be assured due to variations in the environment, the quantity of neighbouring points for P1 was experimentally chosen as 10 points (Fig. 5(a)).

CP2 = {P2 ∈ P | P2 ∈ a hemisphere defined by P0P1} (3)

where P is the neighbourhood of the reference point, P1.

The boundary scores for the angle at P1 and the triangle perimeter of each triangle are given as Eqs. (4) and (5):

θ̄(P2i) = θ(P2i)/max(θ(P2i)) (4)

∆̄(P2i) = ∆(P2i)/max(∆(P2i)) (5)

where θ(P2i) is the angle at P1 made by the two vectors P1P0 and P1P2i, and ∆(P2i) is the perimeter of the triangle P0P1P2i, where P2i is a point in the sub-candidate points {P2} (Fig. 5(b)).


(a) TLS data. (b) Voxelization model—octree depth of 5.

(c) Candidate boundary points. (d) Resulting boundary points.

Fig. 4. Process of window reconstruction. Notes: Black and unfilled holes are, respectively, point clouds and candidate points, while diamond points are target points. Dark and light grey rectangles are full voxels containing candidate points and others, while white space represents an interior opening. Dashed black lines are voxel surfaces projected on a 2D plane.

(a) Select neighbouring points of the reference point, P1 (∗). (b) Determine target points from the sub-candidate points (+).

Fig. 5. Searching boundary points of openings (doors and windows). Notes: (∗) Black points are neighbouring points of P1; (+) sub-candidate boundary points are unfilled points in a grey hemisphere.

To increase the robustness of determining the boundary points, these boundary scores are combined into a weighted sum using Eq. (6):

s(P2i) = wθ θ̄(P2i) + w∆ ∆̄(P2i) (6)

in which wθ and w∆ are, respectively, the weights for the score of the angle at P1 and the triangle perimeter, where wθ + w∆ = 1. In the proposed algorithm, a uniform weighting was selected as the default value distribution. Once the target point is determined, it is assigned as the reference point for the next iterative detection cycle, and the process continues until the cumulative angle at P0 equals 360°.
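One iteration of the target-point selection in Eqs. (4)–(6) can be sketched as follows (2D for clarity; function and variable names are hypothetical, and the hemisphere pre-filter of Eq. (3) is assumed to have been applied already):

```python
import math

def select_target_point(P0, P1, sub_candidates, w_theta=0.5, w_delta=0.5):
    """Score each sub-candidate P2i by the angle at P1 (between vectors
    P1->P0 and P1->P2i) and the perimeter of triangle P0-P1-P2i, normalise
    each by its maximum over the candidates (Eqs. (4)-(5)), and return the
    point with the smallest weighted sum (Eq. (6), uniform weights by
    default as in the text)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def angle(p):  # angle at P1 between P1->P0 and P1->P2i
        ax, ay = P0[0] - P1[0], P0[1] - P1[1]
        bx, by = p[0] - P1[0], p[1] - P1[1]
        c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        return math.acos(max(-1.0, min(1.0, c)))

    def perimeter(p):
        return dist(P0, P1) + dist(P1, p) + dist(p, P0)

    angles = [angle(p) for p in sub_candidates]
    perims = [perimeter(p) for p in sub_candidates]
    a_max, p_max = max(angles), max(perims)
    scores = [w_theta * a / a_max + w_delta * q / p_max
              for a, q in zip(angles, perims)]
    return sub_candidates[scores.index(min(scores))]
```

Small angles and short perimeters both pull the score down, so the walk favours nearby points that hug the hole boundary rather than points deep in the surrounding cloud.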

Additionally, to eliminate unrealistic openings due to missing data or occlusions (step 2.3a), equivalent dimensions of openings are computed from the boundary points of each building feature using a histogram. Employing the approach outlined by Truong-Hong et al. [6], the pre-extracted boundary points of each opening are analysed by histograms along the x- and y-coordinates.

For a window, the boundary points are visible as two histogram peaks along the x- and y-directions corresponding to the vertical and horizontal boundary lines, respectively (Fig. 6). However, for ground floor doors and floor-to-ceiling plate glass windows, only one peak (along the histogram's y-direction) appears. Subsequently, the equivalent dimensions of the opening can be calculated according to Eqs. (7) and (8).

Ho = (Σ_{i=1..n} y_i^up)/n − (Σ_{j=1..m} y_j^down)/m (7)

Lo = (Σ_{i=1..l} x_i^left)/l − (Σ_{j=1..k} x_j^right)/k (8)


(a) Histogram for x-coordinates of boundary points along the x-direction.

(b) Histogram for y-coordinates of boundary points along the y-direction.

Fig. 6. Using histograms to determine height and length of a window.

where n and m, and k and l are, respectively, the number of boundary points belonging to the two peaks (''up'' and ''down'') along the y-histogram, and to the two peaks (''left'' and ''right'') along the x-histogram. Notably, for a hole along the ground surface, the down peak is the smallest column in the y-histogram with respect to the y-coordinate boundary points.
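Given the point groups at the histogram peaks, Eqs. (7) and (8) are simple differences of means. A sketch follows (the peak-detection step itself is omitted; the abs() is my own addition so the result is a positive length regardless of peak ordering, which Eqs. (7)–(8) leave implicit):

```python
def opening_dimensions(y_up, y_down, x_left, x_right):
    """Eqs. (7)-(8): equivalent opening height Ho and length Lo as
    differences of the mean peak coordinates of the boundary points."""
    Ho = sum(y_up) / len(y_up) - sum(y_down) / len(y_down)
    Lo = sum(x_left) / len(x_left) - sum(x_right) / len(x_right)
    # abs() guards against axis-orientation/peak-ordering conventions
    # that the equations do not spell out (an assumption here).
    return abs(Ho), abs(Lo)
```

Averaging over all points in each peak makes the equivalent dimensions robust to the scatter of individual boundary points around each edge.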

Finally, the holes are considered real openings (e.g. doors or windows) if their characteristics satisfy those of common openings in building façades, involving minimum lengths and widths equal to or greater than 0.4 m [6,7,12,21] and a height-to-length ratio between 0.25 and 5.0 [38,41], as per the function f(Ho, Lo, Ho/Lo), where Ho and Lo are the height and length of the opening, respectively (Eq. (9)):

f(Ho, Lo, Ho/Lo) =
  opening, if Ho ≥ 0.4; Lo ≥ 0.4; 0.25 ≤ Ho/Lo ≤ 5.0
  non-opening, otherwise. (9)
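Eq. (9) is a direct predicate; a one-line sketch with the thresholds from the text exposed as defaults (the function name is hypothetical):

```python
def is_opening(Ho, Lo, min_size=0.4, ratio_lo=0.25, ratio_hi=5.0):
    """Eq. (9): a hole is a real opening if both dimensions are at least
    0.4 m and the height-to-length ratio Ho/Lo lies in [0.25, 5.0]."""
    return Ho >= min_size and Lo >= min_size and ratio_lo <= Ho / Lo <= ratio_hi
```

Holes failing the test are treated as occlusions or data gaps and filled in the subsequent steps rather than modelled as openings.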

3.2.2. Boundary point detection for the façade (steps 2.1b–2.3b)

Similar to boundary point extraction for openings, the façade boundary point detection process involves the following: extracting the candidate boundary points (step 2.1b) and determining the target boundary points from the candidate points (step 2.2b) (Fig. 2). As the boundary points of the façade are on the envelope of the input data (either a convex or concave hull) (Fig. 7(a)), the process initially extracts full voxels either connected to the bounding box or outermost along the vertical and horizontal grids (dark grey voxels in Fig. 7(b)). Subsequently, the sample points contained within these full voxels are considered to be candidate points for the searching of boundary points (Fig. 7(c)). The results are shown in Fig. 7(d). Details of this process are provided below.

Analogous to detecting boundary points of the openings, the rotational centre P0 is the average of all candidate sample points. The process starts with an initial reference point, P1, which is the closest point in the Euclidean metric to the minimum x-, y-coordinates of the input data. From the reference point (P1), the neighbouring points are determined by using kNN searching with 10 kNNs (Fig. 8(a)). However, only sample points in the neighbouring points lying on a hemisphere (as determined by the ray P0P1) are considered in the search for target points. These are shown as unfilled points in Fig. 8(b) and are called the sub-candidate points {P2}. The point P2i ∈ {P2} is the target point if the angle at P1 of the triangle P0P1P2i is the greatest in the set of angles at P1 of the triangles established by P0, P1, and P2i (Fig. 8(b)). Next, the target point is assigned as the reference point for the next iterative detection, and the process is repeated until the accumulated angle at P0 equals 360° (Fig. 8(b)). The resulting detection is shown in Fig. 7(d).

3.2.3. Determine vertical and horizontal boundary lines of the façade and its openings (step 2.4)

Since the area of the convex hull of the boundary points of each opening is always smaller than that of the façade, the group of boundary points forming the convex hull with the largest area constitutes the façade's boundary points. In the FV algorithm, the boundary lines of a building façade and its openings, which are assumed to involve only vertical and horizontal lines (orthogonal to each other around boundaries and openings), are generated directly from the boundary points detected in the previous sections (steps 2.2b and 2.3a). From those, the full voxels in the octree representation generated from the TLS data (Fig. 9(a)) that contain boundary points are extracted (dark grey rectangles in Fig. 9(b)).

Next, the extracted full voxels of each building feature are divided into sub-clusters of full voxels corresponding to horizontal and vertical boundary lines by using a grid clustering technique. The voxels containing boundary points determined in the previous step and lying on the same horizontal or vertical grid are extracted. Only these grid voxels are then considered, in order to re-extract the boundary points they possess for reconstructing the boundary lines. However, the boundary points in the grid must satisfy the following conditions: (1) the maximum distance between two boundary points in the grid is not less than the minimum opening size, and (2) the distance between two adjacent boundary points belonging to adjoining full voxels is not greater than half of the opening size (herein 0.4 m is the adopted minimum opening size). Fig. 10(a) illustrates the resulting horizontal and vertical sub-clusters of full voxels for an opening. Finally, the boundary lines are determined from the boundary points within the full voxels of these voxel sub-clusters by using a least-squares method (Fig. 10(b)) [42]. Additionally, the boundary lines of each building feature are adjusted to ensure continuity by extending or trimming the boundary lines at their points of intersection (Fig. 9(c)).
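The two acceptance conditions and the least-squares fit can be illustrated as follows. This is a Python sketch under the stated 0.4 m minimum opening size; for axis-aligned boundary lines the least-squares estimate reduces to a mean, and both function names are illustrative:

```python
import numpy as np

MIN_OPENING = 0.4  # adopted minimum opening size, m

def valid_grid_cluster(coords):
    """Accept a cluster of boundary points sharing one voxel grid line:
    (1) its span must reach the minimum opening size, and (2) adjacent
    points must be no further apart than half of that size. `coords`
    are 1-D positions of the points along the grid line."""
    c = np.sort(np.asarray(coords, float))
    if c.size < 2:
        return False
    span_ok = c[-1] - c[0] >= MIN_OPENING
    gaps_ok = bool(np.all(np.diff(c) <= MIN_OPENING / 2))
    return span_ok and gaps_ok

def fit_horizontal_line(points):
    """Least-squares height of a horizontal boundary line y = c:
    minimising sum (y_i - c)^2 gives c = mean(y_i)."""
    return float(np.asarray(points, float)[:, 1].mean())

print(valid_grid_cluster([0.0, 0.15, 0.3, 0.5]))  # dense, long enough -> True
print(valid_grid_cluster([0.0, 0.5]))             # adjacent gap too large -> False
print(fit_horizontal_line([(0.0, 2.01), (0.4, 1.99), (0.8, 2.00)]))
```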

3.3. Create complete building model (step 3)

As only full voxels belonging to the solid wall are converted into the solid model for direct importation into FEM packages, voxel properties in the initial octree representation must be re-assessed. For that, a new voxelization model is generated by dividing the initial voxelization model along the boundary lines (Fig. 9(d)). The number of child voxels depends on the number of boundary lines intersecting the parent voxel: there are two child voxels if one boundary line intersects the parent voxel, and four child voxels if there are two intersections. However, no subdivision occurs if no boundary lines intersect the voxel, or if the boundary lines lie on the boundaries of the parent voxel.
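The subdivision rule can be written down directly (a Python sketch; the voxel and line encodings are illustrative, with boundary lines assumed axis-aligned as in the text):

```python
def child_voxel_count(voxel, boundary_lines):
    """Subdivision rule described above: 2 children when one boundary
    line crosses the voxel interior, 4 when two do, and no subdivision
    (returned as 1) otherwise. `voxel` is (xmin, ymin, xmax, ymax);
    boundary lines are given as ('v', x) or ('h', y). Lines lying on the
    voxel boundary do not count (strict inequalities)."""
    xmin, ymin, xmax, ymax = voxel
    crossings = 0
    for axis, value in boundary_lines:
        if axis == 'v' and xmin < value < xmax:
            crossings += 1
        elif axis == 'h' and ymin < value < ymax:
            crossings += 1
    return {0: 1, 1: 2}.get(crossings, 4)

print(child_voxel_count((0, 0, 1, 1), [('v', 0.4)]))              # 2 children
print(child_voxel_count((0, 0, 1, 1), [('v', 0.4), ('h', 0.7)]))  # 4 children
print(child_voxel_count((0, 0, 1, 1), [('v', 0.0)]))              # 1: no subdivision
```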

Page 7: Octree-based, automatic building façade generation from LiDAR data

52 L. Truong-Hong, D.F. Laefer / Computer-Aided Design 53 (2014) 46–61

(a) Input point cloud data of the façade. (b) Full voxels containing candidate points. (c) Resulting candidate points. (d) Resulting target points.

Fig. 7. Process of detecting boundary points for a building façade. Notes: Black points are the input data set; unfilled circle points are candidate points; and unfilled square points are target points. Dark grey rectangles are full voxels containing candidate points; and light grey rectangles are other full voxels.

Subsequently, each voxel in the re-voxelization model is characterized by using the Flying Voxel method proposed by Truong-Hong et al. [6], in which voxels inside openings or outside the façade are labelled as empty, and the remainder as full. Fig. 9(e) shows the re-characterized voxels of the octree voxelization in Fig. 9(d). Subsequently, the full voxels in the octree nodes are stored in a neutral file (with the file extension .ANF), whereas the topology and geometry are described via a B-Rep scheme [6].
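The effect of this re-characterization can be approximated as below. This is a hedged sketch only: the actual Flying Voxel method is defined in [6]; here the façade outline and openings are reduced to axis-aligned rectangles and a voxel is classified by its centre:

```python
def label_voxel(center, facade, openings):
    """Label a voxel 'full' only if its centre lies inside the façade
    outline and outside every opening rectangle; otherwise 'empty'.
    Rectangles are (xmin, ymin, xmax, ymax)."""
    def inside(p, r):
        return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]
    if not inside(center, facade):
        return "empty"           # outside the façade
    if any(inside(center, o) for o in openings):
        return "empty"           # inside an opening
    return "full"                # part of the solid wall

facade = (0.0, 0.0, 4.95, 12.16)   # Building 1 overall envelope (Table 2)
window = (1.0, 2.0, 2.5, 3.5)      # an invented opening rectangle
print(label_voxel((0.5, 1.0), facade, [window]))   # full
print(label_voxel((1.5, 2.5), facade, [window]))   # empty
```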

In summary, the FV algorithm was designed as a novel approach to efficiently and accurately detect sample points on an object's boundaries. In this, a sample point drawn from a voxel on an object's boundaries can be declared a boundary point only if its score is higher than those of its neighbouring points. Therefore, the proposed approach does not trawl through all input data points to identify boundary points, as is done in other fully automated processes (e.g. [6,7]). As such, the FV algorithm is proposed both to improve the accuracy of the resulting building models and to significantly reduce the computational cost.

4. Experimental tests, results, and discussion

4.1. Experimental data

To test the FV algorithm, three building façades in Dublin, Ireland were scanned with a terrestrial laser scanner. These buildings were selected for three reasons. Firstly, if the proposed algorithm cannot be shown to work reliably with relatively simple structures, there is no use trying it on more complicated ones; notably, 2D façades have to date been preferred over fully 3D ones for the structural analysis of masonry buildings. Secondly, this building type represents the vast majority of the architectural fabric of many urban centres. Thirdly, unlike for most existing structures, independent measured drawings were readily available for these structures, which were used for ground truth verification.

Point cloud data of the building façades were acquired by a Trimble GS200 unit [43]. The data collection was controlled by the affiliated proprietary software RealWorks Survey Advanced (RWS) V6.3, which was installed on a laptop linked to the scanner [44]. Although the scanner can collect data across a wide range, typically two scanner positions were needed to acquire the point clouds of each façade because of limitations caused by traffic, terrain, and footpath space. The maximum standoff distance was approximately 10 m for Building 1 (2 Anne St. South, Dublin City) and Building 2 (5 Anne St. South, Dublin City), and 30 m for Building 3 (2 Westmoreland St., Dublin City) (Figs. 11(a), 12(a) and 13(a)). The TLS data were acquired with a sampling step of 10 mm at a 100 m range for Buildings 1 and 2, and of 20 mm for Building 3. Additionally, in order to register the point clouds of each façade, three reference points were defined for each scanner station. Geometries on the façade (e.g. corners of the windows or window ledges) were selected as the reference points, instead of using special targets or nails to mark them. The reference regions were scanned with a sampling step of 2 mm at 100 m for Buildings 1 and 2, and with a 10 mm sampling step at 100 m for Building 3. The acquired data included x-, y-, z-coordinates, intensity, and RGB values. The intensity value varies according to the reflectivity of the measured material surface and the optical wavelength of the laser used; for more details of the data acquisition see Truong-Hong [45].

Pre-processing the point clouds with RWS V6.3 involved two main steps: (1) merging the scans from the multiple scanner stations, and (2) removing all irrelevant data points. Point clouds from the two scanner stations of each façade were manually merged by using the reference points of both the source and target stations. A trial and error process was undertaken, selecting pairs of points from the source and target stations until the average error between the two data sets, expressed as a distance error, was less than 5 mm. As data from internal walls/objects or occluding elements (e.g. trees and buses) are often in different planes from the façade, those data points were subsequently removed using the in-built segmentation technique within RWS V6.3 [44]. Details of the data pre-processing are fully available in Truong-Hong [45]. Notably, the processing time for this step within the RWS V6.3 software was typically less than 5 min for each of the three data sets. Finally, in order to evaluate the efficiency and robustness of the proposed algorithm and to validate the reliability of the reconstructed building models, four data sampling densities were tested for each building façade (Table 1), in which NS00 was the point cloud data set obtained directly from the pre-processing step, and the other


(a) Select neighbouring points of the reference point, P1 (*). (b) Determine target points (+).

Fig. 8. Identification of target points of a façade's boundary. Notes: (*) Black points are neighbouring points of P1; (+) sub-candidate boundary points are unfilled points in a grey hemisphere.

(a) Initial voxelization model (octree depth 6). (b) Characterized voxel model (octree depth 6). (c) Detected boundary lines of façade and its openings. (d) New voxelization model (octree depth 7). (e) Final voxelization model (octree depth 7).

Fig. 9. A building model reconstruction based on voxelization. Notes: Light grey rectangles are full voxels; dark grey rectangles are full voxels containing boundary points of the façade and its openings; and white rectangles are empty voxels. The reconstructed building model is generated from the input data of Fig. 7(a).

(a) Grouped vertical and horizontal clusters containing sample points on the boundary. (b) Proposed boundary lines of the opening.

Fig. 10. Modified grid clustering technique.

three data sets (S20, S50, S75) were re-sampled from NS00 by using the random re-sampling data function built into the RWS V6.3 software [44].
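The registration acceptance criterion used during merging (average reference-point error below 5 mm) amounts to the following check. This is an illustrative Python sketch, not the RealWorks implementation, and the coordinates are invented:

```python
import numpy as np

def registration_error(source_pts, target_pts):
    """Mean point-to-point distance (in metres) between matched
    reference points of the source and target scans. In the described
    pre-processing, point pairs were re-selected until this error fell
    below 5 mm."""
    s = np.asarray(source_pts, float)
    t = np.asarray(target_pts, float)
    return float(np.linalg.norm(s - t, axis=1).mean())

src = [(0.000, 0.000, 0.000), (4.950, 0.000, 0.003)]  # invented coordinates
tgt = [(0.002, 0.001, 0.000), (4.951, 0.002, 0.000)]
err = registration_error(src, tgt)
print(f"{err * 1000:.2f} mm", "OK" if err < 0.005 else "re-select pairs")
```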

The proposed approach was implemented in MATLAB scripts [46], and experimental tests were run on a Dell Precision Workstation T5400 with the following main system configuration: an Intel Xeon CPU (8 CPUs) at 2 GHz with 24 GB RAM. The automatically generated solid models for the three building façades are shown in Figs. 11–13 (each showing a distinct sample data density).


(a) TLS data. (b) Point cloud after cleaning. (c) Initial voxelization model. (d) Detected boundary points. (e) New voxelization model. (f) Final voxelization model.

Fig. 11. Façade reconstruction of Building 1 (4.95 m length by 12.16 m height) based on the NS00 dataset (Table 1).

(a) TLS data. (b) Point cloud after cleaning. (c) Initial voxelization model. (d) Detected boundary points. (e) New voxelization model. (f) Final voxelization model.

Fig. 12. Façade reconstruction of Building 2 (4.90 m length by 13.28 m height) based on the S20 dataset (Table 1).


(a) TLS data. (b) Point cloud after cleaning. (c) Initial voxelization model. (d) Detected boundary points. (e) New voxelization model. (f) Final voxelization model.

Fig. 13. Façade reconstruction of Building 3 (19.36 m length by 17.0 m height) based on the S50 dataset (Table 1).

(a) For façade length. (b) For façade height. (c) For façade opening area.

Fig. 14. Relative error of quantities of interest for three building façades.

Table 1
Dataset sizes.

Building                 NS00(a)    S20 (2500 points/m2)   S50 (400 points/m2)   S75 (175 points/m2)
B1-2 Anne St. South      264,931    51,171                 9,909                 4,643
B2-5 Anne St. South      190,865    51,884                 11,119                5,366
B3-2 Westmoreland St.    650,306    353,848                71,155                35,468

(a) Sampling densities of the original datasets of Buildings 1, 2 and 3 were respectively around 9.0k, 6.5k and 2.8k points/m2.

4.2. Results

The FV algorithm was successful in reconstructing the building façades and all openings, as well as automatically filling occlusion-based openings (Figs. 11–13). Auto-generated façade dimensions and opening areas were compared to CAD models obtained from on-site surveys in order to assess the geometric accuracy of those building models (Table 2). The algorithm slightly underestimated lengths and heights: generally less than 1.1% (<121 mm-B2S75) and 1.0% (<53 mm-B1NS00), respectively (Fig. 14(a) and (b)). For


(a) Occlusion due to tree branches. (b) Occlusion due to a car.

Fig. 15. Examples of automatically filling unexpected holes.

Table 2
Derived overall dimensions and opening areas of the 3 façades.

Building                 Aspect              CAD     NS00    S20     S50     S75
B1-2 Anne St. South      Length (m)          4.95    4.93    4.93    4.92    4.91
                         Height (m)          12.16   12.04   12.05   12.04   12.05
                         Opening area (m2)   30.70   29.98   30.10   30.33   30.58
B2-5 Anne St. South      Length (m)          4.90    4.88    4.86    4.86    4.85
                         Height (m)          13.28   13.28   13.29   13.27   13.27
                         Opening area (m2)   34.46   32.79   32.84   33.07   33.32
B3-2 Westmoreland St.    Length (m)          19.36   19.30   19.30   19.28   19.24
                         Height (m)          17.00   16.91   16.90   16.91   16.88
                         Opening area (m2)   96.20   96.28   96.59   98.02   99.11

Table 3
Evaluation of the FV algorithm with defective input data.

Aspect                Measured drawings   FV algorithm   MBR     Rel. error, drawings vs. FV (%)   Rel. error, drawings vs. MBR (%)
Façade: Length (m)    4.95                4.92           4.86    0.59                              1.88
Façade: Height (m)    12.16               12.04          12.05   1.00                              0.89
Window: Length (m)    3.60                3.64           3.65    −1.11                             −1.33
Window: Height (m)    1.72                1.86           2.00    −7.97                             −16.40

Building 3, the relative errors of the length and height were only 0.6% (<117 mm-B3S75) and 0.7% (<116 mm-B3S75) (Fig. 14(a) and (b)). Additionally, the opening areas in the reconstructed models of Buildings 1 and 2 underestimated those apertures by 2.3% (<0.72 m2-B1NS00) and 4.8% (<1.67 m2-B2NS00) in terms of relative errors, respectively (Fig. 14(c)). The opposite trend was observed in Building 3, where the opening area in the reconstructed models was up to 3% greater (>2.91 m2-B3S75) (Fig. 14(c) and Table 2).
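The relative errors quoted above follow directly from the CAD and reconstructed values in Table 2, under the convention (inferred from the text) that a positive error denotes underestimation of the measured-drawing value. A small Python check:

```python
def relative_error(cad, derived):
    """Relative error (%): positive when the reconstructed model
    underestimates the CAD (measured-drawing) value."""
    return (cad - derived) / cad * 100.0

# Building 1, NS00 dataset (values taken from Table 2)
print(f"length:       {relative_error(4.95, 4.93):.2f}%")    # under 1.1%
print(f"height:       {relative_error(12.16, 12.04):.2f}%")  # under 1.0%
print(f"opening area: {relative_error(30.70, 29.98):.2f}%")  # about 2.3%
```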

4.3. Discussion

The boundary point detection approach proposed in the FV algorithm extracted sufficient boundary points to consistently reconstruct the three building façades and their actual openings (Figs. 11–13). The FV algorithm introduced a procedure to classify a hole as either a proper opening or an unrealistic hole due to missing or occluded data. The procedure succeeded in recognizing arbitrarily shaped holes (such as those produced by trees (Fig. 15(a)) and cars (Fig. 15(b))) and automatically removed them. Notably, segmentation also results in small holes.

In the proposed algorithm, constrained conditions through a grid clustering technique are implemented to eliminate incorrect boundary points before generating boundary lines. The technique is based on the observation that voxels possessing incorrect boundary points lie on a different grid from voxels containing the correct boundary points. The proposed algorithm was evaluated against the use of an alpha shape with line fitting [47]. For example, the input data of Building 1 (2 Anne St. South) contain several defects on the left boundary of the façade and on the lower and upper sides of the window in the second storey, marked as circles (Fig. 16(a) and (b)). The boundary points from both the FV algorithm and the comparative method contained unexpected boundary points (Fig. 16(a) and (b)), and the boundary lines were then generated from these boundary points (Fig. 16(c) and (d)). Notably, the threshold radius (1/α) of 75 mm implemented in the alpha shape method was selected experimentally to optimize the output results.

Clearly, the comparative method negatively impacted the reconstructed boundary lines: the boundary lines at the left side of the façade and those at the lower and upper sides of the window migrate from the actual boundaries (Fig. 16(d)). This problem does not occur in the FV algorithm (Fig. 16(c)). Furthermore, the relative error in the overall façade length of the comparative method increased to 1.88%, as compared to only 0.59% for the FV method (Table 3). Additionally, the window dimensions from the FV method were also better than those derived from the comparative method: the relative error in the window length was only −1.11% (FV method) vs. −1.33% (comparative method). For the window height, the difference in results was even more profound. The relative error for the FV method was −7.97%


(a) Boundary points of the window and façade from the FV algorithm. (b) Boundary points of the window and façade from the alpha shape algorithm. (c) Boundary points and lines of the window and façade from the FV algorithm. (d) Boundary points and lines of the window and façade based on the alpha shape algorithm and fitted lines.

Fig. 16. Reconstructing boundary lines of the façade and window by combining the alpha shape method and line fitting, and by the FV method. Note: Circles mark defect data and unexpected boundary points.

vs. −16.40% for the comparative method. In conclusion, the FV algorithm can eliminate incorrect boundary points when reconstructing boundary lines.

The efficacy of the FV approach and its compatibility with computational modelling were benchmarked against the commercial program Kubit [19] and two automatic approaches developed by the authors (the FaçadeDelaunay (FD) algorithm [6] and the FaçadeAngle (FA) algorithm [7]). The Kubit software allows users to use AutoCAD tools to create building models based on point clouds. The FD and FA approaches automatically reconstruct buildings, where the boundary points were extracted based on the length of triangle sides in a triangulation mesh and on an angle criterion, respectively. These algorithms are described elsewhere [6,7].

For evaluation, the S75 datasets with 175 pts/m2 (Figs. 17(a), 18(a), and 19(a)) were selected to generate the building models to test robustness; denser datasets require longer acquisition times and are, thus, less economical.

In Kubit, the building models were created within an AutoCAD program semi-automatically, by manually identifying the boundaries of the building and its openings using various in-built toolbox


(a) TLS data. (b) Kubit-based model. (c) FV algorithm. (d) FD algorithm. (e) FA algorithm. (f) Measured drawing model.

Fig. 17. Solid models of Building 1 reconstructed from 175 points/m2 by using various approaches.

aids. For this, separate building photographs are needed to define openings realistically; see [33] for further details.

With the Kubit program, realistic building models with smooth boundary lines were created rapidly (Figs. 17(b), 18(b), and 19(b)). While the resulting models were compatible for FEM importation, independent knowledge of the existing buildings' façades (e.g. the number, size, and position of openings) was fully required. The other approaches did not need this information. The resulting building models of each approach are shown in Figs. 17(b)–(e), 18(b)–(e), and 19(b)–(e). The reported dimensions were obtained by importing the solid models into AutoCAD. These dimensions were compared to ones from independently produced, on-site manual surveys (Figs. 17(f), 18(f), and 19(f)).

In general, the FV algorithm reconstructed building models more accurately than the other approaches. The maximum relative errors of the overall dimensions (height and length) were 1.1% (the length of Building 2) for the FV algorithm, 1.9% (the length of Building 2) for the other automatic approaches, and −3.0% for the Kubit program (Fig. 20(a) and (b)). Furthermore, the relative errors of the opening areas in the FV-based models were also smaller than those of the other approaches: the maximum error was 3.3% in the FV-based models, while it was 5.6% for the models based on the other automatic approaches and 7.2% in the Kubit-based ones. However, in Building 3, the relative error in the FV-based model was slightly higher than those in the other automatic-based models (−3.0% in the FV-based model vs. −2.9% in the FA-based one). That is because the boundary points of the openings derived from the FV algorithm contained sample points that were slightly further from the actual boundaries of the opening. When working with lower density datasets, the accuracy of the openings can be improved by increasing the number of neighbouring points. For this model and this density, the kNN was increased to 20, which was equal to that used in the FA algorithm.

Other geometric discrepancies were checked in terms of the opening dimensions. In the two small buildings (Buildings 1 and 2), the average absolute errors of the opening dimensions in the building models derived from the FV algorithm were generally smaller than those from the other three approaches, while the opposite trend was found in the larger building (Building 3) (Table 4). As such, the best result from the FV algorithm was −4.6 mm (Std = 116 mm), while those from the other automatic algorithms and the commercial software were 7.5 mm (Std = 121.5 mm) and 4.7 mm (Std = 61.2 mm), respectively. However, in Building 3, the error with the FV-based model was slightly higher than those from the automatic algorithms. So while the FV algorithm herein did not significantly improve the geometric accuracy of the building models, it did provide a major reduction in computational time, as is presented in Fig. 21.

Using all of the datasets in Table 1, the FV algorithm proved clearly superior for datasets greater than 35.5k points, as was required for even the least dense version of the 19.36 m wide by


(a) TLS data. (b) Kubit-based model. (c) FV algorithm. (d) FD algorithm. (e) FA algorithm. (f) Measured drawing model.

Fig. 18. Solid models of Building 2 reconstructed from 175 points/m2 by using various approaches.

Table 4
Dimensional discrepancies of openings between CAD drawings vs. the point cloud based solid models.

                   Building 1                          Building 2                          Building 3
                   Automatic            Semi-auto.     Automatic            Semi-auto.     Automatic            Semi-auto.
Aspect             FV      FD      FA      Kubit       FV      FD      FA      Kubit       FV      FD      FA      Kubit
Av., mm            −4.6    −6.4    −14.0   28.8        7.5     11.0    22.8    75.6        −28.3   −14.5   −23.8   4.7
Min. error, mm     −139.0  −137.0  −149.0  −110.0      −126.0  −168.0  −130.0  −40.0       −177.0  −162.0  −177.0  −140.0
Max. error, mm     187.0   175.0   178.0   250.0       366.0   360.0   354.0   370.0       117.0   39.0    45.0    120.0
Std., mm           116.0   118.1   122.1   110.0       121.5   121.9   126.1   102.7       56.9    50.1    54.1    61.2

17.0 m high Building 3 (Fig. 21). In very dense datasets, such as those directly obtained from the laser scanner, the FV algorithm was consistently more than an order of magnitude faster and was up to 25 times faster for the 650.3k point dataset.

5. Conclusions and future work

The FaçadeVoxel algorithm is proposed as a rapid, robust, fully-automated approach to detect boundary points of façade features for the many urban structures that otherwise lack existing geometric documentation. This is done by examining only sample points in the vicinity of detected boundary features. Sample points within full voxels around a boundary feature are extracted and examined based on a local score. Subsequently, these boundary points are used to generate fitting boundary lines, and all voxels belonging to a solid portion of the wall are determined. Afterwards, the complete solid model of the building is stored in a neutral file as input for a commercial computational modelling


(a) TLS data. (b) Kubit-based model. (c) FV algorithm. (d) FD algorithm. (e) FA algorithm. (f) Measured drawing model.

Fig. 19. Solid models of Building 3 reconstructed from 175 points/m2 by using various approaches.

(a) Façade length. (b) Façade height. (c) Façade opening area.

Fig. 20. Relative errors of overall dimensions between CAD drawings vs. the FV, FD, FA algorithms and Kubit-based solid models.

programme. The algorithm consistently detected all openings for three sample buildings and also succeeded in automatically filling non-openings, even at data densities as low as 175 pts/m2. This was achieved without any supplemental datasets, user knowledge, or manual intervention. Furthermore, the resulting solid models were fully compatible with a common, commercial FEM package. When compared to two previous research-based approaches and a commercial program, building models based on the FV method were generally more accurate, and up to 25 times faster for datasets of up to 650.3k points.

Like the three comparative approaches, the FV algorithm is currently limited to relatively simple 2D morphologies. A full 3D model can theoretically be obtained by segmentation and subsequent reassembly of individual exterior walls. Future work is planned in this direction. Additionally, further work is needed to address non-rectangular window openings.

Acknowledgements

This work was generously supported by Science Foundation Ireland Grant 05/PICA/I830 and the European Union Grant ERC StG 2012-307836-RETURN. Further thanks to Donal Lennon of UCD's Earth Institute for his assistance with terrestrial data acquisition.


Fig. 21. Processing time of the FV algorithm vs. the FD and FA algorithms; data from Truong-Hong et al. [6,7].

References

[1] Haala N, Kada M. An update on automatic 3D building reconstruction. ISPRS J Photogramm Remote Sens 2010;65:570–80.
[2] Koelman HJ. Application of a photogrammetry-based system to measure and re-engineer ship hulls and ship parts: an industrial practices-based report. Comput-Aided Des 2010;42:731–43.
[3] Yoo D-J. Three-dimensional surface reconstruction of human bone using a B-spline based interpolation approach. Comput-Aided Des 2011;43:934–47.
[4] Peternell M, Steiner T. Reconstruction of piecewise planar objects from point clouds. Comput-Aided Des 2004;36:333–42.
[5] Lim EH, Suter D. 3D terrestrial LiDAR classifications with super-voxels and multi-scale conditional random fields. Comput-Aided Des 2007;41:701–10.
[6] Truong-Hong L, Laefer D, Hinks T, Carr H. Flying Voxel method with Delaunay triangulation criterion for façade/feature detection for computation. ASCE J Comput Civ Eng 2012;26:691–707.
[7] Truong-Hong L, Laefer DF, Hinks T, Carr H. Combining an angle criterion with voxelization and the flying voxel method in reconstructing building models from LiDAR data. Comput-Aided Civ Infrastruct Eng 2012;28:112–29.
[8] RPA. Environmental impact statement, Metro North. 2011.
[9] Clarke J, Laefer DF. Generation of a building typology for risk assessment due to urban tunneling. In: BCRI conference. 2012.
[10] Cassidy F. Port tunnel compo payout. In: Independent, 2006.
[11] Pu S, Vosselman G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J Photogramm Remote Sens 2009;64:575–84.
[12] Becker S, Haala N. Refinement of building facades by integrated processing of LiDAR and image data. In: PIA07, photogrammetric image analysis, 2007, 36(3/W49A), p. 7–12.
[13] Zhou Q-Y, Neumann U. 2.5D dual contouring: a robust approach to creating building models from aerial LiDAR point clouds. In: 11th European conference on computer vision (ECCV 2010), 2010, p. 1–14.
[14] Haala N, Brenner C, Anders K-H. 3D urban GIS from laser altimeter and 2D map data. Int Arch Photogramm Remote Sens 1998;32:339–46.
[15] Hu J, You S, Neumann U, Park KK. Building modeling from LiDAR and aerial imagery. In: ASPRS 2004, 2004, p. 1–6.
[16] Vosselman G, Dijkman S. 3D building model reconstruction from point clouds and ground planes. Int Arch Photogramm Remote Sens 2001;37–43.
[17] Dorninger P, Pfeifer N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008;8:7323–43.
[18] Chen J, Chen B. Architectural modeling from sparsely scanned range data. Int J Comput Vis 2008;78:223–36.
[19] Kubit. PointCloud. 1999.
[20] Nan L, Sharf A, Zhang H, Cohen-Or D, Chen B. SmartBoxes for interactive urban reconstruction. ACM Trans Graph 2010;29: Article 93.
[21] Pu S, Vosselman G. Extracting windows from terrestrial laser scanning. In: ISPRS workshop on laser scanning and SilviLaser 2007, 2007, p. 320–5.
[22] Boulaassal H, Landes T, Grussenmeyer P. Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner. Int J Archit Comput 2009;7:1–20.
[23] Haala N, Becker S, Kada M. Cell decomposition for the generation of building models at multiple scales. In: Symposium of ISPRS commission III, photogrammetric computer vision PCV'06, 2006, p. 6.
[24] Hoppe H, DeRose T, Duchamp T, McDonald J, Stuetzle W. Surface reconstruction from unorganized points. ACM SIGGRAPH 1992;71–8.
[25] Wang J, Oliveira MM, Xie H, Kaufman AE. Surface reconstruction using oriented charges. In: Computer graphics international, 2005, p. 122–8.
[26] Buhmann MD. Radial basis functions: theory and implementations. Cambridge: Cambridge University Press; 2003.
[27] Curless B, Levoy M. A volumetric method for building complex models from range images. In: The 23rd annual conference on computer graphics and interactive techniques. New Orleans, LA, USA: ACM; 1996. p. 303–12.
[28] Pulli K, Duchamp T, Hoppe H, McDonald J, Shapiro L, Stuetzle W. Robust meshes from multiple range maps. In: International conference on recent advances in 3-D digital imaging and modeling, 1997, p. 205–11.
[29] Elseberg J, Borrmann D, Nuchter A. One billion points in the cloud: an octree for efficient processing of 3D laser scans. ISPRS J Photogramm Remote Sens 2013;76:76–88.
[30] Wurm KM, Hornung A, Bennewitz M, Stachniss C, Burgard W. OctoMap: a probabilistic, flexible, and compact 3D map representation for robotic systems. In: ICRA 2010 workshop on best practice in 3D perception and modeling for mobile manipulation, 2010, p. 8.
[31] Dalmasso P, Nerino R. Hierarchical 3D surface reconstruction based on radial basis functions. In: Proceedings of 2nd international symposium on 3D data processing, visualization and transmission, 2004, p. 574–9.
[32] Wang R, Bach J, Ferrie FP. Window detection from mobile LiDAR data. In: IEEE workshop on applications of computer vision, 2011, p. 58–65.
[33] Laefer DF, Truong-Hong L, Fitzgerald M. Processing of terrestrial laser scanning point cloud data for computational modelling of building facades. Recent Patents Comput Sci 2011;4:16–29.
[34] Truong-Hong L, Laefer DF. Validating computational models from laser scanning data for historic facades. J Test Eval 2013;41:481–96.
[35] Samet H, Webber RE. Hierarchical data structures and algorithms for computer graphics. Part I: fundamentals. IEEE Comput Graph Appl 1988;8:48–68.
[36] Tang Z. Octree representation and its applications in CAD. J Comput Sci Tech 1992;7:29–38.
[37] Ayala D, Brunet P, Juan R, Navazo I. Object representation by means of nonminimal division quadtrees and octrees. ACM Trans Graph 1985;4:41–59.
[38] Ripperda N. Determination of facade attributes for facade reconstruction. In: ISPRS congress, proceedings of commission III, 2008, p. 285–90.
[39] Agoston MK. Computer graphics and geometric modeling: implementation & algorithms. Springer-Verlag London Limited; 2005.
[40] Toussaint GT. Geometric proximity graphs for improving nearest neighbor methods in instance-based learning and data mining. Int J Comput Geom Appl 2005;15:101–50.
[41] Mayer H, Reznik S. Building façade interpretation from image sequences. In: Stilla U, Rottensteiner F, Hinz S, editors. CMRT05. IAPRS, 2005, p. 55–60.
[42] Pighin F, Lewis JP. Practical least-squares for computer graphics. In: ACM SIGGRAPH 2007 Courses, 2007, p. 1–57.
[43] Trimble. GS200 3D Scanner. 1999.
[44] Trimble. RealWorks Survey: technical notes. 2005.
[45] Truong-Hong L. Automatic generation of solid models of building façades from LiDAR data for computational modelling. Ph.D. thesis. School of Architecture, Landscape and Civil Engineering, University College Dublin, 2011.
[46] The MathWorks. MATLAB function reference. 2007.
[47] Edelsbrunner H, Kirkpatrick DG, Seidel R. On the shape of a set of points in the plane. IEEE Trans Inform Theory 1983;29:551–9.

Linh Truong-Hong is a research fellow at the Urban Modelling Group, University College Dublin, Ireland. He received his Ph.D. in Structural Engineering from University College Dublin in 2011. His research interests include geometric modelling, structural health monitoring, soil–structure interaction modelling and structural modelling.

Debra F. Laefer is an associate professor at University College Dublin, where she leads the Urban Modelling Group. Her research interests include protection of the built environment from natural and manmade activities. In recognition of her contributions, she is a recent recipient of a European Research Council award in this area.