
by Jason Stoker

Introduction

Over the past five to ten years, the use and applicability of light detection and ranging (lidar) technology has increased dramatically. As a result, the amount of lidar data collected across the country for a wide range of applications has grown almost exponentially, and lidar is currently the technology of choice for high-resolution terrain model creation, 3-dimensional city and infrastructure modeling, forestry, and a wide range of scientific applications (Lin and Mills, 2010). The amount of data being delivered across the country is impressive. For example, the U.S. Geological Survey's (USGS) Center for Lidar Information Coordination and Knowledge (CLICK), a national repository of USGS and partner lidar point cloud datasets (Stoker et al., 2006), currently has 3.5 percent of the United States covered by lidar, with approximately another 5 percent in the processing queue. The majority of data being collected by the commercial sector comes from discrete-return systems, which collect billions of lidar points in an average project. There is also considerable discussion of a potential national-scale lidar effort (Stoker et al., 2008).

However, to make the data useful for various applications, we must interpret and develop models that convert these billions of points into actual usable geospatial information. Traditionally in Geographic Information Systems (GIS), features are represented as vector points, lines, or polygons, or as raster cells or pixels. Visualization of three-dimensional data has become an effective way for scientists and managers to view geographic data in a more realistic fashion. The most popular methods for making sense of lidar point clouds for modeling and 3D visualization today are to convert the points to surfaces using raster grids, Triangulated Irregular Networks (TINs), or even voxels (Stoker, 2009). Two of the most produced and used lidar-derived products are bare earth Digital Elevation Models (DEMs), which are rasters produced using only the identified bare earth lidar returns, and intensity images, which are usually delivered as an interpolated raster of the first return intensity values.
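To make the point-to-surface step concrete, here is a minimal Python sketch of one common approach, mean-binning already-classified bare earth returns into a regular grid. The function name, the one-meter default cell size, and the NaN handling for empty cells are illustrative assumptions, not a prescribed production workflow.

```python
import numpy as np

def grid_points(x, y, z, cell=1.0):
    """Bin scattered lidar returns into a raster of mean elevations.

    x, y, z: 1D arrays of point coordinates and elevations.
    cell: output resolution in the same units as x and y.
    Cells that receive no returns are left as NaN (voids that a
    production workflow would fill by interpolation).
    """
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y.max() - y) / cell).astype(int)   # row 0 = north edge
    nrows, ncols = rows.max() + 1, cols.max() + 1

    # Accumulate per-cell sums and counts, then take the mean.
    sums = np.zeros((nrows, ncols))
    counts = np.zeros((nrows, ncols))
    np.add.at(sums, (rows, cols), z)
    np.add.at(counts, (rows, cols), 1)

    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```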

Background

From lidar, we traditionally create surfaces by interpolating the bare earth points, the first return points, and the intensity information as separate images. From the bare earth surfaces, we can calculate topographic derivatives, such as slope and aspect (Jenson and Domingue, 1988). We can also create shaded-relief models by defining a lighting direction and calculating illumination values based on the orientation of the surface. These topographic derivatives have been created for the entire United States using traditional map-based DEMs in the Elevation Derivatives for National Applications (EDNA) project (http://edna.usgs.gov). EDNA is a multi-layered database derived from a version of the National Elevation Dataset (NED) that has been hydrologically conditioned for improved hydrologic flow representation (Jenson, 1991). The seamless EDNA database provides 30-meter resolution raster and vector data layers including slope, aspect, contours, filled DEM, flow accumulation, flow direction, reach catchment seed points, reach catchments, shaded relief, sinks, and synthetic streamlines. EDNA is currently incorporating lidar information into its databases.
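The derivative calculations themselves are standard raster operations. The sketch below, assuming a 2D NumPy DEM array, computes slope, aspect, and a shaded-relief layer with the usual cosine illumination formula; the 315-degree azimuth and 45-degree sun altitude are conventional defaults, not values taken from EDNA.

```python
import numpy as np

def terrain_derivatives(dem, cell=1.0, azimuth=315.0, altitude=45.0):
    """Slope, aspect, and hillshade from a DEM array.

    dem: 2D elevation array; cell: grid resolution;
    azimuth/altitude: light-source direction in degrees.
    """
    dzdy, dzdx = np.gradient(dem, cell)       # rates of change per axis
    slope = np.arctan(np.hypot(dzdx, dzdy))   # steepest descent, radians
    aspect = np.arctan2(-dzdx, dzdy)          # downslope direction

    # Illumination = cosine of the angle between the surface normal
    # and the light direction (clipped so shadows stay at zero).
    az = np.radians(360.0 - azimuth + 90.0)
    alt = np.radians(altitude)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return slope, aspect, np.clip(shade, 0.0, 1.0)
```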



Band Combinations

With traditional optical imagery, the information from a narrow wavelength range is gathered and stored in a channel, also referred to as a band. To make a color image, we combine and display various channels of information digitally using the three primary colors (red, green, and blue). The data from each channel is represented as one of the primary colors and, depending on the relative brightness (i.e., the digital value) of each pixel in each channel, the primary colors combine in different proportions to represent different colors. By representing various wavelengths that are inside (or even outside) the visible spectrum in an RGB band combination image, we are able to highlight features we may not be able to detect if we examined only one channel at a time. These combinations may manifest themselves as subtle variations in color rather than variations in gray tone, which is what would be seen when examining only one image at a time (http://www.ccrs.nrcan.gc.ca/resource/tutor/fundam/pdf/fundamentals_e.pdf). This band combination technique is particularly useful for representing wavelength bands that are outside of the visible spectrum as an RGB image, where the human eye is adapted to discern information.

This method is commonplace for visualizing various wavelength bands in multi- or hyperspectral instruments. However, this method can also be used as a novel approach to visualize lidar-derived information in a single image. Creating multi-band stacks of lidar derivatives allows for the incorporation of multiple derivatives into a single image. It also allows for simultaneous display of derivatives in a single source image compressed into a single file (such as a TIFF image) and lets us display the relationships between these separate derivatives intuitively. We can create these band combinations for derivatives created from the first reflective surface, the bare earth surface, or what we have coined the "Height Above Ground" (HAG), also called the normalized digital surface model (nDSM) in other literature, which is simply the value from the first reflective surface minus the value of the bare earth surface. Figure 1 shows how we can represent four easily created lidar-derived layers, slope, elevation, intensity, and hillshade, as RGB images.
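A sketch of how such a stack might be assembled follows, assuming three co-registered derivative rasters as NumPy arrays. The linear stretch to 8 bits and the example channel assignment are illustrative choices.

```python
import numpy as np

def stretch(band):
    """Linearly rescale a derivative layer to 0-255 for display."""
    lo, hi = np.nanmin(band), np.nanmax(band)
    if hi == lo:                                  # flat layer guard
        return np.zeros(band.shape, dtype=np.uint8)
    return ((band - lo) / (hi - lo) * 255.0).astype(np.uint8)

def rgb_stack(r_layer, g_layer, b_layer):
    """Stack any three co-registered derivatives as an RGB image."""
    return np.dstack([stretch(r_layer), stretch(g_layer), stretch(b_layer)])

# Height Above Ground is simply first reflective surface minus bare
# earth; frs and bare stand for co-registered rasters (illustrative).
#   hag = frs - bare
#   rgb = rgb_stack(slope, hag, intensity)   # R=slope, G=HAG, B=intensity
```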

In these examples, because we have four derivatives, each of which can be represented by the red, green, or blue band, we have 24 different band combination permutations possible for visualization purposes for each type of surface (FRS, bare earth, HAG). This is defined by the equation:

4P3 = 4!/(4 - 3)! = 24

If desired, we could also display one derivative in multiple channels, e.g., display slope in both the red and green channels. Allowing such repeats gives 4^3 = 64 options. We can also scale these layers separately as an 8-bit, 16-bit, or 32-bit floating point representation of each layer, to highlight one band over another. Examples of how these various band combinations of the same data can look different based on RGB band representations are highlighted using data collected over Buncombe County, North Carolina in Figure 2.
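The counting argument is easy to verify in code; the short fragment below enumerates both the 24 no-repeat assignments and the 64 with repeats allowed, using only Python's standard library.

```python
from itertools import permutations, product

derivatives = ["slope", "elevation", "intensity", "hillshade"]

# 4P3 = 4!/(4 - 3)! = 24 ordered assignments to (R, G, B), no repeats.
print(len(list(permutations(derivatives, 3))))    # 24

# Allowing a derivative to occupy more than one channel: 4^3 = 64.
print(len(list(product(derivatives, repeat=3))))  # 64
```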


We can also create these different band combinations for first reflective surfaces, bare earth surfaces, and height above ground surfaces. Figure 3 shows multi-band stacks where the red band displays slope, the green band displays elevation, and the blue band displays laser intensity. Even though we are representing the same type of information, each multi-band stack provides different information for the user. The First Reflective Surface stack displays top-of-canopy information, the bare earth stack provides the best information for geological and hydrological processes, and the height above ground stack is the best visualization of land cover and canopy structure information. With additional lidar-derived layers, we could increase the number of permutations available for visualization.
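Packaging a stack into a single georeferenced file is straightforward with common open-source tools. The sketch below uses the rasterio library to write three 8-bit layers as one GeoTIFF; the coordinate system, transform, and file name are placeholders for whatever the source data dictates.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

def write_stack(path, bands, transform, crs="EPSG:26917"):
    """Write a list of 2D uint8 arrays as one multi-band GeoTIFF."""
    profile = {
        "driver": "GTiff",
        "height": bands[0].shape[0],
        "width": bands[0].shape[1],
        "count": len(bands),
        "dtype": "uint8",
        "crs": crs,              # placeholder; use the data's CRS
        "transform": transform,
    }
    with rasterio.open(path, "w", **profile) as dst:
        for i, band in enumerate(bands, start=1):
            dst.write(band.astype(np.uint8), i)

# e.g., write_stack("hag_stack.tif", [slope8, elev8, inten8],
#                   from_origin(west, north, 1.0, 1.0))
```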



Finally, by converting this elevation information into an image format, we can take advantage of the many image processing and classification methods that have been developed over the years. Instead of using various spectral responses in different wavelength bands for classifications, we can use the elevation differences for classification. Figure 4 demonstrates a simple 10-class unsupervised classification routine run on a Height Above Ground multi-band stack for Mount Rainier, Washington.
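The article does not name the particular routine behind Figure 4, so as one plausible stand-in, the sketch below runs a 10-class k-means clustering (via scikit-learn) on a multi-band stack reshaped to a pixel table.

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classes(stack, n_classes=10, seed=0):
    """Cluster a (rows, cols, bands) stack into n_classes labels."""
    rows, cols, bands = stack.shape
    pixels = stack.reshape(-1, bands).astype(float)  # one row per pixel

    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    labels = km.fit_predict(pixels)
    return labels.reshape(rows, cols)                # class map raster
```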

As more research is performed on lidar for land cover and canopy applications, classification methods such as decision tree algorithms could become a very useful tool for making sense of the combination of image or laser intensity and elevation information. In the future, this method may be employed to create 'super stacks' of active lidar and passive optical information. Combining spectral information from imagery and vertical information from lidar into a single multi-band stack could help tease out age structure classes and other information that is difficult to derive from imagery or lidar alone (Lee and Shan, 2003). By incorporating the laser intensity from lidar, which usually comes from a laser at a near-infrared wavelength, with three-band RGB imagery, we can represent the intensity of lidar as a pseudo color-infrared image (Figure 5). By using the lidar intensity image (which is inherently georeferenced) with the First Reflective Surface to register the RGB imagery, our chances of having properly co-registered pixels are increased.
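A pseudo color-infrared composite follows the familiar convention of shifting each band down one channel: the near-infrared stand-in (lidar intensity) drives red, the image's red band drives green, and its green band drives blue. A minimal sketch, assuming the three inputs are already co-registered and scaled to 8 bits:

```python
import numpy as np

def pseudo_cir(intensity, red, green):
    """Compose a pseudo color-infrared image from lidar intensity
    (acting as NIR) plus the red and green bands of RGB imagery."""
    assert intensity.shape == red.shape == green.shape  # co-registered
    return np.dstack([intensity, red, green]).astype(np.uint8)
```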


Conclusions

The creation of multi-band stacks of lidar-derived information and band combinations of these layers shows promise for bringing vertical information from lidar into a more imagery-friendly format. These methods are fairly easy to replicate, and they open up a wealth of tools that were originally created for passive optical imagery. How to interpret these images, however, needs special attention and care. While we are still in the research phase of this development, we think that these techniques may be used in the future to deliver a suite of lidar-derived information in a single file. We encourage people to try this method out and help us determine its applicability in the future.

References

Jenson, S.K., 1991. Applications of hydrologic information automatically extracted from digital elevation models, Hydrological Processes, 5:31–44.

Jenson, S.K., and J.O. Domingue, 1988. Extracting topographic structure from digital elevation data for geographic information system analysis, Photogrammetric Engineering & Remote Sensing, 54(11):1593–1600.

Lee, D.S., and J. Shan, 2003. Combining lidar elevation data and IKONOS multispectral imagery for coastal classification mapping, Marine Geodesy, 26:117–127.

Lin, Y-C., and J.P. Mills, 2010. Factors influencing pulse width of small footprint, full waveform airborne laser scanning data, Photogrammetric Engineering & Remote Sensing, 76(1):49–59.

Stoker, J.M., 2009. Volumetric visualization of multiple-return lidar data–using voxels, Photogrammetric Engineering & Remote Sensing, 75(2):109–112.

Stoker, J.M., S.K. Greenlee, D.B. Gesch, and J.C. Menig, 2006. CLICK: The new USGS Center for Lidar Information Coordination and Knowledge, Photogrammetric Engineering & Remote Sensing, 72(6):613–616.

Stoker, J., D. Harding, and J. Parrish, 2008. The need for a National Lidar Dataset, Photogrammetric Engineering & Remote Sensing, 74(9):1066–1068.
