Projects

The Digital Imaging and Remote Sensing (DIRS) laboratory conducts research in a wide variety of remote sensing and image processing topics. The following are some recent projects conducted by members of our research group.


Development of a novel temperature-emissivity separation technique for ground-based measurements

The measurement of a target's spectral emissivity using ground-based radiance measurements proves especially difficult for low-emissivity targets. Many natural and man-made materials exhibit this property in the longwave infrared (LWIR) region of the spectrum. This effort has developed a methodology that utilizes the water absorption bands adjacent to the LWIR atmospheric window to accomplish this goal. Techniques that rely upon the spectral smoothness of an emissivity signature when the material's temperature is correct often fail when applied to radiance measurements from low-emissivity (high-reflectivity) targets due to the disproportionate amount of reflected downwelling radiance. The high opacity of the water bands adjacent to this window allows for a consistent method of determining the object temperature, as these bands act like blackbody emitters.

A publicly available Python code has been released on GitHub to perform temperature-emissivity separation (TES) using data gathered with a D&P Instruments Model 102 Portable FTIR Spectrometer. The code allows the user to apply either traditional smoothness-based TES or the developed water-band methodology.
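The core of the water-band approach can be sketched in a few lines. The snippet below is an illustrative outline only, not the interface of the released GitHub code: the band limits, units, and function names are assumptions, and the opaque water bands are treated as ideal blackbody emitters at the object temperature, as described above.

```python
# Minimal sketch of the water-band temperature estimate and emissivity
# retrieval described above.  Band limits, units, and names are
# illustrative assumptions, not the released code's interface.
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance of a blackbody [W / (m^2 sr m)]."""
    return (2.0 * H * C**2 / wavelength_m**5 /
            np.expm1(H * C / (wavelength_m * KB * temperature_k)))

def brightness_temperature(wavelength_m, radiance):
    """Invert the Planck function for temperature [K]."""
    return (H * C / (wavelength_m * KB) /
            np.log1p(2.0 * H * C**2 / (wavelength_m**5 * radiance)))

def water_band_tes(wavelength_um, target_radiance, downwelling_radiance,
                   water_band_um=((5.5, 7.5), (14.0, 16.0)),
                   window_um=(8.0, 14.0)):
    """Estimate the target temperature from the opaque water bands, then
    separate emissivity inside the LWIR window.  Radiances are assumed to
    be in W / (m^2 sr m) on a common wavelength grid [micrometers]."""
    wl_m = wavelength_um * 1e-6

    # 1. In the opaque water bands the measurement behaves like blackbody
    #    emission, so the brightness temperature there approximates the
    #    object temperature.
    in_band = np.zeros_like(wavelength_um, dtype=bool)
    for lo, hi in water_band_um:
        in_band |= (wavelength_um >= lo) & (wavelength_um <= hi)
    temperature = np.median(
        brightness_temperature(wl_m[in_band], target_radiance[in_band]))

    # 2. With temperature fixed, solve the at-surface radiance equation
    #    L = eps * B(T) + (1 - eps) * L_down for emissivity in the window:
    #    eps = (L - L_down) / (B(T) - L_down)
    in_window = (wavelength_um >= window_um[0]) & (wavelength_um <= window_um[1])
    bb = planck_radiance(wl_m[in_window], temperature)
    emissivity = ((target_radiance[in_window] - downwelling_radiance[in_window]) /
                  (bb - downwelling_radiance[in_window]))
    return temperature, wavelength_um[in_window], np.clip(emissivity, 0.0, 1.0)
```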

Sponsor: Office of Naval Research / Department of the Navy / Department of Defense

Participants: Ryan LaClair / Carl Salvaggio

Development of Sensor Data Product Algorithms

This NSF-funded project focuses on the development of (i) pre-processing methods, (ii) algorithms for waveform LiDAR processing, (iii) extraction of structural products, and (iv) preliminary workflows for generating waveform LiDAR data products, LiDAR-hyperspectral fusion products, and their associated accuracy and precision (noise) estimates. The project is geared toward the development of data and information products for the airborne waveform LiDAR system of the National Ecological Observatory Network's (NEON) Airborne Observation Platform (AOP). We are in the process of assessing the accuracy of vegetation parameter retrievals, e.g., leaf area index, crown volume, canopy height, vertical profiles, and biomass estimates, and evaluating how data and parameter retrievals scale in space (and time) from individual waveforms to objects (e.g., trees), to landscapes, to regions.
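As a concrete illustration of the kind of structural retrieval involved, the short sketch below estimates canopy height from a single digitized waveform by locating the first (canopy-top) and last (ground) returns. The synthetic waveform, peak-detection thresholds, and sampling interval are assumptions for demonstration and do not represent the NEON AOP processing chain.

```python
# Illustrative canopy-height estimate from one digitized waveform.
import numpy as np
from scipy.signal import find_peaks

C = 2.99792458e8  # speed of light [m/s]

def canopy_height_from_waveform(waveform, sample_interval_s=1e-9,
                                min_peak_height=0.1):
    """Height [m] between the first (canopy top) and last (ground)
    detected return in a digitized waveform."""
    peaks, _ = find_peaks(waveform, height=min_peak_height * waveform.max(),
                          distance=10)
    if len(peaks) < 2:
        return 0.0  # single return: no separable canopy/ground echo
    dt = (peaks[-1] - peaks[0]) * sample_interval_s
    return 0.5 * C * dt  # two-way travel time -> one-way range difference

# Synthetic two-return waveform: canopy echo at bin 40, ground echo at bin 160.
t = np.arange(256)
waveform = (0.8 * np.exp(-0.5 * ((t - 40) / 4.0) ** 2) +
            1.0 * np.exp(-0.5 * ((t - 160) / 3.0) ** 2) +
            0.01 * np.random.default_rng(0).standard_normal(t.size))
print(f"estimated canopy height: {canopy_height_from_waveform(waveform):.1f} m")
```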

Sponsor: National Science Foundation (NSF)

Participants: Martin van Leeuwen / Jan van Aardt / Kerry Cawse-Nicholson

A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

Geographically accurate scene models have enormous potential beyond simple visualization, particularly for automated scene generation. In recent years, thanks to ever-increasing computational efficiency, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multi-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud, which can then be used to obtain a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within them. Voids occur in texturally flat areas that fail to generate features, as well as in areas where multiple views were not obtained during collection or where a constant occlusion existed due to collection angles or overlapping scene geometry. It may be possible to fill small voids using surface reconstruction or hole-filling techniques, but this is not the case with larger voids, and attempting to reconstruct them is neither accurate nor aesthetically pleasing.

This project is aimed at identifying the types of voids present using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space: voxels that lie on the ray between the camera and a point must be devoid of obstruction, since a clear line of sight was a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to a point passed through the voxel), and unsampled (contains no points and no rays passed through it). Voids in the voxel space manifest as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude from which the voids in the point cloud could theoretically be imaged. The goal is to reduce the voids that result from lack of coverage by including more images of the void areas in the 3D reconstruction. Voids resulting from texturally flat areas will not benefit from additional imagery in the reconstruction process, and are therefore identified and removed prior to the determination of future potential imaging locations.
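The labeling step can be illustrated with a small sketch. The grid extents, voxel size, and coarse ray-stepping scheme below are illustrative assumptions rather than the project's implementation; an exact voxel-traversal algorithm would replace the simple stepping used here.

```python
# Sketch of the occupied / free / unsampled voxel labeling described above.
import numpy as np

UNSAMPLED, FREE, OCCUPIED = 0, 1, 2

def label_voxels(points, cameras, origin, voxel_size, grid_shape):
    """points: (N, 3) reconstructed 3D points; cameras: (N, 3) camera
    center used to triangulate each point.  Returns an integer label grid."""
    labels = np.full(grid_shape, UNSAMPLED, dtype=np.uint8)

    def to_index(xyz):
        idx = np.floor((xyz - origin) / voxel_size).astype(int)
        return np.clip(idx, 0, np.array(grid_shape) - 1)

    # Free space: step along each camera-to-point ray and mark the voxels
    # it passes through (a coarse substitute for exact voxel traversal).
    for cam, pt in zip(cameras, points):
        n_steps = max(int(np.linalg.norm(pt - cam) / (0.5 * voxel_size)), 1)
        for s in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            i, j, k = to_index(cam + s * (pt - cam))
            labels[i, j, k] = FREE

    # Occupied voxels override free: they contain actual reconstructed points.
    for pt in points:
        i, j, k = to_index(pt)
        labels[i, j, k] = OCCUPIED

    return labels

# Toy example: one camera above a small scene containing two surface points.
points = np.array([[2.5, 2.5, 0.5], [6.5, 2.5, 0.5]])
cameras = np.tile([5.0, 5.0, 9.0], (len(points), 1))
labels = label_voxels(points, cameras, origin=np.zeros(3),
                      voxel_size=1.0, grid_shape=(10, 10, 10))
print("unsampled voxels (candidate voids):", int(np.sum(labels == UNSAMPLED)))
```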

Sponsor: Exelis Geospatial Systems / National Science Foundation (NSF)

Participants: Katie Salvaggio / Carl Salvaggio

Extraction of biophysical structure from full-waveform small-footprint LiDAR signals

A relatively new remote sensing modality, small-footprint waveform light detection and ranging (wlidar) offers the promise of extracting structural information from forested regions. Unlike the more traditional discrete lidar, wlidar digitizes the entire backscattered signal instead of returning only the x, y, z locations of interactions. This time-varying signal potentially enables a deeper understanding of the underlying tree structure as well as the forest understory. However, due to the complex nature of forest environments, it is often infeasible, if not impossible, to collect the ground truth necessary to develop models relating the underlying forest structure to the received lidar signal. As a result, radiative transfer (RT) simulations are used for this work, since the truth, down to the location, orientation, size, and optical properties of every leaf in the scene, is known. This project aims to extract biophysical structure from full-waveform small-footprint lidar signals. As part of this work, virtual Digital Imaging and Remote Sensing Image Generation (DIRSIG) forest scenes are used to develop and validate algorithms, while the research is extended to NEON AOP data in order to better understand the implications and impacts of this work for real-world lidar systems.
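One common building block in this kind of waveform processing is decomposing the digitized return into a sum of Gaussian components, each corresponding to a discrete scattering event. The sketch below shows such a decomposition on a synthetic waveform; the model, initial guesses, and data are illustrative assumptions, not the project's DIRSIG- or NEON-specific workflow.

```python
# Illustrative Gaussian decomposition of a digitized lidar waveform.
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gaussian_mixture(t, *params):
    """Sum of Gaussians; params are repeated (amplitude, center, width)."""
    out = np.zeros_like(t, dtype=float)
    for a, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
        out += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return out

def decompose_waveform(t, waveform, min_height=0.1):
    """Fit one Gaussian per detected peak; return rows of (amp, center, width)."""
    peaks, _ = find_peaks(waveform, height=min_height * waveform.max(),
                          distance=10)
    p0 = []
    for p in peaks:
        p0 += [waveform[p], t[p], 3.0]  # amplitude, center, initial width guess
    popt, _ = curve_fit(gaussian_mixture, t, waveform, p0=p0)
    return np.asarray(popt).reshape(-1, 3)

# Synthetic waveform with a canopy and a ground return plus noise.
t = np.arange(256, dtype=float)
truth = gaussian_mixture(t, 0.7, 60.0, 5.0, 1.0, 170.0, 3.0)
waveform = truth + 0.01 * np.random.default_rng(1).standard_normal(t.size)
for amp, center, width in decompose_waveform(t, waveform):
    print(f"return at bin {center:.1f}: amplitude {amp:.2f}, width {width:.1f}")
```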

Sponsor: National Science Foundation (NSF)

Participants: Paul Romanczyk / Martin van Leeuwen / Jan van Aardt

Towards operationalizing forest structure assessment using terrestrial laser scanning: Addressing traditional measurement constraints

This project's aims include (i) the development and validation of a unique and robust approach for automatically detecting and modeling woody tree stems using a low-cost TLS instrument, (ii) registration of multiple-scan TLS data in forest environments using an automatic, marker-free registration technique, and (iii) assessment of forest canopy structure via our TLS. We are using a low-cost TLS instrument, developed in-house, which exhibits limitations in data quality due to a large beam divergence, limited angular sampling resolution, and a large outgoing pulse width. However, the system enables rapid scans and is highly mobile. Our intention is to provide a rapid, accurate, and precise manner in which to inventory forests, provide calibration and validation data for airborne assessments, and develop 3D forest models for simulation purposes.
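As an example of one ingredient of the stem-detection step, the sketch below fits a circle to a thin horizontal slice of TLS points near breast height to estimate a stem's diameter. The algebraic least-squares fit and the synthetic slice are illustrative assumptions, not the project's full pipeline.

```python
# Illustrative stem-diameter estimate from a breast-height slice of TLS points.
import numpy as np

def fit_circle(xy):
    """Algebraic least-squares circle fit; xy is (N, 2).
    Returns (center_x, center_y, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    # Solve x^2 + y^2 + D x + E y + F = 0 in a least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Synthetic slice: a noisy half-arc of a 0.15 m radius stem, since a
# single-position scan only sees the near side of the trunk.
rng = np.random.default_rng(2)
theta = rng.uniform(-0.5 * np.pi, 0.5 * np.pi, 200)
xy = np.column_stack([2.0 + 0.15 * np.cos(theta),
                      1.0 + 0.15 * np.sin(theta)])
xy += 0.005 * rng.standard_normal(xy.shape)  # ~5 mm range noise

cx, cy, r = fit_circle(xy)
print(f"stem center: ({cx:.2f}, {cy:.2f}) m, DBH: {2 * r * 100:.1f} cm")
```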

Sponsor: National Science Foundation (NSF)

Participants: David Kelbe / Martin van Leeuwen / Jan van Aardt