December 6, 2019 at 8:00am - MS Thesis Defense - NICHOLAS BITTEN - TIRS-2 and Future Thermal Instrument Band Study and Stray Light Study

DIRS Laboratory 76-3215
December 6, 2019 at 8:00am
NICHOLAS BITTEN
TIRS-2 and Future Thermal Instrument Band Study and Stray Light Study
MS Thesis Defense
Abstract: 

Landsat thermal instruments have been a significant source of data for thermal remote sensing applications, and future Landsat missions will continue this tradition. This work was designed to help inform the requirements for several parameters of future Landsat thermal instruments and to assess the impact that these parameters can have on the retrieved Land Surface Temperature (LST). Two main studies were conducted in this research. The first investigates the impact that uncertainty in the spectral response of the bands has on the LST product derived with the Split Window Algorithm; the main parameters tested are the center and width of the bands. The second investigates the impact of stray light on LST, including different magnitudes of stray light and different combinations of in-field and out-of-field targets.
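
For reference, the Split Window Algorithm referred to here estimates LST from the brightness temperatures of the two thermal bands; a simplified generic form (the coefficient names c_k are illustrative, and the exact formulation and emissivity handling used in the thesis may differ) is

    \mathrm{LST} \approx c_0 + c_1 \frac{T_i + T_j}{2} + c_2 \frac{T_i - T_j}{2}

where T_i and T_j are the brightness temperatures of the two split-window bands and the coefficients absorb emissivity and atmospheric (water vapor) effects. Shifting or widening a band changes T_i and T_j, which is how spectral-response uncertainty propagates into the retrieved LST.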

The results of the band study showed that shifting the bands has a larger impact on the LST than widening them. Small shifts of only +/- 50 nm can cause errors of over 1 K in the LST. It was also found that higher water vapor content in the atmosphere can increase the error in LST retrieval. The stray light study indicates that, with respect to LST retrieval, residual errors in the split window algorithm process are larger than those introduced by stray light, except in extreme cases. Additionally, it was found that the total magnitude of the stray light is not the only factor that affects the accuracy of LST retrieval; the relationship between the magnitudes of stray light in the individual bands appears to have more of an impact.

December 5, 2019 at 3:00pm - MS Thesis Defense - ETHAN W. HUGHES - Spatially Explicit Snap Bean Flowering and Disease Prediction Using Imaging Spectroscopy from Unmanned Aerial Systems

DIRS Laboratory 76-3215
December 5, 2019 at 3:00pm
ETHAN W. HUGHES
Spatially Explicit Snap Bean Flowering and Disease Prediction Using Imaging Spectroscopy from Unmanned Aerial Systems
MS Thesis Defense
Abstract: 

Sclerotinia sclerotiorum, or white mold, is a fungus that infects the flowers of snap bean plants and causes a subsequent reduction in snap bean pods, which adversely impacts yield. Timing the application of white mold fungicide thus is essential to preventing the disease, and spraying is most effective during the flowering stage. However, most of the flowers are located beneath the canopy, i.e., hidden by foliage, which makes spectral detection of flowering via the leaf/canopy spectra paramount. The overarching objectives of this research therefore are to i) identify spectral signatures for the onset of flowering to optimally time the application of fungicide, ii) investigate spectral characteristics prior to white mold onset in snap beans, and iii) eventually link the location of white mold with biophysical (spectral and structural) metrics to create a spatially explicit probabilistic risk model for the appearance of white mold in snap bean fields. To find pure vegetation pixels in the canopy of the flowering beans toward creating the discriminating and predictive models, spectral angle mapper (SAM) and ratio-and-thresholding (RT) approaches were used. Average reflective power (ARP), on the other hand, was used to find pure pixels in regions of interest that contained mold to establish the mold models. The pure pixels then were used with single-feature logistic regression (SFLR) to identify wavelengths, spectral ratio indices, and normalized difference indices that best separated the flowering and mold classes. Features with the largest c-index were used to train a support vector machine (SVM) and applied to imagery from a different growing season to evaluate model robustness. This research found that single-wavelength features in the red-edge region of the near-infrared discriminated and predicted flowering up to two weeks before visible flowering, with c-index values above 90%. However, it was found that canopy-level discrimination of snap beans diseased with white mold was not possible using these methods. Therefore, a spectra-to-LAI regression, using ground-truth LAI measurements, was needed to predict mold occurrence in the canopy.
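
As background on the SAM step, the spectral angle mapper scores each pixel by the angle between its spectrum and a reference spectrum, and pixels below an angle threshold are kept as pure; a minimal NumPy sketch (array names and the threshold value are illustrative, not taken from the thesis) is:

    import numpy as np

    def spectral_angle(pixel, reference):
        """Angle (radians) between a pixel spectrum and a reference spectrum."""
        cos_t = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    def pure_vegetation_mask(cube, veg_ref, max_angle=0.10):
        """cube: (rows, cols, bands) reflectance array; veg_ref: (bands,) reference spectrum."""
        angles = np.apply_along_axis(spectral_angle, 2, cube, veg_ref)
        return angles < max_angle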

December 2, 2019 at 2:00pm - Ph.D. Dissertation Defense - Di Bai - A Hyperspectral Image Classification Approach to Pigment Mapping in Historical Artifacts Using Deep Learning Methods

CAR 76-3215 DIRS Lab
December 2, 2019 at 2:00pm
Di Bai
A Hyperspectral Image Classification Approach to Pigment Mapping in Historical Artifacts Using Deep Learning Methods
Ph.D. Dissertation Defense
Abstract: 

Hyperspectral imaging (HSI) has been applied to historical artifact studies. For example, the Gough Map, one of the earliest surviving maps of Britain, was imaged with a hyperspectral imaging system in 2015 while in the collection of the Bodleian Library, Oxford University. The HSI data were collected for pigment analysis, with the aim of characterizing the material diversity of the map's composition and, potentially, the timeline of its creation. To make full use of both the spatial and spectral features, we developed a novel spatial-spectral deep learning technique called 3D-SE-ResNet in this research and applied it to the Gough Map and the Selden Map of China. This deep learning framework automatically classifies pigments in large HSIs with a limited amount of reference data. Historical geographers, cartographic historians, and other scholars will benefit from this work when analyzing the pigment mapping of cultural heritage artifacts in the future.
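
The "SE" in 3D-SE-ResNet refers to squeeze-and-excitation channel reweighting inside the residual blocks. A generic 3D squeeze-and-excitation block in PyTorch is sketched below for orientation; it illustrates the mechanism only and is not the authors' exact architecture:

    import torch.nn as nn

    class SEBlock3D(nn.Module):
        """Generic squeeze-and-excitation block for 5D tensors (N, C, D, H, W)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)              # squeeze: global average pool
            self.fc = nn.Sequential(                         # excitation: channel gating
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid())

        def forward(self, x):
            n, c = x.shape[:2]
            w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
            return x * w                                     # reweight feature channels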

November 25, 2019 at 10:00am - Ph.D. Dissertation Defense - Shusil Dangi - Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

CAR 76-3215 DIRS Lab
November 25, 2019 at 10:00am
Shusil Dangi
Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images
Ph.D. Dissertation Defense
Abstract: 

Segmentation of the heart structures helps compute the cardiac contractile function, quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, which provides reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good cardiac diagnostic indicators. Furthermore, high-quality anatomical models of the heart can be used in the planning and image-guided delivery of minimally invasive interventions.

The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging.

In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. Firstly, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Secondly, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine Magnetic Resonance Imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement. The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation from a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network-based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets.
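
As a small illustration of the multi-task idea (simultaneous myocardium segmentation and area regression), the combined objective can be written as a weighted sum of the two losses; the sketch below uses hypothetical heads and weighting, not the exact configuration from the dissertation:

    import torch.nn as nn

    seg_loss_fn = nn.CrossEntropyLoss()    # pixel-wise segmentation loss
    area_loss_fn = nn.MSELoss()            # regression loss on the myocardial area

    def multitask_loss(seg_logits, seg_labels, area_pred, area_true, lam=0.1):
        """Weighted sum of the segmentation and area-regression losses."""
        return seg_loss_fn(seg_logits, seg_labels) + lam * area_loss_fn(area_pred, area_true)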

We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as with other competing segmentation methods.

November 1, 2019 at 10:00am - Ph.D. Dissertation Defense - Utsav B. Gewali - Machine Learning for Robust Understanding of Scene Materials in Hyperspectral Images

CAR 76-3215 DIRS Lab
November 1, 2019 at 10:00am
Utsav B. Gewali
Machine Learning for Robust Understanding of Scene Materials in Hyperspectral Images
Ph.D. Dissertation Defense
Abstract: 

The major challenges in hyperspectral (HS) imaging and data analysis are expensive sensors, the high dimensionality of the signal, limited ground truth, and spectral variability. This dissertation develops and analyzes machine learning-based methods to address these problems. In the first part, we examine two of the most important HS data analysis tasks: vegetation parameter estimation and land cover classification. For vegetation parameter estimation, we present two Gaussian process-based approaches for improving the accuracy of vegetation parameter retrieval when ground truth is limited and/or spectral variability is high. The first is the adoption of covariance functions based on well-established metrics, such as spectral angle and spectral correlation, which are known to be better measures of similarity for spectral data. The second is the joint modeling of related vegetation parameters by multitask Gaussian processes so that the prediction accuracy of the vegetation parameter of interest can be improved with the aid of related vegetation parameters for which a larger set of ground truth is available. For land cover classification with limited ground truth, we perform a comparative study of random field-based spatial-spectral algorithms on widely used public datasets and propose the use of Bayesian optimization to further improve the performance of the preexisting methods.
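
To illustrate the first approach, a covariance function can be built by substituting the spectral angle for the Euclidean distance inside a squared-exponential kernel; the sketch below is a minimal version with hypothetical parameter names, not the dissertation's exact kernel:

    import numpy as np

    def spectral_angle(a, b):
        """Angle (radians) between two spectra."""
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    def sam_kernel(X, Y, variance=1.0, lengthscale=0.1):
        """Covariance matrix using spectral angle as the distance measure."""
        K = np.empty((len(X), len(Y)))
        for i, x in enumerate(X):
            for j, y in enumerate(Y):
                K[i, j] = variance * np.exp(-0.5 * (spectral_angle(x, y) / lengthscale) ** 2)
        return K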

In the second part of the dissertation, we demonstrate that high-dimensional HS spectra can be reconstructed from low-dimensional multispectral (MS) signals, which can be obtained from much cheaper, lower-spectral-resolution sensors. A novel end-to-end fully convolutional residual neural network architecture is proposed that can simultaneously optimize the MS bands and the transformation to reconstruct HS spectra from MS signals by analyzing a large quantity of HS data. The learned bands can be implemented in sensor hardware, and the learned transformation can be incorporated into the data processing pipeline to build a low-cost hyperspectral data collection system. Additionally, we investigate the prospects of using reconstructed HS spectra for land cover classification.

October 25, 2019 at 10:00am - Ph.D. Dissertation Defense - Baabak Mamaghani - An Assessment of the Radiometric Quality of sUAS Imagery

Carlson 76-3215
October 25, 2019 at 10:00am
Baabak Mamaghani
An Assessment of the Radiometric Quality of sUAS Imagery
Ph.D. Dissertation Defense
Abstract: 

In recent years, significant advancements have been made in both sensor technology and small Unmanned Aircraft Systems (sUAS). Improved sensor technology has provided users with cheaper, lighter, and higher resolution imaging tools, while new sUAS platforms have become cheaper, more stable and easier to navigate both manually and programmatically. These enhancements have provided remote sensing solutions for both commercial and research applications that were previously unachievable. However, this has provided non-scientific practitioners with access to technology and techniques previously only available to remote sensing professionals, sometimes leading to improper diagnoses and results. The work accomplished in this dissertation demonstrates the impact of proper calibration and reflectance correction on the radiometric quality of sUAS imagery.

The first part of this research conducts an in-depth investigation into a proposed technique for radiance-to-reflectance conversion. Previous techniques utilized in-scene reflectance conversion panels, which, while providing accurate results, required extensive time in the field to position and measure the panels. We instead positioned sensors on board the sUAS to record the downwelling irradiance, which can then be used to produce reflectance imagery without the use of reflectance conversion panels.
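
The idea can be sketched in one line: for a Lambertian surface, the band-wise reflectance follows from the at-sensor radiance and the irradiance recorded by the on-board sensor. The function below is an illustrative simplification (it ignores BRDF effects, atmospheric path terms, and sensor calibration details):

    import numpy as np

    def radiance_to_reflectance(radiance, downwelling_irradiance):
        """Approximate Lambertian reflectance, band by band, from at-sensor
        radiance (W/m^2/sr/um) and measured downwelling irradiance (W/m^2/um)."""
        return np.pi * radiance / downwelling_irradiance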

The second part of this research characterizes and calibrates the MicaSense RedEdge-3, a multispectral imaging sensor. This particular sensor comes pre-loaded with metadata values, which are never recalibrated, for dark-level bias, vignette and row-gradient correction, and radiometric calibration. These characterization and calibration studies were performed to demonstrate the importance of recalibrating any sensor over time. In addition, an error propagation was performed to identify the largest contributors of error in the production of radiance and reflectance imagery.

Finally, a study of the inherent reflectance variability of vegetation was performed. In other words, this study attempts to determine how accurate the digital-count-to-radiance calibration and the radiance-to-reflectance conversion have to be: can we lower our accuracy standards for radiance and reflectance imagery because the target itself is too variable to measure? For this study, six coneflower plants were analyzed, as a surrogate for other cash crops, under different illumination conditions, at different times of the day, and at different ground sample distances (GSDs).

September 9, 2019 at 10:45am - Ph.D. Thesis Defense - Yilong Liang - Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model

DIRS Laboratory 76-3215
September 9, 2019 at 10:45am
Yilong Liang
Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model
Ph.D. Thesis Defense
Abstract: 

With rapid developments in satellite and sensor technologies, there has been a dramatic increase in the availability of remotely sensed images obtained with different modalities. Given these data, there is a pressing need for automatic algorithms that provide experts with better image analysis capabilities. In this work, we explore techniques related to object detection in both high-resolution aerial images and hyperspectral remote sensing images.

In the first part of the thesis, subpixel object detection in hyperspectral images is studied. We propose a novel image segmentation algorithm to identify spatially and spectrally coherent image regions, from which the background statistics are estimated for deriving matched filters (MFs). The proposed method is accompanied by extensive experimental studies that corroborate its merits.
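
For context, the matched filter referred to here scores each pixel spectrum x against a target spectrum t using the background mean \mu and covariance \Sigma (here estimated within the segmented coherent regions); the standard form is

    MF(x) = \frac{(t - \mu)^T \Sigma^{-1} (x - \mu)}{(t - \mu)^T \Sigma^{-1} (t - \mu)}

so more accurate, locally estimated background statistics translate directly into better subpixel detection scores.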

The second part of the thesis explores the object-based image analysis (OBIA) approach for object detection in high-resolution aerial images. We formulate the detection problem within a tree-matching framework and propose two tree-matching algorithms. Our results demonstrate the efficiency and advantages of the detection framework.

Finally, we study object detection in high-resolution aerial images from a machine learning perspective. We investigate both traditional machine learning-based and end-to-end convolutional neural network (CNN)-based approaches for various detection tasks. In the traditional detection framework, we propose applying a Gaussian process classifier (GPC) to train an object detector. In the CNN-based approach, we propose a novel scale-transfer module that generates better feature maps for object detection. Our results show the efficiency of the proposed methods and their competitiveness when compared to state-of-the-art counterparts.

August 23, 2019 at 10:00am - Ph.D. Thesis Defense - Jacob Wirth - Point Spread Function and Modulation Transfer Function Engineering

DIRS Laboratory 76-3215
August 23, 2019 at 10:00am
Jacob Wirth
Point Spread Function and Modulation Transfer Function Engineering
Ph.D. Thesis Defense
Abstract: 

A novel computational imaging approach to sensor protection based on point spread function (PSF) engineering is designed to suppress harmful laser irradiance without significant loss of image fidelity of a background scene. PSF engineering is accomplished by modifying a traditional imaging system with a lossless linear phase mask at the pupil, which diffracts laser light over a large area of the imaging sensor. The approach provides the additional advantage of an instantaneous response time across a broad region of the electromagnetic spectrum. As the mask does not discriminate between the laser and the desired scene, a post-processing image reconstruction step, which may be accomplished in real time, is required; it both removes the laser spot and improves the image fidelity.
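
One common way to perform such a reconstruction, given knowledge of the engineered PSF, is Wiener deconvolution in the Fourier domain; the NumPy sketch below is an illustrative choice, not necessarily the reconstruction method used in this thesis:

    import numpy as np

    def wiener_deconvolve(blurred, psf, k=1e-3):
        """Estimate the scene from an image blurred by a known, centered PSF
        (same shape as the image); k stands in for the noise-to-signal ratio."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter
        return np.real(np.fft.ifft2(W * G))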

This thesis includes significant experimental and numerical advancements in the determination and demonstration of optimized phase masks. Analytic studies of PSF engineering systems and their fundamental limits were conducted. An experimental test-bed was designed using a spatial light modulator to create digitally controlled phase masks to image a target in the presence of a laser source. Experimental results using known phase masks (axicon, vortex, and cubic) are reported. New methods for designing phase masks are also reported, including (1) a numeric differential evolution algorithm, (2) a “PSF reverse engineering” method, and (3) a hardware-based simulated annealing experiment. Broadband performance of optimized phase masks was also evaluated in simulation. Optimized phase masks were shown to provide three orders of magnitude of laser suppression while simultaneously providing high-fidelity imaging of a background scene.

August 23, 2019 at 9:00am - Ph.D. Thesis Defense - Yansong Liu - Semantic Segmentation of Multi-sensor Remote Sensing Images

CAR1275
August 23, 2019 at 9:00am
Yansong Liu
Semantic Segmentation of Multi-sensor Remote Sensing Images
Ph.D. Thesis Defense
Abstract: 

Earth observation through remote sensing images enables the accurate characterization of materials and objects on the surface from space and airborne platforms. With the increasing availability of multiple, heterogeneous imaging sources for the same geographical region (multispectral, hyperspectral, LiDAR, and multitemporal), a complete description of the given scene can now be acquired. The combination/fusion of multi-sensor data opens great opportunities for improving the classification of individual objects or natural terrains in a complex environment such as an urban city. As a result, multi-sensor semantic segmentation stands out as an in-demand technique for fully leveraging complementary imaging modalities.

In this dissertation, we focus on developing techniques specifically for multi-sensor image fusion of very-high-resolution (VHR) aerial optical imagery and light detection and ranging (LiDAR) data in the context of dense semantic segmentation/classification. The fusion of these two modalities (optical imagery and LiDAR data) can usually be performed at the feature level or the decision level. Our research first investigated feature-level fusion, combining hand-crafted features derived from both optical imagery and LiDAR data. We then feed the combined features into various classifiers, and the results show clear advantages of using fused features. The pixel-wise classification results are then followed by higher-order conditional random fields (CRFs) to eliminate noisy labels and enforce label consistency and coherence within and between segments. As the recent use of pre-trained deep convolutional neural networks (DCNNs) for remote sensing image classification has been extremely successful, we propose a decision-level fusion approach that trains one DCNN for the optical imagery and one linear classifier for the LiDAR data. The two probabilistic outputs are then combined in various CRF frameworks (e.g., piece-wise CRFs, higher-order CRFs, and fully connected CRFs) to generate the final classification results. We found in extensive experiments that the proposed decision-level fusion compares favorably with, or outperforms, the state-of-the-art baseline methods that utilize feature-level fusion.
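
A simple way to picture decision-level fusion, before the CRF stage, is a per-pixel weighted combination of the two classifiers' class-probability maps; the toy sketch below is illustrative only, since the actual fusion in the dissertation is carried out within the CRF frameworks listed above:

    import numpy as np

    def fuse_probabilities(p_dcnn, p_lidar, alpha=0.7):
        """Log-linear fusion of two per-pixel class-probability maps of shape (H, W, C)."""
        fused = (p_dcnn ** alpha) * (p_lidar ** (1.0 - alpha))
        return fused / fused.sum(axis=-1, keepdims=True)   # renormalize over classes

    # Final label map: labels = fuse_probabilities(p_dcnn, p_lidar).argmax(axis=-1)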

August 8, 2019 at 10:00am - Ph.D. Thesis Defense - Ryan Ford - Water Quality and Algal Bloom Sensing from Multiple Imaging Platforms

DIRS Laboratory 76-3215
August 8, 2019 at 10:00am
Ryan Ford
Water Quality and Algal Bloom Sensing from Multiple Imaging Platforms
Ph.D. Thesis Defense
Abstract: 

Harmful cyanobacteria blooms have been increasing in frequency throughout the world, resulting in a greater need for water quality monitoring. Traditional methods of monitoring water quality, such as point sampling, are often resource-expensive and time-consuming in comparison to remote sensing approaches; however, the spatial resolution of established water remote sensing satellites is often too coarse (~300 m) to resolve smaller inland waterbodies. The finer spatial resolution (~30 m) and improved radiometric sensitivity of the Landsat satellites can resolve these smaller waterbodies, enabling their use for cyanobacteria bloom monitoring.

In this work, the utility of Landsat to retrieve concentrations of two cyanobacteria bloom pigments, chlorophyll-a and phycocyanin, is assessed. Concentrations of these pigments are retrieved using a spectral Look-Up-Table (LUT) matching process, where an exploration of the effects of LUT design on retrieval accuracy is performed. Potential augmentations to the spectral sampling of Landsat are also tested to determine how it can be improved for waterbody constituent concentration retrieval.
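
The LUT matching step can be pictured as a nearest-neighbor search: each observed pixel spectrum is compared against modeled spectra generated over a grid of constituent concentrations, and the best-matching entry supplies the retrieved concentrations. The sketch below uses RMSE as the match metric and hypothetical variable names:

    import numpy as np

    def lut_retrieve(pixel_spectrum, lut_spectra, lut_concentrations):
        """Return the concentrations of the LUT entry closest in RMSE.
        lut_spectra: (N, bands) modeled spectra; lut_concentrations: (N, k) parameters."""
        rmse = np.sqrt(np.mean((lut_spectra - pixel_spectrum) ** 2, axis=1))
        return lut_concentrations[np.argmin(rmse)]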

Applying the LUT matching process to Landsat 8 imagery showed that concentrations of chlorophyll-a, total suspended solids, and colored dissolved organic matter were retrieved with satisfactory accuracy through appropriate choice of atmospheric compensation and LUT design, in agreement with previously reported implementations of the LUT matching process. Phycocyanin proved to be a greater challenge to this process due to its weak effect on the waterbody spectrum, the lack of Landsat spectral sampling over its predominant spectral feature, and error from atmospheric compensation. From testing potential enhancements to Landsat spectral sampling, we determine that additional spectral sampling in the yellow and red-edge regions of the visible/near-infrared (VNIR) spectrum can lead to improved concentration retrievals. This performance improves further when sampling is added to both regions, and when Landsat is transitioned to a VNIR imaging spectrometer, though this is dependent on band position and spacing. These results imply that Landsat can be used to monitor cyanobacteria blooms through retrieval of chlorophyll-a, and that this retrieval performance can be improved in future Landsat systems, even with minor changes to spectral sampling. This includes improved retrieval of phycocyanin when implementing a VNIR imaging spectrometer.

August 6, 2019 at 1:00pm - MS Thesis Defense - Rinaldo Ronnie Izzo - Combining hyperspectral imaging and small unmanned aerial systems for grapevine moisture stress assessment

DIRS Laboratory 76-3215
August 6, 2019 at 1:00pm
Rinaldo Ronnie Izzo
Combining hyperspectral imaging and small unmanned aerial systems for grapevine moisture stress assessment
MS Thesis Defense
Abstract: 

It has been shown that a mild water deficit in grapevine contributes to wine quality, especially in terms of flavor. Water deficit irrigation and selective harvesting are implemented to optimize quality, but these approaches require rigorous measurement of vine water status. Traditional in-field physiological measurements have made operational implementation onerous, whereas modern small unmanned aerial systems (sUAS) present a unique opportunity for rigorous management across vast areas. This study sought to fuse hyperspectral remote sensing, sUAS, and sound multivariate analysis techniques for the purpose of assessing grapevine water status. High spatial and spectral resolution hyperspectral data were collected in the visible/near-infrared (VNIR; 400-1000 nm) and short-wave infrared (SWIR; 950-2500 nm) spectral regions across three flight days at a commercial vineyard in upstate New York. A pressure chamber was used to collect traditional field measurements of stem water potential (ψstem) during image acquisition. We correlated our hyperspectral data with a limited stress range (wet growing season) of traditional ψstem measurements using multiple linear regression (R2 between 0.34 and 0.55) and partial least squares regression (R2 between 0.36 and 0.39). We demonstrated statistically significant trends in our experiment, further qualifying the potential of hyperspectral data collected via sUAS for grapevine water management. There was an indication that the chlorophyll and carotenoid absorption regions in the VNIR, as well as several SWIR water band regions, warrant further exploration. This work was limited because we did not have access to experimentally controlled vineyard plots; it therefore is recommended that future work include a full range of water stress scenarios.

August 6, 2019 at 10:00am - Ph.D. Thesis Defense - Rehman Eon - The Characterization of Earth Sediments using Radiative Transfer Models from Directional Hyperspectral Reflectance

DIRS Laboratory 76-3215
August 6, 2019 at 10:00am
Rehman Eon
The Characterization of Earth Sediments using Radiative Transfer Models from Directional Hyperspectral Reflectance
Ph.D. Thesis Defense
Abstract: 

Remote sensing techniques are continuously being developed to extract physical information about the Earth's surface. Over the years, space-borne and airborne sensors have been used for the characterization of surface sediments. Spectral observations of sediments can be used to effectively identify the physical characteristics of the surface. Geophysical properties of a sediment surface such as its density, grain size, surface roughness, and moisture content can influence the angular dependence of spectral signatures, specifically the Bidirectional Reflectance Distribution Function (BRDF). Models based on radiative transfer equations can relate the angular dependence of the reflectance to these geophysical variables. Extraction of these parameters can provide a better understanding of the Earth's surface and play a vital role in various environmental modeling processes. In this work, we focused on retrieving two of these geophysical properties of Earth sediments, the bulk density and the soil moisture content (SMC), using directional hyperspectral reflectance. We proposed a modification to the radiative transfer model developed by Hapke to retrieve sediment bulk density. The model was verified under controlled experiments within a laboratory setting, followed by retrieval of the sediment density from different remote sensing platforms: airborne, space-borne, and a ground-based imaging sensor. The SMC was characterized using MARMIT, a physics-based multilayer radiative transfer model of soil reflectance. The MARMIT model was likewise validated with experiments performed in our controlled laboratory setting using several different soil samples from across the United States, followed by applying the model to map SMC from imagery collected by an Unmanned Aerial System (UAS)-based hyperspectral sensor.
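
For context, the Hapke model referenced here expresses the bidirectional reflectance of a particulate surface approximately as

    r(i, e, g) = \frac{w}{4\pi} \frac{\mu_0}{\mu_0 + \mu} \left[ (1 + B(g))\, p(g) + H(\mu_0) H(\mu) - 1 \right]

where w is the single-scattering albedo, \mu_0 and \mu are the cosines of the incidence and emission angles, B(g) describes the opposition effect, p(g) is the particle phase function, and H is the multiple-scattering function. This is the standard textbook form; the density-sensitive modification developed in the dissertation is not reproduced here.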

August 2, 2019 at 10:00am - MS Thesis Defense - Daniel L. Edwards - Evaluation of Single-Pixel Tunable Fabry-Perot filters for Optical Imaging

CAR1275
August 2, 2019 at 10:00am
Daniel L. Edwards
Evaluation of Single-Pixel Tunable Fabry-Perot filters for Optical Imaging
MS Thesis Defense
Abstract: 

The Fabry-Perot interferometer (FPI) is a well-developed and widely used tool to control and measure wavelengths of light. In optical imaging applications, there is often a need for systems with compact, integrated, and widely tunable spectral filtering capabilities. We evaluate the performance of a novel tunable MEMS (Micro-Electro-Mechanical System) Fabry-Perot (FP) filter device intended to be monolithically integrated over each pixel of a focal plane array. This array of individually tunable FPIs has been designed to operate across the visible light spectrum from 400-750 nm. The design could give rise to a new line of compact spectrometers with fewer moving parts and the ability to perform customizable filtering schemes at the hardware level. The original design was modeled, simulated, and fabricated, but not tested and evaluated. We perform optical testing on the fabricated devices to measure the spectral resolution and wavelength tunability of these FP etalons. We collect the transmission spectrum through the FP etalons to evaluate their quality, finesse, and free spectral range. We then attempt to thermally actuate the expansion mechanisms in the FP cavity to validate tunability across the visible spectrum. The materials set of the simulated design was modified to create a more practical device for fabrication in a standard CMOS/MEMS foundry. Unfortunately, metal thin-film stress and step-coverage issues resulted in device heater failures, preventing actuation. This FP filter array design proves to be a viable manufacturing design for an imaging focal plane with individually tunable pixels; however, it will require further optimization and extensive electrical, optical, thermal, and mechanical testing when integrated with a detector array.
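
For reference, the measured quantities relate to the etalon geometry through the standard Fabry-Perot relations: for a cavity of gap d, refractive index n, and mirror reflectance R, the free spectral range near wavelength \lambda and the reflectivity finesse are approximately

    \Delta\lambda_{FSR} \approx \frac{\lambda^2}{2 n d}, \qquad \mathcal{F} = \frac{\Delta\lambda_{FSR}}{\delta\lambda_{FWHM}} \approx \frac{\pi \sqrt{R}}{1 - R}

so tuning the cavity gap d shifts the transmission peak, while the mirror reflectance sets how narrow that peak is.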

August 2, 2019 at 1:00am - MS Thesis Defense - Anjali K. Jogeshwar - Tool for the analysis of human interaction with two-dimensional printed imagery

DIRS Laboratory 76-3215
August 2, 2019 at 1:00am
Anjali K. Jogeshwar
Tool for the analysis of human interaction with two-dimensional printed imagery
MS Thesis Defense
Abstract: 

The study of human vision must include our interaction with objects. Such studies can include behavior modeling, understanding visual attention and motor guidance, and enhancing user experiences, but they all have one thing in common: to analyze the data in detail, researchers typically have to analyze video data frame by frame. Real-world interaction data often comprise data from both the eye and the hand, and analyzing such data frame by frame can become very tedious and time-consuming. A calibrated scene video from an eye-tracker captured at 120 Hz for 3 minutes has over 21,000 frames to be analyzed.

Automating the process is crucial to allow interaction research to proceed. Research in object recognition over the last decade now allows eye-movement data to be analyzed automatically to determine what a subject is looking at and for how long. I will describe my research, in which I developed a pipeline to help researchers analyze interaction data covering both eye and hand. Inspired by a semi-automated pipeline for analyzing eye-tracking data, I have created a pipeline for analyzing hand grasp along with gaze; putting the two pipelines together helps researchers analyze complete interaction data.

The hand-grasp pipeline detects skin to locate the hands, then determines what object (if any) the hand is over, and where the thumb/fingers occlude that object. I also compare identification with recognition throughout the pipeline. The current pipeline operates on independent frames; future work will extend the pipeline to take advantage of the dynamics of natural interactions.
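
As a toy illustration of the first stage (detecting skin to locate the hands), a simple color-threshold approach in OpenCV is sketched below; the detector actually used in the pipeline and its threshold values are not specified here, so the ranges shown are purely illustrative:

    import cv2
    import numpy as np

    def skin_mask(frame_bgr):
        """Rough skin mask via HSV thresholding (illustrative threshold values)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        kernel = np.ones((5, 5), np.uint8)
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckles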

July 31, 2019 at 10:30am - Ph.D. Thesis Defense - Lauren Taylor - Ultrafast Laser Polishing for Optical Fabrication

CAR1275
July 31, 2019 at 10:30am
Lauren Taylor
Ultrafast Laser Polishing for Optical Fabrication
Ph.D. Thesis Defense
Abstract: 

Next-generation imaging systems for consumer electronics, AR/VR, and space telescopes require weight, size, and cost reduction while maintaining high optical performance. Freeform optics with rotationally asymmetric surface geometries condense the tasks of several spherical optics onto a single element. They are currently fabricated by ultraprecision sub-aperture tools like diamond turning and magnetorheological finishing, but the final surfaces contain mid-spatial-frequency tool marks and form errors which fall outside optical tolerances. Therefore, there remains a need for disruptive tools to generate optic-quality freeform surfaces.

This thesis work investigates a high precision, flexible, non-contact methodology for optics polishing using femtosecond ultrafast lasers. Femtosecond lasers enable ablation-based material removal on substrates with widely different optical properties owing to their high GW-TW/cm2 peak intensities. For polishing, it is imperative for the laser to precisely remove material while minimizing the onset of detrimental thermal and structural surface artifacts such as melting and oxidation. However, controlling the laser interaction is a non-trivial task due to the competing influence of nonthermal melting, ablation, electron/lattice thermalization, heat accumulation, and thermal melting phenomena occurring on femtosecond to microsecond timescales.

Femtosecond laser-material interaction was investigated from fundamental theoretical and experimental standpoints to determine a methodology for optic-quality polishing of optical/photonic materials. Numerical heat accumulation and two-temperature models were constructed to simulate femtosecond laser processing and predict material-specific laser parameter combinations capable of achieving ablation with controlled thermal impact. A tunable femtosecond laser polishing system was established. Polishing of germanium substrates was successfully demonstrated using the model-determined laser parameters, achieving controllable material removal while maintaining optical surface quality. The established polishing technique opens a viable path for sub-aperture, optic-quality finishing of optical/photonic materials, capable of scaling up to address complex polishing tasks toward freeform optics fabrication.
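
For reference, the two-temperature model mentioned above couples the electron and lattice temperatures T_e and T_l through an electron-phonon coupling constant G, with the laser source term S(z, t) deposited into the electron system; in its common one-dimensional form (lattice heat conduction is often neglected on these timescales)

    C_e \frac{\partial T_e}{\partial t} = \frac{\partial}{\partial z}\left( k_e \frac{\partial T_e}{\partial z} \right) - G (T_e - T_l) + S(z, t), \qquad
    C_l \frac{\partial T_l}{\partial t} = G (T_e - T_l)

where C_e and C_l are the electron and lattice heat capacities and k_e is the electron thermal conductivity.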

July 12, 2019 at 10:00am - Ph.D. Thesis Defense - Keegan McCoy - Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model

DIRS Laboratory 76-3215
July 12, 2019 at 10:00am
Keegan McCoy
Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model
Ph.D. Thesis Defense
Abstract: 

Stray light, any unwanted radiation that reaches the focal plane of an optical system, reduces image contrast, creates false signals or obscures faint ones, and ultimately degrades radiometric accuracy. These detrimental effects can have a profound impact on the usability of collected remote sensing data, which must be radiometrically calibrated to be useful for scientific applications (e.g. Landsat imagery).  Understanding the full impact of stray light on data scientific utility is of particular concern for lower cost, more compact imaging systems, which inherently provide fewer opportunities for stray light control. To address these concerns, this research presents a general methodology for integrating point spread function (PSF) and stray light performance data from optomechanical system models in optical engineering software with a physics-based image and data simulation model.  This integration method effectively emulates the PSF and stray light performance of a detailed system model within a high-fidelity scene, thus producing realistic simulated imagery.  This novel capability enables system trade studies and sensitivity analyses to be conducted on parameters of interest, including those that influence stray light, by analyzing their quantitative impact on user applications when imaging realistic operational scenes, while also informing the writing of system requirements.  In addition to detailing the methodology’s radiometric framework, we describe the collection of necessary raytrace data from an optomechanical system model (in this case, using FRED Optical Engineering Software), and present PSF and stray light component validation tests through imaging Digital Imaging and Remote Sensing Image Generation (DIRSIG) model test scenes.  The integration method’s ability to produce quantitative metrics to assess the impact of stray light-focused system trade studies on user applications is then demonstrated using a Cassegrain telescope model and stray light-stressing coastal scene under various system and scene conditions.

July 11, 2019 at 9:30am - Ph.D. Thesis Defense - Sanghui Han - Utility Analysis for Optimizing Compact Adaptive Spectral Imaging Systems for Subpixel Target Detection Applications

DIRS Laboratory 76-3215
July 11, 2019 at 9:30am
Sanghui Han
Utility Analysis for Optimizing Compact Adaptive Spectral Imaging Systems for Subpixel Target Detection Applications
Ph.D. Thesis Defense
Abstract: 

Since the development of spectral imaging systems, when we transitioned from panchromatic, single-band images to multiple bands, we have pursued ways to evaluate the quality of these images. We now have imaging systems capable of collecting images with hundreds of contiguous bands across the reflective portion of the electromagnetic spectrum, which allows us to extract information at sub-pixel levels. However, methods for predicting and assessing spectral image quality, and what such quality entails, have yet to form a solid framework. In this research we find trends within the spectral image utility trade space, first by predicting the performance for a few combinations of targets and backgrounds, and then by generating images of the targets and background in a real scene that we can use to assess the utility and compare with the prediction. This allows us to find a relationship between utility, spectral separability, and scene complexity in order to optimize the design of compact spectral imaging systems with adaptive band selection capabilities that are focused on the mission and practical for real operations.

July 11, 2019 at 1:30am - Ph.D. Thesis Defense - Tyler Peery - System Design Considerations for a Low-Intensity Hyperspectral Imager of Sensitive Cultural Heritage Manuscripts

DIRS Laboratory 76-3215
July 11, 2019 at 1:30am
Tyler Peery
System Design Considerations for a Low-Intensity Hyperspectral Imager of Sensitive Cultural Heritage Manuscripts
Ph.D. Thesis Defense
Abstract: 

Cultural heritage spectral imaging is becoming more prevalent with the increased affordability of more complex imaging systems, including multi- and hyperspectral imaging (MSI and HSI) systems. HSI systems tend to sacrifice spatial pixels for additional spectral information, and diffracting the light into its constituent parts reduces an HSI signal by one to two orders of magnitude relative to typical RGB or MSI framing cameras. Requiring more illumination can be burdensome in cultural heritage imaging, where potentially sensitive targets are protected under various illumination standards. In this research, spatial resolution is used as a trade space, increasing ground sample distance (GSD) to improve signal-to-noise ratios (SNRs). Panchromatic sharpening is applied to recover the sacrificed spatial detail, fusing a high-spatial-resolution panchromatic image with the HSI image. A 14th-century manuscript was imaged with an HSI detector under museum lighting levels of 50 lux, based on PAS 198:2012, the United Kingdom standard for cultural heritage display at museums. Detector systems that can utilize this technique are investigated, as well as additional methods of data capture to assist in the processing of sensitive cultural heritage documents while preserving their physical condition.

July 9, 2019 at 11:00am - Ph.D. Thesis Defense - Kamal Jnawali - Automatic Cancer Tissue Detection Using Multispectral Photoacoustic Imaging

CAR2155
July 9, 2019 at 11:00am
Kamal Jnawali
Automatic Cancer Tissue Detection Using Multispectral Photoacoustic Imaging
Ph.D. Thesis Defense
Abstract: 

Convolutional neural networks (CNNs) have become increasingly popular in recent years because of their ability to tackle complex learning problems such as object detection and localization. They are being used for a variety of tasks, such as tissue abnormality detection and localization, with an accuracy that comes close to the level of human predictive performance in medical imaging. This success is primarily due to the ability of CNNs to extract discriminant features at multiple levels of abstraction.

Photoacoustic (PA) imaging is a promising new modality that is gaining significant clinical potential. The availability of a large dataset of three-dimensional PA images of ex-vivo human prostate and thyroid specimens has facilitated the current study aimed at evaluating the efficacy of CNNs for cancer diagnosis. In PA imaging, a short pulse of near-infrared laser light is sent into the tissue, and the image is created by focusing the ultrasound waves that are photoacoustically generated due to the absorption of light, thereby mapping the optical absorption in the tissue. By choosing multiple wavelengths of laser light, multispectral photoacoustic (MPA) images of the same tissue specimen can be obtained. The objective of this thesis is to implement deep learning architectures for cancer detection using the MPA image dataset.

In this study, we built and examined a fully automated deep learning framework that learns to detect and localize cancer regions in a given specimen entirely from its MPA image dataset. The dataset for this work consisted of samples with sizes ranging from 12 × 45 × 200 pixels to 64 × 64 × 200 pixels at five wavelengths, namely 760 nm, 800 nm, 850 nm, 930 nm, and 970 nm.

The proposed algorithms first extract features using convolutional kernels and then identify the presence of a cancer region in the tissue using the softmax function, the last layer of the network. The area under the curve (AUC) was calculated to evaluate the performance of each algorithm, with very promising results. To the best of our knowledge, this is one of the first examples of the application of deep 3D CNNs to a large cancer MPA dataset.

While previous efforts using the same dataset involved decision making using mathematically extracted image features, this work demonstrates that this process can be automated without any significant loss in accuracy. Another major contribution of this work has been to demonstrate that both prostate and thyroid datasets can be combined to produce improved results for cancer diagnosis.

April 26, 2019 at 1:00pm - Ph.D. Thesis Defense - Mandy Nevins - Point Spread Function Determination in the Scanning Electron Microscope and its Application in Restoring Images Acquired at Low Voltage

DIRS Laboratory 76-3215
April 26, 2019 at 1:00pm
Mandy Nevins
Point Spread Function Determination in the Scanning Electron Microscope and its Application in Restoring Images Acquired at Low Voltage
Ph.D. Thesis Defense
Abstract: 

Electron microscopes have the capability to examine specimens at much finer detail than a traditional light microscope. Higher electron beam voltages correspond to higher resolution, but some specimens are sensitive to beam damage and charging at high voltages. In the scanning electron microscope (SEM), low voltage imaging is beneficial for viewing biological, electronic, and other beam-sensitive specimens. However, image quality suffers at low voltage from reduced resolution, lower signal-to-noise, and increased visibility of beam-induced contamination. Most solutions for improving low voltage SEM imaging require specialty hardware, which can be costly or system-specific. Point spread function (PSF) deconvolution for image restoration could provide a software solution that is cost-effective and microscope-independent with the ability to produce image quality improvements comparable to specialty hardware systems. Measuring the PSF (i.e., electron probe) of the SEM has been a notoriously difficult task until now. The goals of this work are to characterize the capabilities and limitations of a novel SEM PSF determination method that uses nanoparticle dispersions to obtain a two-dimensional measurement of the PSF, and to evaluate the utility of the measured PSF for restoration of low voltage SEM images. The presented results are meant to inform prospective and existing users of this technique about its fundamental theory, best operating practices, the expected behavior of output PSFs and image restorations, and factors to be aware of during interpretation of results.
