Ronald Kemker

PhD Candidate


About Me

I am a PhD candidate in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology (RIT). My two main research interests are 1) applying machine- and deep-learning methods to remote sensing problems (e.g., semantic segmentation of non-RGB imagery) and 2) developing brain-inspired models for lifelong learning tasks.


I completed both of my B.S. degrees (Computer Engineering and Electrical Engineering - Photonics) and my M.S. degree in Electrical Engineering at Michigan Technological University (MTU) in Houghton, MI. At MTU, I worked with Dr. Michael Roggemann to forward-model and prototype a hand-held plenoptic (light-field) camera system. This camera can acquire depth information and computationally refocus an image from a single capture. After completing my degree, I worked at the Michigan Tech Research Institute (MTRI) in Ann Arbor, MI, on various computer vision tasks, including the development of a high-dynamic-range imaging system for a US Army countersniper program.

I am currently a developmental engineer and program manager with the United States Air Force, and I have worked on numerous projects involving electronic warfare, radar, communications, and photonic systems. I was selected to attend RIT to complete a PhD in Imaging Science, and I now work with Dr. Christopher Kanan in his Machine and Neuromorphic Perception Laboratory.



FearNet: Brain-Inspired Model for Incremental Learning

Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.
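The dual-memory idea above can be illustrated with a deliberately tiny toy sketch (this is not FearNet itself): a "recent memory" store holds raw examples of newly studied classes, a "long-term" store keeps only per-class statistics used to generate pseudo-examples, and a sleep phase consolidates the former into the latter so raw examples never need to be kept. The class names, the nearest-mean recall rule, and the Gaussian generator are all simplifications chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class DualMemory:
    """Toy dual-memory incremental learner (illustration only)."""

    def __init__(self):
        self.recent = {}    # hippocampus-like: raw examples of new classes
        self.longterm = {}  # mPFC-like: per-class (mean, covariance) only

    def study(self, label, examples):
        """Store raw examples of a newly learned class."""
        self.recent[label] = np.asarray(examples, dtype=float)

    def sleep(self):
        """Consolidate: replay generated pseudo-examples of old classes
        alongside recent raw examples, refit long-term statistics, then
        discard the raw examples."""
        replay = {c: self._generate(c, n=20) for c in self.longterm}
        replay.update(self.recent)
        for c, x in replay.items():
            cov = np.cov(x.T) + 1e-6 * np.eye(x.shape[1])
            self.longterm[c] = (x.mean(axis=0), cov)
        self.recent = {}

    def _generate(self, label, n):
        """Pseudo-examples sampled from stored class statistics."""
        mu, cov = self.longterm[label]
        return rng.multivariate_normal(mu, cov, size=n)

    def predict(self, x):
        """Crude stand-in for the recall gate: answer from whichever
        store holds the nearest class mean."""
        best, best_d = None, np.inf
        for c, ex in self.recent.items():
            d = np.linalg.norm(x - ex.mean(axis=0))
            if d < best_d:
                best, best_d = c, d
        for c, (mu, _) in self.longterm.items():
            d = np.linalg.norm(x - mu)
            if d < best_d:
                best, best_d = c, d
        return best
```

After studying class 0, sleeping, and then studying class 1, the model still recalls class 0 from its long-term statistics even though the raw class-0 examples were discarded at consolidation.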

Measuring Catastrophic Forgetting in Neural Networks

Deep neural networks are used in many state-of-the-art systems for machine perception. Once a network is trained to do a specific task, e.g., bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as flower recognition. When new tasks are added, typical deep neural networks are prone to catastrophically forgetting previous tasks. Networks that are capable of assimilating new information incrementally, much like how humans form new memories over time, will be more efficient than re-training the model from scratch each time a new task needs to be learned. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have not been directly compared, the tests used to evaluate them vary considerably, and these methods have only been evaluated on small-scale problems (e.g., MNIST). In this paper, we introduce new metrics and benchmarks for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse-coding. Our experiments on real-world images and sounds show that the mechanism(s) that are critical for optimal performance vary based on the incremental training paradigm and type of data being used, but they all demonstrate that the catastrophic forgetting problem has yet to be solved.
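The flavor of such benchmarks can be sketched in a few lines. Assume that after every incremental training session we record accuracy on the original base classes, on the just-learned new classes, and on all classes seen so far, and that `alpha_ideal` is the accuracy of the same model trained offline on everything at once. A simplified version of retention metrics built from those measurements (the function name and exact normalization are illustrative, not the paper's verbatim protocol) looks like this:

```python
import numpy as np

def forgetting_metrics(acc_base, acc_new, acc_all, alpha_ideal):
    """Summarize an incremental-learning run.

    acc_base[i]: accuracy on the base classes after incremental session i
    acc_new[i]:  accuracy on session i's new classes right after learning them
    acc_all[i]:  accuracy on all classes seen so far after session i
    alpha_ideal: accuracy of an offline model trained on all data at once

    Returns normalized scores: 1.0 means the incremental learner retained
    (or matched) offline performance; values near 0 indicate forgetting.
    """
    omega_base = float(np.mean(np.asarray(acc_base) / alpha_ideal))
    omega_new = float(np.mean(acc_new))
    omega_all = float(np.mean(np.asarray(acc_all) / alpha_ideal))
    return omega_base, omega_new, omega_all
```

For example, a run whose base-class accuracy decays from 0.8 to 0.6 while the offline model reaches 1.0 scores a base-retention of 0.7.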

Algorithms for Semantic Segmentation of Multispectral Remote Sensing Imagery using Deep Learning

Deep convolutional neural networks (DCNNs) have been used to achieve state-of-the-art performance on many computer vision tasks (e.g., object recognition, object detection, semantic segmentation) thanks to large repositories of annotated image data. Large labeled datasets for other sensor modalities, e.g., multispectral imagery (MSI), are not available due to the large cost and manpower required. In this paper, we adapt state-of-the-art DCNN frameworks from computer vision to the semantic segmentation of MSI. To overcome label scarcity for MSI data, we initialize the DCNNs with generated synthetic MSI in place of real MSI. We evaluate our network initialization scheme on the new RIT-18 dataset that we present in this paper. This dataset contains very-high-resolution MSI collected by an unmanned aircraft system. The models initialized with synthetic imagery were less prone to over-fitting and provide a state-of-the-art baseline for future work.
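The initialization scheme can be illustrated with a deliberately tiny stand-in: pre-train a model on plentiful synthetic data, then fine-tune on a handful of "real" samples, rather than training on the scarce real data from scratch. Below, a linear (logistic-regression) classifier plays the role of the DCNN and Gaussian feature vectors stand in for MSI pixels; everything here is an illustrative sketch, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.0])  # ground-truth decision rule

def train_logreg(X, y, w=None, epochs=200, lr=0.1):
    """Binary logistic regression by gradient descent; passing `w`
    warm-starts training from previously learned weights."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

# Abundant "synthetic" data (stands in for rendered synthetic imagery)...
Xs = rng.normal(0.0, 1.0, (2000, 5))
ys = (Xs @ true_w > 0).astype(float)
# ...and scarce labeled "real" data drawn from the same task.
Xr = rng.normal(0.0, 1.0, (20, 5))
yr = (Xr @ true_w > 0).astype(float)

w_pre = train_logreg(Xs, ys)                        # pre-train on synthetic
w_ft = train_logreg(Xr, yr, w=w_pre, epochs=20)     # fine-tune on real
w_scratch = train_logreg(Xr, yr, epochs=20)         # baseline: real only
```

The warm-started model inherits a good decision boundary from the synthetic data, so the few real samples only need to nudge it, which is the intuition behind initializing a DCNN with synthetic imagery when real labels are scarce.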

Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery

Recent advances in computer vision using deep learning with RGB imagery (e.g., object recognition and detection) have been made possible thanks to the development of large annotated RGB image datasets. In contrast, multispectral image (MSI) and hyperspectral image (HSI) datasets contain far fewer labeled images, in part due to the wide variety of sensors used. These annotations are especially limited for semantic segmentation, or pixel-wise classification, of remote sensing imagery because it is labor intensive to generate image annotations. Low-shot learning algorithms can make effective inferences despite smaller amounts of annotated data. In this paper, we study low-shot learning using self-taught feature learning for semantic segmentation. We introduce 1) an improved self-taught feature learning framework for HSI and MSI data and 2) a semi-supervised classification algorithm. When these are combined, they achieve state-of-the-art performance on remote sensing datasets that have little annotated training data available. These low-shot learning frameworks will reduce the manual image annotation burden and improve semantic segmentation performance for remote sensing imagery.
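The self-taught idea, learning features from plentiful unlabeled pixels and then classifying with very few labels, can be sketched compactly. In this toy version, PCA stands in for the feature learner and a nearest-class-mean rule stands in for the classifier; the data, dimensions, and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_features(X_unlabeled, k):
    """Self-taught step: learn a k-dim linear feature basis from
    unlabeled pixels (PCA here, as a stand-in for a learned encoder)."""
    mean = X_unlabeled.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(X, mean, basis):
    """Project pixels into the learned feature space."""
    return (X - mean) @ basis.T

def ncm_fit(Z, y):
    """Low-shot classifier: one mean per class in feature space."""
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def ncm_predict(Z, means):
    """Assign each feature vector to the nearest class mean."""
    classes = sorted(means)
    d = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Two "spectral" classes: thousands of unlabeled pixels, 5 labels each.
mu1 = np.zeros(30)
mu1[:3] = 3.0
X_unlab = np.vstack([rng.normal(0.0, 1.0, (2500, 30)),
                     mu1 + rng.normal(0.0, 1.0, (2500, 30))])
mean, basis = learn_features(X_unlab, k=5)

X_lab = np.vstack([rng.normal(0.0, 1.0, (5, 30)),
                   mu1 + rng.normal(0.0, 1.0, (5, 30))])
y_lab = np.array([0] * 5 + [1] * 5)
means = ncm_fit(encode(X_lab, mean, basis), y_lab)
```

Because the feature basis is learned from the unlabeled pool, the ten labeled pixels are enough to place usable class means, which is the low-shot premise in miniature.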


Refereed Publications

  1. Kemker, R., Luu, R., and Kanan, C. (2018) Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery. To appear in the IEEE Transactions on Geoscience and Remote Sensing (TGRS).
  2. Kemker, R., Salvaggio, C., and Kanan, C. (2018) Algorithms for Semantic Segmentation of Multispectral Remote Sensing Imagery using Deep Learning. To appear in the ISPRS Journal of Photogrammetry and Remote Sensing - "Deep Learning for Remotely Sensed Data". doi:10.1016/j.isprsjprs.2018.04.014.
  3. Kemker, R. and Kanan, C. (2018) FearNet: Brain-Inspired Model for Incremental Learning. In International Conference on Learning Representations (ICLR), 2018.
  4. Kemker, R., McClure, M., Abitino, A., Hayes, T., and Kanan, C. (2018) Measuring Catastrophic Forgetting in Neural Networks. In AAAI 2018.
  5. Kemker, R. and Kanan, C. (2017) Self-Taught Feature Learning for Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing (TGRS), 55(5): 2693-2705. doi:10.1109/TGRS.2017.2651639.


Under Review

  1. Parisi, G., Kemker, R., Part, J., Kanan, C., and Wermter, S. Continual Lifelong Learning with Neural Networks: A Review. In review at Neural Networks.
  2. Kemker, R., Gewali, U., and Kanan, C. EarthMapper: A Tool Box for the Semantic Segmentation of Remote Sensing Imagery. In review at the IEEE Geoscience and Remote Sensing Letters (GRSL).