Research Assistant Professor
Munsell Color Science Laboratory
Center for Imaging Science
Rochester Institute of Technology
As you might guess, I'm a research assistant professor here at the Munsell Color Science Lab at RIT. Prior to that I was a color scientist, and prior to that I was but a simple graduate student. How time flies. Prior to my current incarnation of perpetual academia, I spent my formidable/formative years growing up on Cape Cod, spearfishing tuna and cooking pizza. Here are my assorted credentials, so to speak:
Ph.D. Imaging Science, Rochester Institute of Technology, 2003
M.S. Color Science, Rochester Institute of Technology, 1998
B.S. Imaging Science, Rochester Institute of Technology, 1996
Much of my time these days is spent on research concerning image appearance modeling. You can think of that as the next logical step beyond color appearance models. We are combining traditional color appearance modeling with spatial vision models to create a model capable of predicting the perception of complex image stimuli. The ultimate goal is a single model that can predict the appearance of images, as well as image quality and image differences.
iCAM is a framework we have created for just this type of image appearance modeling. For more information please go to the iCAM page, or to Mark Fairchild's Page.

iCAM was inspired, in part, by some of my Ph.D. research on measuring image quality and image differences. Why do we need separate models of image differences when we have plenty of color difference equations to choose from? Good question. Equations such as the nefarious CIEDE2000 are designed to predict the color difference of simple color patches on uniform backgrounds. They do not take into account the complex spatial interactions that can be present in images.
A good example of this is shown in the banana example above. The original image is shown on the top, while the bottom shows two "reproductions." The image on the bottom left has additive white noise, while the image on the bottom right has a green banana. A per-pixel color difference calculation will suggest that the image on the left has the larger perceived difference, when in fact its difference is nearly imperceptible. A spatial color image difference metric will correctly predict that the image on the right has the larger perceived difference.
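To make that contrast concrete, here is a minimal Python sketch (my own illustration, not the IDL/Matlab code mentioned below) comparing a plain per-pixel CIELAB Euclidean difference with a version that blurs the images before differencing. The Gaussian blur is a crude stand-in for the contrast-sensitivity filtering a real spatial metric such as S-CIELAB performs, and the images are assumed to already be in CIELAB coordinates.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def per_pixel_diff(lab1, lab2):
    """Mean per-pixel Euclidean (Delta E*ab-style) color difference."""
    return np.mean(np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1)))

def spatial_diff(lab1, lab2, sigma=3.0):
    """Blur each channel first (a crude stand-in for CSF filtering),
    then take the per-pixel difference of the filtered images."""
    blur = lambda im: np.stack(
        [gaussian_filter(im[..., c], sigma) for c in range(3)], axis=-1)
    return per_pixel_diff(blur(lab1), blur(lab2))

# Uniform gray "original" vs. a copy with high-frequency white noise:
rng = np.random.default_rng(0)
orig = np.full((64, 64, 3), 50.0)
noisy = orig + rng.normal(0.0, 5.0, orig.shape)

# The per-pixel metric reports a large difference; the spatially
# filtered metric, like the visual system, largely averages it away.
print(per_pixel_diff(orig, noisy), spatial_diff(orig, noisy))
```

Zero-mean noise mostly cancels under the blur, so the spatial metric's prediction drops toward zero, matching the percept in the noisy-banana example, while a uniform hue shift (the green banana) would survive the filtering and be flagged as a large difference.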
I'll be creating a page in the near future with ample IDL and Matlab code for calculating image differences. In the meantime, if you would like more information, feel free to download my Ph.D. dissertation.

Research on high-dynamic-range rendering stems from our work on predicting the appearance of complex image stimuli. High dynamic range implies that the images have either a ridiculous range of light (think bright sunshine and dark shadows), or many, many bits of information, or both. Since the world itself has a pretty large dynamic range, we work on displaying that world on imaging devices that have a much smaller dynamic range.
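As a toy illustration of what "squeezing the world onto a display" means, the sketch below applies a simple Reinhard-style global tone-mapping operator. This is a textbook operator chosen for brevity, not iCAM's rendering algorithm: scene luminance is scaled to a target key and then compressed with L/(1+L), which maps any positive luminance into the displayable [0, 1) range.

```python
import numpy as np

def tonemap_global(rgb, key=0.18, eps=1e-6):
    """Simple Reinhard-style global tone mapping for linear HDR RGB."""
    # Relative luminance (Rec. 709 weights).
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Log-average luminance characterizes the overall scene brightness.
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    # Scale the scene to the chosen key, then compress: [0, inf) -> [0, 1).
    scaled = key * lum / log_avg
    compressed = scaled / (1.0 + scaled)
    # Apply the luminance ratio to all three channels, preserving hue.
    ratio = compressed / (lum + eps)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

# A synthetic HDR image spanning roughly six orders of magnitude:
rng = np.random.default_rng(1)
hdr = rng.uniform(1e-3, 1e3, (32, 32, 3))
ldr = tonemap_global(hdr)
```

Global operators like this apply the same curve everywhere; the spatially complex models that TC8-08 is testing instead adapt the compression locally, which is what lets them hold detail in both the bright sunshine and the dark shadows at once.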
I am currently the chair of CIE TC8-08, which is working on testing spatially complex color appearance models. We are devising ways to test both the preference and the accuracy of many existing HDR rendering/tone-mapping algorithms, including iCAM. To facilitate this testing, we have made available a wide range of HDR images. Please see the iCAM HDR page for much more information and source code for HDR rendering.
Contact information:
office: 18-1069
email: garrett@cis.rit.edu
phone: 585-475-4923
fax: 585-475-4444
real mail: 54 Lomb Memorial Drive, Rochester NY 14623