Multimodal Breast Imaging
The thrust of this project, conducted in collaboration with Dr. Andrzej Krol of SUNY Upstate, is Multimodality Image Fusion and Visualization of breast tissue. This is a rapidly evolving field, driven by continual improvements in medical imaging and computational systems. The project develops image processing techniques that take information collected from imaging modalities that do not share the same piece of equipment and present it to the radiologist in a single, cohesive view.

Background:
Breast cancer is the most common malignant disease in women and the second leading cause of cancer death among American women today [1]. The primary tool for detection and diagnosis of breast cancer is x-ray mammography, but it is hoped that the additional information provided by Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) will provide a means to determine whether a suspected lesion seen in mammography is malignant. Such a procedure could prevent a large number of retrospectively unnecessary breast biopsies; biopsy, the surgical procedure presently used to evaluate suspected breast lesions, can result in pain, bruising, and scarring.

Fig. 1 Positron Emission Tomography image of the breast.

PET (Figure 1) provides information on the metabolic activity of the tissue. Malignant lesions, which are generally more active than the surrounding tissue, are often distinguishable in these images by their higher intensity. PET, however, lacks the structural information that can be invaluable when identifying the location or context of a lesion, such as when planning biopsies or radiation treatment. This structural information can be provided by MRI (Figure 2).

Fig. 2 Magnetic Resonance image of the breast.

To maximize the benefit of acquiring both PET and MRI, the images need to be registered (brought into spatial alignment) and fused (combined into a single image for inspection by the radiologist). Registration of PET and MRI breast images is difficult because the breast is composed of soft, highly deformable tissue without salient internal structures.

Registration:
Registration of images acquired from different scanners is accomplished using a finite element method (FEM) technique supported by fiducial skin markers (FSMs). The FSMs, taped to predetermined locations on the skin of the breast prior to data acquisition, are visible in both PET and MRI and thus provide information common to both modalities. Markers are recognized automatically by an algorithm based on a MACH (maximum average correlation height) filter, a class of composite correlation filters that can provide shift and rotation invariance: if the input image is translated by some amount, the correlation output shifts by the same amount, so the location of the correlation peak gives the marker position. Correlation filters can also be designed for noise tolerance and discrimination, among other properties. The algorithm detects the locations of the FSMs in both the PET and MRI image stacks. The process is shown schematically in Figure 3.

Fig. 3 Block diagram representing the implementation of the MACH correlation filter.
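The shift-estimation step can be illustrated with a plain matched filter, a minimal, hypothetical stand-in for the trained MACH composite filter (the two share the shift-invariance property exploited here; NumPy assumed, synthetic data):

```python
import numpy as np

def correlate_peak(image, template):
    """Locate a fiducial-marker template in an image via FFT cross-correlation.

    A plain matched filter stands in for the MACH composite filter: both are
    shift invariant, i.e. translating the input translates the correlation
    peak by the same amount, so the peak location marks the marker.
    """
    # Zero-mean the template so flat background regions do not respond.
    t = template - template.mean()
    # Cross-correlation via the frequency domain (circular boundary).
    corr = np.fft.ifft2(
        np.fft.fft2(image) * np.conj(np.fft.fft2(t, s=image.shape))
    ).real
    # The peak location gives the marker's top-left offset in the image.
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic demo: a bright Gaussian "marker" placed at a known offset.
y, x = np.mgrid[0:9, 0:9]
marker = np.exp(-((y - 4) ** 2 + (x - 4) ** 2) / 4.0)
scene = np.zeros((64, 64))
scene[20:29, 33:42] += marker          # marker top-left corner at (20, 33)
scene += 0.05 * np.random.default_rng(0).standard_normal(scene.shape)

print(correlate_peak(scene, marker))   # -> (20, 33)
```

A real MACH filter is synthesized from a set of training images of the marker, which is what buys the noise tolerance and discrimination mentioned above; the correlation-and-peak-detection machinery is the same.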

A finite element model of the breast is constructed from the high-resolution MRI image (Figure 4). Identical prone patient positioning is used during both acquisitions to ensure similar stress conditions in the PET and MRI images. The displacements between corresponding FSMs are used to calculate a displacement for each node in the meshed breast: the observed FSM displacement vectors are first distributed linearly over the breast surface and then distributed throughout the volume. This process has been implemented with the ANSYS heat transfer module, exploiting an analogy between displacement and temperature in steady-state heat transfer in solids: the observed FSM displacements act as loads, and solving the steady-state problem yields a dense displacement field.

Fig. 4 Finite element model of the breast constructed from the high resolution MRI image.
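The heat-transfer analogy can be sketched in miniature: treating one displacement component as "temperature" and the FSM observations as fixed boundary values, a Jacobi relaxation of the steady-state heat (Laplace) equation spreads the sparse measurements over the grid. This is a 2D toy with made-up marker positions, not the ANSYS FEM pipeline:

```python
import numpy as np

def diffuse_displacements(shape, markers, n_iter=2000):
    """Spread sparse marker displacements over a grid by solving the
    steady-state heat equation, mirroring the heat-transfer analogy: one
    displacement component plays the role of temperature, and the observed
    FSM displacements act as fixed-value loads.

    `markers` maps (row, col) grid positions to an observed displacement for
    ONE component; in practice this is run once per x/y/z component on the
    breast mesh rather than a regular grid.
    """
    field = np.zeros(shape)
    fixed = np.zeros(shape, dtype=bool)
    for (r, c), d in markers.items():
        field[r, c], fixed[r, c] = d, True
    for _ in range(n_iter):
        # Jacobi update: each free node becomes the average of its four
        # neighbours (np.roll gives periodic boundaries, fine for a sketch).
        avg = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                      + np.roll(field, 1, 1) + np.roll(field, -1, 1))
        field = np.where(fixed, field, avg)
    return field

# Two markers with displacements 0 and 2: the interior interpolates smoothly
# between them, and every value stays within the observed range.
field = diffuse_displacements((16, 16), {(4, 4): 0.0, (12, 12): 2.0})
```

The maximum principle of the Laplace equation guarantees that interpolated displacements never overshoot the observed marker displacements, which is one reason the analogy is attractive.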

If Computed Tomography (CT) data is available and coregistered with the PET image, such as from a PET/CT scanner, an additional refinement process can be performed. A large number of corresponding surface points can be identified in the MRI and CT images. The displacements at these points can be used to deform the mesh a second time, reducing the small registration errors that still exist in regions away from the FSMs.
Using the resulting displacement field, the MRI image can be warped onto the PET image (Figure 5). After FEM registration the region of activity in the PET image clearly correlates better with the glandular tissue region visible in the MRI image than it does after rigid registration alone.
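The warping step amounts to a backward-mapping resample of the MRI image through the displacement field. A hypothetical 2D version with bilinear interpolation is sketched below; the real pipeline warps the 3D MRI volume onto the PET grid:

```python
import numpy as np

def warp_image(image, disp_r, disp_c):
    """Warp a 2D image with a dense displacement field (backward mapping):
    each output pixel (r, c) samples the input at (r + disp_r[r, c],
    c + disp_c[r, c]) with bilinear interpolation, clamping at the borders.
    This is the resampling step applied once the FEM displacement field is
    known (3D and vectorized in practice).
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for r in range(h):
        for c in range(w):
            # Sample position in the input image, clamped to stay in bounds.
            sr = min(max(r + disp_r[r, c], 0.0), h - 1.0)
            sc = min(max(c + disp_c[r, c], 0.0), w - 1.0)
            r0, c0 = int(sr), int(sc)
            r1, c1 = min(r0 + 1, h - 1), min(c0 + 1, w - 1)
            fr, fc = sr - r0, sc - c0
            # Bilinear blend of the four surrounding pixels.
            top = (1 - fc) * image[r0, c0] + fc * image[r0, c1]
            bot = (1 - fc) * image[r1, c0] + fc * image[r1, c1]
            out[r, c] = (1 - fr) * top + fr * bot
    return out

# A uniform displacement of one pixel downward samples one row lower,
# i.e. the content appears shifted up by one row (with edge clamping).
img = np.arange(25, dtype=float).reshape(5, 5)
out = warp_image(img, np.ones((5, 5)), np.zeros((5, 5)))
```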

Fig. 5 Registered PET/MRI images. After rigid registration (left column), after FEM registration (right column). PET image is shown in yellow overlaid on grayscale MRI image. Coronal view is shown in top row and axial in bottom.

Visualization:
Little research has been devoted to finding optimal viewing techniques for multimodal medical images, in part because multimodal data sets were relatively rare until the recent clinical availability of PET/CT scanners. However, as imaging technology evolves and further advantages are discovered, the importance of fusion techniques becomes apparent. An application was developed for investigating fusion techniques and viewing multi-image data sets. The viewer provides both traditional and novel tools to fuse 3D inter- and intra-modal data sets. Fused projection displays (e.g., maximum intensity projection) are also supported. A plug-in interface exists for rapid implementation of new fusion techniques. This viewer provides a framework supporting future multimodal image visualization efforts. A snapshot is shown in Figure 6.

Fig. 6 A screenshot of a fused data set. The three orthogonal images on the left are from the MRI data set, and the three orthogonal images on the right are from the PET data set after the application of a red color table. The three orthogonal images in the center are from fusing the MRI and PET images using the weighted average fusion plug-in. In essence the MRI image gets colored based on the PET intensity values. Not only can we see the anatomical structure, but we can also see the metabolic activity for each structure.
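A weighted-average fusion of this kind can be sketched as follows. This is a simplified 2D stand-in for the viewer's plug-in, with made-up data and an elementary red color table; inputs are assumed scaled to [0, 1]:

```python
import numpy as np

def fuse_weighted(mri, pet, alpha=0.5):
    """Weighted-average fusion of co-registered slices: the grayscale MRI
    slice is blended with a red color table applied to the PET slice, so
    anatomy stays visible while metabolically active regions appear red.
    Inputs are 2D arrays in [0, 1]; the output is an RGB image.
    """
    rgb = np.repeat(mri[..., None], 3, axis=2)   # grayscale MRI as RGB
    red = np.zeros_like(rgb)
    red[..., 0] = pet                            # red color table for PET
    return (1 - alpha) * rgb + alpha * red       # per-pixel weighted average

mri = np.random.default_rng(1).random((8, 8))
pet = np.zeros((8, 8)); pet[3:5, 3:5] = 1.0      # a single "hot" region
fused = fuse_weighted(mri, pet, alpha=0.4)
```

In a plug-in architecture like the viewer's, each fusion technique reduces to a function of this shape, taking co-registered slices and returning a displayable image, which is what makes new techniques quick to drop in.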

While a number of fusion techniques have been developed and investigated, one in particular, generated by a genetic algorithm, has shown promise (Figure 7). The algorithm searches for a fusion technique that satisfies specific properties; because those properties are satisfied, an observer can directly read information from each of the fused images back off the joint display. To test the validity of fusion-based visualization techniques, a number of techniques were selected for a study with four radiologists from the Department of Radiology at SUNY Upstate Medical University. This initial study clearly demonstrated the need for and benefit of a joint display, and showed the advantage of the genetic-algorithm-fused images over other fusion techniques currently available in the literature.
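The search loop itself can be illustrated with a toy genetic algorithm. The fitness function below, rewarding monotonicity and contrast of a short 1D lookup table, is an invented stand-in for the study's actual design criteria, and the operators are deliberately minimal:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(table):
    """Toy criterion: reward tables whose entries increase monotonically
    (so intensity ordering can be read back off the fused image) and span
    a wide range (contrast). A stand-in for the real perceptual criteria."""
    steps = np.diff(table)
    return steps.min() + (table[-1] - table[0])

def evolve(pop_size=40, length=8, n_gen=60):
    """Minimal genetic algorithm: truncation selection with elitism plus
    Gaussian mutation. The real work evolved full color tables; this sketch
    only illustrates the search loop."""
    # Start from sorted random tables so the search refines an ordering
    # rather than having to discover one from scratch.
    pop = np.sort(rng.random((pop_size, length)), axis=1)
    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = parents + 0.05 * rng.standard_normal(parents.shape)
        pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    return pop[np.argmax([fitness(p) for p in pop])]

table = evolve()
```

Because the best candidates are carried over unchanged each generation, the returned table is guaranteed to score at least as well as the best initial candidate.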

Fig. 7 Examples of color tables produced by the genetic algorithm are shown in (a) and (b). Joint PET/MRI images created using the new color tables are in (c) and (d). The MRI and PET images that were fused are shown in (e) and (f).

Image Synthesis:
Obtaining "ground-truth" data in medical imaging is nearly impossible when pathology reports are not available. One way to circumvent this limitation is to create digital synthetic phantoms with the appropriate physical properties and characteristics that can be imaged using digital simulators. Digital simulators can be used to study system design, acquisition protocols, and reconstruction techniques, and to evaluate image processing algorithms. Specifically in this work, simulated images can aid in the evaluation of the registration procedure and provide data for studies assessing the ability of radiologists to use specific visualization techniques. In addition to providing a precise ground truth, they can save significant time and money compared to finding volunteers and arranging and paying for scanner time. A breast phantom has been designed to support current and future projects on breast imaging. The phantom, when combined with appropriate physical properties, can be used to generate synthetic MRI and PET breast images. It contains ten different tissues: adipose tissue, areola, blood, bone (rib), ductal tissue, Cooper's ligament, lobule, muscle (pectoral), skin, and stromal connective tissue. Some elements of the phantom are shown in Figure 8.

Fig. 8 Mesh representing selected internal structures of the breast.