NORTHEASTERN UNIVERSITY
Graduate School of Engineering
Thesis Title: Interactive Deformable Registration Visualization And
Analysis Of 4D Computed Tomography
Author: Burak Erem
Department: Electrical and Computer Engineering
Approved for Thesis Requirements of the Master of Science Degree
Thesis Adviser: Professor David Kaeli Date
Thesis Reader: Professor Dana Brooks Date
Thesis Reader: Gregory C. Sharp Date
Department Chair: Professor Ali Abur Date
Graduate School Notified of Acceptance:
Director of the Graduate School: Yaman Yener Date
INTERACTIVE DEFORMABLE REGISTRATION
VISUALIZATION AND ANALYSIS OF 4D COMPUTED
TOMOGRAPHY
A Thesis Presented
by
Burak Erem
to
The Department of Electrical and Computer Engineering
in partial fulfillment of the requirements
for the degree of
Master of Science
in
Electrical Engineering
in the field of
Computer Engineering
Northeastern University
Boston, Massachusetts
July 2008
© Copyright 2008 by Burak Erem
All Rights Reserved
Abstract
Radiation therapy is a method for treating patients with various types of cancerous
tumors. A major challenge in radiation treatment planning is to treat tumors while
avoiding irradiating healthy tissue and organs. The problem is that some tumors in
the body are in areas where motion occurs (e.g., due to respiration or other normal
functions). Radiation treatment plans must estimate the position of the moving
organ, since its position cannot be observed directly. Even given 2-D and 3-D
X-Ray images of the patient, it can be very difficult to understand the complex motion
of a tumor. This thesis presents one interactive method for analyzing 4-D X-Ray
Computed Tomography (4DCT) images for patient care and research. Here, 4-D
refers to 3-D volumes plus time (the fourth dimension). Our 4DCT visualization tools
have been developed using the SCIRun Problem Solving Environment.
Deformable registration is one way to observe the motion of anatomy in images
from one respiratory phase to another. Our system provides users with the capability
to visualize these trajectories while simultaneously viewing rendered anatomical vol-
umes, which can greatly improve the accuracy of deformable registration as a means
of analysis.
Acknowledgements
For my mother and father, Halise and Mehmet, forever my best friends. For the
unconditional love and support they have given me in the face of every imaginable
obstacle throughout the years. I can’t thank them enough for believing in me as
no one else could. Thank you, again and again.
Many thanks to my advisor, Dr. David Kaeli, as well as my mentors and collab-
orators at Massachusetts General Hospital (MGH): Drs. Gregory C. Sharp, George
T.Y. Chen, and Ziji Wu. Also thanks to Dr. Dana Brooks for his help with SCIRun
and his contact with collaborators at the University of Utah.
This work was supported in part by Gordon-CenSSIS, the Bernard M. Gordon
Center for Subsurface Sensing and Imaging Systems, under the Engineering Research
Centers Program of the National Science Foundation (Award Number EEC-9986821).
This work was made possible in part by software from the NIH/NCRR Center for
Integrative Biomedical Computing, P41-RR12553-07.
Contents
Abstract iv
Acknowledgements v
1 Introduction 1
1.1 Contributions of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Organization of Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Background 5
2.1 4D X-Ray Computed Tomography . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Image Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Image Reconstruction . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.3 Volume Visualization . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.4 Radiotherapy Treatment Planning . . . . . . . . . . . . . . . . 12
2.2 Deformable Registration . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 SCIRun Problem Solving Environment . . . . . . . . . . . . . . . . . 19
2.3.1 Development . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Volume Rendering . . . . . . . . . . . . . . . . . . . . . . . . 24
3 View Trajectory Loop Tool 29
3.1 Motivation for the View Trajectory Loop Tool . . . . . . . . . . . . . 29
3.2 Development of a Trajectory Viewing Cursor . . . . . . . . . . . . . . 30
3.2.1 Description of Visual Elements . . . . . . . . . . . . . . . . . 33
3.2.2 User Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4 Edit Point Path Tool 37
4.1 Motivation for the Edit Point Path Tool . . . . . . . . . . . . . . . . 37
4.2 Development of a Trajectory Editor . . . . . . . . . . . . . . . . . . . 38
4.3 Materials and Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3.1 Description of Visual Elements . . . . . . . . . . . . . . . . . 39
4.3.2 User Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Related Work 44
5.1 3D/4D Medical Visualization . . . . . . . . . . . . . . . . . . . . . . 44
5.1.1 SCIRun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1.2 Fovia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1.3 OsiriX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.4 3D Slicer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 Motion Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Fluid Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2.2 Anatomical Motion . . . . . . . . . . . . . . . . . . . . . . . . 51
6 Contributions and Future Work 55
6.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Bibliography 105
List of Figures
2.1 An illustration of how planar X-ray imaging works [52]. . . . . . . . . 6
2.2 The basic orientation of the patient to the scanner in X-ray Computed
Tomography (CT) and an example CT slice of a patient’s head [52]. . 6
2.3 Several generations of CT scanner designs that serve to illustrate the
concept of rotating the X-ray source and detectors around the object
[52]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 An example of a visualization of a single respiratory phase of a 4DCT
visualization showing lung, bone, and skin. . . . . . . . . . . . . . . . 11
2.5 Example of four beams administered in the Anterior, Posterior, Right,
and Left directions, forming the shape of a box (source: a7www.igd.fhg.de) 15
2.6 A simplified direct volume rendering SCIRun dataflow network with
added modules, the focus of this research, at the bottom. . . . . . . . 21
2.7 The 15 possible surface combinations for the contents of each cube in
the Marching Cubes algorithm [40]. . . . . . . . . . . . . . . . . . . . 25
2.8 An example of adjacent cubes, each containing explicit surfaces, com-
bining to form a volume [13]. . . . . . . . . . . . . . . . . . . . . . . . 26
2.9 An example of a direct volume rendering of bone and muscle tissue, two
different ranges of isovalues that were combined with gradient magni-
tude for the look-up table . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1 (a) Visualization of bone and lung tissue. Although it is possible to
analyze trajectories within this type of visualization, or (b) one showing
a cropped version of the branches of the lungs, we provide examples of
each tool showing only bony anatomy for visual clarity. . . . . . . . . 34
3.2 Viewing several trajectories in the lung while visualizing surrounding
bony anatomy (right) and while zoomed in (left). Trajectories are
represented as line loops that make a smooth transition from blue to
red in even increments across each of the respiratory phases. . . . . . 35
4.1 (a) The editing tool shown alone with the current phase highlighted
in green and (b) the same editing tool shown with the trajectory loop
superimposed to demonstrate the relationship between the two tools.
The point highlighted in green is edited while the others remain fixed
to prevent accidental changes. . . . . . . . . . . . . . . . . . . . . . . 41
4.2 A zoomed out (left) and more detailed (right) perspective of editing a
point path while observing changes using the trajectory loop tool. . . 42
Chapter 1
Introduction
Radiation therapy is a method of treating patients with various types of cancerous
tumors. The goal of the treatment, as discussed in this thesis, is to kill cancerous
cells by exposing them to radiation. However, when exposed to enough radiation,
this treatment method will kill healthy tissue as well – a loss that proper treatment
planning attempts to minimize. The case of tumors that are located very close to
vital organs serves to illustrate the importance of minimizing the radiation exposure
of healthy tissue. Despite successfully removing the cancerous cells from the area, the
treatment may inflict irreparable damage to those organs and put the patient at even
greater risk. Thus the goal is to target cancerous cells, but always at a minimal cost of
healthy tissue to the patient. This becomes more of a concern for a physician planning
a patient’s treatment when the tumor moves significantly due to cardiac activity or
respiration, and can often lead to lower treatment success rates. Furthermore, imaging
methods often used for this type of treatment planning, such as 4D X-Ray Computed
Tomography (4DCT), are imperfect in their ability to capture all of the information
about internal anatomical motion. For this reason, much research in this area focuses
on minimizing exposure of healthy tissue to radiation, maximizing the coverage of
the intended target, and also improving the usefulness of 4DCT imaging for analysis.
With a better understanding of internal anatomical motion, physicians can improve
the accuracy and efficiency of the treatment of their patients.
One attempt at characterizing such motion is by using a deformable registration
algorithm on 4DCT data to map, voxel by voxel, movement from one respiratory phase
to another. This spline-based model of voxel trajectories can have undesirable
results if the algorithm’s parameters are not appropriately set. Furthermore, it can
be difficult to determine what the proper parameters should be without some visual
feedback and experimentation.
This thesis discusses several new ideas for medical visualization that can help ad-
dress some of these issues. For the evaluation of the validity of visualizations, we
present an interactive measurement tool. For the visualization of anatomical motion
in 4DCT image sets, we present the ability to display point trajectories. Specifi-
cally, we have developed a toolset that can simultaneously visualize vector fields and
anatomy, provides interactive browsing of point trajectories, and allows for the im-
proved identification of current trajectory position using node color. We also describe
some additional interactive capabilities of our work, such as editing of deformation
fields which can enable automatic and interactive registration.
We present the major contributions of this work in the next section and then
describe the organization of the remainder of the thesis.
1.1 Contributions of Thesis
The main contributions of this thesis are summarized as follows: we implemented, in
the C++ programming language for the SCIRun [1] Problem Solving Environment¹,
several visualization tools that perform the following tasks:
• Trajectory Viewing Tool
– Display trajectories as line loops with transitioning colors
– Visualize vector fields for interactively chosen voxels with respect to simul-
taneously visualized anatomy
• Edit Point Path Tool
– Display trajectories as sequences of points
– Highlight which respiratory phase of the reference anatomy is being vi-
sualized by changing the node color of the aforementioned vector field
visualization
– Edit visualized vector fields to make changes to the deformation fields used
to produce them
1.2 Organization of Thesis
The central focus of this thesis is on applying multiple interactive visualization tech-
niques simultaneously to a single patient’s medical data in order to facilitate more
efficient analysis of anatomical motion that is relevant to radiotherapy treatment
planning. The remainder of the thesis is organized as follows: Chapter 2 presents
¹All implementations in this thesis were done within the SCIRun Problem Solving Environment.
background information about 4D X-Ray Computed Tomography, Deformable Regis-
tration, and the SCIRun Problem Solving Environment. In Chapter 3 we present the
View Trajectory Loop Tool, explaining the design and implementation of an interac-
tive cursor that displays the results of deformable registration relative to anatomy.
In Chapter 4 we present the Edit Point Path Tool similarly, highlighting its ability to
make changes to trajectories interactively. We use Chapter 5 to discuss the related
work to this thesis, past and present. Finally, in Chapter 6 we summarize our con-
tributions and present directions for future work. The Appendix holds source code
relevant to the tools presented in Chapters 3 and 4.
Chapter 2
Background
2.1 4D X-Ray Computed Tomography
2.1.1 Image Acquisition
X-ray imaging is a transmission-based technique in which X-rays from a source pass
through the patient and are detected on the other side. In planar X-ray imaging, as
shown in Figure 2.1, a simple two-dimensional projection of the tissues lying between
the X-ray source and the detecting medium produces the image. In planar X-ray
images, overlapping layers of soft tissue or complex bone structures can often be
difficult to interpret, even for a skilled radiologist. In these cases, X-ray computed
tomography (CT) is used [52].
In CT, the X-ray source and detectors rotate together around the patient, as shown
in Figure 2.2, producing a series of one-dimensional projections at a number of differ-
ent angles [52]. When rotated around a fixed axis, within a fixed plane as illustrated
Figure 2.1: An illustration of how planar X-ray imaging works [52].
Figure 2.2: The basic orientation of the patient to the scanner in X-ray Computed Tomography (CT) and an example CT slice of a patient’s head [52].
Figure 2.3: Several generations of CT scanner designs that serve to illustrate the concept of rotating the X-ray source and detectors around the object [52].
in Figure 2.3, these one-dimensional projections are reconstructed to form a two-
dimensional image that is a cross section, or slice, of the imaged patient in that
plane. In some methods of acquisition, several slices can be acquired at the same
time (multislice CT), but in general the acquisition of these slices leads to three-
dimensional image volumes that are composed of stacks of two-dimensional slices.
However, since this method of imaging is based on several projections that are recon-
structed later to form an image, its accuracy is dependent on the absence of patient
organ motion during this image acquisition step. In order to take potential motion
into account, patients are imaged with four-dimensional X-ray computed tomography
(4DCT) instead.
The dimensions of 4DCT are the three spatial dimensions that are also a part
of CT, represented relative to the fourth, temporal dimension. The 4D images are
typically acquired as 1D projections and reconstructed into a series of 3D volumes that
each represent a stage of respiratory movement. Other sources of motion, such as
the stages of cardiac motion, could be considered in addition, but this method of
imaging is too slow for them and represents respiration more reliably. This movement is
accounted for by way of acquiring an external signal that measures respiration in
some way, and then using the assumption that respiration is more or less periodic to
perform the desired reconstruction. This subject will be discussed in more detail in the
following image reconstruction subsection, but the images are typically reconstructed
according to respiration because this is considered to be the most prominent source
of movement for which random noise cannot form a good approximation.
2.1.2 Image Reconstruction
The reconstruction of 4DCT image sets after image acquisition is typically dependent
on having simultaneously acquired a signal that is thought to accurately represent the
stages of respiration of the patient. Plainly stated, in order to put the independently
acquired pieces of the puzzle back together, some assumptions about the dependence
of some of those pieces need to be made. While there are several approaches to
reconstructing an image set from these pieces in conjunction with a respiratory signal,
the respiratory signal itself is generally acquired using an external marker on the
surface of the patient’s skin, preferably near the diaphragm, which is tracked for
motion.
It should be noted that this respiratory tracking records a one-dimensional signal
representing the rise and fall of the skin’s surface at that point, and it is expected to
characterize the varying internal anatomical motion of the patient. Although there
are infinitely many possible variations of the motion associated with respiration, this
represents it with a discrete, undersampled (for example, the data presented in this
work has ten phases), and periodic signal whose samples correspond to averaged
generalizations of the stages of that motion. While this is a somewhat unfairly critical
view of this process, given the physical constraints of the situation, it is important
to describe the situation accurately in order to capture the enormous difficulty of
analyzing motion under these conditions.
Nonetheless, with this signal, images are arranged according to the physical location
and the stage of respiratory motion at which they were acquired. Typically, a major
assumption involved with this step is that the process of respiration is a periodically
occurring sequence that can be divided into well-defined bins. In succession, the
images sorted into these bins form a piecewise representation of the image as it would
look over one full sequence of respiratory phases. One way to do this is to separate
the respiratory signal into bins according to its amplitude. So, if it was decided
that there should be ten bins, every period of respiration would be broken into ten
possible amplitude ranges and images would be sorted into these bins according to
the amplitude of the respiratory signal at the time the image was recorded.
An alternative approach to separating the respiratory signal into bins is by phase.
Once again viewing the signal as periodic, bins are defined by dividing each cycle of the
respiratory signal evenly in time. While other methods of reconstruction certainly
exist, the concept of putting the puzzle together using assumptions made about a
one-dimensional signal is typically prevalent among them and serves to illustrate the
reason why robust methods of analyzing these results are so essential [33].
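As a concrete sketch of the phase-based binning just described, the following C++ fragment (function and variable names are hypothetical, not taken from any clinical reconstruction software) assigns an acquisition time to one of ten evenly divided bins within a single respiratory cycle. A real 4DCT reconstruction must first detect cycle boundaries from the external respiratory signal; this sketch assumes they are already known.

```cpp
#include <cassert>
#include <cmath>

// Assign an acquisition time t to one of nBins phase bins by dividing the
// respiratory cycle [cycleStart, cycleEnd) evenly in time.
int phaseBin(double t, double cycleStart, double cycleEnd, int nBins) {
    double phase = (t - cycleStart) / (cycleEnd - cycleStart);  // in [0, 1)
    int bin = static_cast<int>(std::floor(phase * nBins));
    if (bin < 0) bin = 0;                  // clamp against signal noise at
    if (bin >= nBins) bin = nBins - 1;     // the edges of the cycle
    return bin;
}
```

For example, an image acquired 37% of the way through a cycle would be sorted into bin 3 of 10. Amplitude-based binning would instead index by the amplitude range of the respiratory signal at time t.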
2.1.3 Volume Visualization
While humans are capable of viewing three-dimensional structure in the real world,
the most common form of viewing media still tends to be two-dimensional. For
example, most computer monitors are two-dimensional viewing surfaces that represent
three-dimensional structures by projecting them onto those two-dimensional surfaces.
While this may seem obvious, this is an important consideration when faced with the
task of visualizing information of even higher dimensionality, such as 4DCT.
One way to look at this challenge is to compare it to that faced by motion photog-
raphy or film. In some sense, movies are a form of 3D imaging, as they represent two-
dimensional projections of the three-dimensional world, captured over time. When
viewing this information, it usually is sufficient for one to watch a sequence of those
two-dimensional images over time in the same order in which they were captured in
order for the necessary information to be conveyed. However, in medical imaging,
passing over the information two dimensions at a time in succession can be an insuffi-
cient method of conveying the proper information. Clearly representing the aspect of
the information the user needs to analyze or, in other words, finding the right method
by which the user wishes to traverse the information is one of the greatest challenges
and also one of the most important considerations of this type of visualization.
With respect to medical imaging, volume visualization is generally considered a
way of viewing the structure of the anatomy in 3D. Thus, as mentioned earlier,
the main goal of volume visualization is to represent higher dimensional data on
a two-dimensional computer screen for visual inspection. Unlike other kinds of in-
formation of similar dimensionality however, it is best for the user to decide which
two-dimensional perspective is desired for such inspection. In the case of the work
done in this thesis, we use visualizations of the same 4DCT datasets which we have
used for deformable registration calculations, providing a superimposed anatomical
frame of reference for analysis. An example visualization can be seen in Figure 2.4, a
Figure 2.4: An example of a visualization of a single respiratory phase of a 4DCT visualization showing lung, bone, and skin.
rendering of the bone and lung tissue that has been cropped to show a cross section.
While it is common to see 3D renderings of human anatomy in this field, it is
important to note that there are several methods of obtaining these visualizations
with important distinctions between them. We separate these into two categories:
1) direct volume rendering and 2) explicit volume rendering. With explicit volume
rendering, the boundaries of the structure which are being visualized are explicitly
defined, calculated, and then projected onto the 2D viewing plane of the user. On
the other hand, direct volume rendering only calculates the surfaces which will be
projected onto the 2D viewing plane, making it a faster alternative.
We chose to work with direct volume rendering in our analysis because of its in-
herent speed advantage. We note that there is no loss of information from the user’s
perspective with this method, especially from the standpoint of analyzing and editing
deformable registration parameters. It is because the renderings act as a reference for
visual comparison to the independent registration calculations that explicit surfaces
are not necessary.
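As a rough illustration of how a direct volume renderer decides what to display, the look-up table mentioned above can be sketched as a transfer function mapping a voxel intensity to a color and opacity. The window values below are hypothetical, not taken from the thesis’s SCIRun networks.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal look-up-table transfer function for direct volume rendering:
// map a CT intensity (isovalue) to an RGBA color. Voxels outside the
// window of interest are rendered fully transparent.
std::array<double, 4> transferFunction(double isovalue) {
    if (isovalue < 300.0 || isovalue > 2000.0) return {0.0, 0.0, 0.0, 0.0};
    // Inside the window: a bone-like off-white whose opacity ramps
    // linearly with intensity.
    double a = (isovalue - 300.0) / (2000.0 - 300.0);
    return {1.0, 1.0, 0.9, a};
}
```

A real transfer function, such as the one behind Figure 2.9, may also use gradient magnitude as a second look-up index so that tissue boundaries are emphasized over homogeneous interiors.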
2.1.4 Radiotherapy Treatment Planning
In this context, we refer to radiation therapy as a method of treating patients with
various types of cancerous tumors. The goal of the treatment is to kill cancerous
cells by exposing them to ionizing radiation. Ionizing radiation refers to high-energy
particles that cause atoms to lose an electron, or ionize. Traditionally, the view has
been that exposure to this type of radiation can be characterized in terms of its effect
on DNA and leads to a number of different possible outcomes for cells:
• DNA damage is detectable and repairable by the cells’ own internal mechanisms
• DNA damage is irreparable and cells go through apoptosis, thus killing the cells
• DNA mutation occurs, potentially causing cancer
More recently, however, alternative insights into the process of cell death after ex-
posure to ionizing radiation have been presented which question this first perspective
because cell-death pathways, in which direct relations between cell killing and DNA
damage diverge, have been reported. These pathways include membrane-dependent
signaling pathways and bystander responses (when cells respond not to direct radi-
ation exposure but to the irradiation of their neighboring cells). New insights into
mechanisms of these responses coupled with technological advances in targeting of
cells in experimental systems with microbeams have led to a reassessment of the
model of how cells are killed by ionizing radiation [34].
However, from the perspective of this work, when exposed to enough ionizing radi-
ation, it is clear that this treatment method will kill healthy tissue as well as
cancerous tissue. Minimizing such damage is an obvious goal; tumors that are located
very close to vital organs are a good example of why this is the case. While potentially
removing the cancerous cells from the area successfully, the treatment may inflict
irreparable damage on those organs and put the patient at equal or even greater risk.
To complicate things further, this becomes more of a concern for a physician planning
a patient’s treatment when the tumor moves significantly due to cardiac activity or
respiration, and can often lead to lower treatment success rates.
Certainly, this already poses a significant challenge for physicians from a clinical
standpoint, but it is worth noting that the current task of planning such treatment
is also a difficult process that inefficiently handles the high-dimensional data that
is available, compounding the overall difficulty. Specifically, treatment planning in
this field is done by experts for whom working with two-dimensional information has
become commonplace. In effect, this requires looking at four-dimensional information
by only working with a single, two-dimensional subset of the total image at one time.
One can draw the analogy that this is similar to viewing a movie over its entire
duration only one pixel at a time before going back to the beginning to view the next
pixel. The absurdity of this analogy should serve to illustrate the corresponding degree
of inefficiency of the manner in which treatments are planned only two dimensions at
a time.
Of course, it is only fair to note that a part of this inefficiency stems from the
fact that imaging methods often used for this type of treatment planning, such as
4D X-Ray Computed Tomography (4DCT), are imperfect in their ability to capture all
of the information about internal anatomical motion. As mentioned above, image
reconstruction is imperfect and thus there is rightly inherent distrust of additional
processing that may amplify existing noise or even introduce new noise.
Four-Field Box Technique
The Four-Field Box technique [8] is a radiation therapy method in which radiation is
administered in four directions: Anterior-Posterior, Posterior-Anterior, Right-Lateral,
and Left-Lateral. In the Anterior-Posterior direction, the beam goes from the anterior,
or front, of the patient toward the posterior, or back. The reverse is true for the
Posterior-Anterior direction; the beam goes from the back toward the front of the
patient. The Right-Lateral and Left-Lateral directions are also relative to the patient,
where the Right-Lateral beam goes from the patient’s own right side to the left side.
Similarly, the Left-Lateral beam goes from the patient’s own left side to the right side
[7].
The Four-Field Box technique gets its name from the intersection of the four beams,
which forms a box shape. An example is shown in Figure 2.5. Here, the letters A,
P, R, and L specify the Anterior, Posterior, Right, and Left sides of the patient (and
Figure 2.5: Example of four beams administered in the Anterior, Posterior, Right, and Left directions, forming the shape of a box (source: a7www.igd.fhg.de)
directions of the beams), respectively [7].
In addition to direction, each beam is administered with a specific energy, which
refers to its wavelength (or equivalently, its frequency). Shorter wavelengths are
associated with higher energy, which can penetrate deeper into the tissue. Typical energies
are between 6 MV and 18 MV. The amount of radiation that the linear accelerator
outputs is measured in monitor units (MU). One monitor unit corresponds to one
centiGray of radiation¹ [7].
Treatment planning for this type of technique is done such that physicians first
segment the individual slices of the CT image set to highlight the location of cancerous
cells, then plan the proper dosages according to the expected density of the tissue
in the way of each beam before reaching the tumor. Due to the number of beams
used in this technique, it is clear that even small errors in the planning physician’s
understanding of any potential motion involved can have severe consequences. We
will briefly characterize respiratory motion in the following section.
¹The gray (symbol: Gy) is the SI unit of absorbed dose. One gray is the absorption of one joule of radiation energy by one kilogram of matter.
Respiratory Motion
Motion of internal anatomy due to respiration is a significant challenge for radiation
therapy. Specifically, if a tumor is located in or near the lung, its motion is very
difficult to characterize. The general field of image-guided radiotherapy aims to tackle
this very difficult problem.
One method of handling motion is to turn the radiation on and off when the tumor
is expected to be correctly targeted by the beam. This has several problems associated
with it, because even if the patient breathes exactly the same way each time, it isn’t
necessarily true that the relevant internal anatomical motion will be identical for each
respiratory cycle [14].
A more ambitious method is to synchronize the movement of the beams to match
that of the target [14]. While this would be an ideal approach if the tumor could
be imaged appropriately in real time, even then there would be the need for better
models that could more accurately characterize the motion such that target-tracking
algorithms could perform correctly.
2.2 Deformable Registration
Given all of the challenges described above that come as a result of imaging anatomy
in motion, one proposed solution is to employ image analysis methods to allow for
better understanding of that motion. With a better understanding of this motion,
improved methods for image reconstruction and even treatment planning could be
conceived. One such vein of research in this area attempts to address this type of
analysis by using image registration.
Image registration is a process to determine a transformation that can relate the
position of features in one image with the position of the corresponding features
in another image. For example, the features that one would use to perform this
matching could be anything from simple but specific pixel values to edges detected
by more complicated processing. In this case, we wish to relate the features in one
time “instant” to the next, for example. Amongst our considerations, we note that
we do not wish to make too many assumptions about the contents of medical images.
We consider every imaged voxel, rather than only a subset that we assume corresponds
to a tumor, and thus we use more general models of deformation that are not specific
to this problem and can account for these kinds of features.
These considerations and design decisions each have various tradeoffs.
One such approach, spline-based free-form registration, is capable of modeling a
wide variety of deformations [21]. Also, by definition, it is constrained such that it
ensures a smooth deformation field. A deformation field is represented as a weighted
sum of spline basis functions, which have parameters that adjust such smoothness.
B-splines are one of the most widely used basis functions for this purpose.
B-spline Transformation Model
In the B-spline transformation model [36], the deformation vectors are computed
using B-spline interpolation from the deformation values of points located in a coarse
grid, which is usually referred to as the B-spline grid. The parameter space of the
B-spline deformation is composed of the set of all the deformations associated with
the nodes of the B-spline grid. A cubic B-spline in matrix form is:
\[
S_i(t) =
\begin{bmatrix} t^3 & t^2 & t & 1 \end{bmatrix}
\frac{1}{6}
\begin{bmatrix}
-1 & 3 & -3 & 1 \\
3 & -6 & 3 & 0 \\
-3 & 0 & 3 & 0 \\
1 & 4 & 1 & 0
\end{bmatrix}
\begin{bmatrix} p_{i-1} \\ p_i \\ p_{i+1} \\ p_{i+2} \end{bmatrix},
\quad t \in [0, 1] \tag{2.1}
\]
where p_j are the control points, and the parameter t determines the position along
the spline segment (the row vector of powers of t, from t^3 down to 1, multiplies the
basis matrix). As a result, one can follow the spline S_i(t) to the next time phase to
find where the model places a specific point with respect to the control points. Note
that B-splines have a finite
support region and thus changing the weight or contribution of each basis function
affects only a specific portion of the overall deformation. By increasing the resolution
of the B-spline grid, more complex and localized deformations can be modeled.
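To make Eq. (2.1) concrete, the following is a minimal Python sketch (illustrative only, not the thesis's SCIRun implementation) that evaluates one segment of a uniform cubic B-spline; the function name and array layout are our own.

```python
import numpy as np

# One-sixth of the uniform cubic B-spline basis matrix from Eq. (2.1).
B = (1.0 / 6.0) * np.array([
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  0.0,  3.0, 0.0],
    [ 1.0,  4.0,  1.0, 0.0],
])

def spline_segment(t, p):
    """Evaluate S_i(t) for one segment, given the four control points
    p = [p_{i-1}, p_i, p_{i+1}, p_{i+2}] (scalars or 3-vectors)."""
    assert 0.0 <= t <= 1.0
    monomials = np.array([t**3, t**2, t, 1.0])  # the row vector [t^3 t^2 t 1]
    return monomials @ B @ np.asarray(p, dtype=float)
```

Note that at t = 0 this reduces to (p_{i-1} + 4 p_i + p_{i+1}) / 6, and the basis functions sum to one for any t, so the curve stays within the convex hull of its control points.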
Landmark-based Splines
An alternative to the B-spline deformation model is landmark-based splines, typically
implemented using thin-plate splines [12] or other radial basis functions. In this
approach, a set of landmark correspondence matches is formed between points in
a pair of images. The displacements of the correspondences are used to define a
deformation map, which smoothly interpolates or approximates the point pairs. One
approach of particular interest is radial basis functions that have finite support, such
as the Wendland functions [25]. Because these functions only deform a small region
of the image, the deformations can be quickly computed and updated for interactive
applications. Given N control points, located at x_i and displaced by an amount λ_i,
the deformation ~ν at location x is given as:

\vec{\nu}(x) = \sum_{i=1}^{N} \lambda_i \, \phi(|x - x_i|) \qquad (2.2)
where φ is an appropriate Wendland function, such as:
\phi(r) = \begin{cases} \left(1 - \dfrac{r}{\sigma}\right)^2 & r \le \sigma \\ 0 & \text{otherwise.} \end{cases} \qquad (2.3)
In this method, the function φ serves as a weight whose contribution to the current
deformation ~ν depends on the distance from each control point. More specifically,
the variable σ controls the width of the adjustment, usually on the order of one to
two centimeters for human anatomy, and the weight used in the deformation
calculation is determined by the input r, defined as the Euclidean distance between
the current point x and the control point x_i. Another way to view the deformation
~ν is that it maps any point x in one time phase to a corresponding point in the time
phase associated with the control points in the calculation. Several of these Wendland
functions are used together to form a complete vector field, which defines the motion
of organs of the anatomy [21].
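The displacement field of Eqs. (2.2) and (2.3) can be sketched as follows; this is illustrative Python rather than the actual registration code, and the function names are hypothetical.

```python
import numpy as np

def wendland(r, sigma):
    """Finite-support radial weight from Eq. (2.3): (1 - r/sigma)^2 for
    r <= sigma, zero otherwise."""
    return (1.0 - r / sigma) ** 2 if r <= sigma else 0.0

def deformation(x, control_points, lambdas, sigma):
    """Eq. (2.2): the displacement at x is a weighted sum of the control-point
    displacements lambda_i, weighted by the distance from x to each x_i."""
    x = np.asarray(x, dtype=float)
    nu = np.zeros(3)
    for xi, lam in zip(control_points, lambdas):
        r = np.linalg.norm(x - np.asarray(xi, dtype=float))
        nu += wendland(r, sigma) * np.asarray(lam, dtype=float)
    return nu
```

Because φ vanishes outside radius σ, moving one control point only perturbs the field within that radius, which is what makes interactive updates cheap.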
2.3 SCIRun Problem Solving Environment
Developed by the Scientific Computing and Imaging (SCI) Institute at the University
of Utah, SCIRun is a problem solving environment designed to allow researchers the
freedom to build programs to perform various scientific computing tasks [1]. In our
particular application, a dataflow network of modules already existed that allowed
us to do direct volume rendering. The network is a simplified version of the SCIRun
PowerApp called BioImage [2]. Enhancements were made to that network to allow for
visualization of 4DCT datasets and point paths by cycling through them one phase at
a time. Building on the existing tools, we provided for more efficient and interactive
ways of analyzing tumor motion. As shown in Figure 2.6, the visual representation of
the dataflow network allows us to make a connection to the base system by dragging
a “pipe” from our module to the relevant module in the existing network.
The viewing window, the central module to which almost all dataflow eventually
leads, is especially useful for our application. This graphical viewport allows navi-
gation of the 3D environment in which we work by zooming, panning, and rotating.
Furthermore, the viewing window passes back event callbacks to certain classes of
objects that allow module developers to make interactive, draggable, clickable tools.
However, movement of such tools is limited to the viewing plane. Thus, by rotating
the viewing plane, one is able to change the directions of motion of the interactive
tools.
2.3.1 Development
Development for SCIRun is done by connecting the dataflow of independently-functioning
modules into fully-functioning programs. These programs are called “dataflow net-
works” and are created in a visual editing environment such that dataflow connections
can be done easily in a point-and-click manner. Special purpose dataflow networks,
called “PowerApps,” can also be made to perform collections of application-specific
tasks and accessed via a single user interface.
Figure 2.6: A simplified direct volume rendering SCIRun dataflow network with added modules, the focus of this research, at the bottom.
BioImage PowerApp
Specifically, the BioImage PowerApp has the goal of providing a unified source for all
built-in medical image visualization support that comes with SCIRun. While many
basic and advanced visualization features are supported, certain general classes of
analysis tools are lacking, and therefore the underlying modules and dataflow network
serve as a good starting point for development of such tools.
Modules
Modules are written in the C++ programming language and function as independent
entities so long as sufficient inputs and settings are provided. This is facilitated by
each new module inheriting the general C++ Module class, provided as part of the
SCIRun headers, and thus having the same familiar interfaces by which SCIRun knows
to handle its operation. The most important of these is the “execute” function which
is analogous to the “main” function in any C or C++ program. Additional generic
hooks for user interface connections exist as well, although these are not necessarily
mandatory.
SCIRun is made aware of each module’s capabilities by the parameters in each
module’s XML definition file. As stated earlier, while it is true that each module
functions independently, they are obligated to adhere to any supported input and
output types as specified in this file. Additionally, if a module is specified to have its
own user interface, it must implement the corresponding functions inherited from the
Module class to handle such interaction.
At the time this was written, user interface development for SCIRun modules was
done in the TCL/TK scripting language as an independent file from the module
C++ source code and XML definition file. While plans to move to either a GTK
or OpenGL-based user interface scheme had been discussed as possible replacements,
this discussion will be about the current TCL/TK setup. Most importantly, SCIRun
facilitates interaction between modules and their user interfaces either by connecting
the execution of a specific TCL/TK function to that of a module’s C++ member
function or “marrying” the values of variables in each language such that a change in
one corresponds to a change in the other.
Dataflow Networks
As mentioned earlier, dataflow networks are the connections of modules that form
more meaningfully functioning applications on a larger scale. Development of dataflow
networks is primarily done within the base SCIRun application’s visual editing en-
vironment. Modules are chosen, dropped into this environment, and can be dragged
to any desired position. Furthermore, each module’s input ports can be connected
to applicable output ports by clicking and dragging from one port to the other. The
same is true for connecting output ports to input ports, if this is desired. Within this
environment, user interface fields can be edited, presumably changing the function
parameters of the corresponding modules, and each module can be executed sepa-
rately or the entire network can be executed as a whole. To make reproduction of
networks easy, the ability to save and load dataflow networks is provided.
While the above is the most common way to develop SCIRun dataflow networks, a
lesser-known method is one taken advantage of by several SCIRun PowerApps: using
the TCL/TK user interface scripts to dynamically add modules, edit input/output
port connections, and edit user interface parameters of each module. This is a consid-
erably more advanced method that is not documented and was discovered as a part
of this research when attempting to assimilate our own modules into an independent
version of the BioImage PowerApp. The disadvantage is that, while this allows for
dynamically reconfigured dataflow networks, the ease with which dataflow networks
are intended to be created and edited is considerably diminished.
2.3.2 Volume Rendering
We refer to the means by which volume visualization is achieved as volume rendering.
Here we will provide background about two algorithms used for volume rendering. As
mentioned in the section on volume visualization, these two algorithms correspond to
the two methods of visualization addressed in this work: explicit volume rendering
(or marching cubes) and direct volume rendering.
Marching Cubes
Also referred to as isosurface extraction, the Marching Cubes algorithm and its
variants (such as Marching Tetrahedra) are used to extract explicit surfaces for a
volume, typically summarized by the voxels in an image set whose values fall within
a specified range of an identifying voxel value, the isovalue. In the case of Marching
Cubes, this is achieved by analyzing the eight vertices of a cube and, based on how
their voxel values lie relative to the specified isovalue, determining whether each
vertex belongs within the volume or not. Based on these classifications, one or more
surfaces are defined within the cube. Once this classification is done, surface
construction can be simplified considerably to only 15 possible surface combinations
within each cube, as shown in Figure 2.7.
The connection of all of the adjacent cubes in the image set yields isosurfaces, as
can be seen in Figure 2.8, that correspond to the isovalue for which the algorithm
was run.

Figure 2.7: The 15 possible surface combinations for the contents of each cube in the Marching Cubes algorithm [40].

Figure 2.8: An example of adjacent cubes, each containing explicit surfaces, combining to form a volume [13].
As explained above, this creates explicit surfaces within each cube and hence defines
explicit surfaces for each volume corresponding to the specified isovalue. The benefit
of this is that it is easy to define when a point is either outside, inside, or intersecting
the surface of a volume because that surface is flat and has well-defined vertices
that were already calculated for the visualization. However, calculating these types
of vertices can be time-consuming, and therefore specific applications may prefer a
faster alternative, as described below.
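The per-cube classification step described above can be sketched as follows; this illustrative Python computes the 8-bit case index from which the precomputed triangulations (15 unique cases up to symmetry, Figure 2.7) would be selected. The inside/outside convention chosen here is an assumption.

```python
def cube_case_index(corner_values, isovalue):
    """Classify the eight cube corners against the isovalue: bit k of the
    returned index is set when corner k lies inside the surface.  The
    resulting 0-255 index would select one of the precomputed surface
    configurations for the cube."""
    index = 0
    for k, v in enumerate(corner_values):
        if v < isovalue:  # convention: "inside" means below the isovalue
            index |= 1 << k
    return index
```

A cube whose corners are all inside (index 255) or all outside (index 0) contributes no surface, which is what lets the algorithm skip most of the volume quickly.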
Direct Volume Rendering
Using a substantially different approach, direct volume rendering exploits the fact
that three dimensions are ultimately projected onto two-dimensional viewing
surfaces anyway, eliminating the need to calculate explicit vertices and surfaces for
visualization.
Another way to look at this concept is to consider that isosurface extraction starts
with the image set, creates a three-dimensional representation, and then projects
it onto the two-dimensional viewing surface, whereas, in the case of direct volume
rendering, the approach is to start from the viewing surface and determine what the
projection should look like by directly looking up the projection results from the image
set. How this process is done in reverse is application specific, but one approach is to
use look-up tables.
For example, if one were to calculate the gradient of the image set, this would be a
relatively fast calculation whose magnitude would contain information about where
the surfaces within the image lie. A gradient can also be calculated locally very
quickly, requiring little computational overhead when following a projection from the
surface back into the volume as described above. Thus one such look-up table method
is to create a colormap that compares gradient magnitude to isovalue. In practice,
this achieves a very similar visual effect to that of Marching Cubes, and provides a
fast alternative in the absence of the need for explicit surfaces. An additional benefit
to this method is the ability to combine the visualizations for multiple isovalues, as
seen in Figure 2.9 with very little additional calculation cost due to the efficiency of
the look-up table.
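A minimal sketch of such a two-dimensional look-up table is shown below; the bin counts, thresholds, and opacity rule are purely illustrative assumptions, not those used by SCIRun.

```python
import numpy as np

def build_lut(n_value_bins, n_grad_bins, surface_value, surface_grad_min):
    """A hypothetical (value, gradient-magnitude) look-up table: bins near
    the chosen isovalue with strong gradients are made opaque, everything
    else transparent.  Real transfer functions would store color and use
    smooth opacity ramps rather than this hard threshold."""
    lut = np.zeros((n_value_bins, n_grad_bins))  # opacity per bin
    for v in range(n_value_bins):
        for g in range(n_grad_bins):
            near_surface = abs(v - surface_value) <= 1
            strong_gradient = g >= surface_grad_min
            if near_surface and strong_gradient:
                lut[v, g] = 1.0
    return lut
```

Combining multiple isovalues then amounts to marking several such regions in the same table, which is why adding tissue types costs almost nothing at render time.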
Figure 2.9: An example of a direct volume rendering of bone and muscle tissue, two different ranges of isovalues that were combined with gradient magnitude for the look-up table.
Chapter 3
View Trajectory Loop Tool
3.1 Motivation for the View Trajectory Loop Tool
Some paths of motion, like the swing of a pendulum, are easy to follow with the
naked eye after observing only a few iterations of the behavior. Other trajectories,
however, like the flight of a bee, can be exceptionally difficult to understand through
the same kind of observation. Furthermore, it can even be difficult to understand
the trajectories of several simultaneously swinging pendulums all at once.
While the complexity of internal anatomical motion, as interpreted through
purposely smoothed deformable registration results, may not be quite as complex as
the flight of a bee, the scenario with more than one pendulum illustrates perfectly
why it can be difficult to follow many simple behaviors simultaneously. Within the
4DCT image sets, researchers are interested in understanding the movements of
various regions in close proximity. As a consequence, it is of additional interest to
understand the ways in which their modeling of this motion (in the case of this
work, via deformable registration) succeeds and fails at helping them understand
this type of behavior.
The trajectories of individual voxels over the respiratory cycle are not necessarily
very complicated and, in fact, we have observed that they almost never are. However,
understanding this motion for the entire image set simultaneously is very difficult be-
cause it invariably requires interpreting the trajectories via some form of complicated
animation. However, it may not always be the case that one wishes to observe the
trajectories of all of the voxels in an image set. Instead, it is reasonable to expect
that one may wish to analyze either several very loosely selected voxels’ trajectories
or one very specifically selected voxel’s trajectory over all of the respiratory phases.
In this case, traditional visualization methods for 4DCT image sets are not suitable
for this type of interaction.
Thus the motivation for this work comes from the desire to view only a select
few trajectories at the same time, without the need to view an animation. In other
words, the View Trajectory Loop Tool enables one to visually analyze the trajectories
of a few selected voxels over all of the available respiratory phases in a single, static
visualization.
3.2 Development of a Trajectory Viewing Cursor
Given the motivation for this tool, we encountered several design considerations that
were important to address. The primary goal being visualization of one or more
trajectories, we decided to design the tool such that it was scalable to the desired
number of trajectory visualizations of the user. With this in mind, and the benefit of
a flexible SCIRun development environment, we were able to create the tool in such
a way that it operates independently of the traditional three-dimensional anatomical
visualization capabilities of SCIRun while still providing for user interaction.
Specifically, we took advantage of a specific predefined visual component of the
class “Widget” called the “PointWidget.” This object, regardless of which module
in the SCIRun dataflow network creates it, can be selected, dragged, and dropped
by the user in the viewing window. Furthermore, when properly used
via inheritance, this object triggers feedback to the module that created it for the
exact events that correspond to being selected, dragged, and dropped. This allowed
us to make an independently functioning module which displays only one trajectory
corresponding to the nearest voxel selected by its cursor, the underlying PointWidget.
If the user wants to view N trajectories, all that is required is to insert
and connect N separate modules for this purpose and finally interact with them all
together in the viewing window.
The deformable registration results were read from external vector field files as
the relevant data was requested by the visualization tool. This was facilitated by
the “point path” application developed by Gregory C. Sharp, available in the Ap-
pendix. This application, specifically created for this visualization project, parsed
and traversed the deformable registration results to form a point by point trajectory
for every requested voxel. A summary of its functionality is that, when supplied the
coordinates of the voxel for which a trajectory was desired, the application’s output
was an ASCII text file with as many coordinate locations as respiratory phases, from
which we extracted the relevant trajectory information by reading in the file as a
matrix in SCIRun and parsing it row by row appropriately. The agreed upon ASCII
data format was defined as
0        x_0        y_0        z_0
1        x_1        y_1        z_1
...
N-1      x_{N-1}    y_{N-1}    z_{N-1}
where there are N rows, one for each respiratory phase, and the first column holds
the index of each respiratory phase. The remaining elements in each row form a tuple
(x_i, y_i, z_i) that are the coordinates of the voxel at the i-th respiratory phase. In the
file, columns were delimited by white space and every new line started a new row.
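A stand-alone parser for this format might look like the following sketch (illustrative Python; in practice SCIRun read the file as a matrix and parsed it row by row, so this is only an equivalent).

```python
def read_point_path(path):
    """Parse the whitespace-delimited point-path file described above into
    a list of (x, y, z) tuples ordered by respiratory phase index."""
    rows = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 4:
                continue  # skip blank or malformed lines
            phase = int(fields[0])
            rows.append((phase, tuple(float(v) for v in fields[1:])))
    rows.sort(key=lambda item: item[0])  # order by phase index
    return [coords for _, coords in rows]
```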
Once these values were read, a small amount of processing was still required before
the data was ready to be displayed. Because the coordinate systems of
the visualization environment and the deformable registration results agreed in scale
but not in translation, each coordinate needed to be shifted by a constant amount that
we calculated by comparing reference points. In order to ensure that this would not
introduce errors, we compared several stationary reference points as well as several
anatomical reference points to make sure that the resulting translation was correct.
This discrepancy was believed to be caused by an inconsistency of the handling of
the coordinate system by the SCIRun visualization software and therefore we needed
to accommodate this shift internally within our software.
After the shift, in order to obtain trajectory vectors from the voxel coordinate
locations, p_i, we performed the simple calculation for each vector v_i such that

v_i = p_i - p_{i-1} \qquad (3.1)
where the respiratory phases are assumed to circularly repeat, making that cal-
culation possible since
p_0 = p_N \qquad (3.2)
or in other words,
p_{-1} = p_{N-1} \qquad (3.3)
given, once again, that there are N respiratory phases. These vectors, v_i, were then
displayed for this tool rather than the shifted output of the “point path” application.
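The circular differencing of Eqs. (3.1)-(3.3) can be sketched as follows (illustrative Python; conveniently, Python's negative indexing supplies p_{-1} = p_{N-1} automatically).

```python
def trajectory_vectors(points):
    """Eq. (3.1) with circularly repeating phases: v_i = p_i - p_{i-1},
    where index -1 wraps to the last phase (Eq. (3.3)), so the set of
    vectors closes into a loop."""
    n = len(points)
    return [tuple(points[i][k] - points[i - 1][k] for k in range(3))
            for i in range(n)]
```

Because the phases wrap around, the vectors sum to zero componentwise, which is what makes the displayed trajectory a closed loop.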
3.2.1 Description of Visual Elements
To represent a 4D trajectory in a 3D graphical environment, we have developed a
cursor that displays the path of movement of a single voxel over time. A user can
move the cursor by clicking and dragging it in a motion plane parallel to the viewing
plane. At its new location, the cursor displays the trajectory of the voxel at that
point by showing a line path. The direction and magnitude of the motion during
each time phase are indicated by a color transition from blue to red.
All trajectories start and end at the same shades of blue and red, but may display
less of certain intermediate shades due to very low magnitude movements during those
time phases. This can be very useful when comparing two trajectories of similar shape,
but very different color patterns, indicating that despite having followed a similarly
shaped path, each voxel followed the path at a different speed.
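One simple way to realize such a blue-to-red transition is linear interpolation over the phase index; this sketch is an illustrative assumption, not the actual SCIRun color mapping.

```python
def phase_color(i, n):
    """Linearly interpolate from blue (0, 0, 1) at the first phase to red
    (1, 0, 0) at the last, giving each segment of the loop its own shade."""
    t = i / (n - 1) if n > 1 else 0.0
    return (t, 0.0, 1.0 - t)  # (r, g, b)
```

Because the shade is tied to the phase index rather than to arc length, a phase with little motion occupies only a short stretch of the path, producing exactly the compressed intermediate shades described above.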
(a) Lung Branches and Bone (b) Cropped Lung Branches
Figure 3.1: (a) Visualization of bone and lung tissue. Although it is possible to analyze trajectories within this type of visualization, or (b) one showing a cropped version of the branches of the lungs, we provide examples of each tool showing only bony anatomy for visual clarity.
3.2.2 User Interaction
The visual nature of these tools provides a definite improvement in the way tumor
motion analysis is performed. The user has a rich set of visualization capabilities
using our system; volume rendering of 4DCT datasets is capable of showing many
different kinds of tissue. Figure 3.1 shows two examples of different kinds of tissue
that can be visualized. In Figure 3.1(a) we show how the lungs and bone can be
displayed simultaneously and that our visualization tools are not strictly limited to
bone. Figure 3.1(b) shows the branches of a set of lungs, cropped to show a different
perspective; this is an important kind of tissue whose motion must be understood in
order to treat tumors located within it. This illustrates the ability to
create helpful perspectives of the data by methods such as cropping and visualizing
other types of tissue that the user wishes to see. However, for the rest of the figures,
we use renderings of bony anatomy only to avoid cluttering the view of our tools.
It should be noted that this is less of a concern when viewing them together in an
interactive environment.
Figure 3.2: Viewing several trajectories in the lung while visualizing surrounding bony anatomy (right) and while zoomed in (left). Trajectories are represented as line loops that make a smooth transition from blue to red in even increments across each of the respiratory phases.
The trajectory loop tool’s purpose is to facilitate rapid analysis of trajectories
within the visual environment. We are able to run the trajectory loop at every
position the cursor has been (see Figure 3.2), showing a trail of several loops that
have been visualized. In this figure, the tool was used to analyze the extent to which
the registration algorithm detected motion at various spatial locations within the
lung. As expected, movements became smaller as the cursor was used to inspect
areas closer to bone. On the other hand, trajectory loops closer to the lower lung
showed significant motion.
Chapter 4
Edit Point Path Tool
4.1 Motivation for the Edit Point Path Tool
In order to fully interact with the deformable registration results, simply viewing
individual trajectories may not always be enough. There may be times when the user
identifies errors by analyzing the deformable registration visualizations and wishes to
make changes to the results within the visual environment that can be reviewed later.
In such a case, while the View Trajectory Loop Tool would be useful for finding the
initial point of analysis that raised concern, it would be unable to perform any edits
due to its limited niche of visualization behavior.
The motivation for the Edit Point Path Tool is that, given a corresponding and
simultaneously visualized anatomical background, the best way for a user to make
changes to observed anomalous trajectory results is to mark and edit them in place,
within the same visual setting in which they were witnessed. Furthermore, because
results like deformable registration are smoothed purposely, changes made should be
reflected over an entire region of influence determined by some radius, removing the
need to edit every individual trajectory within the bounds of that radius one by one.
4.2 Development of a Trajectory Editor
Taking advantage of the same development components used by the View Trajectory
Loop Tool, the major difference in this tool was that there needed to be several
movable cursors per editing tool, so the “PointWidget” cursors needed to be organized
accordingly. Thus maintaining a list of the visual elements, one for each respiratory
phase, became important. Additionally, finding a way to visually distinguish the
cursors that corresponded to each of the respiratory phases was also important.
Specifically, this visualization challenge required finding and editing the internal
parameters of each “PointWidget” object so that we could change its color at the
appropriate times. When a normal cursor is selected and moved, its selection is
indicated by a change in color from gray to red, and then its release causes a change
back. In order to prevent confusion during interaction with this tool, we decided it
was best to make the color of the cursor that corresponds to the current respiratory
phase being visualized green instead of gray. Furthermore, we decided to ignore
the select, drag, and drop events of all cursors that did not correspond to the current
respiratory phase. Thus only one cursor could be moved at a time, somewhat limiting
the editing ability of the tool, but more elegantly solving the organizational problem
of distinguishing between tightly packed cursors in the visualization.
As with the trajectory viewing tool, the deformable registration results were read
from external vector field files as the relevant data was requested by the visualization
tool. This was facilitated by the “point path” application developed by Gregory C.
Sharp, available in the Appendix. In summary, when supplied the coordinates of the
voxel for which a trajectory was desired, the application’s output was an ASCII text
file with as many coordinate locations as respiratory phases, from which we extracted
the relevant trajectory information by reading in the file as a matrix in SCIRun and
parsing it row by row appropriately.
4.3 Materials and Methods
4.3.1 Description of Visual Elements
The Edit Point Path Tool is a collection of points, or cursors, that indicate the
locations of a specified voxel over all of the available respiratory phases in the data.
While each cursor is editable as mentioned above, only one cursor is editable at each
respiratory phase of the background anatomical visualization.
The easiest way to interpret the information shown by the tool is to imagine that,
for a specified voxel, one can view all of the frames of a movie showing its motion
simultaneously. In this analogy, each of the frames of the movie correspond to a
respiratory phase in the visualization. The visual effect is as if one can view all of
the places the voxel has been over its trajectory at one time.
For the user, having a comfortable understanding of the nature of this visualization
allows for appropriate edits to be made. An improperly interpreted visual element
here can lead to confusion about which cursor represents which respiratory phase
of the trajectory or, even worse, about which voxel is being edited by the tool at
that time. Once the visualization is properly interpreted, interaction and editing are
intuitively learned and utilized as described next.
4.3.2 User Interaction
Once a user has identified a region of interest using our tool, they can then explore
the region in greater detail. Instead of displaying a line path, this tool displays several
cursors to convey similar information without using lines. To prevent confusion about
the order, the module connects to the same tool that allows the user to select the
4DCT phase currently being viewed, and then highlights the corresponding cursor
with a different color. At each respiratory phase, the path of a voxel can be followed
both through this tool and a volume visualization simultaneously.
If it is observed that the trajectory and the visualization do not agree, the user has
the option of editing the trajectory by moving the cursors. It should be noted that
this will not modify the 4DCT data itself, but only supplement the output of the
registration algorithm. Also, moving the cursor will not only affect the voxel whose
trajectory is being viewed, but will also have an attenuated effect on the surrounding
area. To view the extent of this effect, the user can use several of the previously
described tools to view the updated trajectory loops.
If unsatisfied with analysis of the trajectories when compared to the visualization,
the user can make adjustments within this environment to improve the registration.
Figure 4.1(a) shows the path editing tool, where each of the individual points can be
moved independently to adjust the path to the user’s specifications. The point that
is colored green highlights the current phase of the 4DCT that is being visualized.
Thus, if the rest of the anatomy were visible, one could see the voxel to which that
specific point path belonged. While Figure 4.1(a) shows the editing tool alone, Figure
4.1(b) shows the trajectory loop tool and the path editing tool when used at the
same point. This may not normally be a desired way to edit a path, but in this
(a) Zoomed In (b) With Loop
Figure 4.1: (a) The editing tool shown alone with the current phase highlighted in green and (b) the same editing tool shown with the trajectory loop superimposed to demonstrate the relationship between the two tools. The point highlighted in green is edited while the others remain fixed to prevent accidental changes.
case it serves to illustrate the relationship between the two tools. Each has its own
purpose for different intended uses, but this demonstrates that both represent the
same registration information.
When changes are made to the point path and are committed, the tool appends
modifications to the previous registration results and refreshes the visualization.
Thus, if desired, after several rounds of changes, one can go back to the modified
deformable registration results and perform analysis and comparisons about what
was incorrectly or insufficiently specified in the first attempt at characterizing the
motion. While this work does not do this, one particularly useful extension of this
tool would be to infer the appropriate adjustments to the deformable registration
parameters from the interactive modifications made to the results using this set of
tools.
Figure 4.2: A zoomed out (left) and more detailed (right) perspective of editing a point path while observing changes using the trajectory loop tool.
An additional thing to note is that changing the visible path also affects the
surrounding paths, which may or may not be visualized themselves, in a way similar
to how smudging tools work in image editing software. Typically, image editing
software includes a tool that allows one to distort the pixels under the cursor and,
as a consequence, those around it through a smearing effect. Similarly, although not
identically, the editing tool uses changes in the path being edited to push
surrounding paths out of its way. Intuitively, this makes sense because one would
not expect internal anatomy to cross paths during its motion, and thus potential
changes that might cause such effects are best handled this way. By “pushing”
adjacent trajectories that might interfere with the changes being made out of the
way, the tool aims to prevent such undesirable conflicts.
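A sketch of this attenuated, finite-support editing effect is given below; the quadratic falloff mirrors the Wendland weight of Eq. (2.3), though the actual update rule used by the tool may differ, so treat the function and its parameters as illustrative.

```python
import numpy as np

def apply_edit(points, edit_location, delta, sigma):
    """Propagate a cursor displacement `delta` to nearby trajectory points
    with an attenuated, finite-support weight: full effect at the edit
    location, smoothly decaying to zero at radius sigma."""
    edit_location = np.asarray(edit_location, dtype=float)
    delta = np.asarray(delta, dtype=float)
    out = []
    for p in points:
        p = np.asarray(p, dtype=float)
        r = np.linalg.norm(p - edit_location)
        w = (1.0 - r / sigma) ** 2 if r <= sigma else 0.0
        out.append(p + w * delta)
    return out
```

Points outside the radius of influence are untouched, which is why trajectory loop tools placed on or near bone can confirm that an edit has not leaked into anatomy that should stay still.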
The effect of this range of influence can be seen by using the path editing tool and
several trajectory loop tools simultaneously, as shown in Figure 4.2. While in some
cases, “pushing” adjacent trajectories out of the way may be a desired thing to do,
this kind of visual feedback allows a user to avoid impacting surrounding
trajectories that move independently of the one being directly edited. A good
example would be making changes to the trajectories of tissue that is very close to
bone. While there may be desired changes to the tissue near the bone, bone
typically does not move very much; placing a few trajectory loop tools on or near
the bone therefore reveals significant unintended changes to the bone's trajectories
before they become a problem.
Similarly, if those kinds of changes are desired for an entire object at once, the tool
can provide for this as well. In other words, if one wishes to verify that the smudging
effect of the tool is working correctly for an entire object (such as a tumor), one can
place the trajectory viewing tool near the edges of the object to make sure that the
effect of changes to the edited trajectory are propagating, as desired, to those regions
as well.
Chapter 5
Related Work
Our work in the area of visualization of moving medical datasets using SCIRun
has been preceded by several clinical and technical works by others that highlight
the need for this type of analysis and, in some cases, offer their own approach. To
provide a basis for comparison, our own work uses SCIRun's visualization,
segmentation, and tool development capabilities as its foundation. Furthermore, its
greatest achievement is that it takes advantage of these capabilities by visualizing
complicated internal anatomy together with complicated trajectories in such a way
that the complexity of each component is easy to navigate and understand
interactively.
We will use the following sections to compare and contrast various veins of research
in how they relate to our own, specifically touching upon three- and four-dimensional
visualizations, motion analysis in fluid dynamics, and medical motion analysis.
5.1 3D/4D Medical Visualization
As discussed in Chapter 2, visualization of high-dimensional information is typically
done by hiding information that is deemed irrelevant. We are reminded of
this every time we view a three- or four-dimensional scene on a two-dimensional
surface such as a television or computer monitor. Efforts to tackle the challenges
of viewing high-dimensional information and to find alternative approaches are not
new, yet much new work continues to be produced in this field. In particular,
medical image sets, whether moving or not, pose unique visualization challenges
that these works address.
An example of early work in the general area of 4D visualization is that of Hanson
and Cross. Although this work focused primarily on the computing challenges of 4D
visualization, the authors made a keen observation about this field that still holds
today when they wrote, “in order to make simulated worlds that help develop human
intuition about the fourth dimensions, we need techniques that permit real-time,
interactive manipulation of the most sophisticated depictions available [27].” An
alternate approach to visualizing general 4D data was presented recently by Sun et
al., focusing on the rendering aspect of the challenge [47]. In other words, their
goal was to find effective ways to navigate time and space via techniques in
rendering.
As stated above, medical visualization tasks have unique challenges specific to
the content of the image sets, in addition to the challenges specific to the imaging
modality used. As our research uses CT image sets, we will focus mostly on work
that has used CT as well. Among anatomical structures that pose unique challenges
is the heart; an example was presented by Sirineni et al., who visualized CT data
for coronary angiography [45]. Another such example is the colon, whose
visualizations for colonography were recently presented by Dachman et al.
[18]. Anything with a tree-like structure, such as the branches of the lungs or the
veins of the liver, is certainly a challenge as well. A system for dealing with tree-like
structures in general was recently suggested by Yu, Ritman, and Higgins; it moves
away from interactive segmentation and instead facilitates "semi-automatic" analysis
by first extracting the tree structure automatically and then offering the ability to
edit it, basing all visualizations on that result [57].
Aside from their tree-like structure, the lungs pose a particular challenge
due to their obvious association with respiratory motion. Several recent works
on visualizing the lungs have involved segmentation [48], bronchial airways
([30], [28], [20]), and motion compensation ([38], [9]). Further work related to motion
analysis, the focus of this thesis, will be discussed in greater depth in the next section.
5.1.1 SCIRun
Precursors to our work, and recent efforts similar to it, that use SCIRun's visualization
and development capabilities exist as well. First, we should note that the basis for the
work presented in this thesis was earlier SCIRun-based work done by Dedual et al.,
titled “Visualization of 4D Computed Tomography Datasets” [19], that demonstrated
how to display 4DCT data appropriately in that environment. However, this work did
not address the interactive abilities of SCIRun, leaving room for the improvements we
present. Dedual’s visualizations played an important role in the presentation of “A
Biological Lung Phantom for IGRT Studies [24]” by Folkert et al., further illustrating
the value of creating better visual representations of this type of information.
5.1.2 Fovia
A commercial alternative to SCIRun’s visualization capabilities exists called “Fovia.”
Fovia is an interactive visualization and analysis tool commercially available via
Fovia.com. Available for Macintosh, Linux, or Windows operating systems, the soft-
ware is said to scale efficiently without the need for additional hardware. In contrast,
the University of Utah’s SCIRun software requires the aid of graphics hardware to
facilitate its visualization capabilities and operates most successfully under Linux
and Macintosh. Fovia reports that it supports 3D visualizations and measurements,
whereas SCIRun does have the advantage of operating in 3D or 4D. The list of mea-
surement and segmentation features available for Fovia sound similar to those that
come standard with SCIRun’s BioImage application or have been developed through
our work. While an SDK is available for Fovia, it is unclear how much support is
available for low-level modification of their algorithms. This kind of transparency is
certainly an advantage of SCIRun, although it tends to lack the polished end-user
features when compared to its commercial counterpart. Furthermore, Fovia plans to
release a version of their product as a plug-in for the popular OsiriX 3D DICOM
viewer, while SCIRun operates as an independent project.
5.1.3 OsiriX
OsiriX is a medical image processing tool (primarily for DICOM files) for viewing
medical data sets of various sorts. OsiriX has been specifically designed
for navigation and visualization of multimodality and multidimensional images: 2D
Viewer, 3D Viewer, 4D Viewer (3D series with temporal dimension, for example:
Cardiac-CT) and 5D Viewer (3D series with temporal and functional dimensions,
for example: Cardiac-PET-CT). The 3D Viewer offers all modern rendering modes:
Multiplanar reconstruction (MPR), Surface Rendering, Volume Rendering and Max-
imum Intensity Projection (MIP) [4]. The capabilities of OsiriX can be expanded by
the development of plug-ins, similar to what is available for the other visualization
packages discussed in this chapter. Although OsiriX features several visualization
methods and segmentation features, we should note that SCIRun’s combination of
2D transfer function capabilities, overall visualization quality, and module (or plug-in)
expandability appear to offer more for the needs of our specific application.
5.1.4 3D Slicer
Slicer, or 3D Slicer, is a free, open source software package for visualization and
image analysis. 3D Slicer is natively designed to be available on multiple platforms,
including Windows, Linux, and Macintosh. Slicer started as a master's thesis project
between the Surgical Planning Laboratory at the Brigham and Women’s Hospital
and the MIT Artificial Intelligence Laboratory in 1998. A variety of publications
were enabled by the Slicer software. A new, completely rearchitected version of Slicer
was developed and released in 2007 [3]. Slicer, as it relates to our work, can be seen as
an alternative to SCIRun. However, according to its own development wiki, Slicer
lacks a number of features that exist in SCIRun (such as 2D transfer functions for
direct volume rendering), which it expects to add eventually by collaborating
with developers at the University of Utah [5].
Slicer is commonly used in support of research that takes advantage of its visualization
capabilities to display results. Moreover, several examples exist of research
that combines areas similar to our own and uses Slicer. Several of
these could easily fit in later subsections of this chapter, but are discussed here
to highlight similar streams of Slicer-based research. For example, Farneback and
Westin published work on “affine and deformable registration” applied to medical im-
age sets and used Slicer to display their results [23]. Results were displayed in similar
fashion for work on brain deformation models being used to perform registration on
medical image sets by Wittek et al. [55].
Work relating to treatment and surgical planning has also been done that takes
advantage of Slicer’s visualization capabilities. Specifically, San Jose Estepar et al.
have twice recently published work ([37], [22]) on computer-aided surgery using vi-
sualization provided by Slicer and additionally work by Gering et al. has provided
a software framework using the underlying capabilities of Slicer for similar motives
[26]. In each of those mentioned works, the imaging modalities and types of surgeries
differ from our own, but certainly illustrate Slicer’s robustness in this area.
Of work relating directly to tumor motion, Slicer has been used to visualize reg-
istration between magnetic resonance (MR) and CT image sets for tumors during
treatment in the liver by Archip et al. [6]. One way of looking at this work is that it
provides a different approach to interactivity in motion analysis: it skips trajectory
analysis and proceeds directly to treatment guided by the images, visualized
effectively using Slicer.
5.2 Motion Analysis
This section discusses work, past and present, relevant to our own as
it relates to motion analysis. We believe that visualization plays a very important
role in motion analysis because of its ability to confuse or to clarify information for the
analyst. We begin with a brief digression on similar challenges in fluid dynamics
and then go into greater depth on medical motion analysis, discussing work that
is similar to our own.
5.2.1 Fluid Dynamics
Certainly, visualization and analysis of motion is not unique to the medical field. One
area where complex motion in four dimensions exists, and hence the need for clever
visualization of it, is fluid dynamics. In this subsection we will briefly discuss work
in this field that is relevant to our own and how our approach relates.
In the general category of analyzing trajectories of 4D information, Tzeng et al.
have recently done work on visualizing fluid flow data for application in
fields such as chemistry and aerospace [51]. The relationship between this category of
work and our own is that analyzing the flows of substances, be they fluid or anatomical,
poses similar visualization challenges.
Again on the topic of flow visualization, early work by Silver et al. presented “a
method to juxtapose 4D space-time vector fields in which one contains a ‘source’
variable and the other the ‘response’ field [44].” The technique helped to highlight
the topological relationship between the two different kinds of fields in an effort to
understand the connection. Our work had a similar philosophy, but instead aimed to
superimpose visualizations of anatomy and trajectories, hoping to achieve the same
goal of comparison of ‘source’ and ‘response’ data sets.
To relate this discussion of fluid dynamics visualizations to the set of medical
problems at hand, work by Tory et al. used “flow visualization techniques” to show
variations of 3D volumes over time [50]. This was an attempt to display the next
stage of movement as vector fields whose colors represented their weights. However,
by the authors' own admission, this work suffered from a lack of interactivity due
to limitations of the graphical interfaces. By using the programming infrastructure
of SCIRun, our work circumvented this type of difficulty by utilizing
built-in widgets that can be used within the 3D visualization environment.
5.2.2 Anatomical Motion
As previously discussed, the analysis of motion in medical applications is of great
research interest to many. In this subsection we will discuss related research that
touches upon the need for such analysis in treatment planning and during the actual
treatment itself.
Much work has been done in the area of image registration; while we will not
go into great depth in our coverage of it, we should mention a few important
precursors. Registration work related to ours includes work by Sharp et al. [42][43][41]
and Wu et al. [56]. Sharp's registration work also appears in applications to tumor
tracking in the abdomen by Betke et al. [10]. Most recently, a study of lung motion
in 4DCT with deformable registration by Boldea, Sharp, et al. is the latest in this
line of registration-oriented research [11].
Moving on to the recent work in medical motion analysis and 4DCT visualization,
it should be noted that a recent summary of the problems faced in this area of
research can be found in a book chapter on “4D Imaging and Treatment Planning”
written by Rietzel and Chen [35]. Further information about the role of 4DCT can
be found in another chapter by Chen and Rietzel on the topic of “4DCT Simulation”
which touches upon the significance of this imaging method for treatment planning
and delivery, including optimization in the presence of motion, aperture design, dose
calculations to moving targets, and image-guided therapy delivery [16]. More recent
work on treatment planning and image-guided therapy has focused on segmentation
of specific target areas, such as the lungs [53], and also rapid visualization techniques
such as “color intensity projections” [17] to evaluate CT data more effectively.
Another of these problems is using knowledge of motion to improve image
reconstruction. While the image reconstruction methods mentioned earlier typically
involve sorting based on external motion, Zeng et al. have suggested a method
based on internal anatomical motion [58]. The effect this method would have on
our work is that the anatomy, visualized together with the trajectories, would be
reconstructed differently and thus represent significantly different motion information.
On the other hand, this method, based on iteratively matching the acquired data to
deformed reference models, may provide strikingly similar information. In either
case, the benefit of our tool for these authors' work would be the ability to verify
visually which is true.
Su et al. recently presented a method of organ tracking that does not require
markers. Their paper proposed a novel method that addresses both the position and
shape variation caused by intra-fraction movement [46]. Once again, in the context
of our work, this method can be seen as an alternative set of motion information
to visualize instead of deformable registration. Yet another perspective of this was
provided by Li et al., using finite element simulation of the moving anatomy [31].
Our work would be an excellent way to analyze the results of this work interactively,
relative to visualized anatomy.
There have been a number of publications oriented towards clinical audiences about
this area of research. Tanaka et al. have created tracking tools that operate on
two-dimensional slices of 4DCT [49]. This work tracks motion by use of template
matching in each of the three 2D perspectives (corresponding to orthogonal camera
angles) and incorporates interactivity by allowing the user to select the tumor in each
one separately. While the work they have done does a good job of visualizing the
information by using two-dimensional slices of the three-dimensional volumes, shown
changing over time, we believe that their software would benefit from the ability to
display trajectories relative to rendered anatomical volumes.
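For context, template matching of the kind used by such tools can be sketched as an exhaustive sum-of-squared-differences (SSD) search over a 2D slice. This is a generic illustration of the concept, not the specific implementation of Tanaka et al.; the `Slice` structure and pixel layout are assumptions for the example.

```cpp
#include <cassert>
#include <limits>
#include <utility>
#include <vector>

// Grayscale 2D slice stored row-major (an assumed layout for this sketch).
struct Slice {
    int w, h;
    std::vector<double> px;
    double at(int x, int y) const { return px[y * w + x]; }
};

// Exhaustive SSD search: slide the template over the slice and return the
// top-left offset (x, y) where the SSD against the template is smallest.
std::pair<int, int> match_template(const Slice& slice, const Slice& tmpl)
{
    double best = std::numeric_limits<double>::max();
    std::pair<int, int> pos{0, 0};
    for (int oy = 0; oy + tmpl.h <= slice.h; ++oy)
        for (int ox = 0; ox + tmpl.w <= slice.w; ++ox) {
            double ssd = 0.0;
            for (int y = 0; y < tmpl.h; ++y)
                for (int x = 0; x < tmpl.w; ++x) {
                    double d = slice.at(ox + x, oy + y) - tmpl.at(x, y);
                    ssd += d * d;
                }
            if (ssd < best) { best = ssd; pos = {ox, oy}; }
        }
    return pos;
}
```

Tracking a tumor between phases then amounts to re-running the search on each phase's slice and recording the best-match offsets over time.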
This followed work by Sayeh et al. in a similar area, which intended to apply the
same concepts to robotic radiosurgery, a means of automating the therapy
process from the motion information [39]. Their work used correlation-based models
to infer internal tumor positions from external marker positions, similar to the general
approach described for respiratory reconstruction of images above. Our work would
enable visualization of this tool’s results relative to rendered anatomy while showing
the path taken over previous respiratory phases.
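The general idea of such a correlation-based model can be sketched as a least-squares fit from an external marker coordinate to an internal tumor coordinate. This one-dimensional linear model is an illustrative simplification, not the actual model of Sayeh et al.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// internal = a * external + b, fit by ordinary least squares.
struct LinearModel { double a, b; };

// Fit the model from paired training samples: external marker positions
// and the corresponding (e.g., imaged) internal tumor positions.
LinearModel fit(const std::vector<double>& ext, const std::vector<double>& in)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    std::size_t n = ext.size();
    for (std::size_t i = 0; i < n; ++i) {
        sx += ext[i];
        sy += in[i];
        sxx += ext[i] * ext[i];
        sxy += ext[i] * in[i];
    }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;
    return {a, b};
}

// Infer the internal position from a new external marker reading.
double predict(const LinearModel& m, double external)
{
    return m.a * external + m.b;
}
```

In practice such models are multidimensional and often nonlinear, but the principle is the same: train on paired internal/external observations, then infer internal position from the external signal alone during treatment.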
Khamene et al. modeled respiratory motion in external images as a Markov pro-
cess [29]. This was proposed as a replacement for existing external respiratory marker
tracking schemes and was shown to be a good way to automatically register a
patient's previously acquired 4DCT images with fluoroscopic images taken on the day
of treatment. Our work would be an excellent way of verifying the validity of these learned
trajectories interactively and relative to visualized anatomy. This can primarily be
seen as an alternative to deformable registration, although the strength of our tools
potentially would be to allow both methods of analysis to be visualized simultaneously
along with anatomy.
Similar to this work, Chang et al. have been experimenting with relating fluo-
roscopy and CT for the purpose of determining beam gating parameters in radiother-
apy [15]. Once again, this work relies upon two-dimensional views of the captured
data and its various forms of analysis. This serves to illustrate the importance of our
work to improving the interactive capabilities of works such as these.
West et al. use 4DCT data for more efficient treatment planning of moving
anatomy. A tissue motion model is computed by performing non-rigid registration of
the individual three-dimensional CT images. Using the target-centric alignment and
the deformation model, it is possible to calculate a dose distribution that takes into
account both beam movement and soft tissue deformation. This dose distribution
may be calculated before plan optimization and hence used to determine the desired
beam geometry and weighting, or it may be calculated after plan optimization in
order to review the effects of respiration on the dose isocontours and statistics for a
given plan [54]. Once again, our work would greatly improve the efficiency of this
tool by allowing for such planning to take place in an interactive environment that is
akin to the nature of the data.
Work by Mori et al. is the closest anatomical trajectory analysis work we have
found to our own [32]. Although presented for a clinical audience, it still shows
much of the same information we are able to visualize in our work; its strengths
are that it analyzes 4DCT tumor trajectories by showing them in two-dimensional
slices and, alternately, by displaying the trajectories in a 3D environment without the
anatomy. Certainly, it is difficult to visualize both sets of information simultaneously,
but this highlights our achievement in doing so through interactive methods.
Chapter 6
Contributions and Future Work
Technological advances in the medical industry are frequently linked to favorable
healthcare outcomes. Many of these advances lead to new perspectives
or new levels of detail, which often imply an increase in the information available to
physicians. However, as the amount of information grows, challenges grow at the
same rate that often hinder that information from improving the quality of healthcare.
Some of these concerns, such as storage of the data, are alleviated by the
simultaneous progress of storage capabilities. On the other hand, the challenge of
visualizing this growing amount of data enjoys no similar trend. Problem-specific
visualization tools that handle the growing amount of data must be developed so
that the information available to physicians is a benefit and not a hindrance.
Specific to the problem of treatment planning in radiotherapy, respiratory motion
poses a particularly difficult visualization challenge. While simple animation-based
visualizations are one solution, we found that there is more value in viewing all of
the temporal information about a limited amount of spatial information at one time.
Analyzing data about motion in this format provides a different and potentially more
efficient way for a user’s mind to process the information. Furthermore, we found that
providing interaction with the information locally to the visualizations could enhance
the experience even more, setting up the design considerations for the tools presented
in the previous chapters.
The contribution of this thesis is a set of tools that allow researchers
characterizing anatomical motion to visualize and edit deformation fields in a 3D
environment. While it is clear that deformable registration is a valuable research tool
for the area of anatomical motion research, we have been able to improve the use-
fulness of the algorithm by providing an intuitive interface that displays information
more efficiently, encourages more integration with other forms of visualization, and
provides a way of interacting with the data to make changes to the model in the same
environment.
6.1 Future Work
Our future efforts with this work will focus on three major areas of advancement:
visualization, interactivity, and pattern matching. In the area of visualization, we
will use other visual cues, such as color or line thickness, to highlight additional
attributes of motion such as velocity and regularity. Further, as movement is not fully
characterized by trajectories alone, we will include visualizations of tissue expansion
and contraction to enable new analyses.
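As a sketch of the velocity cue we have in mind, the following computes per-segment speeds along a closed trajectory loop and maps each speed to a blue-to-red color. The uniform time step between respiratory phases and the linear color ramp are illustrative assumptions, not design decisions that have been made.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y, z; };
struct Color { double r, g, b; };

// Speed of each segment of a closed trajectory loop, assuming a uniform
// (unit) time step between consecutive respiratory phases.
std::vector<double> segment_speeds(const std::vector<Pt>& loop)
{
    std::vector<double> v;
    for (std::size_t i = 0; i < loop.size(); ++i) {
        const Pt& a = loop[i];
        const Pt& b = loop[(i + 1) % loop.size()];  // wrap: the loop is closed
        v.push_back(std::sqrt((b.x - a.x) * (b.x - a.x) +
                              (b.y - a.y) * (b.y - a.y) +
                              (b.z - a.z) * (b.z - a.z)));
    }
    return v;
}

// Normalize a speed against a maximum and blend blue (slow) to red (fast).
Color speed_color(double speed, double max_speed)
{
    double t = max_speed > 0 ? std::min(speed / max_speed, 1.0) : 0.0;
    return {t, 0.0, 1.0 - t};
}
```

A trajectory rendered with these colors would let a user spot the fast phases of the breathing cycle at a glance, without animating the data.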
To improve interactivity, we will explore more efficient means of marking regions
whose trajectories have been deemed unreliable. One approach may be to paint a
region in 3D and recompute the registration for that region alone. Additionally, we
plan on learning registration algorithm parameters from the way researchers use these
interactive tools.
Lastly, we will explore motion correlation measurement and analysis between dif-
ferent regions of the anatomy using a combination of the visualization and interactive
tool creation capabilities that we have shown and the pattern matching mentioned
above. In addition to this, we will explore the ability of general-purpose graphics
processing units (GPGPUs) to accelerate the motion correlation calculations
for entire image sets at one time. With the enhanced performance provided by this
type of processing capability, we will be able to provide users with more processing-
intensive interactive functionality that would normally be infeasible due to execution
time delays at each instance of user interaction.
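The motion correlation measurement mentioned above could, in its simplest form, be a Pearson correlation between the displacement time series of two anatomical regions sampled at each respiratory phase. A minimal sketch follows; the choice of Pearson correlation and the per-phase displacement inputs are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation coefficient between two equal-length time series,
// e.g. the superior-inferior displacement of two regions at each phase.
// Returns a value in [-1, 1]; +1 means the regions move in lockstep.
double pearson(const std::vector<double>& a, const std::vector<double>& b)
{
    std::size_t n = a.size();
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n;
    mb /= n;
    double num = 0, da = 0, db = 0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db);
}
```

Because this computation is independent for every pair of voxels or regions, it is exactly the kind of embarrassingly parallel workload a GPGPU could accelerate across an entire image set.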
In conclusion, we have provided a satisfying set of tools for interactive motion
analysis with respect to simultaneously visualized anatomy. Using deformable regis-
tration results for motion vector fields about the anatomy, and 4DCT image sets of
the same anatomy, we have shown in this thesis that it is possible to represent com-
plex information about moving anatomy in a relatively simple and intuitive manner
for more efficient patient analysis. We hope that our work brings us one step closer
to a truly interactive treatment planning environment where physicians are able to
perform their work accurately and efficiently.
Appendix
The View Trajectory Loop Module
ViewTrajectoryLoop.cc:
/*
* ViewTrajectoryLoop.cc:
*
* Written by:
* burak
* TODAY’S DATE HERE
*
*/
#include <Dataflow/Network/Module.h>
#include <Core/Malloc/Allocator.h>
#include <Core/Containers/StringUtil.h>
#include <Dataflow/Network/Ports/GeometryPort.h>
#include <Dataflow/Widgets/PointWidget.h>
#include <Dataflow/Constraints/DistanceConstraint.h>
namespace Teem {
using namespace SCIRun;
// create own pointwidget class for callbacks
class MyPointWidget : public PointWidget {
public:
Module *parent;
MyPointWidget( Module* module, CrowdMonitor* lock, double widget_scale ):PointWidget(module,lock,widget_scale)
{
parent = module;
}
MyPointWidget( const MyPointWidget& );
virtual ~MyPointWidget(){}
void geom_moved( GeomPickHandle hPick, int axis, double dist,
const Vector& delta, int pick, const BState& bs,
const Vector &pick_offset)
{
PointWidget::geom_moved(hPick,axis,dist,delta,pick,bs,pick_offset);
}
void geom_pick(GeomPickHandle p, ViewWindow *vw, int data, const BState &bs)
{
PointWidget::geom_pick(p,vw,data,bs);
}
};
class ViewTrajectoryLoop : public Module {
public:
ViewTrajectoryLoop(GuiContext*);
MyPointWidget* InitialPoint;
GeometryOPort *outport;
CrowdMonitor control_lock_;
/* void showloop()
{
glBegin(GL_LINE_LOOP);
glVertex3f(0.0,0.0,0.0);
glVertex3f(2.0,0.0,0.0);
glVertex3f(0.0,3.0,0.0);
glEnd();
}
*/
virtual ~ViewTrajectoryLoop();
virtual void execute();
virtual void tcl_command(GuiArgs&, void*);
};
#define MYPTSIZE 5.0
DECLARE_MAKER(ViewTrajectoryLoop)
ViewTrajectoryLoop::ViewTrajectoryLoop(GuiContext* ctx) :
Module("ViewTrajectoryLoop", ctx, Source, "Misc", "Teem"),
control_lock_("ViewTrajectoryLoop resolution lock")
{
outport = (GeometryOPort *)get_oport("GeomOut"); //output port
InitialPoint = scinew MyPointWidget(this, &control_lock_, MYPTSIZE);
InitialPoint->Connect(outport);
GeomHandle hTemp =InitialPoint->GetWidget();
outport->addObj(hTemp, "Trajectory Cursor", &control_lock_);
}
ViewTrajectoryLoop::~ViewTrajectoryLoop(){
}
void
ViewTrajectoryLoop::execute(){
/* Point temp = InitialPoint->GetPosition();
outport->delAll();
cout << endl << "ViewTrajectoryLoop :: execute : run point_path and display loop here" << endl;
// first redisplay point
InitialPoint = scinew MyPointWidget(this, &control_lock_, MYPTSIZE);
InitialPoint->SetPosition(temp);
InitialPoint->Connect(outport);
GeomHandle hTemp =InitialPoint->GetWidget();
outport->addObj(hTemp, "Trajectory Cursor", &control_lock_);
*/
//add trajectory lines below here
outport->flushViews();
update_state(Completed); //Tell SCIRun we're done!
}
void
ViewTrajectoryLoop::tcl_command(GuiArgs& args, void* userdata)
{
Module::tcl_command(args, userdata);
}
} // End namespace Teem
The Edit Point Path Module
PointPath.cc:
/*
* PointPath.cc:
*
* Written by:
* berem
* TODAY’S DATE HERE
*
*/
#include <Dataflow/Network/Module.h>
#include <Core/Malloc/Allocator.h>
#include <Core/Containers/StringUtil.h>
#include <Dataflow/GuiInterface/GuiVar.h>
#include <Dataflow/Network/Ports/GeometryPort.h>
#include <Dataflow/Network/Ports/MatrixPort.h>
#include <Dataflow/Widgets/ArrowWidget.h>
#include <Dataflow/Widgets/PointWidget.h>
#include <Dataflow/Constraints/DistanceConstraint.h>
#include <Core/Algorithms/DataIO/DataIOAlgo.h>
namespace Teem {
using namespace SCIRun;
using namespace SCIRunAlgo;
class PointPath : public Module {
public:
PointPath(GuiContext*);
//Runs when "Display Trajectory" is clicked from GUI. Runs
//point_path executable to create trajtemp.txt locally.
//Runs on some default local bspline directory, starting from
//phase 0. We can make all of this adjustable later.
void ChoosePoint();
//Opens trajtemp.txt locally and reads in phases and points.
//Displays points and (eventually) highlights which phase
//at which we’re currently looking.
void DisplayPath();
GeometryOPort *outport;
CrowdMonitor control_lock_;
PointWidget *InitialPoint;
PointWidget *traj[100];
// Define handle which will hold trajectory phases and points
MatrixHandle mat;
bool runonceflag, showingtrajectory;
// Default values
string point_path_executable;
string deform_dir;
int phase;
string output_file;
MaterialHandle SelectedPointMaterial;
MaterialHandle DefaultPointMaterial;
int prevphase, currentphase, maxphase;
MatrixIPort* matrix_iport;
MatrixHandle inmatrix;
virtual ~PointPath();
//default mode of operation, displays point that we want to track
//when done choosing point, click "Display Trajectory" from GUI
virtual void execute();
virtual void tcl_command(GuiArgs&, void*);
};
DECLARE_MAKER(PointPath)
PointPath::PointPath(GuiContext* ctx) :
Module("PointPath", ctx, Source, "Misc", "Teem"),
control_lock_("PointPath resolution lock"), runonceflag(true), showingtrajectory(false)
{
// load the geometry output port
outport = (GeometryOPort *)get_oport("GeomOut");
matrix_iport = (MatrixIPort *)get_iport("PhaseIn");
// Default values
point_path_executable = "/home/berem/Documents/downloads/point_path/point_path";
deform_dir = "/home/zijiguest/4burak/lung-0055/bspline";
phase = 0;
output_file = "/home/berem/Documents/downloads/point_path/scitempout.txt";
SelectedPointMaterial = scinew Material(Color(0,0,0), Color(0,.5,0), Color(0,0,.5), 20);
DefaultPointMaterial = scinew Material(Color(0,0,0), Color(.54,.60,1), Color(.5,.5,.5), 20);
prevphase = -1;
}
PointPath::~PointPath(){
}
void
PointPath::execute(){
if(runonceflag)
{
InitialPoint=scinew PointWidget(this, &control_lock_, 2.0);
GeomHandle hTemp = InitialPoint->GetWidget();
outport->addObj(hTemp, "PointPathInitialPoint", &control_lock_);
runonceflag = false;
}
if(showingtrajectory) // use this to change color to match the current phase
{
// what is the current phase? get input
matrix_iport->get(inmatrix);
if(inmatrix->get_data_size() == 0)
currentphase=0;
else
currentphase = (int)inmatrix->get(0,0);
if(currentphase > maxphase)
{
currentphase = maxphase;
cerr << "Selected phase exceeds known maximum phase.";
}
// set that widget's default color to something identifiable
if(prevphase != -1)
traj[prevphase]->SetMaterial(SCIRun::Index(0),DefaultPointMaterial);
traj[currentphase]->SetMaterial(SCIRun::Index(0),SelectedPointMaterial);
// remember the prevphase
prevphase = currentphase;
}
outport->flushViews();
update_state(Completed); //Tell SCIRun we're done!
}
void
PointPath::ChoosePoint()
{
// Get point coords for InitialPoint
float px,py,pz;
ostringstream str, cmd;
str << Point(InitialPoint->GetPosition());
sscanf(str.str().c_str(), "[%f %f %f]", &px, &py, &pz);
// Execute point_path using default values and this point
cmd << point_path_executable << " " << deform_dir << " " << phase << " \"" << px << " " << py << " " << pz << "\" " << output_file;
system(cmd.str().c_str());
// When finished executing, read in phases and points
// Connect library with module, so output will be forwarded to the user
DataIOAlgo ioalgo(this);
if(!(ioalgo.ReadMatrix(output_file,mat,"SimpleTextFile")))
{
cout << "*** an error occurred, need cleanup" << endl;
return;
}
// Call DisplayPath()
DisplayPath(); // we do this, but a user can "undo" their changes by manually clicking the Display Path button as well
}
void
PointPath::DisplayPath()
{
// clear all of the objects in outport
outport->delAll();
// loop through the matrix object and create points that are connected in order, in a loop ~~ visualizing a circular linked list
GeomHandle hTemp;
char tempname[25];
int rowphase;
maxphase = 0;
for(int m=0;m<mat->nrows();m++)
{
rowphase=int(mat->get(m,0));
if(rowphase > maxphase)
maxphase = rowphase;
traj[rowphase] = scinew PointWidget(this, &control_lock_, 1.0);
traj[rowphase]->SetPosition(Point(mat->get(m,1), mat->get(m,2), mat->get(m,3)));
hTemp = traj[rowphase]->GetWidget();
sprintf(tempname, "PointPathPoint%d", rowphase);
outport->addObj(hTemp, tempname, &control_lock_);
}
outport->flushViews();
showingtrajectory=true;
}
void
PointPath::tcl_command(GuiArgs& args, void* userdata)
{
if(args[1] == "choosepoint")
{
ChoosePoint();
}
else
if(args[1] == "displaypath")
{
DisplayPath();
}
else
Module::tcl_command(args, userdata);
}
} // End namespace Teem
PointPath.tcl:
itcl_class Teem_Misc_PointPath {
inherit Module
constructor {config} {
set name PointPath
set_defaults
}
method set_defaults {} {
}
method ui {} {
set w .ui[modname]
if {[winfo exists $w]} {
return
}
toplevel $w
button $w.buttonchoosepoint -text "Choose Point" -command "$this-c choosepoint"
button $w.buttondisplaypath -text "Display Path" -command "$this-c displaypath"
pack $w.buttonchoosepoint $w.buttondisplaypath -side top -padx 10 -pady 10
makeSciButtonPanel $w $w $this
moveToCursor $w
}
}
The point_path Application
point_path.cxx:
/* =======================================================================*
Copyright (c) 2004-2007 Massachusetts General Hospital.
All rights reserved.
Output file format
0 x1 y1 z1
1 x2 y2 z2
2 x3 y3 z3
...
9 x10 y10 z10
* =======================================================================*/
#include <stdio.h>
#include <string.h>
#include "xform.h"
int input_phase;
int reference_phase = 5;
float input_position[3];
char deform_dir[256];
char output_file[256];
void
itk_transform_point (Xform* xf, float pos[3])
{
DoublePointType itk_1;
DoublePointType itk_2;
itk_1[0] = pos[0];
itk_1[1] = pos[1];
itk_1[2] = pos[2];
itk_2 = xf->get_bsp()->TransformPoint (itk_1);
pos[0] = itk_2[0];
pos[1] = itk_2[1];
pos[2] = itk_2[2];
}
/* Modify pos from original to deformed position */
void
compute_point_path (float pos[3], int output_phase)
{
char fn[256];
Xform xf;
if (input_phase == output_phase) return;
if (input_phase != reference_phase) {
sprintf (fn, "%s/t_%d_%d_bsp.txt", deform_dir, input_phase, reference_phase);
load_xform (&xf, fn);
itk_transform_point (&xf, pos);
}
if (output_phase != reference_phase) {
sprintf (fn, "%s/t_%d_%d_bsp.txt", deform_dir, reference_phase, output_phase);
load_xform (&xf, fn);
itk_transform_point (&xf, pos);
}
}
int
main (int argc, char *argv[])
{
int i;
FILE* fp;
if (argc != 5) {
fprintf (stderr, "Usage: point_path deform_dir phase \"point\" output_file\n");
return 1;
}
if (strlen(argv[1]) >= 256) {
fprintf (stderr, "Sorry, deform_dir path must be shorter than 256 characters\n");
return 1;
}
strcpy (deform_dir, argv[1]);
if (1 != sscanf (argv[2], "%d", &input_phase)) {
fprintf (stderr, "Error parsing phase argument\n");
return 1;
}
if (3 != sscanf (argv[3], "%g %g %g", &input_position[0], &input_position[1], &input_position[2])) {
fprintf (stderr, "Error parsing point argument\n");
return 1;
}
if (strlen(argv[4]) >= 256) {
fprintf (stderr, "Sorry, output_file path must be shorter than 256 characters\n");
return 1;
}
strcpy (output_file, argv[4]);
fp = fopen (output_file, "w");
if (!fp) {
fprintf (stderr, "Couldn't open file \"%s\" for write\n", output_file);
return 1;
}
for (i = 0; i < 10; i++) {
float deformed_position[3];
memcpy (deformed_position, input_position, 3 * sizeof(float));
compute_point_path (deformed_position, i);
fprintf (fp, "%d %g %g %g\n", i, deformed_position[0],
deformed_position[1], deformed_position[2]);
}
fclose (fp);
printf ("Finished!\n");
return 0;
}
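For reference, a minimal, self-contained sketch of how a consumer (such as the PointPath module above) might read the ten "phase x y z" lines that point_path writes. The struct and function names here are illustrative, not part of the thesis code.

```cpp
#include <cstdio>
#include <cassert>
#include <vector>

// One row of the point_path output file: "phase x y z".
struct PathPoint { int phase; float x, y, z; };

// Parse every "phase x y z" line from the given file.
std::vector<PathPoint> read_point_path (const char* fn)
{
    std::vector<PathPoint> path;
    FILE* fp = fopen (fn, "r");
    if (!fp) return path;
    PathPoint p;
    while (fscanf (fp, "%d %g %g %g", &p.phase, &p.x, &p.y, &p.z) == 4) {
        path.push_back (p);
    }
    fclose (fp);
    return path;
}
```

This mirrors the matrix read done by DataIOAlgo::ReadMatrix in the PointPath module: each row holds a phase index followed by the deformed point coordinates.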
xform.h:
/* =======================================================================*
Copyright (c) 2004-2006 Massachusetts General Hospital.
All rights reserved.
* =======================================================================*/
#ifndef _xform_h_
#define _xform_h_
#include "itkTranslationTransform.h"
#include "itkVersorRigid3DTransform.h"
#include "itkAffineTransform.h"
#include "itkBSplineDeformableTransform.h"
#include "gregBSplineDeformableTransform.h"
#include "load_mha.h"
#include "ra_registration.h"
#include "print_and_exit.h"
class Xform;
/* Different kinds of transforms */
typedef itk::TranslationTransform < double, Dimension > TranslationTransformType;
typedef itk::VersorRigid3DTransform < double > VersorTransformType;
typedef itk::AffineTransform < double, Dimension > AffineTransformType;
/* B-spline transforms */
const unsigned int SplineDimension = Dimension;
const unsigned int SplineOrder = 3;
typedef double CoordinateRepType;
typedef itk::gregBSplineDeformableTransform<
CoordinateRepType,
SplineDimension,
SplineOrder > BsplineTransformType;
void load_xform (Xform *xf, char* fn);
void save_xform (Xform *xf, char* fn);
void xform_to_trn (Xform *xf_out, Xform *xf_in, Stage_Parms* stage,
const OriginType& img_origin, const SpacingType& img_spacing,
const ImageRegionType& img_region);
void xform_to_vrs (Xform *xf_out, Xform *xf_in, Stage_Parms* stage,
const OriginType& img_origin, const SpacingType& img_spacing,
const ImageRegionType& img_region);
void xform_to_aff (Xform *xf_out, Xform *xf_in, Stage_Parms* stage,
const OriginType& img_origin, const SpacingType& img_spacing,
const ImageRegionType& img_region);
void xform_to_bsp (Xform *xf_out, Xform *xf_in, Stage_Parms* stage,
const OriginType& img_origin, const SpacingType& img_spacing,
const ImageRegionType& img_region);
void xform_to_vf (Xform* xf_out, Xform *xf_in, FloatImageType::Pointer image);
/* This way uses multiple smart pointers. Crufty, but easy to understand. */
class Xform {
public:
int m_type;
TranslationTransformType::Pointer m_trn;
VersorTransformType::Pointer m_vrs;
AffineTransformType::Pointer m_aff;
BsplineTransformType::Pointer m_bsp;
BsplineTransformType::ParametersType m_bsp_parms;
DeformationFieldType::Pointer m_vf;
public:
Xform () {
clear ();
}
Xform (Xform& xf) {
*this = xf;
}
~Xform () {
clear ();
}
Xform& operator= (Xform& xf) {
m_type = xf.m_type;
m_trn = xf.m_trn;
m_vrs = xf.m_vrs;
m_aff = xf.m_aff;
m_vf = xf.m_vf;
m_bsp = xf.m_bsp;
return *this;
}
void clear () {
m_type = TRANSFORM_NONE;
m_trn = 0;
m_vrs = 0;
m_aff = 0;
m_bsp = 0;
m_vf = 0;
}
TranslationTransformType::Pointer get_trn () {
if (m_type != TRANSFORM_TRANSLATION) {
print_and_exit ("Typecast error in get_trn()\n");
}
return m_trn;
}
VersorTransformType::Pointer get_vrs () {
if (m_type != TRANSFORM_VERSOR) {
printf ("Got type = %d\n", m_type);
print_and_exit ("Typecast error in get_vrs ()\n");
}
return m_vrs;
}
AffineTransformType::Pointer get_aff () {
if (m_type != TRANSFORM_AFFINE) {
print_and_exit ("Typecast error in get_aff()\n");
}
return m_aff;
}
BsplineTransformType::Pointer get_bsp () {
if (m_type != TRANSFORM_BSPLINE) {
print_and_exit ("Typecast error in get_bsp()\n");
}
return m_bsp;
}
DeformationFieldType::Pointer get_vf () {
if (m_type != TRANSFORM_VECTOR_FIELD) {
print_and_exit ("Typecast error in get_vf()\n");
}
return m_vf;
}
void set_trn (TranslationTransformType::Pointer trn) {
clear ();
m_type = TRANSFORM_TRANSLATION;
m_trn = trn;
}
void set_vrs (VersorTransformType::Pointer vrs) {
clear ();
m_type = TRANSFORM_VERSOR;
m_vrs = vrs;
}
void set_aff (AffineTransformType::Pointer aff) {
clear ();
m_type = TRANSFORM_AFFINE;
m_aff = aff;
}
void set_bsp (BsplineTransformType::Pointer bsp) {
clear ();
m_type = TRANSFORM_BSPLINE;
m_bsp = bsp;
}
void set_vf (DeformationFieldType::Pointer vf) {
clear ();
m_type = TRANSFORM_VECTOR_FIELD;
m_vf = vf;
}
};
#endif
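The Xform class above is essentially a tagged union: m_type records which smart pointer is valid, each setter clears the previous state before setting the tag, and each getter aborts on a tag mismatch. A minimal, ITK-free sketch of the same pattern (all names here are illustrative, not from the thesis code):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cassert>

enum XfType { XF_NONE, XF_TRANSLATION };

struct MiniXform {
    XfType m_type;
    double m_trn[3];   /* stands in for TranslationTransformType::Pointer */

    MiniXform () : m_type (XF_NONE) {}

    void clear () { m_type = XF_NONE; }

    /* Setter: clear old state, then set tag and payload together. */
    void set_trn (const double t[3]) {
        clear ();
        m_type = XF_TRANSLATION;
        for (int i = 0; i < 3; i++) m_trn[i] = t[i];
    }
    /* Getter: refuse to hand out a payload whose tag doesn't match. */
    const double* get_trn () {
        if (m_type != XF_TRANSLATION) {
            fprintf (stderr, "Typecast error in get_trn()\n");
            exit (1);
        }
        return m_trn;
    }
};
```

The real class does the same for five transform types; the tag check in each getter is what turns a silent wrong-type access into a loud error.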
xform.cxx:
/* =======================================================================*
Copyright (c) 2004-2006 Massachusetts General Hospital.
All rights reserved.
* =======================================================================*/
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include "itkArray.h"
#include "itkResampleImageFilter.h"
#include "itkBSplineResampleImageFunction.h"
#include "xform.h"
#include "ra_registration.h"
#include "resample_mha.h"
#include "print_and_exit.h"
void alloc_bspline_parms (Xform *xf, BsplineTransformType::Pointer bsp);
int
strcmp_alt (const char* s1, const char* s2)
{
return strncmp (s1, s2, strlen(s2));
}
int
get_parms (FILE* fp, itk::Array<double>* parms, int num_parms)
{
float f;
int s;
s = 0;
while (fscanf (fp, "%f", &f) == 1) {
(*parms)[s++] = (double) f;
if (s == num_parms) break;
}
return s;
}
void
load_xform (Xform *xf, char* fn)
{
char buf[1024];
FILE* fp;
fp = fopen (fn, "r");
if (!fp) {
print_and_exit ("Error: xf_in file %s not found\n", fn);
}
if (!fgets(buf,1024,fp)) {
print_and_exit ("Error reading from xf_in file.\n");
}
if (strcmp_alt(buf,"ObjectType = MGH_XFORM_TRANSLATION")==0) {
TranslationTransformType::Pointer trn = TranslationTransformType::New();
TranslationTransformType::ParametersType xfp(3);
int num_parms;
num_parms = get_parms (fp, &xfp, 3);
if (num_parms != 3) {
print_and_exit ("Wrong number of parameters in xf_in file.\n");
} else {
trn->SetParameters(xfp);
std::cout << "Initial translation parms = " << trn << std::endl;
}
xf->set_trn (trn);
fclose (fp);
} else if (strcmp_alt(buf,"ObjectType = MGH_XFORM_VERSOR")==0) {
VersorTransformType::Pointer vrs = VersorTransformType::New();
VersorTransformType::ParametersType xfp(6);
int num_parms;
num_parms = get_parms (fp, &xfp, 6);
if (num_parms != 6) {
print_and_exit ("Wrong number of parameters in xf_in file.\n");
} else {
vrs->SetParameters(xfp);
std::cout << "Initial versor parms = " << vrs << std::endl;
}
xf->set_vrs (vrs);
fclose (fp);
} else if (strcmp_alt(buf,"ObjectType = MGH_XFORM_AFFINE")==0) {
AffineTransformType::Pointer aff = AffineTransformType::New();
AffineTransformType::ParametersType xfp(12);
int num_parms;
num_parms = get_parms (fp, &xfp, 12);
if (num_parms != 12) {
print_and_exit ("Wrong number of parameters in xf_in file.\n");
} else {
aff->SetParameters(xfp);
std::cout << "Initial affine parms = " << aff << std::endl;
}
xf->set_aff (aff);
fclose (fp);
} else if (strcmp_alt(buf,"ObjectType = MGH_XFORM_BSPLINE")==0) {
int s[3];
float p[3];
BsplineTransformType::RegionType::SizeType size;
BsplineTransformType::RegionType region;
BsplineTransformType::SpacingType spacing;
BsplineTransformType::OriginType origin;
BsplineTransformType::Pointer bsp = BsplineTransformType::New();
/* Skip 2 lines */
fgets(buf,1024,fp);
fgets(buf,1024,fp);
printf ("Trying to load BSPLINE\n");
/* Load bulk transform, if it exists */
fgets(buf,1024,fp);
if (!strncmp ("BulkTransform", buf, strlen("BulkTransform"))) {
TranslationTransformType::Pointer trn = TranslationTransformType::New();
VersorTransformType::Pointer vrs = VersorTransformType::New();
AffineTransformType::Pointer aff = AffineTransformType::New();
itk::Array <double> xfp(12);
float f;
int n, num_parm = 0;
char *p = buf + strlen("BulkTransform = ");
while (sscanf (p, " %g%n", &f, &n) > 0) {
if (num_parm>=12) {
print_and_exit ("Error loading bulk transform\n");
}
xfp[num_parm] = f;
p += n;
num_parm++;
}
if (num_parm == 12) {
aff->SetParameters(xfp);
std::cout << "Bulk affine = " << aff;
bsp->SetBulkTransform (aff);
} else if (num_parm == 6) {
vrs->SetParameters(xfp);
std::cout << "Bulk versor = " << vrs;
bsp->SetBulkTransform (vrs);
} else if (num_parm == 3) {
trn->SetParameters(xfp);
std::cout << "Bulk translation = " << trn;
bsp->SetBulkTransform (trn);
} else {
print_and_exit ("Error loading bulk transform\n");
}
fgets(buf,1024,fp);
}
/* Load origin, spacing, size */
if (3 != sscanf(buf,"Offset = %g %g %g",&p[0],&p[1],&p[2])) {
print_and_exit ("Unexpected line in xform_in file.\n");
}
origin[0] = p[0]; origin[1] = p[1]; origin[2] = p[2];
fgets(buf,1024,fp);
if (3 != sscanf(buf,"ElementSpacing = %g %g %g",&p[0],&p[1],&p[2])) {
print_and_exit ("Unexpected line in xform_in file.\n");
}
spacing[0] = p[0]; spacing[1] = p[1]; spacing[2] = p[2];
fgets(buf,1024,fp);
if (3 != sscanf(buf,"DimSize = %d %d %d",&s[0],&s[1],&s[2])) {
print_and_exit ("Unexpected line in xform_in file.\n");
}
size[0] = s[0]; size[1] = s[1]; size[2] = s[2];
std::cout << "Offset = " << origin << std::endl;
std::cout << "Spacing = " << spacing << std::endl;
std::cout << "Size = " << size << std::endl;
fgets(buf,1024,fp);
if (strcmp_alt (buf, "ElementDataFile = LOCAL")) {
print_and_exit ("Error: bspline xf_in failed sanity check\n");
}
region.SetSize (size);
bsp->SetGridSpacing (spacing);
bsp->SetGridOrigin (origin);
bsp->SetGridRegion (region);
/* Allocate memory, and bind to bspline struct to xform struct */
alloc_bspline_parms (xf, bsp);
/* Read bspline coefficients */
/* GCS WARNING: I'm assuming this is OK after SetParameters() */
const unsigned int num_parms = bsp->GetNumberOfParameters();
for (unsigned int i = 0; i < num_parms; i++) {
float d;
if (!fgets(buf,1024,fp)) {
print_and_exit ("Missing bspline coefficient from xform_in file.\n");
}
if (1 != sscanf(buf,"%g",&d)) {
print_and_exit ("Bad bspline parm in xform_in file.\n");
}
xf->m_bsp_parms[i] = d;
}
fclose (fp);
/* Linux version seems to require this... */
bsp->SetParameters (xf->m_bsp_parms);
} else {
/* Close the file and try again, it is probably a vector field */
fclose (fp);
DeformationFieldType::Pointer vf = DeformationFieldType::New();
vf = load_float_field (fn);
if (!vf) {
print_and_exit ("Unexpected file format for xf_in file.\n");
}
xf->set_vf (vf);
}
}
void
save_xform_translation (TranslationTransformType::Pointer transform, char* filename)
{
FILE* fp = fopen (filename,"w");
if (!fp) {
printf ("Error: Couldn't open file %s for write\n", filename);
return;
}
fprintf (fp,"ObjectType = MGH_XFORM_TRANSLATION\n");
for (int i = 0; i < transform->GetNumberOfParameters(); i++) {
fprintf (fp, "%g\n", transform->GetParameters()[i]);
}
fclose (fp);
}
void
save_xform_versor (VersorTransformType::Pointer transform, char* filename)
{
FILE* fp = fopen (filename,"w");
if (!fp) {
printf ("Error: Couldn't open file %s for write\n", filename);
return;
}
fprintf (fp,"ObjectType = MGH_XFORM_VERSOR\n");
for (int i = 0; i < transform->GetNumberOfParameters(); i++) {
fprintf (fp, "%g\n", transform->GetParameters()[i]);
}
fclose (fp);
}
void
save_xform_affine (AffineTransformType::Pointer transform, char* filename)
{
FILE* fp = fopen (filename,"w");
if (!fp) {
printf ("Error: Couldn't open file %s for write\n", filename);
return;
}
fprintf (fp,"ObjectType = MGH_XFORM_AFFINE\n");
for (int i = 0; i < transform->GetNumberOfParameters(); i++) {
fprintf (fp, "%g\n", transform->GetParameters()[i]);
}
fclose (fp);
}
void
save_xform_bspline (BsplineTransformType::Pointer transform, char* filename)
{
FILE* fp = fopen (filename,"w");
if (!fp) {
printf ("Error: Couldn't open file %s for write\n", filename);
return;
}
fprintf (fp,
"ObjectType = MGH_XFORM_BSPLINE\n"
"NDims = 3\n"
"BinaryData = False\n");
if (transform->GetBulkTransform()) {
if ((!strcmp("TranslationTransform", transform->GetBulkTransform()->GetNameOfClass())) ||
(!strcmp("AffineTransform", transform->GetBulkTransform()->GetNameOfClass())) ||
(!strcmp("VersorTransform", transform->GetBulkTransform()->GetNameOfClass()))) {
fprintf (fp, "BulkTransform =");
for (int i = 0; i < transform->GetBulkTransform()->GetNumberOfParameters(); i++) {
fprintf (fp, " %g", transform->GetBulkTransform()->GetParameters()[i]);
}
fprintf (fp, "\n");
} else if (strcmp("IdentityTransform", transform->GetBulkTransform()->GetNameOfClass())) {
printf("Warning!!! BulkTransform exists. Type=%s\n", transform->GetBulkTransform()->GetNameOfClass());
printf(" # of parameters=%d\n", transform->GetBulkTransform()->GetNumberOfParameters());
printf(" The code currently does not know how to handle this type and will not write the parameters out!\n");
}
}
fprintf (fp,
"Offset = %f %f %f\n"
"ElementSpacing = %f %f %f\n"
"DimSize = %d %d %d\n"
"ElementDataFile = LOCAL\n",
transform->GetGridOrigin()[0],
transform->GetGridOrigin()[1],
transform->GetGridOrigin()[2],
transform->GetGridSpacing()[0],
transform->GetGridSpacing()[1],
transform->GetGridSpacing()[2],
transform->GetGridRegion().GetSize()[0],
transform->GetGridRegion().GetSize()[1],
transform->GetGridRegion().GetSize()[2]
);
for (int i = 0; i < transform->GetNumberOfParameters(); i++) {
fprintf (fp, "%g\n", transform->GetParameters()[i]);
}
fclose (fp);
}
void
save_xform (Xform *xf, char* fn)
{
switch (xf->m_type) {
case TRANSFORM_TRANSLATION:
save_xform_translation (xf->get_trn(), fn);
break;
case TRANSFORM_VERSOR:
save_xform_versor (xf->get_vrs(), fn);
break;
case TRANSFORM_AFFINE:
save_xform_affine (xf->get_aff(), fn);
break;
case TRANSFORM_BSPLINE:
save_xform_bspline (xf->get_bsp(), fn);
break;
case TRANSFORM_VECTOR_FIELD:
save_image (xf->get_vf(), fn);
break;
}
}
#if defined (GCS_REARRANGING_STUFF)
void
init_versor_moments_old (RegistrationType::Pointer registration)
{
typedef itk::CenteredTransformInitializer < VersorTransformType,
FloatImageType, FloatImageType > TransformInitializerType;
TransformInitializerType::Pointer initializer =
TransformInitializerType::New();
typedef VersorTransformType* VTPointer;
VTPointer transform = static_cast<VTPointer>(registration->GetTransform());
initializer->SetTransform(transform);
initializer->SetFixedImage(registration->GetFixedImage());
initializer->SetMovingImage(registration->GetMovingImage());
initializer->GeometryOn();
printf ("Calling Initialize Transform\n");
initializer->InitializeTransform();
std::cout << "Transform is " << registration->GetTransform()->GetParameters() << std::endl;
}
void
init_versor_moments (RegistrationType::Pointer registration,
VersorTransformType* versor)
{
typedef itk::CenteredTransformInitializer < VersorTransformType,
FloatImageType, FloatImageType > TransformInitializerType;
TransformInitializerType::Pointer initializer =
TransformInitializerType::New();
initializer->SetTransform(versor);
initializer->SetFixedImage(registration->GetFixedImage());
initializer->SetMovingImage(registration->GetMovingImage());
initializer->GeometryOn();
printf ("Calling Initialize Transform\n");
initializer->InitializeTransform();
std::cout << "Transform is " << registration->GetTransform()->GetParameters() << std::endl;
}
void
set_transform_translation (RegistrationType::Pointer registration,
Registration_Parms* regp)
{
TranslationTransformType::Pointer transform = TranslationTransformType::New();
TranslationTransformType::ParametersType vt(3);
registration->SetTransform (transform);
switch (regp->init_type) {
case TRANSFORM_NONE:
{
VersorTransformType::Pointer v = VersorTransformType::New();
FloatVectorType dis;
init_versor_moments (registration, v);
dis = v->GetOffset();
vt[0] = dis[0]; vt[1] = dis[1]; vt[2] = dis[2];
transform->SetParameters(vt);
std::cout << "Initial translation parms = " << transform << std::endl;
}
break;
case TRANSFORM_TRANSLATION:
vt[0] = regp->init[0];
vt[1] = regp->init[1];
vt[2] = regp->init[2];
transform->SetParameters(vt);
break;
case TRANSFORM_VERSOR:
case TRANSFORM_AFFINE:
case TRANSFORM_FROM_FILE:
default:
not_implemented();
break;
}
}
void
set_transform_versor (RegistrationType::Pointer registration,
Registration_Parms* regp)
{
VersorTransformType::Pointer transform = VersorTransformType::New();
VersorTransformType::ParametersType vt(6);
registration->SetTransform (transform);
switch (regp->init_type) {
case TRANSFORM_NONE:
init_versor_moments (registration, transform);
break;
case TRANSFORM_TRANSLATION:
not_implemented();
break;
case TRANSFORM_VERSOR:
vt[0] = regp->init[0];
vt[1] = regp->init[1];
vt[2] = regp->init[2];
vt[3] = regp->init[3];
vt[4] = regp->init[4];
vt[5] = regp->init[5];
transform->SetParameters(vt);
break;
case TRANSFORM_AFFINE:
case TRANSFORM_FROM_FILE:
default:
not_implemented();
break;
}
}
void
set_transform_affine (RegistrationType::Pointer registration,
Registration_Parms* regp)
{
AffineTransformType::Pointer transform = AffineTransformType::New();
AffineTransformType::ParametersType vt(12);
registration->SetTransform (transform);
switch (regp->init_type) {
case TRANSFORM_NONE:
{
VersorTransformType::Pointer v = VersorTransformType::New();
init_versor_moments (registration, v);
transform->SetMatrix(v->GetRotationMatrix());
transform->SetOffset(v->GetOffset());
std::cout << "Initial affine parms = " << transform << std::endl;
}
break;
case TRANSFORM_TRANSLATION:
not_implemented();
break;
case TRANSFORM_VERSOR:
{
VersorTransformType::Pointer v = VersorTransformType::New();
VersorTransformType::ParametersType vt(6);
vt[0] = regp->init[0];
vt[1] = regp->init[1];
vt[2] = regp->init[2];
vt[3] = regp->init[3];
vt[4] = regp->init[4];
vt[5] = regp->init[5];
v->SetParameters(vt);
std::cout << "Initial versor parms = " << v << std::endl;
transform->SetMatrix(v->GetRotationMatrix());
transform->SetOffset(v->GetOffset());
std::cout << "Initial affine parms = " << transform << std::endl;
}
break;
case TRANSFORM_AFFINE:
{
AffineTransformType::ParametersType at(12);
for (int i=0; i<12; i++) {
at[i] = regp->init[i];
}
transform->SetParameters(at);
std::cout << "Initial affine parms = " << transform << std::endl;
}
break;
case TRANSFORM_FROM_FILE:
default:
not_implemented();
break;
}
}
#endif /* GCS_REARRANGING_STUFF */
void
init_translation_default (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
TranslationTransformType::Pointer trn = TranslationTransformType::New();
xf_out->set_trn (trn);
}
void
init_versor_default (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
VersorTransformType::Pointer vrs = VersorTransformType::New();
xf_out->set_vrs (vrs);
}
void
init_affine_default (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
AffineTransformType::Pointer aff = AffineTransformType::New();
xf_out->set_aff (aff);
}
void
xform_trn_to_aff (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
init_affine_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
xf_out->get_aff()->SetOffset(xf_in->get_trn()->GetOffset());
}
void
xform_vrs_to_aff (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
init_affine_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
xf_out->get_aff()->SetMatrix(xf_in->get_vrs()->GetRotationMatrix());
xf_out->get_aff()->SetOffset(xf_in->get_vrs()->GetOffset());
}
void
init_bspline_region (BsplineTransformType::Pointer bsp,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
BsplineTransformType::OriginType origin = img_origin;
BsplineTransformType::SpacingType spacing = img_spacing;
BsplineTransformType::RegionType region;
BsplineTransformType::RegionType::SizeType grid_size;
ImageRegionType::SizeType img_size = img_region.GetSize();
ImageRegionType::IndexType img_index = img_region.GetIndex();
if (stage->grid_method == 0) {
/* Number of grid points was assigned */
for (int i=0; i<3; i++) grid_size[i] = stage->num_grid[i];
/* According to the examples & documentation, there should be 3 extra
grid points for an order 3 bspline: 1 lower & 2 upper. But my
experiments show that there are only 2 extra: 1 lower & 1 upper */
for (int i=0; i<3; i++) grid_size[i] += 3;
#if defined (commentout)
for (int i=0; i<3; i++) grid_size[i] --;
#endif
region.SetSize (grid_size);
for (unsigned int r=0; r<Dimension; r++) {
origin[r] += img_index[r] * spacing[r];
spacing[r] *= static_cast<double>(img_size[r] - 1) /
static_cast<double>(grid_size[r] - 1);
origin[r] -= spacing[r];
}
bsp->SetGridSpacing (spacing);
bsp->SetGridOrigin (origin);
bsp->SetGridRegion (region);
} else {
/* Absolute grid spacing was assigned */
BsplineTransformType::OriginType fraction_size;
for (int i=0; i<3; i++) {
grid_size[i] = (int) (img_size[i]*spacing[i]/stage->grid_spac[i]);
fraction_size[i] = img_size[i]*spacing[i] - grid_size[i]*stage->grid_spac[i];
grid_size[i] = grid_size[i] + 2;
}
for (int i=0; i<3; i++) grid_size[i] += 3;
#if defined (commentout)
for (int i=0; i<3; i++) grid_size[i] --;
#endif
region.SetSize (grid_size);
for (unsigned int r=0; r<Dimension; r++)
origin[r] += img_index[r]*spacing[r]-fraction_size[r]/2-stage->grid_spac[r];
for (unsigned int r=0; r<Dimension; r++)
spacing[r] = stage->grid_spac[r];
//bsp->SetGridSpacing (stage->grid_spac);
bsp->SetGridSpacing (spacing);
bsp->SetGridOrigin (origin);
bsp->SetGridRegion (region);
}
std::cout << "BSpline Region = "
<< region;
std::cout << "BSpline Grid Origin = "
<< bsp->GetGridOrigin()
<< std::endl;
std::cout << "BSpline Grid Spacing = "
<< bsp->GetGridSpacing()
<< std::endl;
std::cout << "Image Origin = "
<< img_origin
<< std::endl;
std::cout << "Image Spacing = "
<< img_spacing
<< std::endl;
std::cout << "Image Index = "
<< img_index
<< std::endl;
std::cout << "Image Size = "
<< img_size
<< std::endl;
}
void
alloc_bspline_parms (Xform *xf, BsplineTransformType::Pointer bsp)
{
const unsigned int num_parms = bsp->GetNumberOfParameters();
if (num_parms != xf->m_bsp_parms.GetSize()) {
xf->m_bsp_parms.SetSize (num_parms);
xf->m_bsp_parms.Fill (0.0);
bsp->SetParameters (xf->m_bsp_parms);
}
printf ("Bspline transform has %d free parameters\n", num_parms);
xf->set_bsp (bsp);
}
void
init_bspline_default (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
BsplineTransformType::Pointer bsp = BsplineTransformType::New();
init_bspline_region (bsp, stage, img_origin, img_spacing, img_region);
alloc_bspline_parms (xf_out, bsp);
}
void
xform_trn_to_bsp (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
init_bspline_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
xf_out->get_bsp()->SetBulkTransform (xf_in->get_trn());
}
void
xform_vrs_to_bsp (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
init_bspline_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
xf_out->get_bsp()->SetBulkTransform (xf_in->get_vrs());
}
void
xform_aff_to_bsp (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
init_bspline_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
xf_out->get_bsp()->SetBulkTransform (xf_in->get_aff());
}
void
xform_bsp_to_bsp (Xform *xf_out, Xform* xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
BsplineTransformType::Pointer bsp_new = BsplineTransformType::New();
init_bspline_region (bsp_new, stage, img_origin, img_spacing, img_region);
alloc_bspline_parms (xf_out, bsp_new);
BsplineTransformType::Pointer bsp_old = xf_in->get_bsp();
/* Need to copy the bulk transform */
bsp_new->SetBulkTransform (bsp_old->GetBulkTransform());
// Now we need to initialize the BSpline coefficients of the higher
// resolution transform. This is done by first computing the actual
// deformation field at the higher resolution from the lower
// resolution BSpline coefficients. Then a BSpline decomposition
// is done to obtain the BSpline coefficient of the higher
// resolution transform.
unsigned int counter = 0;
for (unsigned int k = 0; k < Dimension; k++) {
typedef BsplineTransformType::ImageType ParametersImageType;
typedef itk::ResampleImageFilter<ParametersImageType,
ParametersImageType> ResamplerType;
ResamplerType::Pointer upsampler = ResamplerType::New();
typedef itk::BSplineResampleImageFunction<ParametersImageType,
double> FunctionType;
FunctionType::Pointer fptr = FunctionType::New();
typedef itk::IdentityTransform<double,
Dimension> IdentityTransformType;
IdentityTransformType::Pointer identity = IdentityTransformType::New();
upsampler->SetInput (bsp_old->GetCoefficientImage()[k]);
upsampler->SetInterpolator (fptr);
upsampler->SetTransform (identity);
upsampler->SetSize (bsp_new->GetGridRegion().GetSize());
upsampler->SetOutputSpacing (bsp_new->GetGridSpacing());
upsampler->SetOutputOrigin (bsp_new->GetGridOrigin());
typedef itk::BSplineDecompositionImageFilter<ParametersImageType,
ParametersImageType> DecompositionType;
DecompositionType::Pointer decomposition = DecompositionType::New();
decomposition->SetSplineOrder (SplineOrder);
decomposition->SetInput (upsampler->GetOutput());
decomposition->Update();
ParametersImageType::Pointer newCoefficients
= decomposition->GetOutput();
// copy the coefficients into the parameter array
typedef itk::ImageRegionIterator<ParametersImageType> Iterator;
Iterator it (newCoefficients, bsp_new->GetGridRegion());
while (!it.IsAtEnd()) {
xf_out->m_bsp_parms[counter++] = it.Get();
++it;
}
}
}
DeformationFieldType::Pointer
xform_any_to_vf (itk::Transform<double,3,3>* xf,
FloatImageType::Pointer image)
{
DeformationFieldType::Pointer field = DeformationFieldType::New();
field->SetRegions (image->GetBufferedRegion());
field->SetOrigin (image->GetOrigin());
field->SetSpacing (image->GetSpacing());
field->Allocate();
typedef itk::ImageRegionIterator< DeformationFieldType > FieldIterator;
FieldIterator fi (field, image->GetBufferedRegion());
fi.GoToBegin();
DoublePointType fixed_point;
DoublePointType moving_point;
DeformationFieldType::IndexType index;
FloatVectorType displacement;
while (!fi.IsAtEnd()) {
index = fi.GetIndex();
field->TransformIndexToPhysicalPoint (index, fixed_point);
moving_point = xf->TransformPoint (fixed_point);
for (int r = 0; r < Dimension; r++) {
displacement[r] = moving_point[r] - fixed_point[r];
}
fi.Set (displacement);
++fi;
}
return field;
}
DeformationFieldType::Pointer
xform_vf_to_vf (DeformationFieldType::Pointer vf,
FloatImageType::Pointer image)
{
printf ("Setting deformation field\n");
const DeformationFieldType::SpacingType& vf_spacing = vf->GetSpacing();
printf ("Deformation field spacing is: %g %g %g\n",
vf_spacing[0], vf_spacing[1], vf_spacing[2]);
const DeformationFieldType::SizeType vf_size = vf->GetLargestPossibleRegion().GetSize();
printf ("VF Size is %d %d %d\n", vf_size[0], vf_size[1], vf_size[2]);
const FloatImageType::SizeType& img_size = image->GetLargestPossibleRegion().GetSize();
printf ("IM Size is %d %d %d\n", img_size[0], img_size[1], img_size[2]);
const FloatImageType::SpacingType& img_spacing = image->GetSpacing();
printf ("Deformation field spacing is: %g %g %g\n",
img_spacing[0], img_spacing[1], img_spacing[2]);
if (vf_size[0] != img_size[0] || vf_size[1] != img_size[1] || vf_size[2] != img_size[2]) {
//printf ("Deformation stats (pre)\n");
//deformation_stats (vf);
vf = vector_resample_image (vf, image);
//printf ("Deformation stats (post)\n");
//deformation_stats (vf);
const DeformationFieldType::SizeType vf_size = vf->GetLargestPossibleRegion().GetSize();
printf ("NEW VF Size is %d %d %d\n", vf_size[0], vf_size[1], vf_size[2]);
const FloatImageType::SizeType& img_size = image->GetLargestPossibleRegion().GetSize();
printf ("IM Size is %d %d %d\n", img_size[0], img_size[1], img_size[2]);
}
return vf;
}
void
xform_to_vrs (Xform *xf_out,
Xform *xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
switch (xf_in->m_type) {
case TRANSFORM_NONE:
init_versor_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_TRANSLATION:
print_and_exit ("Sorry, couldn't convert to vrs\n");
break;
case TRANSFORM_VERSOR:
*xf_out = *xf_in;
break;
case TRANSFORM_AFFINE:
case TRANSFORM_BSPLINE:
case TRANSFORM_VECTOR_FIELD:
print_and_exit ("Sorry, couldn't convert to vrs\n");
break;
}
}
void
xform_to_trn (Xform *xf_out,
Xform *xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
switch (xf_in->m_type) {
case TRANSFORM_NONE:
init_translation_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_TRANSLATION:
*xf_out = *xf_in;
break;
case TRANSFORM_VERSOR:
case TRANSFORM_AFFINE:
case TRANSFORM_BSPLINE:
case TRANSFORM_VECTOR_FIELD:
print_and_exit ("Sorry, couldn't convert to trn\n");
break;
}
}
void
xform_to_aff (Xform *xf_out,
Xform *xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
switch (xf_in->m_type) {
case TRANSFORM_NONE:
init_affine_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_TRANSLATION:
xform_trn_to_aff (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_VERSOR:
xform_vrs_to_aff (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_AFFINE:
*xf_out = *xf_in;
break;
case TRANSFORM_BSPLINE:
case TRANSFORM_VECTOR_FIELD:
print_and_exit ("Sorry, couldn't convert to aff\n");
break;
}
}
void
xform_to_bsp (Xform *xf_out,
Xform *xf_in,
Stage_Parms* stage,
const OriginType& img_origin,
const SpacingType& img_spacing,
const ImageRegionType& img_region)
{
BsplineTransformType::Pointer bsp;
switch (xf_in->m_type) {
case TRANSFORM_NONE:
init_bspline_default (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_TRANSLATION:
xform_trn_to_bsp (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_VERSOR:
xform_vrs_to_bsp (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_AFFINE:
xform_aff_to_bsp (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_BSPLINE:
xform_bsp_to_bsp (xf_out, xf_in, stage,
img_origin, img_spacing, img_region);
break;
case TRANSFORM_VECTOR_FIELD:
print_and_exit ("Sorry, couldn't convert to bsp\n");
break;
}
}
void
xform_to_vf (Xform* xf_out, Xform *xf_in, FloatImageType::Pointer image)
{
DeformationFieldType::Pointer vf;
switch (xf_in->m_type) {
case TRANSFORM_NONE:
print_and_exit ("Sorry, couldn't convert to vf\n");
break;
case TRANSFORM_TRANSLATION:
vf = xform_any_to_vf (xf_in->get_trn(), image);
break;
case TRANSFORM_VERSOR:
vf = xform_any_to_vf (xf_in->get_vrs(), image);
break;
case TRANSFORM_AFFINE:
vf = xform_any_to_vf (xf_in->get_aff(), image);
break;
case TRANSFORM_BSPLINE:
vf = xform_any_to_vf (xf_in->get_bsp(), image);
break;
case TRANSFORM_VECTOR_FIELD:
vf = xform_vf_to_vf (xf_in->get_vf(), image);
break;
}
xf_out->set_vf (vf);
}
ra_registration.h:
/* =======================================================================*
Copyright (c) 2004-2006 Massachusetts General Hospital.
All rights reserved.
* =======================================================================*/
#ifndef _ra_registration_h_
#define _ra_registration_h_
#include <stdlib.h>
#include "itkDemonsRegistrationFilter.h"
#if defined (commentout)
#include "itkMultiResolutionImageRegistrationMethod.h"
#endif
#include "itkImageRegistrationMethod.h"
#include "load_mha.h"
/* Registration typedefs */
#if defined (commentout)
typedef itk::MultiResolutionImageRegistrationMethod <
FloatImageType, FloatImageType > RegistrationType;
#endif
typedef itk::ImageRegistrationMethod <
FloatImageType, FloatImageType > RegistrationType;
typedef itk::DemonsRegistrationFilter<
FloatImageType,
FloatImageType,
DeformationFieldType> DemonsFilterType;
#if defined (commentout)
#define TRANSFORM_NONE 0
#define TRANSFORM_TRANSLATION 1
#define TRANSFORM_VERSOR 2
#define TRANSFORM_AFFINE 3
#define TRANSFORM_BSPLINE 4
#define TRANSFORM_FROM_FILE 5
#define TRANSFORM_DEMONS 6
#define OPTIMIZATION_NO_REGISTRATION 0
#define OPTIMIZATION_AMOEBA 1
#define OPTIMIZATION_RSG 2
#define OPTIMIZATION_VERSOR 3
#define OPTIMIZATION_LBFGS 4
#define OPTIMIZATION_LBFGSB 5
#endif
#define TRANSFORM_NONE 0
#define TRANSFORM_TRANSLATION 1
#define TRANSFORM_VERSOR 2
#define TRANSFORM_AFFINE 3
#define TRANSFORM_BSPLINE 4
#define TRANSFORM_VECTOR_FIELD 5
#define OPTIMIZATION_NO_REGISTRATION 0
#define OPTIMIZATION_AMOEBA 1
#define OPTIMIZATION_RSG 2
#define OPTIMIZATION_VERSOR 3
#define OPTIMIZATION_LBFGS 4
#define OPTIMIZATION_LBFGSB 5
#define OPTIMIZATION_DEMONS 6
#define METRIC_MSE 1
#define METRIC_MI 2
#define METRIC_MI_MATTES 3
class Stage_Parms {
public:
int xform_type;
int optim_type;
int metric_type;
int resolution[3];
float background_max; /* This is used as a threshold to find the valid region */
float background_val; /* This is used for replacement when resampling */
int min_its;
int max_its;
float grad_tol;
float convergence_tol;
float demons_std;
int num_grid[3]; // number of grid points in x,y,z directions
float grid_spac[3]; // absolute grid spacing in mm in x,y,z directions
int grid_method; // grid method: number of grid points (0) or absolute spacing (1)
int histoeq; // histogram matching flag on (1) or off (0)
char img_out_fn[_MAX_PATH];
char xf_out_fn[_MAX_PATH];
char vf_out_fn[_MAX_PATH];
public:
Stage_Parms () {
/* Generic optimization parms */
xform_type = TRANSFORM_VERSOR;
optim_type = OPTIMIZATION_VERSOR;
metric_type = METRIC_MSE;
resolution[0] = 4;
resolution[1] = 4;
resolution[2] = 1;
/* Intensity values for air */
background_max = -999.0;
background_val = -1200.0;
/* Generic optimization parms */
min_its = 2;
max_its = 25;
grad_tol = 1.5;
convergence_tol = 5.0;
/* Demons parms */
demons_std = 6.0;
/* Bspline parms */
num_grid[0] = 10;
num_grid[1] = 10;
num_grid[2] = 10;
grid_spac[0] = 20.;
grid_spac[1] = 20.;
grid_spac[2] = 20.;
grid_method = 1; // by default goes to the absolute spacing
histoeq = 0; // by default, don't do it
*img_out_fn = 0;
*xf_out_fn = 0;
*vf_out_fn = 0;
}
Stage_Parms (Stage_Parms& s) {
/* Copy all the parameters except the file names */
*this = s;
*img_out_fn = 0;
*xf_out_fn = 0;
*vf_out_fn = 0;
}
};
class Registration_Parms {
public:
char moving_fn[_MAX_PATH];
char fixed_fn[_MAX_PATH];
char moving_mask_fn[_MAX_PATH];
char fixed_mask_fn[_MAX_PATH];
char img_out_fn[_MAX_PATH];
char xf_in_fn[_MAX_PATH];
char xf_out_fn[_MAX_PATH];
char vf_out_fn[_MAX_PATH];
int init_type;
double init[12];
int num_stages;
Stage_Parms** stages;
public:
Registration_Parms() {
*moving_fn = 0;
*fixed_fn = 0;
*moving_mask_fn = 0;
*fixed_mask_fn = 0;
*img_out_fn = 0;
*xf_in_fn = 0;
*xf_out_fn = 0;
*vf_out_fn = 0;
init_type = TRANSFORM_NONE;
num_stages = 0;
stages = 0;
}
~Registration_Parms() {
for (int i = 0; i < num_stages; i++) {
delete stages[i];
}
if (stages) free (stages);
}
};
class Registration_Data {
public:
FloatImageType::Pointer fixed_image;
FloatImageType::Pointer moving_image;
UCharImageType::Pointer fixed_mask;
UCharImageType::Pointer moving_mask;
};
void not_implemented (void);
void do_registration (Registration_Parms* regp);
#endif
Bibliography
[1] SCIRun: A Scientific Computing Problem Solving Environment, Scientific Com-
puting and Imaging Institute (SCI).
[2] BioImage: Volumetric Image Analysis and Visualization. Scientific Computing
and Imaging Institute (SCI).
[3] 3D Slicer - Introduction.
[4] Osirix - About.
[5] SlicerWiki.
[6] N. Archip, S. Tatli, P. Morrison, F. Jolesz, S.K. Warfield, and S. Silverman.
Non-rigid registration of pre-procedural MR images with intra-procedural unen-
hanced CT images for improved targeting of tumors during liver radiofrequency
ablations. In Medical Image Computing and Computer-Assisted Intervention
(MICCAI 2007), volume 4792 of Lecture Notes in Computer Science, pages
969–977. Springer, 2007.
[7] F. Azmandian. The chart checker: Applying data mining techniques to detect
major errors in radiotherapy treatment charts. Master’s thesis, Northeastern
University, 2007.
[8] J. Bedford. A comparison of coplanar four-field techniques for conformal radio-
therapy of the prostate. Radiotherapy and Oncology, 51(3):225–235, 1999.
[9] C. Beigelman-Aubry. Post-processing and display in multislice CT of the chest.
JBR-BTR, 90:85–88, 2007.
[10] M. Betke, J. Ruel, G.C. Sharp, S.B. Jiang, D.P. Gierga, and G.T.Y. Chen.
Tracking and prediction of tumor movement in the abdomen. In PRIS, pages
27–37, 2006.
[11] V. Boldea, G.C. Sharp, S.B. Jiang, and D. Sarrut. 4D-CT lung motion esti-
mation with deformable registration: quantification of motion nonlinearity and
hysteresis. Med Phys, 35:1008–1018, Mar 2008.
[12] F.L. Bookstein. Principal warps: Thin-plate splines and the decomposition of
deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567–585, 1989.
[13] P. Bourke. Polygonising a scalar field, May 1994.
[14] J.K. Bucsko. Managing respiratory motion. Radiology Today, 5(23):33, 2004.
[15] S. Chang, J. Zhou, Q. Liu, D.N. Metaxas, B.G. Haffty, S.N. Kim, S.J. Jabbour,
and N.J. Yue. Registration of lung tissue between fluoroscope and CT images:
determination of beam gating parameters in radiotherapy. In Medical Image
Computing and Computer-Assisted Intervention (MICCAI 2007), volume 4791 of
Lecture Notes in Computer Science, pages 751–758. Springer, 2007.
[16] G.T.Y. Chen and E.R.M. Rietzel. 4D CT simulation. In Image-Guided IMRT,
pages 247–257. Springer Berlin Heidelberg, 2006.
[17] K.S. Cover, F.J. Lagerwaard, and S. Senan. Color intensity projections: a rapid
approach for evaluating four-dimensional CT scans in treatment planning. Int.
J. Radiat. Oncol. Biol. Phys., 64:954–961, Mar 2006.
[18] A.H. Dachman, P. Lefere, S. Gryspeerdt, and M. Morin. CT colonography:
visualization methods, interpretation, and pitfalls. Radiol. Clin. North Am.,
45:347–359, Mar 2007.
[19] N. Dedual, D. Kaeli, B. Johnson, G. Chen, and J. Wolfgang. Visualization of 4D
computed tomography datasets. In Proceedings of the 2006 IEEE Southwest
Symposium on Image Analysis and Interpretation, pages 120–123, 2006.
[20] K.H. Englmeier and M.D. Seemann. Multimodal virtual bronchoscopy using
PET/CT images. Comput. Aided Surg., 13:106–113, Mar 2008.
[21] B. Erem, G.C. Sharp, Z. Wu, and D.R. Kaeli. Interactive deformable regis-
tration visualization and analysis of 4D computed tomography. In D. Zhang,
editor, ICMB, volume 4901 of Lecture Notes in Computer Science, pages 232–
239. Springer, 2008.
[22] R.S. Estepar, N. Stylopoulos, R. Ellis, E. Samset, C.F. Westin, C. Thompson,
and K. Vosburgh. Towards scarless surgery: an endoscopic ultrasound navigation
system for transgastric access procedures. Comput. Aided Surg., 12:311–324, Nov
2007.
[23] G. Farneback and C.F. Westin. Affine and deformable registration based on
polynomial expansion. In Medical Image Computing and Computer-Assisted
Intervention (MICCAI 2006), pages 857–864, 2006.
[24] M. Folkert, N. Dedual, and G.T.Y. Chen. A biological lung phantom for IGRT
studies. Medical Physics, 33(6):22–34, 2006.
[25] M. Fornefett, K. Rohr, and H.S. Stiehl. Radial basis functions with compact
support for elastic registration of medical images. Image and Vision Computing,
19(1–2):87–96, 2001.
[26] D.T. Gering, A. Nabavi, R. Kikinis, W.E.L. Grimson, N. Hata, P. Everett, F.A.
Jolesz, and W.M. Wells III. An integrated visualization system for surgical plan-
ning and guidance using image fusion and interventional imaging. In MICCAI,
pages 809–819, 1999.
[27] A.J. Hanson and R.A. Cross. Interactive visualization methods for four dimensions.
In Proceedings of IEEE Visualization ’93, pages 196–203, October 1993.
[28] K.M. Horton, M.R. Horton, and E.K. Fishman. Advanced visualization of
airways with 64-MDCT: 3D mapping and virtual bronchoscopy. AJR Am J
Roentgenol, 189:1387–1396, Dec 2007.
[29] A. Khamene, C. Schaller, J. Hornegger, J.C. Celi, B. Ofstad, E. Rietzel, X.A. Li,
A. Tai, and J. Bayouth. A novel image based verification method for respiratory
motion management in radiation therapy. In Proceedings of the IEEE 11th
International Conference on Computer Vision (ICCV 2007), pages 1–7, October 2007.
[30] D. Kotsianos-Hermle, M. Scherr, S. Wirth, J. Rieger, R.M. Huber, M. Reiser,
and H. Hautmann. Visualization of bronchial lesions using multidetector CT and
endobronchial ultrasound (EBUS). Eur. J. Med. Res., 12:84–89, Feb 2007.
[31] P. Li, G. Remmert, J. Biederer, and R. Bendl. Finite element simulation of
moving targets in radio therapy. In Bildverarbeitung für die Medizin 2007, pages
353–357. Springer Berlin Heidelberg, 2007.
[32] S. Mori, M. Endo, S. Komatsu, T. Yashiro, S. Kandatsu, and M. Baba. Four-
dimensional measurement of lung tumor displacement using 256-multi-slice
CT-scanner. Lung Cancer, 56(1):59–67, 2007.
[33] Y.D. Mutaf, J.A. Antolak, and D.H. Brinkmann. The impact of temporal inac-
curacies on 4DCT image quality. Med Phys, 34:1615–1622, May 2007.
[34] K.M. Prise, G. Schettino, M. Folkard, and K.D. Held. New insights on cell death
from radiation exposure. The Lancet Oncology, 6(7):520–8, 2005.
[35] E.R.M. Rietzel and G.T.Y. Chen. 4D imaging and treatment planning. In New
Technologies in Radiation Oncology, pages 81–97. Springer Berlin Heidelberg,
2006.
[36] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J.
Hawkes. Nonrigid registration using free-form deformations: Application to
breast MR images. IEEE Trans. on Med. Imag., 18(8):712–721, 1999.
[37] R. San Jose Estepar, N. Stylopoulos, R.E. Ellis, E. Samset, C.F. Westin,
C. Thompson, and K. Vosburgh. Towards scarless surgery: An endoscopic-
ultrasound navigation system for transgastric access procedures. In Ninth Inter-
national Conference on Medical Image Computing and Computer-Assisted Inter-
vention (MICCAI’06), Lecture Notes in Computer Science 4190, pages 445–453,
Copenhagen, Denmark, October 2006.
[38] A.P. Santhanam, C. Imielinska, P. Davenport, P. Kupelian, and J.P. Rolland.
Modeling real-time 3-d lung deformations for medical visualization. IEEE Trans
Inf Technol Biomed, 12:257–270, Mar 2008.
[39] S. Sayeh, J. Wang, W.T. Main, W. Kilby, and C.R. Maurer Jr. Respiratory
motion tracking for robotic radiosurgery. In Treating Tumors that Move with
Respiration, pages 15–29. Springer Berlin Heidelberg, 2007.
[40] J. Sharman. The marching cubes algorithm.
[41] G.C. Sharp, S.W. Lee, and D.K. Wehe. ICP registration using invariant features.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1):90–102,
2002.
[42] G.C. Sharp, S.W. Lee, and D.K. Wehe. Multiview registration of 3d scenes by
minimizing error between coordinate frames. ECCV, pages 587–597, 2002.
[43] G.C. Sharp, S.W. Lee, and D.K. Wehe. Multiview registration of 3d scenes
by minimizing error between coordinate frames. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 26(8):1037–1050, 2004.
[44] D. Silver, M. Gao, and N. Zabusky. Visualizing causal effects in 4D space-time
vector fields. In VIS ’91: Proceedings of the 2nd conference on Visualization ’91,
pages 12–16, Los Alamitos, CA, USA, 1991. IEEE Computer Society Press.
[45] G.K. Sirineni, M.K. Kalra, K.M. Pottala, M.A. Syed, S. Tigges, and A.D. Cann.
Visualization techniques in computed tomographic coronary angiography. Curr
Probl Diagn Radiol, 35:245–257, 2006.
[46] Y. Su, M.H. Fisher, and R.S. Rowland. Marker-less intra-fraction organ motion
tracking - a hybrid ASM approach. In Proceedings of the IEEE International
Workshop on Imaging Systems and Techniques (IST ’07), pages 1–7, May 2007.
[47] M. Sun, G. Schindler, S.B. Kang, and F. Dellaert. 4D view synthesis: navigating
through time and space. In SIGGRAPH ’07: ACM SIGGRAPH 2007 posters,
page 194, New York, NY, USA, 2007. ACM.
[48] X. Sun, H. Zhang, and H. Duan. 3D computerized segmentation of lung volume
with computed tomography. Acad Radiol, 13:670–677, Jun 2006.
[49] R. Tanaka, S. Mori, M. Endo, and S. Sanada. Volumetric tracking tool using
four-dimensional CT for image guided-radiation therapy. Radiological Physics
and Technology, 1(1):38–43, January 2008.
[50] M. Tory, N. Röber, T. Möller, A. Celler, and M.S. Atkins. 4D space-time
techniques: a medical imaging case study. In VIS ’01: Proceedings of the conference
on Visualization ’01, pages 473–476, Washington, DC, USA, 2001. IEEE Com-
puter Society.
[51] F.-Y. Tzeng and K.-L. Ma. Intelligent feature extraction and tracking for vi-
sualizing large-scale 4D flow simulations. In SC ’05: Proceedings of the 2005
ACM/IEEE conference on Supercomputing, page 6, Washington, DC, USA, 2005.
IEEE Computer Society.
[52] A.G. Webb. Introduction to Biomedical Imaging, page 264.
[53] Q. Wei, Y. Hu, J.H. MacGregor, and G. Gelfand. Segmentation of lung lobes in
volumetric CT images for surgical planning of treating lung cancer. Conf Proc
IEEE Eng Med Biol Soc, 1:4869–4872, 2006.
[54] J.B. West, J. Park, J.R. Dooley, and C.R. Maurer Jr. 4D treatment optimization
and planning for radiosurgery with respiratory motion tracking. In Treating
Tumors that Move with Respiration, pages 249–264. Springer Berlin Heidelberg,
2007.
[55] A. Wittek, K. Miller, R. Kikinis, and S.K. Warfield. Patient-specific model
of brain deformation: application to medical image registration. J Biomech,
40:919–929, 2007.
[56] H. Wu, B. Salzberg, G.C. Sharp, S.B. Jiang, H. Shirato, and D. Kaeli. Subse-
quence matching on structured time series data. In SIGMOD ’05: Proceedings
of the 2005 ACM SIGMOD international conference on Management of data,
pages 682–693, New York, NY, USA, 2005. ACM.
[57] K.C. Yu, E.L. Ritman, and W.E. Higgins. System for the analysis and visual-
ization of large 3D anatomical trees. Comput. Biol. Med., 37:1802–1820, Dec
2007.
[58] R. Zeng, J.A. Fessler, J.M. Balter, and P.A. Balter. Iterative sorting for 4DCT
images based on internal anatomy motion. In Proceedings of the 4th IEEE
International Symposium on Biomedical Imaging: From Nano to Macro (ISBI
2007), pages 744–747, April 2007.