IMAGE MORPHING
1.INTRODUCTION
Image morphing is an image processing technique used to create the visual effect of transforming one image into another. Ideally this is a seamless transition over a period of time. The main concept is to create intermediate images by mixing the pixel colors of the original image with those of the other image. The easiest way to morph one image into another, for images of the same width and height, is to blend each pixel in one image with the corresponding pixel in the other image: the color of each pixel is interpolated from the first image's value to the corresponding second image's value. More complex image morphing techniques also exist; for our project, we focused on an approach that makes use of barycentric coordinates.
The steps needed for image morphing consist of many repetitive tasks. Some aspects of the process are independent, such as the computation that determines the color of each pixel in the morphed image. This processing can be completed in parallel and is therefore a good candidate for optimization on an FPGA.
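The per-pixel blend described above can be sketched in a few lines of NumPy (the `cross_dissolve` helper name and the toy images are our own illustration; the project itself targets an FPGA, but the arithmetic is the same):

```python
import numpy as np

def cross_dissolve(img0, img1, t):
    """Blend two same-sized images: each output pixel is the
    linear interpolation (1 - t) * img0 + t * img1."""
    assert img0.shape == img1.shape, "images must be the same size"
    return ((1.0 - t) * img0.astype(np.float64)
            + t * img1.astype(np.float64)).astype(img0.dtype)

# Halfway (t = 0.5) between a black image and a white image is mid-gray.
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = np.full((4, 4, 3), 255, dtype=np.uint8)
mid = cross_dissolve(a, b, 0.5)
```

At t = 0.5 each output pixel is the average of the two input pixels; sweeping t from 0 to 1 produces the frame sequence.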
Image morphing is a special effect in motion
pictures
and animations that changes (or morphs) one image into
another through a seamless transition. Most often it is
used to depict one person turning into another through
technological means or as part of a fantasy or surreal
sequence. Traditionally such a depiction would be
achieved through cross-fading techniques on film. Since
the early 1990s, this has been replaced by computer
software to create more realistic transitions.
Morphing is an image processing technique used for the
metamorphosis from one image to another. The idea is to
get a sequence of intermediate images which when put
together with the original images would represent the
change from one image to the other. The simplest
method of transforming one image into another is to
cross-dissolve between them. In this method, the color of
each pixel is interpolated over time from the first image
value to the corresponding second image value. This is
not so effective in suggesting the actual metamorphosis.
For morphs between faces, the metamorphosis does not look good if the two faces do not have approximately the same shape.
2.MORPHING
Morphing is derived from the word metamorphosis, which means to change shape, appearance or form.
2.1.DEFINITION
Morphing can be defined as:
A transition from one object to another.
The process of transforming one image into another.
An animation technique that allows you to blend two still images, creating a sequence of in-between pictures that, when played back (for example in QuickTime), metamorphoses the first image into the second.
2.2.TYPES OF MORPHING
1.Single Image Morphs
A single morph is the most basic morph you can create: morphing one image into another image. For example, you can morph photos of two brothers, beginning with one photo and then morphing to the other. Just for fun, you can display a frame halfway between the first photo and the second to project an imaginary but realistic "third" brother.
2.Multiple Image Morphs
You can have a lot of fun with multiple morphs (also
called "Series Morphs" by some manufacturers and
"Morph 2+ Images" on our Morphing Software
Homepage). Morphing a flower into a bird then morphing
that bird into a child is a multiple morph; more than two
images are involved. Morphing a series of photos into a
mini-morph movie is a challenging but fascinating project.
For a morph this diverse, pick a common visual theme, such as a similar background or a dark spot that the eye can latch onto during the morph.
The more parallel your photos are in color, contrast and
focal point, the more convincing the morph will be. Why?
Because the viewer's mind focuses on the smooth change
of your focal point. Your viewers will watch the eyes
morph while other characteristics blend and change
inconspicuously to the viewer's mind. Before you tackle a
multiple morph, focus on becoming expert at smooth
single morphs.
Once you are confident you can place anchor points
precisely for convincing single morphs, you are ready.
Take time to pick multiple-morph photos carefully; you'll want as much similarity as possible. The most impressive
multiple morphs have several similarities and a few
drastic differences.
3.Deform and Distortion Morphs
Most morphing software will allow you to alter a photo in
ways that will remind you of funhouse mirrors at the fair,
but morphing software manufacturers call these features
by different names. On our Morphing Software
Homepage, we've labeled warp morphs as morphs you
create manually with anchor points. Deform and
distortion morphs are morphs that are automated. You
can add deform and distort effects by clicking a menu
option or by clicking and dragging the mouse.
For this article, we'll stick with the generic term "warp
morphing," meaning you can stretch, distort, or twist an
image. You can make a photo of your little brother grow
tall and skinny in seconds (instead of in years) or turn a
friend into a cone head.
If you are creating a warp morph from one image to
another, however, make sure that both images are the
same pixel size. Most morphing programs have a resize
tool so you can match image sizes closely before
distorting.
4.Mask Morphing
A "mask" will isolate a specific part of an image and keep
that part of the image stationary. Mask morphing tools
come in handy when you want to warp, deform or
animate just part of the picture. For example, you can
mask most of the face and enlarge only the nose on a
photo. You can even take a picture of an infant, mask all
but one eyelid, and then make the baby appear to wink.
5.Auto Reverse Morphing
When the morph sequence reaches the end (the target
image) it will reverse, morphing back to the starting
photo. If your reverse morph is so impressive that you
never want it to end, try auto loop morphing.
6.Auto Loop Morphing
This morphing tool lets you automatically reverse then
replay a morph. After completing just one morph
sequence, you can morph a photo of a frog into a dill
pickle, back to a frog, then repeat the transformation
forever.
Beyond these morph types, you can polish your morph
with custom backgrounds, custom foregrounds or frames.
You can even perform simple edits, such as cropping,
rotating, altering color, sharpening focus, and tweaking
contrast. Some morphing software allows you to apply
filters to your images, such as negative morphing,
grayscale morphing, and embossed morphing. For more
details on what features each morphing software package
offers, in addition to morphing definitions, see
our Morphing Software Homepage.
2.3.PRESENT USES OF MORPHING
Morphing algorithms continue to advance and programs
can automatically morph images that correspond closely
enough with relatively little instruction from the user. This
has led to the use of morphing techniques to create
convincing slow-motion effects where none existed in the
original film or video footage by morphing between each
individual frame using optical flow technology. Morphing has also appeared as a transition
technique between one scene and another in television
shows, even if the contents of the two images are entirely
unrelated. The algorithm in this case attempts to find
corresponding points between the images and distort one
into the other as they crossfade.
While perhaps less obvious than in the past, morphing is used heavily today. Whereas the effect
was initially a novelty, today, morphing effects are most
often designed to be seamless and invisible to the eye.
Image morphing is a technique to synthesize a fluid transformation from one image (the source image) to another (the destination image). Morphing has been widely used in creating visual effects; one of the most famous examples is Michael Jackson's "Black or White" music video.
3.HOW MORPHING IS DONE?
3.1.GENERAL IDEA
As the morph proceeds, the first image is gradually distorted and faded out.
The second image starts out totally distorted toward the first and is gradually faded in.
3.2.STEPS INVOLVED
The morph process consists of:
Warping the two images so that they have the same "shape".
Cross-dissolving the resulting images.
Outline of Our Procedure
Our algorithm consists of a feature finder and a face morpher. The following figure illustrates our procedure. The details of the implementation are discussed in the following paragraphs.
Pre-Processing
When given an image containing a human face, it is always better to do some pre-processing, such as removing noisy backgrounds, clipping to obtain a proper facial image, and scaling the image to a reasonable size. So far we have done this pre-processing by hand, because we would otherwise need to implement a face-finding algorithm. Due to time limitations, we did not study automatic face finding.
Feature-Finding
Our goal was to find 4 major feature points, namely the
two eyes, and the two end-points of the mouth. Within
the scope of this project, we developed an eye-finding
algorithm that successfully detect eyes at 84% rate.
Based on eye-finding result, we can then find the
mouth and hence the end-points of it by heuristic
approach.
1.Eye-finding
The figure below illustrates our eye-finding
algorithm. We assume that the eyes are more
complicated than other parts of the face. Therefore,
we first compute the complexity map of the facial
image by sliding a fixed-size frame and measuring the
complexity within the frame in a "total variation"
sense. Total variation is defined as the sum of
difference of the intensity of each pair of adjacent
pixels. Then, we multiply the complexity map by a
weighting function that is set a priori. The weighting
function specifies how likely we can find eyes on the
face if we don't have any prior information about it.
Afterwards, we find the three highest peaks in the weighted complexity map, and then decide which two of the three peaks, our eye candidates, really correspond to the eyes. The decision is based on the similarity between each pair of candidates and on the locations where these candidates turn out to be. The similarity is measured in the correlation-coefficient sense, instead of the area inner-product sense, in order to eliminate the contribution from variation in illumination.
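The sliding-frame complexity measure can be sketched as follows (the 5-pixel frame size and the toy checkerboard image are illustrative assumptions, not the project's actual parameters):

```python
import numpy as np

def total_variation_map(gray, frame=5):
    """Slide a frame x frame window over a grayscale image and sum the
    absolute differences between horizontally and vertically adjacent
    pixels inside each window (the 'total variation' complexity)."""
    h, w = gray.shape
    g = gray.astype(np.float64)
    # absolute differences of adjacent pixel pairs, once for the whole image
    dx = np.abs(np.diff(g, axis=1))
    dy = np.abs(np.diff(g, axis=0))
    out = np.zeros((h - frame + 1, w - frame + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (dx[i:i + frame, j:j + frame - 1].sum()
                         + dy[i:i + frame - 1, j:j + frame].sum())
    return out

# A textured (eye-like) patch scores high; flat skin-like regions score zero.
img = np.zeros((12, 12))
img[2:7, 2:7] = np.indices((5, 5)).sum(axis=0) % 2 * 100  # checkerboard patch
cmap = total_variation_map(img, frame=5)
```

In the full algorithm this map would then be multiplied by the a-priori weighting function before peak-picking.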
2.Mouth-finding
After finding the eyes, we can specify the mouth as the red-most region below the eyes. The red-ness function is given by

Redness = [R > 1.2*G] * [R > Rth] * R / (G + eps)

where [.] is 1 if the condition holds and 0 otherwise, Rth is a threshold, and eps is a small number for avoiding division by zero. Likewise, we can define the green-ness and blue-ness functions.
The following figure illustrates our red-ness, green-ness, and blue-ness functions. Note that the mouth has relatively high red-ness and low green-ness compared with the surrounding skin. Therefore, we believe that, time permitting, simple segmentation or edge detection techniques would let us implement an algorithm to find the mouth, and hence its end points, automatically.
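The red-ness formula above translates directly to NumPy. A sketch (the threshold value Rth = 60 and the test pixels are illustrative assumptions, not values from the project):

```python
import numpy as np

def redness(rgb, r_th=60.0, eps=1e-6):
    """Red-ness = [R > 1.2*G] * [R > Rth] * R / (G + eps),
    where [.] is an indicator (1 if true, 0 otherwise)."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    return (r > 1.2 * g) * (r > r_th) * r / (g + eps)

# A lip-colored pixel scores high; skin-like and dark pixels score zero.
pixels = np.array([[[200, 60, 60],    # strongly red: high redness
                    [150, 140, 120],  # skin-like, R < 1.2*G: zero
                    [30, 10, 10]]])   # red-ish but below Rth: zero
score = redness(pixels)
```

Swapping the channel roles gives the green-ness and blue-ness functions in the same way.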
3.Image Partitioning
Our feature finder can give us the positions of the
eyes and the ending points of the mouth, so we get 4
feature points. Besides these facial features, the edges of the face also need to be carefully considered in the morphing algorithm. If the face edges do not match well during the morph, the morphed image will look strange along them. We generate 6 more feature points around the face edge: the intersections of the extension lines through the first 4 facial feature points with the face edges. In total, we have 10 feature points for each face. In the following figure,
the white dots correspond to the feature points.
Based on these 10 feature points, our face-morpher
partitions each photo into 16 non-overlapping
triangular or quadrangular regions. The partition is
illustrated in the following two pictures. Ideally, if we could detect more feature points automatically, we would be able to partition the image into finer meshes, and the morphing result would be even better.
Image 1 Image 2
Since the feature points of images 1 and 2 are,
generally speaking, at different positions, when doing
morphing between images, the images have to be
warped such that their feature points are matched.
Otherwise, the morphed image will have four eyes,
two mouths, and so forth. It will be very strange and
unpleasant that way.
Suppose we would like to make an intermediate
image between images 1 and 2, and the weightings
for images 1 and 2 are alpha and (1-alpha),
respectively. For a feature point A in image 1, and
the corresponding feature point B in image 2, we are
using linear interpolation to generate the position of
the new feature point F:

F = alpha * A + (1 - alpha) * B
The new feature point F is used to construct a point
set which partitions the image in another way
different from images 1 and 2. Images 1 and 2 are
warped such that their feature points are moved to
the same new feature points, and thus their feature
points are matched. In the warping process,
coordinate transformations are performed for each of
the 16 regions respectively.
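The feature-point interpolation step can be sketched as follows (the point lists are made-up coordinates for illustration):

```python
import numpy as np

def interpolate_features(pts1, pts2, alpha):
    """Linearly interpolate matched feature points: for weightings
    alpha and (1 - alpha) on images 1 and 2, the new point is
    F = alpha * A + (1 - alpha) * B."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    return alpha * pts1 + (1.0 - alpha) * pts2

# Eyes and mouth corners from two faces, blended halfway (alpha = 0.5).
face1 = [(30, 40), (70, 40), (40, 80), (60, 80)]
face2 = [(34, 44), (74, 46), (42, 84), (64, 86)]
mid_pts = interpolate_features(face1, face2, 0.5)
```

The resulting point set defines the intermediate partition toward which both images are warped.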
4.COORDINATE TRANSFORMATION
There exist many coordinate transformations for the mapping between two triangles or between two quadrangles. We used affine and bilinear transformations for the triangles and quadrangles, respectively. In addition, bilinear interpolation is performed at the pixel level.
4.1. Affine Transformation
Suppose we have two triangles ABC and DEF. An affine transformation is a linear mapping from one triangle to the other. For every pixel p within triangle ABC, assume the position of p is a linear combination of the A, B, and C vectors:

p = lambda1 * A + lambda2 * B + (1 - lambda1 - lambda2) * C
q = lambda1 * D + lambda2 * E + (1 - lambda1 - lambda2) * F

Here there are two unknowns, lambda1 and lambda2, and one equation for each of the two dimensions. Consequently, lambda1 and lambda2 can be solved for and used to obtain q; that is, the affine transformation is a one-to-one mapping between the two triangles.
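A minimal sketch of this solve, using NumPy to invert the two-equation system (the function name and the test triangles are our own illustration):

```python
import numpy as np

def affine_map(p, tri_src, tri_dst):
    """Express p as p = l1*A + l2*B + (1 - l1 - l2)*C over the source
    triangle, then map to q with the same coefficients over the
    destination triangle."""
    A, B, C = (np.asarray(v, dtype=np.float64) for v in tri_src)
    D, E, F = (np.asarray(v, dtype=np.float64) for v in tri_dst)
    # Two scalar equations (x and y) in the two unknowns l1, l2:
    M = np.column_stack([A - C, B - C])
    l1, l2 = np.linalg.solve(M, np.asarray(p, dtype=np.float64) - C)
    return l1 * D + l2 * E + (1.0 - l1 - l2) * F

# The centroid of ABC maps to the centroid of DEF.
src = [(0, 0), (6, 0), (0, 6)]
dst = [(1, 1), (7, 3), (3, 9)]
q = affine_map((2, 2), src, dst)
```

Because the coefficients sum to one, vertices map to vertices and interior points stay interior.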
4.2. Bilinear Transformation
Suppose we have two quadrangles ABCD and EFGH. The bilinear transformation is a mapping from one quadrangle to the other. For every pixel p within quadrangle ABCD:

p = (1 - u)(1 - v) * A + u(1 - v) * B + uv * C + (1 - u)v * D
q = (1 - u)(1 - v) * E + u(1 - v) * F + uv * G + (1 - u)v * H

There are two unknowns, u and v, and because this is a 2-D problem we have two equations. So u and v can be solved for and used to obtain q. Again, the bilinear transformation is a one-to-one mapping between the two quadrangles.
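Unlike the affine case, solving for u and v is nonlinear; one common way is a few Newton iterations. A sketch under that assumption (the iteration count and the test quadrangles are our own):

```python
import numpy as np

def bilinear_map(p, quad_src, quad_dst, iters=20):
    """Find (u, v) with p = (1-u)(1-v)A + u(1-v)B + uvC + (1-u)vD by
    Newton's method, then evaluate the same (u, v) on quadrangle EFGH."""
    A, B, C, D = (np.asarray(w, dtype=np.float64) for w in quad_src)
    p = np.asarray(p, dtype=np.float64)

    def eval_quad(q0, q1, q2, q3, u, v):
        return ((1 - u) * (1 - v) * q0 + u * (1 - v) * q1
                + u * v * q2 + (1 - u) * v * q3)

    u = v = 0.5  # start at the center of the quadrangle
    for _ in range(iters):
        r = eval_quad(A, B, C, D, u, v) - p           # residual
        du = (1 - v) * (B - A) + v * (C - D)          # partial wrt u
        dv = (1 - u) * (D - A) + u * (C - B)          # partial wrt v
        step = np.linalg.solve(np.column_stack([du, dv]), r)
        u, v = u - step[0], v - step[1]

    E, F, G, H = (np.asarray(w, dtype=np.float64) for w in quad_dst)
    return eval_quad(E, F, G, H, u, v)

# For an axis-aligned square, the map reduces to a plain rescale.
src = [(0, 0), (4, 0), (4, 4), (0, 4)]
dst = [(0, 0), (8, 0), (8, 8), (0, 8)]
q = bilinear_map((1, 3), src, dst)
```

For convex quadrangles and interior points, Newton from the center converges in a handful of iterations.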
5.TRANSFORMATION WITH LINES
5.1.TRANSFORMATION WITH SINGLE LINE
In this case, one line in the source image is mapped to a
corresponding line in the destination image. The other
parts of the image are moved appropriately to maintain
their relative position from the specified line. Each of
these lines is directed. This could be used to rotate and
scale images.
A few examples are shown below. The original image and
three different rotated and scaled versions of the image
are shown. The line chosen in the original image was the
edge to the left of the image. It was mapped to a line
along the bottom of the image. The rotated version is
obtained by using two lines of the same length. The
scaling is done using the lengths of the two lines. The
scaling could be done in two different ways:
Scaling along the line and perpendicular to the line.
Scaling along the line only.
The two other scaled images are examples of these two types of scaling.
Original Image | Rotated Image | Rotated and Scaled in both directions | Rotated and Scaled only along the line
The rotation and scaling can be restricted to a region around the line by introducing a weighting function.
5.2.TRANSFORMATION WITH MULTIPLE PAIRS OF
LINES
Transformations performed using more than one pair of
lines involve a weighted combination of the
transformations performed by each line. Since reverse
mapping is used, for each pixel in the destination image
we find a pixel in the source image which would be used.
The displacement corresponding to each line is calculated for each pixel. A weighted average of these displacements is used to determine how much each pixel needs to be
moved. The weight used depends on the distance of the
point under consideration from the line. The weight can
also depend on the length of the line. This
transformation, based on multiple lines, can be used
effectively for morphing.
6.MORPHING WITH LINES
Morphing between two images I0 and I1 is done as
follows.
Lines are defined on the two images I0 and I1.
The Mapping between the lines is specified.
Depending on the number of intermediate frames required, a set of interpolated lines is obtained.
An intermediate frame is obtained in three steps:
The lines in the first image I0 are warped to the lines corresponding to the intermediate image.
The lines in the second image I1 are warped to the lines corresponding to the intermediate image.
The two warped images are combined in proportion to how close the frame is to the initial and final frames.
Since the images have been warped to the same shape before cross-dissolving, the intermediate image looks good.
An example of a morph process is shown below.
Original Source Image | Warped Source Image | Original Destination Image | Warped Destination Image | Cross-dissolve of the Original Images | Cross-dissolve of the Warped Images
The two original images, the two warped images, and the two images corresponding to a cross-dissolve of the original images and a cross-dissolve of the warped images are shown.
7.WARPING
A warp is a 2-D geometric transformation and
generates a distorted image when it is applied to an
image.
Warping an image means to apply a given
deformation to it.
Let the pair of lines in the source and destination
images be as shown, A'B' and AB. The source image
pixel is X' and the destination image pixel is X.
Given X, the destination image pixel position, u and v are determined by the equations

u = (X - A) . (B - A) / |B - A|^2
v = (X - A) . perp(B - A) / |B - A|

where perp(.) denotes the perpendicular of a vector. Then X' in the source image is determined as

X' = A' + u * (B' - A') + v * perp(B' - A') / |B' - A'|

The pixel in the source image at X' is placed in the destination image at X.
The weighting of the different lines is done as follows:

weight = ( length^p / (a + dist) )^b

where length is the length of the line, dist is the distance from the pixel to the line, and a, b, and p are constants controlling how strongly and how locally each line influences the warp.
There are two ways to warp an image:-
Forward mapping
Reverse mapping
7.1.FORWARD MAPPING
• Each pixel in source image is mapped to an
appropriate pixel in destination image.
• Some pixels in the destination image may not be
mapped. We need interpolation to determine these
pixel values. This mapping was used in our point
morphing algorithm.
7.2.REVERSE MAPPING
• This method goes through each pixel in the
destination
image and samples an appropriate source image
pixel.
• All destination image pixels are mapped to some
source image pixel.
• This mapping is used in the Beier/Neely line
morphing
method.
• The reverse mapping to determine the pixel in the
source image corresponding to a given pixel in the
destination image is done as follows.
In either case, the problem is to determine the way in
which the pixels in one image should be mapped to the
pixels in the other image. So, we need to specify how
each pixel moves between the two images. This could
be done by specifying the mapping for a few important
pixels. The motion of the other pixels can be obtained by appropriately interpolating the information specified for the control pixels. These sets of control
pixels can be specified as lines in one image mapping
to lines in the other image or points mapping to points.
7.3.TYPES OF WARPING
Mesh Warp: The warp is specified by control
meshes. The algorithm was described in class.
Field Warp: The warp is specified by a set of line
pairs. An example is the Beier and Neely algorithm.
Point Warp: The warp is specified by a set of control
points. A simple solution is to use triangulation to
convert point warp to mesh warp. There are several
more complicated techniques such as scattered data
interpolation and free-form deformation. This method
of image warping is based on a forward
mapping technique, where each pixel from the input
image is mapped to a new position in the output
image. Since not every output pixel will be specified,
we must use an interpolating function to complete
the output image. We specify several control points,
which will map exactly to a given location in the
output image. The neighboring pixels will move
somewhat less than the control point, with the
amount of movement specified by a weighting
function consisting of two separate components,
both dependent on the distance from the pixel to
each control point in the image.
The first component of the weighting function is a
Gaussian function which is unity at the control point and
decays to zero as you move away from the control point.
The idea is to have pixels far away from a control point be
unaffected by the movement of that point. The problem
with this scheme is that each pixel is affected by the
weighting functions of every control point in the image.
So even though the weighting function at a control point
may be one, that point will still be affected by the
movement of every other control point in the image, and
won't move all the way to its specified location.
In order to overcome this effect, we designed the second
component of the weighting function, which depends on
the relative distance from a pixel to each control point.
The distance to the nearest control point is used as a
reference, and the contribution of each control point is
reduced by a factor depending on the distance to the
nearest control point divided by the distance to that
control point. This serves to force control point pixels to
move exactly the same distance as their associated
control points, with all other pixels moving somewhat less
and being influenced most by nearby control points.
The following examples show some of the uses of
warping. The first set of images shows how facial features
and/or expressions can be manipulated. The second set
shows how the overall shape of the image can be
distorted (e.g., to match the shape of a second image for
use in the morphing algorithm).
The first image is a "normal" Roger with the control points
we used identified. The second image shows how those
points were moved. The last image is the more cheery
Roger that results.
Here we show what happens to Kevin when he combs his
hair a little differently and goes off his diet.
8.CROSS DISSOLVING
A cross-dissolve is a sequence of images which
implements a gradual fade from one to the other.
After performing coordinate transformations for each of
the two facial images, the feature points of these images
are matched. i.e., the left eye in one image will be at the
same position as the left eye in the other image. To
complete face morphing, we need to do cross-dissolving
as the coordinate transforms are taking place. Cross-
dissolving is described by the following equation:

C = (1 - t) * A + t * B

where A and B are the pair of images, t is the morphing fraction, and C is the morphing result. This operation is performed pixel by pixel, and each of the RGB color components is dealt with individually.
The following example demonstrates a typical
morphing process.
1. The original images of Ally and Lion, scaled to the
same size. Note that the distance between the eyes and the mouth is significantly greater in the lion's picture than in Ally's picture.
2. Perform coordinate transformations on the partitioned
images to match the feature points of these two images.
Here, we are matching the eyes and the mouths for these
two images. We can find that Ally's face becomes longer,
and the lion's face becomes shorter.
3. Cross-dissolve the two images to generate a new
image.
The morph result looks like a combination of the two warped faces. The new face has two eyes and one mouth, and it possesses features from both Ally's and the lion's faces.
9.MORPHING PROCESS
Step I : Interpolating the lines:
• Interpolate the coordinates of the end points of every
pair of lines.
Step II : Warping the Images:
• Each of the source images has to be deformed
towards the needed frame.
• The deformation works pixel by pixel and is based on reverse mapping. This algorithm is called the Beier-Neely algorithm.
10.BEIER-NEELY ALGORITHM
THE IDEA IS TO:
1) Compute the position of pixel X in the destination image relative to the line drawn in the destination image: (x, y) -> (u, v).
2) Compute the coordinates of the pixel in the source image whose position relative to the line drawn in the source image is (u, v): (u, v) -> (x', y').
For each pixel X = (x, y) in the destination image
    DSUM = (0, 0), weightsum = 0
    for each line (Pi, Qi)
        calculate (ui, vi) based on Pi, Qi
        calculate (xi', yi') based on (ui, vi) and Pi', Qi'
        calculate the displacement Di = Xi' - X for this line
        compute the weight for line (Pi, Qi)
        DSUM += Di * weight
        weightsum += weight
    (x', y') = (x, y) + DSUM / weightsum
    color at destination pixel (x, y) = color at source pixel (x', y')
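The pseudocode above can be turned into a runnable single-channel sketch. The weight formula (length^p / (a + dist))^b and the constants a, b, p follow Beier and Neely; nearest-neighbour sampling is used here for brevity:

```python
import numpy as np

def perp(v):
    """Perpendicular of a 2-D vector."""
    return np.array([-v[1], v[0]])

def beier_neely(src, lines_dst, lines_src, a=1.0, b=2.0, p=0.5):
    """Reverse-mapping line warp: for each destination pixel X,
    accumulate the displacement suggested by every line pair,
    weighted by (length**p / (a + dist))**b, then sample the source."""
    h, w = src.shape
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            X = np.array([x, y], dtype=np.float64)
            dsum, wsum = np.zeros(2), 0.0
            for (P, Q), (Pp, Qp) in zip(lines_dst, lines_src):
                P, Q = np.asarray(P, float), np.asarray(Q, float)
                Pp, Qp = np.asarray(Pp, float), np.asarray(Qp, float)
                # position of X relative to the destination line PQ
                u = np.dot(X - P, Q - P) / np.dot(Q - P, Q - P)
                v = np.dot(X - P, perp(Q - P)) / np.linalg.norm(Q - P)
                # corresponding point relative to the source line P'Q'
                Xp = (Pp + u * (Qp - Pp)
                      + v * perp(Qp - Pp) / np.linalg.norm(Qp - Pp))
                # distance from X to the segment
                if u < 0:
                    dist = np.linalg.norm(X - P)
                elif u > 1:
                    dist = np.linalg.norm(X - Q)
                else:
                    dist = abs(v)
                weight = (np.linalg.norm(Q - P) ** p / (a + dist)) ** b
                dsum += (Xp - X) * weight
                wsum += weight
            xs, ys = X + dsum / wsum
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = src[yi, xi]  # nearest-neighbour sample
    return out

# Identical line pairs leave the image unchanged.
img = np.arange(16.0).reshape(4, 4)
line = [((0, 1), (3, 1))]
warped = beier_neely(img, line, line)
```

For a color image the same sampling is applied to each channel.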
11.LAPLACIAN EDGE DETECTION
We wish to build a morphing algorithm which operates on features automatically extracted from the target images. A good beginning is to find the edges in the target images. We accomplished this by implementing a Laplacian edge detector.
Step 1: Start with an image of a good-looking team member. Since no such images were available, we used the image shown to the right.
Step 2: Blur the image. Since we want to select edges to perform a morph, we do not really need every edge in the image, only the main features. Thus, we blur the image prior to edge detection. The blurring is accomplished by convolving the image with a Gaussian (a Gaussian is used because it is smooth; a general low-pass filter has ripples, and ripples show up as edges).
Step 3: Apply the Laplacian to the blurred image. Why the Laplacian? Let's look at an example in one dimension.
Suppose we have the following signal, with an edge as highlighted below. If we take the gradient of this signal (which, in one dimension, is just the first derivative with respect to t) we get the following: clearly, the gradient has a large peak centered around the edge. By comparing the gradient to a threshold, we can detect an edge wherever the threshold is exceeded (as shown above). In this case, we have found the edge, but the edge has become "thick" due to the thresholding. However, since we know the edge occurs at the peak, we can localize it by computing the Laplacian (in one dimension, the second derivative with respect to t) and finding the zero
crossings. The above figure shows the
laplacian of our one-dimensional signal. As expected, our
edge corresponds to a zero crossing, but we also see
other zero crossings which correspond to small ripples in
the original signal. When we apply the laplacian to our
test image, we get the following images:
The left image is the log of the magnitude of the
laplacian, so the dark areas correspond to zeros. The
right image is a binary image of the zero crossings of the
laplacian. As expected, we have found the edges of the
test image, but we also have many false edges due to
ripple and texture in the image. To remove these false edges, we add a step to our algorithm. When we find a zero crossing of the Laplacian, we also compute an estimate of the local variance of the test image, since a true edge corresponds to a significant change in intensity of the original image. If this variance is low, the zero crossing must have been caused by ripple. Thus, we find the zero crossings of the Laplacian and compare the local variance at each such point to a threshold; if the threshold is exceeded, we declare an edge. The result of this step is shown to the right. Finally, we median-filter the image. A median filter is applied because it removes spot noise while preserving edges. This yields a very clean representation of the major edges of the original image, as shown below.
We are now ready to extract some morphing features!
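The pipeline above (Gaussian blur, Laplacian, zero crossings gated by local variance) can be sketched with NumPy alone; the 3x3 kernel sizes and the variance threshold are illustrative choices, not the project's actual parameters:

```python
import numpy as np

def laplacian_edges(gray, var_thresh=20.0):
    """Blur with a small Gaussian, apply the Laplacian, then keep
    zero crossings only where the local variance of the original
    image is high (true intensity changes, not ripple)."""
    g = gray.astype(np.float64)
    # separable 3x3 Gaussian blur (kernel [1 2 1]/4 in each direction)
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, g)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blur)
    # Laplacian: sum of second differences in x and y
    lap = np.zeros_like(blur)
    lap[1:-1, 1:-1] = (blur[1:-1, :-2] + blur[1:-1, 2:]
                       + blur[:-2, 1:-1] + blur[2:, 1:-1]
                       - 4.0 * blur[1:-1, 1:-1])
    edges = np.zeros(g.shape, dtype=bool)
    for y in range(1, g.shape[0] - 1):
        for x in range(1, g.shape[1] - 1):
            # zero crossing: sign change against a 4-neighbour
            neigh = (lap[y, x - 1], lap[y, x + 1], lap[y - 1, x], lap[y + 1, x])
            if any(lap[y, x] * n < 0 for n in neigh):
                # keep only if the local variance is significant
                if g[y - 1:y + 2, x - 1:x + 2].var() > var_thresh:
                    edges[y, x] = True
    return edges

# A vertical step edge is detected; a flat image yields no edges.
step = np.zeros((8, 8)); step[:, 4:] = 100.0
flat = np.full((8, 8), 50.0)
e_step, e_flat = laplacian_edges(step), laplacian_edges(flat)
```

A median filter over the binary edge map, as described above, would then remove any remaining spot noise.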
Other Methods of Edge Detection
There are many ways to perform edge detection. However, most may be grouped into two categories: gradient and Laplacian. The gradient method detects edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges. The first figure shows the edges of an image detected using the gradient method (Roberts, Prewitt, Sobel) and the Laplacian method (Marr-Hildreth).
Various Edge Detection Filters
Notice that the facial features (eyes, nose, mouth) have
very sharp edges. These also happen to be the best
reference points for morphing between two images.
Notice also that the Marr-Hildreth result not only has much more noise than the other methods; the low-pass filtering it uses also distorts the actual position of the facial features.
Due to the nature of the Sobel and Prewitt filters we can
select out only vertical and horizontal edges of the image
as shown below. This is very useful since we do not want
to morph a vertical edge in the initial image to a
horizontal edge in the final image. This would cause a lot
of warping in the transition image and thus a bad morph.
Vertical and Horizontal Edges
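The directional selectivity of the Sobel kernels can be sketched as follows: the x kernel responds only to vertical edges, and its transpose only to horizontal ones (the toy step image is an assumption for illustration):

```python
import numpy as np

def sobel(gray):
    """Return (vertical-edge, horizontal-edge) response magnitudes
    from the Sobel x and y kernels."""
    g = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    h, w = g.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()  # responds to vertical edges
            gy[y, x] = (patch * ky).sum()  # responds to horizontal edges
    return np.abs(gx), np.abs(gy)

# A vertical step triggers the x kernel only.
img = np.zeros((6, 6)); img[:, 3:] = 10.0
vert, horz = sobel(img)
```

Thresholding `vert` and `horz` separately gives the vertical-only and horizontal-only edge maps shown in the figure.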
The next pair of images shows the horizontal and vertical edges selected out of the group members' images with the Sobel method of edge detection. You will notice the difficulty it had with certain facial features, such as the hairline of Sri and Jim. This is essentially due to the lack of contrast between their hair and their foreheads.
Vertical Sobel Filter
Horizontal Sobel Filter
We can then compare the feature extraction using the
Sobel edge detection to the feature extraction using the
Laplacian.
Sobel Filtered Common Edges: Jim
Sobel Filtered Common Edges: Roger
We see that although it does better for some features (e.g., the nose), it still suffers from mismapping some of the lines. A morph constructed using individually selected points would still work better. It should also be noted that this method suffers the same drawbacks as the previous one. Another method of detecting edges uses wavelets: specifically, a two-dimensional Haar wavelet
transform of the image produces essentially edge maps
of the vertical, horizontal, and diagonal edges in an
image. This can be seen in the figure of the transform
below, and the following figure where we have combined
them to see the edges of the entire face.
Haar Wavelet Transformed Image
Edge Images Generated from the Haar Wavelet
Transform
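One level of the 2-D Haar transform can be written as pairwise averages and differences; the sub-band labeling used here (LH as vertical detail, HL as horizontal) follows one common convention, and the toy image is an assumption:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns the approximation
    and the vertical-, horizontal-, and diagonal-detail sub-bands."""
    g = img.astype(np.float64)
    # pairwise averages/differences along rows, then along columns
    lo_r = (g[0::2, :] + g[1::2, :]) / 2.0
    hi_r = (g[0::2, :] - g[1::2, :]) / 2.0
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0  # approximation
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0  # vertical edges
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0  # horizontal edges
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0  # diagonal edges
    return ll, lh, hl, hh

# A vertical step shows up in the vertical-detail band only.
img = np.zeros((4, 4)); img[:, 1:] = 8.0
ll, lh, hl, hh = haar2d(img)
```

Applying the same transform again to the `ll` band extends the detection to coarser scales, as in the extended transform shown above.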
And here are the maps of common control points
generated by the feature extraction algorithm for the Jim-
Roger morph.
Haar Filtered Common Edges: Jim
Haar Filtered Common Edges: Roger
Although the Haar filter is nearly equivalent to the
gradient and Laplacian edge detection methods, it does
offer the ability to easily extend our edge detection to
multiscales as demonstrated in this figure.
Extended Haar Wavelet Transform
12.DESIGN DESCRIPTION
12.1.SOFTWARE DEVELOPMENT
The main idea of our project is to blend the pixels between two images. For this project, the concept of barycentric coordinates is used in the logic that creates the morphed image.
To make use of the barycentric coordinates concept, the
project must contain two images that are preferably the
same size and oriented such that the outline of the object
is similar.
Control points are then chosen at corresponding features
in both images; in this project, those points are the eyes
and nose of the face. Using these points, each image is
divided into triangles, as in the figure below of the kitten
and cat images divided into eight triangles.
Once the images are divided into triangles, the
processing for each triangle is independent. This
processing can be completed in parallel and therefore is a
good candidate for optimization on FPGA.
For each triangle (A0B0C0 in the first image, A1B1C1 in
the second image) there is a corresponding triangle
AtBtCt in the morphed image. For this project, t = 0.5,
i.e., halfway through the image morph.
For the triangle AtBtCt in the morphed image, each point
Pt can be expressed using barycentric coordinates
(α, β, γ), which are derived from the ratio of the area of
each smaller triangle formed by Pt (PtBtCt, AtPtCt, AtBtPt)
to the area of the entire triangle:

α = area(PtBtCt) / area(AtBtCt)
β = area(AtPtCt) / area(AtBtCt)
γ = area(AtBtPt) / area(AtBtCt)

Using α, β, γ, the corresponding points (and their pixel
colors, I0 and I1) in triangles A0B0C0 and A1B1C1 can be
determined as shown:

P0 = αA0 + βB0 + γC0
P1 = αA1 + βB1 + γC1

The color of point Pt can be determined as shown:

It = (1 - t)I0 + t I1
By doing this procedure for all points in all triangles in the
morphed image, the pixel color for the entire image can
be determined by an interpolation of the initial color and
final color.
12.2.HARDWARE DEVELOPMENT
The MP3 software architecture was reused for the project
with the intent that integration of the image morphing
functions would be much easier. Also, by using the MP3
architecture, the results could easily be displayed
on the monitor. The APU_INV entity was reused from MP3,
with slight modifications. The state machine architecture
was designed to perform two successive loads followed
by a store. The first load was used to store image data
from the first image while the second load was used to
store data from the second image. Both loads were 128
bits long, with bits 120 to 127 left blank. The
primary goal of each load was to carry the data of 5 pixels,
each of which contains 24 bits of color data.
The total for 5 pixels therefore came out to 120 bits,
so the last 8 bits were ignored.
The hardware design was broken up into four processes,
outlined below. The InputReg process is
where all inputs are mapped to local signals within the
entity. If reset is detected, all signals are set to "0";
otherwise they are mapped to the input ports. The
second process, StateMachineReg, is where the next state
of the state machine is determined. This was a clocked
process used to help synchronize the hardware design flow.
OutPutReg is also a clocked process, where image
merging is carried out. The initial goal was to take the
average of each color element from the two images, i.e.,
the red component of pixel 1 from image 1 was added to the
red component of pixel 1 from image 2 and divided by two.
The last process of the hardware design is Comp_Nxt_St.
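The 128-bit load format (five 24-bit pixels, bits 120 to 127 blank) and the per-channel averaging carried out in OutPutReg can be illustrated with a software sketch. The exact bit ordering of the pixels within the word is an assumption for illustration, not the documented hardware layout.

```java
// Sketch of the load format described above: five 24-bit RGB pixels
// packed into 128 bits, with the top 8 bits (120-127) left blank.
// Pixel i is assumed to occupy bits [24*i, 24*i + 23]; this ordering
// is illustrative only, not the actual hardware layout.
import java.math.BigInteger;

public class PixelPackSketch {
    static BigInteger pack(int[] pixels) { // 5 pixels, 24 bits each
        BigInteger word = BigInteger.ZERO;
        for (int i = 0; i < 5; i++) {
            word = word.or(BigInteger.valueOf(pixels[i] & 0xFFFFFF).shiftLeft(24 * i));
        }
        return word; // bits 120-127 remain zero: 5 * 24 = 120 bits used
    }

    static int unpack(BigInteger word, int i) {
        return word.shiftRight(24 * i).and(BigInteger.valueOf(0xFFFFFF)).intValue();
    }

    // Per-channel average of two 24-bit pixels, as described for OutPutReg.
    static int averagePixel(int p0, int p1) {
        int r = (((p0 >> 16) & 0xFF) + ((p1 >> 16) & 0xFF)) / 2;
        int g = (((p0 >> 8) & 0xFF) + ((p1 >> 8) & 0xFF)) / 2;
        int b = ((p0 & 0xFF) + (p1 & 0xFF)) / 2;
        return (r << 16) | (g << 8) | b;
    }
}
```

In the hardware, the division by two in averagePixel is the step that caused the runtime errors mentioned later; in software it is a simple shift or integer divide per channel.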
As described earlier, the state machine for checking the
instruction type is similar to the MP3 design, except the
state machine is designed to track two consecutive
loads.
The design defaults to INST_TYPE state where it waits for
a valid instruction. If a load instruction is detected, the
next state is set to WAIT_WBLV where it waits for either a
“loadvalid” or a “writebackok” input, whichever comes
first. Depending on which signal is detected first, the next
state is either set to WAIT_LV or WAIT_WB. From here the
next state is set to LOAD_DATA where data from the first
image is stored. At this point the local signal LDCNT is set to
"1" and the next state is set to WAIT_LV to get ready for
data from the second load. LDCNT here is used to track the
number of consecutive loads. Once the second load
arrives, LDCNT is set to "2" and image merging takes
place inside OutputReg. The next state is set to
WAIT_INST, where the state machine either waits for the
store or another set of load instructions. It is important to
note that if another set of load instructions is detected
before the store, the result from the first two loads will be
lost.
Figure 6. Hardware design state machine. States: INST_TYPE,
WAIT_WBLV, WAIT_WB, WAIT_LV, LOAD_DATA, STORE_DATA.
Transitions are driven by the DECLOAD, DECSTORE,
WRITEBACKOK, and LOADVALID signals; the local signal LDCNT
counts the number of consecutive loads (01b after the first
load, 10b after the second).
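The two-consecutive-load tracking described above can be modeled behaviorally. The state and signal names follow the design, but this Java sketch is only an approximation of the VHDL for illustration.

```java
// Behavioral sketch of the load-tracking state machine from Figure 6.
// Two consecutive loads fill two data buffers; merging is flagged when
// LDCNT reaches 2. This Java model approximates the VHDL behavior for
// illustration only and omits the writeback/loadvalid handshake detail.
public class LoadTracker {
    enum State { INST_TYPE, WAIT_WBLV, WAIT_LV, WAIT_WB, LOAD_DATA, WAIT_INST }

    State state = State.INST_TYPE;
    int ldcnt = 0;            // counts consecutive loads, as in the design
    long image1Data, image2Data;
    boolean merged = false;

    void onLoadInstruction() {
        if (state == State.INST_TYPE || state == State.WAIT_INST) {
            state = State.WAIT_WBLV;  // wait for loadvalid/writebackok
            if (ldcnt == 2) merged = false; // a new load pair discards the old result
        }
    }

    // Called once the handshake completes and load data is available.
    void onLoadData(long data) {
        if (ldcnt == 0) {             // first of the two loads
            image1Data = data;
            ldcnt = 1;
            state = State.WAIT_LV;    // get ready for the second load
        } else {                      // second load: merging takes place
            image2Data = data;
            ldcnt = 2;
            merged = true;
            state = State.WAIT_INST;  // wait for the store (or another load pair)
        }
    }
}
```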
The hardware section was designed and tested with the
"TestApp_Peripheral" from MP3. The file was slightly
modified with two sets of two loads followed by a store.
The design was tested in simulation. A couple of runtime
errors were encountered where division was carried
out, so this part of the code was commented out. The
goal was either to do the division in software or to come
back and fix the error after a successful integration had
occurred.
13.DESIGN INTEGRATION
Since the morphing algorithm was complex, we decided
to break the integration into multiple steps. The main
morphing algorithm was coded on a Windows-based
machine and the hardware was developed on the Xilinx-3
machine. The goal was to design both sections
independently of each other to keep the complexity of the
design manageable. The next step was to integrate the
software on the ML507 board without the use of the
VHDL. Once everything was working, the VHDL code for pixel
merging would have been offloaded to the hardware.
However, a critical mistake was made in this design step.
We greatly underestimated the integration step and spent the
majority of our time on the software and hardware
designs. To much of our disappointment, we were unable
to integrate the morphing on the ML507 board.
Independently, both the software and hardware features
work, but once combined on the board we
kept running into one roadblock after another. Our
integration via a modified echo didn't work out as
expected. Our work schedules also didn't help the cause,
as most of us couldn't meet regularly to discuss issues
and come up with appropriate solutions. We were able
to successfully implement the morphing algorithm in
software on any Windows or Linux-based machine.
We were also able to code and test the majority of the
hardware design. However, the two design blocks
could not be combined to work on the hardware board.
14.REQUIREMENTS
Program Name:
IMAGE MORPHING
14.1.Hardware Requirements and Recommendations
Hardware Component     Minimum        Recommended
Processor              133 MHz 486    500 MHz Pentium
RAM                    32 MB          128 MB
Free hard disk space   30 MB          50 MB
Monitor                VGA            SVGA
14.2.SOFTWARE REQUIREMENTS:
JAVA (jdk1.6.0, jre1.6.0)
Packages used: java.io, java.net, java.util, java.awt.event, javax.swing
14.3.System Requirements:
32-Bit versions
Windows Vista
Windows XP
Windows 2000
Windows 98
Windows 95
64-Bit Versions
Windows Vista x64
Windows XP x64
15.SYSTEM ANALYSIS
15.1.IDENTIFICATION OF NEED
In the description given below we discuss the
system used by various organizations and institutes for
chatting. We describe the nature, purpose, and
limitations of their existing system, and what we have
done to improve its functionality in the
new system. The existing system can only work if an internet
facility is provided, which results in high cost. A person not
belonging to the organization can also interfere and try
to break the security of the organizational data.
15.2.EXISTING SYSTEM DETAILS
In the system used by various organizations and
institutions, communication is generally held
frequently. The institutes used telephone, fax, and e-mail
systems to communicate with each other, in which
redundancy of data is normal. A lot of work has to be
done for every mailing and fax task. Besides this, all the
management is also difficult, and the user has to depend on
the services of ISPs for too long unnecessarily.
15.3.PROBLEM WITH EXISTING SYSTEM:
The existing system has several limitations and
drawbacks, so we switch to a better and more effective system:
Low functionality
Security problems
Future improvement is difficult.
Miscellaneous.
16.FEASIBILITY STUDY
The objective of the feasibility study is to examine the
technical, operational, and economic viability.
16.1.TECHNICAL FEASIBILITY
Technical feasibility centers on the existing computer
system (hardware, software, etc.) and the extent to which it
can support the proposed addition. The proposed
solution did not require any additional
technology; hence the system is technically feasible.
The solution also does not require any
additional software.
16.2.ECONOMIC FEASIBILITY
More commonly known as cost-benefit analysis, the
procedure is to determine the benefits and savings that
are expected from the proposed system and compare them
with the costs. The proposed system utilizes the currently
available technologies, so the cost during the
development of the product is low. The interface and
the scripts used are as simple as possible; therefore
the net cost of development is low.
16.3.OPERATIONAL FEASIBILITY
Organizational, political and human aspects are
considered in order to ensure that the proposed system
will be workable when implemented.
17.SOFTWARE ENGINEERING PARADIGM USED
Like any other product, a software product completes a
cycle from its inception to its
obsolescence/replacement. The process of
software development not only requires writing the
program and maintaining it, but also a detailed study of the
system to identify current requirements related to
software development as well as to anticipate future
requirements. It also needs to meet the objectives of low
cost and good quality along with minimum development
time.
Requirement analysis and specification for clear
understanding of the problem.
Software design for planning the solution of the
problem.
Coding (implementation) for writing program as per
the suggested solution.
Testing for verifying and validating the objective of
the product.
Operation and maintenance for use and to ensure its
availability to users.
This application was also developed in phases for
effective output. Each phase was given its due
importance with respect to time and cost. The time
scheduling is described later in the PERT and Gantt
charts. The system development life cycle of the
project is shown below.
(Figure: Requirement Analysis and Specification → Designing → Coding → Testing & Validation → Operation and Maintenance)
18.SOFTWARE DEVELOPMENT PROCESS
MODEL
A software development process model is a description of
the work practices, tools, and techniques used to develop
software. Software models serve as standards as well as
provide guidelines while developing software.
18.1.WATERFALL MODEL
It takes a sequential approach to software
development, with phases such as task definition,
analysis, design, implementation, testing, and
maintenance. The phases are always performed in order and
never overlap.
(Figure: Requirement Definition → Requirement Analysis and Specification → Design → Implementation)
19.CODING
MORPH.JAVA
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import edu.rit.pj.BarrierAction;
import edu.rit.pj.IntegerForLoop;
import edu.rit.pj.IntegerSchedule;
import edu.rit.pj.ParallelRegion;
import edu.rit.pj.ParallelTeam;
public class Morph {
// This describes how often the GUI will redraw the morphed picture
// Set this to a higher value, like 16, for larger jobs.
public static int UPDATE_STEPS = 1;
public static int inner_steps;
//take a larger stepping factor for faster morphing
public static int FAST_STEPPING = 1;
// The total number of iterations may be as high as 256 because
// the difference in color intensity at some point may be 256 for some color
public static int MORPH_ITERATIONS =
256/FAST_STEPPING + FAST_STEPPING;
// After cropping, height/width should be equal for both images.
public static int height;
public static int width;
public static BufferedImage morph;
public static WritableRaster morphRaster;
static int [][][] array1;
static int [][][] array2;
static MorphImg gui;
public static int numThreads =1 ;
public static boolean displayOff= false;
private Morph(){}
public static void setNumThreads(int threads)
{
numThreads = threads;
}
public static void setUpdateSteps(int steps)
{
UPDATE_STEPS = steps;
}
public static void setFastStepping(int steps)
{
FAST_STEPPING = steps;
MORPH_ITERATIONS = 256/FAST_STEPPING +
FAST_STEPPING;
}
public static void setDisplayOff(boolean c)
{
displayOff = c;
}
public static void MorphInit(BufferedImage image1,
BufferedImage image2, MorphImg guiapp) {
gui = guiapp;
// Double check that we clipped the images at read-in time
// if they were not the same size
assert(image1.getWidth() == image2.getWidth());
assert(image1.getHeight() == image2.getHeight());
height = image1.getHeight();
width = image1.getWidth();
int[] image1pixels = image1.getRaster().getPixels(0, 0,
width, height, (int[])null);
int[] image2pixels = image2.getRaster().getPixels(0,
0, width, height, (int[])null);
// The gui displays the morphed image, which initially is
// the source ("From") image
morph = new BufferedImage(width, height,
BufferedImage.TYPE_3BYTE_BGR);
// Return a WritableRaster, an internal representation of
// the image (class Raster), that we can modify
morphRaster = morph.getRaster();
//Copy the "from" image and update the raster
morphRaster.setPixels(0, 0, width, height,
image1pixels);
gui.updateMorphImage(morph, false);
// Generate the 3D representations of the images
array1 = convertTo3DArray(image1pixels, width,
height);
array2 = convertTo3DArray(image2pixels, width,
height);
}
//redraws only if display mode is on
static void redrawPicture(int [][][]array1)
{
if(!displayOff){
//gui.freezeTimer();
int[] arr1D = convertTo1DArray(array1, width,
height);
morphRaster.setPixels(0, 0, width, height, arr1D);
gui.updateMorphImage(morph, false);
// update the reported timing
gui.updateTimer();
//gui.unFreezeTimer();
}
}
public static BufferedImage parallelDoMorph() {
final int last_iteration = MORPH_ITERATIONS %
UPDATE_STEPS;
//start the timer
gui.startTimer();
try {
//create the threads to execute the code in parallel (data parallelism)
new ParallelTeam(numThreads).execute(new
ParallelRegion() {
//run method for the parallel team; all the team threads call run().
//This is where the execution of the parallel region code happens.
@Override
public void run() throws Exception
{
//morph the picture
for(int h = 0; h < MORPH_ITERATIONS; h
+= UPDATE_STEPS ) {
// MORPH
//System.out.println("h is "+ h + " from thread "+ getThreadIndex());
final int steps = (h + UPDATE_STEPS) >
MORPH_ITERATIONS ? last_iteration : UPDATE_STEPS;
//have each thread loop through a subsection of the array1 array
execute(0, array1.length, new
IntegerForLoop(){
public IntegerSchedule schedule(){
return
IntegerSchedule.dynamic(16);
//return IntegerSchedule.guided(128);
}
@Override
public void run(int start, int
finish) throws Exception {
// send the array1 and array2 variables between start and finish to be processed.
//System.out.println("There will be "+ MORPH_ITERATIONS + " MORPH_ITERATIONS");
for(int i=0 ; i < steps; i++){
morphTick(array1, array2, start, finish-1);
//System.out.println("start to finish: "+ start + " "+ (finish-1) + " on thread "+ getThreadIndex());
//morphTick(array1, array2, 0, array1.length-1);
// update the reported timing
gui.updateTimer();
/*barrier();
if(getThreadIndex()==0)
redrawPicture(array1);
*/
} } },
new BarrierAction(){
public void run() {
//System.out.println("IN THE BARRIER, ALL THREADS ARE @ SAME POINT");
//Redraw the pictures, for "animation".
redrawPicture(array1);
}
}
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
//redrawPicture(array1);
// We are done timing now
gui.stopTimer();
// Add a done label, if we are using the display
gui.updateDoneLabel();
// Saves the last image into a raster
int[] arr1D = convertTo1DArray(array1, width,
height);
morphRaster.setPixels(0, 0, width, height, arr1D);
return morph;
}
// The Morphing code, including display
// The actual morphing is done in morphTick
public static BufferedImage serialDoMorph() {
int last_iteration = MORPH_ITERATIONS %
UPDATE_STEPS;
gui.startTimer();
for(int h = 0; h < MORPH_ITERATIONS; h +=
UPDATE_STEPS ) {
// MORPH
int steps = (h + UPDATE_STEPS) >
MORPH_ITERATIONS ? last_iteration : UPDATE_STEPS;
for(int i=0 ; i < steps; i++){
//System.out.println("start is "+ 0 + " finish is "+ (array1.length-1));
morphTick(array1, array2, 0, array1.length-1);
// update the reported timing
gui.updateTimer();
}
// Redraw the pictures, for "animation".
redrawPicture(array1);
}
// We are done timing now
gui.stopTimer();
// Add a done label, if we are using the display
gui.updateDoneLabel();
// Saves the last image into a raster
int[] arr1D = convertTo1DArray(array1, width,
height);
morphRaster.setPixels(0, 0, width, height, arr1D);
return morph;
}
/**
* @param image1 is modified, with each pixel
modified to look one
* quantum level (or FAST_STEPPING) more like
* the corresponding pixel in image2.
*/
private static void morphTick(int[][][] image1, int[][][]
image2, int start, int finish) {
//row
for(int i = start; i <= finish; i++){
// column
for(int j = 0; j < image1[0].length; j++){
// colors: red, green and blue
// calculate the difference between the two images for each color
int diffr = (image2[i][j][0] - image1[i][j][0]);
int diffg = (image2[i][j][1] - image1[i][j][1]);
int diffb = (image2[i][j][2] - image1[i][j][2]);
//signum() returns -1, 0 or 1.
int redstep = (int)Math.signum(diffr);
int greenstep = (int) Math.signum(diffg);
int bluestep = (int)Math.signum(diffb);
int adiffr = Math.abs(diffr);
int adiffg = Math.abs(diffg);
int adiffb = Math.abs(diffb);
//take a big or small step
if (adiffr >= FAST_STEPPING)
redstep *= (int)
Math.min(adiffr,FAST_STEPPING);
if (adiffg >= FAST_STEPPING)
greenstep *= (int)
Math.min(adiffg,FAST_STEPPING);
if (adiffb >= FAST_STEPPING)
bluestep *= (int)
Math.min(adiffb,FAST_STEPPING);
//update the source image
image1[i][j][0] += redstep;
image1[i][j][1] += greenstep;
image1[i][j][2] += bluestep;
}
}
}
// Unpack a 1D image array into a 3D array
public static int[][][] convertTo3DArray( int[]
oneDPix1, int width, int height){
int[][][] data = new int[height][width][3];
// Convert 1D array to 3D array
for(int row = 0; row < height; row++){
for(int col = 0; col < width; col++){
int element = (row * width + col)*3;
// Red
data[row][col][0] = oneDPix1[element+0];
// Green
data[row][col][1] = oneDPix1[element+1];
// Blue
data[row][col][2] = oneDPix1[element+2];
} }
return data;
}
public static int[] convertTo1DArray( int[][][] data, int
width, int height){
// Pack a 3D image array into a 1D array because raster requires a 1D int array
int[] oneDPix = new int[ width * height * 3];
int cnt = 0;
for (int row = 0; row < height; row++){
for (int col = 0; col < width; col++){
//red
oneDPix[cnt++] = data[row][col][0];
//green
oneDPix[cnt++] = data[row][col][1];
//blue
oneDPix[cnt++] = data[row][col][2];
}
}
return oneDPix; } }
MORPHIMG.JAVA
import java.awt.BorderLayout;
import java.awt.Container;
import java.awt.Dimension;
import java.awt.FlowLayout;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.BorderFactory;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
public class MorphImg extends JPanel {
static int THUMB_MAX_WIDTH = 280;
static int THUMB_MAX_HEIGHT = 210;
static int MORPHIMAGE_MAX_WIDTH = 400;
static int MORPHIMAGE_MAX_HEIGHT = 300;
static boolean parallelMode = false;
static boolean displayOff = false;
BufferedImage fromThumb;
BufferedImage toThumb;
BufferedImage fromImage;
BufferedImage toImage;
BufferedImage morphImage;
JLabel timerLabel;
JLabel timerLabelSeconds;
JLabel morphLabel;
ImageIcon morphIcon;
Dimension morphImageSize;
long time0;
static long elapsedTime;
long timeSub0;
long timeSub;
public MorphImg(BufferedImage fromImage,
BufferedImage toImage) {
// Crop images to make them the same size
if (fromImage.getHeight() != toImage.getHeight() ||
fromImage.getWidth() != toImage.getWidth()){
int width = Math.min(fromImage.getWidth(),
toImage.getWidth());
int height = Math.min(fromImage.getHeight(),
toImage.getHeight());
toImage = toImage.getSubimage(0, 0, width,
height);
fromImage = fromImage.getSubimage(0, 0, width,
height);
}
this.fromImage = fromImage;
this.toImage = toImage;
if(!displayOff){
// create thumbnails
morphImageSize =
ImageUtils.determineSize(fromImage.getWidth(),
fromImage.getHeight(), MORPHIMAGE_MAX_WIDTH,
MORPHIMAGE_MAX_HEIGHT);
Dimension thumbSize =
ImageUtils.determineSize(fromImage.getWidth(),
fromImage.getHeight(), THUMB_MAX_WIDTH,
THUMB_MAX_HEIGHT);
this.fromThumb =
ImageUtils.getScaledInstance(fromImage,
thumbSize.width, thumbSize.height,
RenderingHints.VALUE_INTERPOLATION_BILINEAR, true);
this.toThumb =
ImageUtils.getScaledInstance(toImage, thumbSize.width,
thumbSize.height,
RenderingHints.VALUE_INTERPOLATION_BILINEAR, true);
this.createLayout();
}
}
private void createLayout() {
this.setLayout(new BorderLayout());
//Display the from and to images
Container topContainer = new JPanel();
topContainer.setLayout(new
FlowLayout(FlowLayout.CENTER));
Container btmContainer = new JPanel();
btmContainer.setLayout(new
FlowLayout(FlowLayout.CENTER));
ImageIcon fromIcon = new ImageIcon(fromThumb);
JLabel fromLabel = new JLabel("");
fromLabel.setIcon(fromIcon);
fromLabel.setBorder(BorderFactory.createTitledBorder("Original image"));
topContainer.add(fromLabel);
ImageIcon toIcon = new ImageIcon(toThumb);
JLabel toLabel = new JLabel("");
toLabel.setIcon(toIcon);
toLabel.setBorder(BorderFactory.createTitledBorder("Target image"));
topContainer.add(toLabel);
this.add(topContainer, BorderLayout.NORTH);
//Display the image being changed
//morphIcon = new ImageIcon(morphImage);
morphLabel = new JLabel();
//morphLabel.setIcon(morphIcon);
morphLabel.setBorder(BorderFactory.createTitledBorder("Morphing progress"));
btmContainer.add(morphLabel);
this.add(btmContainer, BorderLayout.CENTER);
Container timerContainer = new JPanel();
timerLabel = new JLabel("Time taken: ");
timerContainer.add(timerLabel);
timerLabelSeconds = new JLabel("0 ms");
timerContainer.add(timerLabelSeconds);
this.add(timerContainer, BorderLayout.SOUTH);
}
public BufferedImage morph() {
Morph.MorphInit(fromImage, toImage, this);
//based on the input argument, one of them will be called
if(parallelMode)
this.morphImage = Morph.parallelDoMorph();
else
this.morphImage = Morph.serialDoMorph();
updateMorphImage(morphImage, true);
return morphImage;
}
public void startTimer() {
time0 = System.currentTimeMillis();
timeSub = 0;
timeSub0 = 0;
}
public void stopTimer() {
elapsedTime = System.currentTimeMillis() - time0 -
timeSub;
}
public void freezeTimer() {
timeSub0 = System.currentTimeMillis();
}
public void unFreezeTimer() {
timeSub += System.currentTimeMillis() - timeSub0;
}
public void updateTimer() {
if(!displayOff){
long t = System.currentTimeMillis() - time0;
t -= timeSub;
timerLabelSeconds.setText(Long.toString(t) + " ms");
}
}
public void updateDoneLabel() {
if(!displayOff)
timerLabel.setText("Done morphing images in: ");
}
public void updateMorphImage(BufferedImage image,
boolean hiQuality) {
if(!displayOff){
BufferedImage displayImage =
ImageUtils.getScaledInstance(image,
morphImageSize.width, morphImageSize.height,
RenderingHints.VALUE_INTERPOLATION_BILINEAR,
hiQuality);
morphIcon = new ImageIcon(displayImage);
morphLabel.setIcon(morphIcon);
}
}
static void Usage() {
System.err.println("Usage: MorphImg \n" +
"[-p <n> ] switch to parallel mode with n number of threads\n" +
"[-u <n> ] display frequency (default 8)\n"
+
"[-f <n> ] fast stepping (default 4)\n" +
"[-t ] turn off display\n" +
"[-i image1 image2 ]\n");
System.exit(0);
}
public static void main(String[] args)
{
JFrame frame ;
String title = "Serial Morpher";
// Default image file names
String fromImageStr = "from.jpeg";
String toImageStr = "to.jpg";
String file_extension = "_sm"; //serial morph
int i = 0;
int numThreads = 1 ;
int updateSteps = 8;
int fast_stepping = 4;
while (i < args.length && args[i].startsWith("-"))
{
String arg = args[i++];
if (arg.equals("-i")) {
if (i > (args.length-2)){
System.err.println("-i Needs 2 file arguments");
Usage();
}
// 2 input files
fromImageStr = args[i++];
toImageStr = args[i++];
}
// Parallel mode
else if (arg.equals("-p")) {
parallelMode = true;
try{ numThreads = Integer.parseInt(args[i++]);
} catch (NumberFormatException Exc){
System.err.println("-p needs an int");
Usage();
}
Morph.setNumThreads(numThreads);
title = "Parallel Morpher";
// The output file will have the extension "_pm" added
file_extension = "_pm"; //parallel morph
} //display frequency for update steps
else if (arg.equals("-u")) {
try{ updateSteps = Integer.parseInt(args[i++]);
Morph.setUpdateSteps(updateSteps);
} catch (NumberFormatException Exc){
System.err.println("-u needs an int");
Usage();
}
}
else if (arg.equals("-f")) {
try{ fast_stepping = Integer.parseInt(args[i++]);
Morph.setFastStepping(fast_stepping);
} catch (NumberFormatException Exc){
System.err.println("-f needs an int");
Usage();
}
// shuts off display
} else if (arg.equals("-t")) {
displayOff = true;
System.out.println("Display turned off!");
Morph.setDisplayOff(true);
}
else{
System.err.println("Unknown option " + arg);
Usage();
}
}
if (i < args.length) {
System.err.println("Unknown option " + args[i]);
Usage();
}
File fromFile = new File(fromImageStr);
File toFile = new File(toImageStr);
assert(fromFile.exists() && fromFile.isFile());
assert(toFile.exists() && toFile.isFile());
//Read images
BufferedImage fromImage = null;
BufferedImage toImage = null;
try {
fromImage = ImageIO.read(fromFile);
toImage = ImageIO.read(toFile);
} catch(IOException e) {
e.printStackTrace();
}
//Initiate app and morphing
MorphImg ma = new MorphImg(fromImage,
toImage);
if(!displayOff){
frame = new JFrame(title);
frame.setContentPane(ma);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.pack();
frame.setVisible(true);
frame.setSize(650, 650);
}
BufferedImage morphImage = ma.morph();
File output = new File(toImageStr + file_extension);
try {
ImageIO.write(morphImage, "JPG",
output);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace(); }
System.out.println("Elapsed Time (ms): " +
Long.toString(elapsedTime));
if(!displayOff){
try{
// pause briefly before exiting so the final image stays visible
Thread.sleep(5000);
}
catch(Exception e)
{
System.out.println(e);
}
}
System.exit(0);
}
}
IMAGEUTILS.JAVA
import java.awt.Dimension;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
/**
* This code should not be changed
* It is just image resizing for display in the GUI.
*/
public class ImageUtils {
/**
*@author
http://today.java.net/pub/a/today/2007/04/03/perils-of-
image-getscaledinstance.html
* Convenience method that returns a scaled instance
of the
* provided {@code BufferedImage}
* @param img the original image to be scaled
* @param targetWidth the desired width of the scaled
instance,
* in pixels
* @param targetHeight the desired height of the
scaled instance,
* in pixels
* @param hint one of the rendering hints that
corresponds to
* {@code RenderingHints.KEY_INTERPOLATION}
(e.g.
* {@code
RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGH
BOR},
* {@code
RenderingHints.VALUE_INTERPOLATION_BILINEAR},
* {@code
RenderingHints.VALUE_INTERPOLATION_BICUBIC})
* @param higherQuality if true, this method will use a
multi-step
* scaling technique that provides higher quality than
the usual
* one-step technique (only useful in downscaling
cases, where
* {@code targetWidth} or {@code targetHeight} is
* smaller than the original dimensions, and generally
only when
* the {@code BILINEAR} hint is specified)
* @return a scaled version of the original {@code
BufferedImage}
*/
public static BufferedImage
getScaledInstance(BufferedImage img,
int targetWidth,
int targetHeight,
Object hint,
boolean higherQuality)
{
int type = (img.getTransparency() ==
Transparency.OPAQUE) ?
BufferedImage.TYPE_INT_RGB :
BufferedImage.TYPE_INT_ARGB;
BufferedImage ret = (BufferedImage)img;
int w, h;
if (higherQuality) {
// Use multi-step technique: start with original size, then
// scale down in multiple passes with drawImage()
// until the target size is reached
w = img.getWidth();
h = img.getHeight();
} else {
// Use one-step technique: scale directly from original
// size to target size with a single drawImage() call
w = targetWidth;
h = targetHeight;
}
do {
if (higherQuality && w > targetWidth) {
w /= 2;
if (w < targetWidth) {
w = targetWidth;
}
}
if (higherQuality && h > targetHeight) {
h /= 2;
if (h < targetHeight) {
h = targetHeight;
}
}
BufferedImage tmp = new BufferedImage(w, h,
type);
Graphics2D g2 = tmp.createGraphics();
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION
, hint);
g2.drawImage(ret, 0, 0, w, h, null);
g2.dispose();
ret = tmp;
} while (w != targetWidth || h != targetHeight);
return ret;
}
public static Dimension determineSize(int origWidth, int
origHeight, int maxWidth, int maxHeight){
if(origWidth < maxWidth && origHeight <
maxHeight ){
return new Dimension(origWidth, origHeight);
}
double widthRatio = (double) maxWidth / origWidth;
double heightRatio = (double) maxHeight / origHeight;
if (widthRatio < heightRatio){
return new Dimension((int)(origWidth *
widthRatio), (int) (origHeight * widthRatio));
}else{
return new Dimension((int)(origWidth *
heightRatio), (int) (origHeight * heightRatio));
}
}
}
SUPPORTEDFORMATS.JAVA
import javax.imageio.ImageIO;
public class SupportedFormats
{
/**
* Display file formats supported by JAI on your
platform.
* e.g. BMP, bmp, GIF, gif, jpeg, JPEG, jpg, JPG, png, PNG, wbmp, WBMP
* @param args not used
*/
public static void main ( String[] args )
{
String[] names = ImageIO.getWriterFormatNames();
for ( int i = 0; i < names.length; i++ )
{
System.out.println( names[i] );
} } }
20.TESTING
Test runs 1-5: screenshots of the morphing application (images omitted).
21.BIBLIOGRAPHY
Detlef Ruprecht, Heinrich Muller, "Image Warping with Scattered Data Interpolation," IEEE Computer Graphics and Applications, March 1995, pp. 37-43.
Thaddeus Beier, Shawn Neely, "Feature-Based Image Metamorphosis," Proceedings of SIGGRAPH '92, pp. 35-42.
George Wolberg, "Image Morphing: A Survey," The Visual Computer, 1998, pp. 360-372.
http://www.cs.rochester.edu/u/www/u/kyros/Courses/CS290B/Lectures/lecture-18/sld002.htm
Scott Anderson, Morphing Magic, 1993.