ABSTRACT
In this paper, we propose a secure semi-fragile watermarking technique based on the integer wavelet transform, with a choice of two watermarks to be embedded. A self-recovery algorithm is employed that hides an image digest in selected wavelet subbands in order to detect illicit object manipulation of the image. The semi-fragility makes the scheme tolerant of JPEG lossy compression with quality factors as low as 70%, while still locating tampered areas accurately. In addition, the system offers stronger security because the embedded watermarks are protected with private keys. Computational complexity is reduced by using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees safety of the watermark, recovery of the image, and localization of the tampered area.
CHAPTER-1
INTRODUCTION
Basic watermarking principles:
The digital revolution, the explosion of communication networks, and the general public's growing enthusiasm for new information technologies have led to exponential growth in multimedia document traffic (image, text, audio, video, etc.). This phenomenon is now so significant that ensuring the protection and control of exchanged data has become a major issue. Indeed, by their digital nature, multimedia documents can be duplicated, modified, transformed, and distributed very easily.
In this context, it is important to develop systems for copyright protection, protection against duplication, and authentication of content. Watermarking seems to be a suitable solution for reinforcing the security of multimedia documents. The aim of watermarking is to embed subliminal (i.e., imperceptible) information in a multimedia document to support a security service or simply a labeling application. It should then be possible to recover the embedded message at any time, even if the document has been altered by one or more nondestructive attacks, whether malicious or not. Until now, the majority of publications in the field of watermarking have mainly addressed the copyright of still images.
Other security services, such as image content authentication, are still marginal, and many fundamental questions remain open. We may wonder, for example, whether it is preferable to use a fragile watermark, a robust watermark, or an entirely different technique. Furthermore, an authentication service partially calls into question the settings commonly established in copyright watermarking, particularly in terms of the quantity and nature of the hidden information (for copyright, the mark is independent of the image and is usually a 64-bit identifier), as well as in terms of robustness.
Digital watermarking has become a promising solution for image content authentication, as it can provide more comprehensive verification of content integrity than traditional cryptographic strategies such as digital signatures. Digital watermarking verifies the image content by embedding additional information, referred to as the watermark. The watermark is embedded by slightly modifying the original content; some change to the host data is therefore inevitable. Although the modification is strictly controlled to be so slight that it is commonly imperceptible to human eyes, most watermarking algorithms cause a certain amount of permanent loss of content fidelity during embedding. The quality loss is usually proportional to the amount of embedded watermark data.
For content authentication, a high watermark payload is usually required. As the integrity of every part of the image needs to be ensured, most existing watermarking schemes embed the watermark ubiquitously over the entire image area, so the degradation caused by the authentication watermark increases accordingly. Since this quality degradation is masked and minimized for the human observer by the use of perceptual models, it can be accepted in most common applications. In some applications, however, image fidelity is of special importance, such as medical, satellite, and military images. In these applications even a slight modification is unacceptable, especially in important image regions, and the quality degradation caused by watermark embedding becomes intolerable. For example, data reliability is a crucial issue for medical images, because any manipulation or quality compromise could result in a serious misdiagnosis; in the most important parts of a medical image, no modification at all can be tolerated.
Satellite images likewise require high fidelity: slight quality loss might deteriorate their commercial value, rendering them unfit for reuse or further distribution. It is clear that common holistic watermarking techniques, which embed the watermark ubiquitously in the whole image, cannot satisfy such special applications. One solution is to use an invertible watermarking technique: an invertible watermark can be removed from the image content after extraction, and the original image data can be precisely recovered.
The alternative strategy is watermarking with a region of interest (ROI). Some watermarking algorithms have been proposed around this concept, but they are either limited to a specific image format, e.g., JPEG2000, or they need the precise location of the ROI in order to extract the watermark. These requirements reduce the practicality and portability of the watermarking schemes.
In practical applications, the ROI information may be unavailable at the watermark detector. We therefore first propose a framework for ROI-supporting watermarking systems that can be applied to different watermark embedding schemes. Based on this framework, a wavelet-based watermarking scheme is presented that can localize manipulations both inside and outside the ROI(s). The content inside the preferred ROI(s) is kept intact during watermark embedding, while its integrity is still ensured. No ROI information is required in the watermark extraction and image authentication processes.
Figure: General framework of the proposed watermarking scheme.
A digital watermark is a piece of information hidden in multimedia content in such a way that it is imperceptible to a human observer but easily detected by a computer. Its principal advantage is that the watermark is inseparable from the content [1]. Digital watermarking is the process of hiding the watermark imperceptibly in the content. The technique was initially used in paper and currency as a measure of authenticity. The primary tool available for data protection is encryption, which protects content during transmission from sender to receiver. However, after receipt and subsequent decryption, the data is no longer protected. Watermarking therefore complements encryption [1]. Digital watermarking
involves two major phases:
(i) Watermark embedding, and
(ii) Watermark extraction.
A digital watermark can be a pseudo-random sequence, a company logo, or an image. Watermark embedding is performed in a watermark carrier such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) domain of the original data, resulting in the watermarked data. The watermarked data may be compressed to reduce its size, or corrupted by noise during transmission through a noisy channel. It may be subjected to other normal image processing operations such as filtering, histogram modification, etc. Malicious intruders may also tamper with the data.
CHAPTER-2
WATERMARK GENERATION
Watermark generation
The scheme is based on the embedding of two watermarks. Watermark generation proceeds as described in the following subsections.
Binary image preprocessing
A binary signature, which is used for accurate authentication of the cover image, is preprocessed before being embedded. Let W be a binary signature of size X*Y:

W = {ω(i,j)} (1 ≤ i ≤ X, 1 ≤ j ≤ Y) ………….. (1)

where ω(i,j) ∈ {0,1}, and let PRand be a pseudo-random matrix of the same size generated from a secret key:

PRand = {Rn(i,j)} (1 ≤ i ≤ X, 1 ≤ j ≤ Y) ………. (2)

where Rn(i,j) ∈ {0,1}. We adopt (3) to get the ultimate watermark Ŵ1:

Ŵ1 = W ⊕ PRand ………….. (3)

where ⊕ denotes the exclusive OR.
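The preprocessing in (1)-(3) can be sketched as follows. This is a minimal Python illustration; the function name, key value, and use of NumPy's pseudo-random generator are assumptions, not the report's exact construction:

```python
import numpy as np

def preprocess_watermark(W, key):
    """XOR a binary signature W with a key-seeded pseudo-random
    binary matrix PRand of the same size: W1 = W XOR PRand."""
    rng = np.random.default_rng(key)           # the secret key seeds the PRNG
    PRand = rng.integers(0, 2, size=W.shape)   # pseudo-random 0/1 matrix
    return np.bitwise_xor(W, PRand)
```

Since XOR is its own inverse, applying the same key at the detector recovers the original signature; this is what makes the private key necessary for verification.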
Digest generation
The image digest (the second watermark), which is a highly compressed version of the original image, is generated using the following steps.
Step 1
A one-level integer wavelet transform is applied to the original image of size N*N. The resulting subbands are the approximation LL1, horizontal HL1, vertical LH1 and diagonal HH1. We select LL1 to create the image digest after high compression.
Step 2
A full-frame DCT of the low-pass subband (LL1) is computed.
Step 3
The DCT coefficients are quantized using the JPEG quantization matrix shown below, to decrease their obtrusiveness.
[16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99]
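Quantization divides each DCT coefficient element-wise by the corresponding matrix entry and rounds the result. A minimal sketch, applied here to a single 8×8 block for illustration (the report applies it to the full-frame DCT of LL1):

```python
import numpy as np

# Standard JPEG luminance quantization matrix (as listed above)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block):
    """Divide DCT coefficients element-wise by Q and round to integers."""
    return np.round(dct_block / Q).astype(int)

def dequantize(q_block):
    """Approximate inverse: scale the quantized values back up by Q."""
    return q_block * Q
```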
Step 4
The scaled DCT values are ordered through a zigzag scan and the first M coefficients
are selected and stored in vector q:
q=(q1 q2 q q…………qM)
where M=N2/32. DC component is not included because of its high energy. If we
include the DC component in digest generation, then the watermark will be
perceptible because of its exceeding high energy.
Vector q is futher scaled, based on secret key(k1):
Q(i)=q(i).α. ln(i+2+r(i))
Where α is a strength factor and its value depend upon the image quality while r is
the shift parameter ringing from -0.5 to 0.5.
Step 5
The DCT coefficients are quadruplicated, because the available embedding positions are four times the number of DCT coefficients M = N²/32. Thus we obtain the new vector V:

V = (q1, q2, …, qM, q1, q2, …, qM, q1, q2, …, qM, q1, q2, …, qM)

Step 6
Vpermuted is obtained by scrambling the vector V with a secret key (k2) in order to make it more secure. Due to this permutation, the four copies of the DCT coefficients occupy different locations in the two subbands HL2 and LH2. The permuted vector is the resultant image digest to be embedded in the highlighted subbands, as shown in the figure, and is our second watermark, ready for embedding:

W2 = Vpermuted ………. (8)
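Steps 4-6 can be sketched as follows. This is a Python illustration only; the exact zigzag ordering, the use of a scalar shift parameter in place of r(i), and the helper names are simplifying assumptions:

```python
import numpy as np

def make_digest(dct_quantized, M, alpha, r, key):
    """Build the image digest from quantized full-frame DCT coefficients:
    zigzag-scan, keep the first M AC coefficients, apply the logarithmic
    key-based scaling, quadruplicate, then permute with the secret key."""
    n = dct_quantized.shape[0]
    # Zigzag-style scan: order coefficients along anti-diagonals.
    idx = sorted(((i, j) for i in range(n) for j in range(n)),
                 key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))
    zz = np.array([dct_quantized[i, j] for i, j in idx])
    q = zz[1:M + 1]                          # skip the high-energy DC term
    i = np.arange(1, M + 1)
    Q = q * alpha * np.log(i + 2 + r)        # Q(i) = q(i) * alpha * ln(i+2+r)
    V = np.tile(Q, 4)                        # four copies for HL2 and LH2
    rng = np.random.default_rng(key)
    return V[rng.permutation(V.size)]        # scrambled digest W2
```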
CHAPTER-3
WATERMARK EMBEDDING
Watermark Embedding:
Both watermarks have been computed and are ready to be embedded into the original image, using the following steps. Given an N*N image, after applying a 1-level integer wavelet transform, the horizontal subband HL1 and the vertical subband LH1 are decomposed once more, while the approximation subband LL1 is decomposed twice to obtain LL3. The embedding areas are HL2, LH2 and LL3.

Note that our scheme is more secure because we use three secret keys: key1, key2 and key3. One key, key1, is used to generate the pseudo-random number matrix (PRNM) while preprocessing the first watermark (the binary image); the other two keys, key2 and key3, are used while generating the second watermark (the image digest). Key2 is used in scaling the DCT coefficients, and key3 in permuting (scrambling) the second watermark before embedding. We use the following method to embed the watermark W1 in the LL3 subband coefficients.

Let LFB(a) denote the five least significant bits of a, and LFB(a,b) the substitution of b for the five least significant bits of a. The two bit patterns "11000" and "01000", representing "1" and "0" respectively, are selected from the distance diagram. We select these two patterns according to the quality of the watermarked image.
If we simply replaced the five least significant bits of a coefficient by these patterns, the amplitude would change by between -7 and 24 when "1" is embedded and between -23 and 8 when "0" is embedded. To balance invisibility and robustness, the following embedding method is proposed, where f(i,j) is an IWT coefficient in the LL3 subband before embedding and f*(i,j) is the coefficient after embedding. With this embedding, the amplitude of the coefficients changes by between -15 and 16. The two patterns "11000" and "01000" are used to represent the bits "1" and "0" respectively; on the authentication side, we simply examine the fifth least significant bit of these patterns.
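The substitution can be sketched as follows. Choosing, among the nearby values whose five LSBs equal the selected pattern, the one closest to the original coefficient keeps the change within roughly ±16; the exact rounding strategy here is an assumption rather than the report's formula:

```python
def embed_bit(coeff, bit):
    """Replace the five LSBs of an IWT coefficient with '11000' (bit 1)
    or '01000' (bit 0), picking the candidate closest to the original
    value so the amplitude change stays small."""
    pattern = 0b11000 if bit else 0b01000
    base = (coeff // 32) * 32
    candidates = [base - 32 + pattern, base + pattern, base + 32 + pattern]
    return min(candidates, key=lambda c: abs(c - coeff))

def extract_bit(coeff):
    """Authentication side: read the fifth least significant bit."""
    return (coeff >> 4) & 1
```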
The second watermark Ŵ2 is substituted into the detail subbands HL2 and LH2; Ŵ2 and the two subbands are of the same size. Performing the inverse IWT, we obtain the watermarked image.

In figure 1, Ŵ1 is the binary image embedded in the LL3 subband, and Ŵ2 is the image digest embedded in the HL2 and LH2 subbands.
The peak signal-to-noise ratio (PSNR) is used to measure the distortion induced by the watermark. PSNR in decibels (dB) is computed as

PSNR = 10 · log10(255² / MSE)

where MSE is the mean squared error between the original and watermarked images.
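A sketch of the computation for 8-bit images:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```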
Our proposed scheme uses a parameterized integer wavelet transform (IWT), a fast approach to the discrete wavelet transform (DWT). The lifting scheme is an effective method of improving the processing speed of the DWT; moreover, lifting allows the construction of lossless, integer-to-integer wavelet transforms, and it is through the lifting scheme that we construct our integer wavelet transform. Consequently, our approach is built on the idea of using an integer wavelet transform with parameters for the development of secure semi-fragile watermarking for both image authentication and recovery. In our current work we have used the Daubechies wavelet.
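As an illustration of how lifting yields a lossless integer-to-integer transform, here is one level of the simple integer Haar (S-) transform in Python. The report itself uses a parameterized Daubechies wavelet, so this is only a sketch of the principle:

```python
def iwt_haar_1d(x):
    """One level of the integer Haar (S-) transform via lifting.
    Integer arithmetic throughout, so the transform is exactly invertible."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]            # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]  # update step
    return approx, detail

def inverse_iwt_haar_1d(approx, detail):
    """Undo the lifting steps in reverse order."""
    even = [a - (d >> 1) for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because every lifting step is undone exactly by its integer inverse, the round trip reproduces the input with no loss, which is what makes IWT-based embedding and recovery possible.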
Integrity Verification:
In the integrity verification phase, the watermarked image undergoes a procedure in which the embedded watermarks Ŵ1 and Ŵ2 are extracted. The binary watermark Ŵ1 is extracted from the LL3 subband, while the image digest Ŵ2 is extracted from the HL2 and LH2 subbands. The extraction procedure for Ŵ1, which is used for authentication, includes the following steps.

Given an N*N watermarked image, after applying a 1-level IWT, the approximation subband is decomposed twice and LL3 is selected, as shown in Fig 3(a).
Let Ŵ1*'(i,j) denote the extracted watermark bit and LFB5th(a) the fifth least significant bit of a. Then

Ŵ1*'(i,j) = 1 if LFB5th(f*'(i,j)) = 1
Ŵ1*'(i,j) = 0 if LFB5th(f*'(i,j)) = 0    for (1 ≤ i ≤ X, 1 ≤ j ≤ Y)

Now, since the watermark was preprocessed before embedding, at the verification phase we process it again to obtain the ultimate watermark Ŵ1' (a binary image) using

Ŵ1'(i,j) = Ŵ1*'(i,j) ⊕ PRand(i,j)    for (1 ≤ i ≤ X, 1 ≤ j ≤ Y)

where PRand is the pseudo-random number matrix (PRNM) and Ŵ1*' is the extracted binary signature.
We express the difference mark as

D(i,j) = |Ŵ1(i,j) - Ŵ1'(i,j)|    for (1 ≤ i ≤ X, 1 ≤ j ≤ Y)

If D(i,j) = 1, the pixel in the binary difference image is white and represents a mark extraction error; a black pixel, by contrast, represents accurate mark extraction.
To obtain the estimated image, we next extract Ŵ2, using the following steps.

The horizontal and vertical details are further decomposed, and HL2 and LH2 are selected.

The embedded data is read back into a vector V'scrambled, which is inversely scrambled by means of the same key, resulting in a sequence V'. An estimate of the hidden DCT coefficients is then obtained by averaging the four copies of each extracted coefficient, yielding a unique set of authentication data qextracted (i.e., M coefficients).
The inverse scaling operation is performed using

qreconstructed(i) = qextracted(i) · (1/α) · (1/ln(i + 2 + r(i)))

The inverse-scaled coefficients are then replaced in their correct positions by an anti-zigzag scan (missing elements are set to zero, and a DC component with the value 128 is inserted).

The resulting values are weighted back with the JPEG quantization matrix, and the inverse DCT is applied to obtain an approximation of the original image of size N/2 * N/2. The aim of the recovered image in our proposed scheme is that if someone tampers with the watermarked image, maliciously or incidentally, we can still recover the estimated image. Whether or not the image has been tampered with, the complete estimated image is recovered, i.e., we do not recover only the tampered blocks.
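The unscrambling, averaging and inverse-scaling steps can be sketched as follows. This Python illustration assumes the embedding side permuted the quadruplicated vector with the same shared key, uses a scalar shift parameter in place of r(i), and uses hypothetical helper names:

```python
import numpy as np

def reconstruct_coeffs(V_scrambled, M, alpha, r, key):
    """Unscramble with the shared key, average the four copies of each
    coefficient, then apply the inverse of the logarithmic scaling."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(V_scrambled.size)
    V = np.empty_like(V_scrambled)
    V[perm] = V_scrambled                  # invert the key-based permutation
    copies = V.reshape(4, M)               # the four embedded copies
    q_extracted = copies.mean(axis=0)      # average to suppress errors
    i = np.arange(1, M + 1)
    return q_extracted / (alpha * np.log(i + 2 + r))
```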
Fig: Extraction diagram
Fig: Analysis of extracted signature.
Fig: Analysis of recovered image Ŵ2
Tamper detection
We express the difference between the original binary watermark and the extracted binary watermark as:

Difference = |Ŵ1(i,j) - Ŵ1'(i,j)| ………. (16)

If Difference is "1", there is a difference between the corresponding pixels of the original and extracted binary watermarks. As the experimental results will show, "0" (a black pixel in the difference image) corresponds to correctness, while "1" corresponds to an error. Our proposed approach accurately locates the tampered area and distinguishes between malicious and incidental attacks. The details are given as follows.
Dense pixel: a mark-error pixel in the difference image is a dense pixel if at least one of its eight neighboring pixels is also a mark-error pixel, and a sparse pixel otherwise. We thus have the following parameters:

Dense area: the total number of dense pixels in the LL subband.
Sparse area: the total number of sparse pixels in the LL subband.

Total area = dense area + sparse area,
Δ = dense area / sparse area,
ξ = total area / area (the area of the subband),

If ξ = 0, the image is not tampered.
If ξ > 0 and Δ < γ, the tampering is incidental, where γ is set empirically between 0.5 and 1.0.
If Δ ≥ γ, the tampering is malicious.

These parameters capture the idea that if the difference image contains mostly sparse pixels, i.e., Δ < γ, the image has suffered an incidental attack such as compression or a file-format change. Otherwise, in the case of dense pixels, the image has been maliciously tampered with.
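The classification rule can be sketched as follows; this is a Python illustration, and the neighbour-counting implementation and the default value of γ are assumptions:

```python
import numpy as np

def classify_tampering(D, gamma=0.75):
    """Classify a binary difference image D (1 = mark-error pixel).
    A dense pixel has at least one of its 8 neighbours also in error."""
    padded = np.pad(D, 1)
    # Count error pixels among the 8 neighbours of every position.
    neighbours = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))[1:-1, 1:-1]
    dense = int(np.sum((D == 1) & (neighbours >= 1)))
    sparse = int(np.sum(D)) - dense
    if dense + sparse == 0:
        return "not tampered"          # xi = 0
    if sparse and dense / sparse < gamma:
        return "incidental"            # mostly sparse errors (compression etc.)
    return "malicious"                 # dense error clusters
```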
CHAPTER-4
DIGITAL IMAGE PROCESSING
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing & experimentation that is normally required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches & quickly prototype candidate solutions plays a major role in reducing the cost & time required to arrive at a viable system implementation.
What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y & the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location & value; these elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops & other related areas, such as image analysis & computer vision, start.
Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) lies between image processing & computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, & high-level processes. A low-level process involves primitive operations such as noise reduction, contrast enhancement & image sharpening, and is characterized by the fact that both its inputs & outputs are images. A mid-level process on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, & classification of individual objects; it is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, high-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, &, at the far end of the continuum, performing the cognitive functions normally associated with human vision.
Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.
WHAT IS AN IMAGE?
An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.
Gray scale image:
A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane: I(x, y) is the intensity of the image at the point (x, y). I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

Color image:
A color image can be represented by three functions: R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
Coordinate convention:
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; we say that the image is of size M × N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row; it does not mean that these are the actual values of the physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.
The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to rows; this is the opposite of our use of the variables x and y.
Image as Matrices:
The preceding discussion leads to the following representation for a digitized image function:

          f(0,0)    f(0,1)   ...  f(0,N-1)
          f(1,0)    f(1,1)   ...  f(1,N-1)
f(x,y) =    .         .              .
            .         .              .
          f(M-1,0)  f(M-1,1) ...  f(M-1,N-1)

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements.
A digital image can be represented naturally as a MATLAB matrix:

      f(1,1)  f(1,2)  ...  f(1,N)
      f(2,1)  f(2,2)  ...  f(2,N)
f =     .       .             .
        .       .             .
      f(M,1)  f(M,2)  ...  f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N to denote, respectively, the number of rows and columns in a matrix. A 1×N matrix is called a row vector, an M×1 matrix is called a column vector, and a 1×1 matrix is a scalar.
Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.
Reading Images:
Images are read into the MATLAB environment using function imread, whose syntax is

imread('filename')

Format name   Description                        Recognized extensions
TIFF          Tagged Image File Format           .tif, .tiff
JPEG          Joint Photographic Experts Group   .jpg, .jpeg
GIF           Graphics Interchange Format        .gif
BMP           Windows Bitmap                     .bmp
PNG           Portable Network Graphics          .png
XWD           X Window Dump                      .xwd
Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('8.jpg');

reads the JPEG image (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.
When, as in the preceding command line, no path is included in filename, imread reads the file from the current directory; if that fails, it tries to find the file on the MATLAB search path. The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename. For example,

>> f = imread('D:\myimages\chestxray.jpg');

reads the image from a folder called myimages on the D: drive, whereas

>> f = imread('.\myimages\chestxray.jpg');

reads the image from the myimages subdirectory of the current working directory. The Current Directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.
Function size gives the row and column dimensions of an image:

>> size(f)
ans = 1024  1024

This function is particularly useful in programming, when used in the following form to determine the size of an image automatically:

>> [M, N] = size(f);

This syntax returns the number of rows (M) and columns (N) in the image.
The whos function displays additional information about an array. For instance, the statement

>> whos f

gives

Name   Size        Bytes     Class
f      1024x1024   1048576   uint8 array
Grand total is 1048576 elements using 1048576 bytes

The uint8 entry refers to one of several MATLAB data classes. A semicolon at the end of a whos line has no effect, so normally one is not used.
Displaying Images:
Images are displayed on the MATLAB desktop using function imshow, which has the basic syntax

imshow(f, g)

where f is an image array and g is the number of intensity levels used to display it. If g is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low, and as white all values greater than or equal to high; the values in between are displayed as intermediate intensities using the default number of levels. Finally, the syntax

imshow(f, [ ])
sets variable low to the minimum value of array f and high to its maximum value. This form of imshow is useful for displaying images that have a low dynamic range or that have positive and negative values.

Function pixval is used frequently to display the intensity values of individual pixels interactively. This function displays a cursor overlaid on an image. As the cursor is moved over the image with the mouse, the coordinates of the cursor position and the corresponding intensity value are shown in a display that appears below the figure window. When working with color images, the coordinates as well as the red, green and blue components are displayed. If the left mouse button is clicked and held, pixval displays the Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval, which shows the cursor on the last image displayed. Clicking the X button on the cursor window turns it off.
The following statements read from disk an image called rose_512.tif, extract basic information about the image, and display it using imshow:

>> f = imread('rose_512.tif');
>> whos f
Name   Size      Bytes    Class
f      512x512   262144   uint8 array
Grand total is 262144 elements using 262144 bytes
>> imshow(f)
A semicolon at the end of an imshow line has no effect, so normally one is not used. If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and output a second image, we use function figure, as follows:

>> figure, imshow(g)

Using the statement

>> imshow(f), figure, imshow(g)

displays both images. Note that more than one command can be written on a line, as long as the commands are properly delimited by commas or semicolons. As mentioned earlier, a semicolon is used whenever it is desired to suppress screen output from a command line.

Suppose that we have just read an image h and find that imshow produces an image with a low dynamic range. This can be remedied for display purposes by using the statement

>> imshow(h, [ ])
WRITING IMAGES:
Images are written to disk using function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format extension. For example, the following command writes f to a TIFF file named patient10_run1:

>> imwrite(f, 'patient10_run1.tif')

Alternatively, the desired format can be specified explicitly with a third input argument:

>> imwrite(f, 'patient10_run1', 'tif')
If filename contains no path information, imwrite saves the file in the current working directory.

The imwrite function can have other parameters, depending on the file format selected. Most of the work that follows deals with either JPEG or TIFF images, so we focus attention here on these two formats. A more general imwrite syntax, applicable only to JPEG images, is

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression). For example, for q = 25 the applicable syntax is

>> imwrite(f, 'bubbles25.jpg', 'quality', 25)
The image for q = 15 has false contouring that is barely visible, but this effect becomes quite pronounced for q = 5 and q = 0. Thus, an acceptable solution with some margin for error is to compress the images with q = 25. To get an idea of the compression achieved and to obtain other image file details, we can use function imfinfo, which has the syntax

imfinfo filename
Here filename is the complete file name of the image stored on disk. For example,
>> imfinfo bubbles25.jpg
outputs the following information (note that some fields contain no information in this case):
Filename: 'bubbles25.jpg'
FileModDate: '04-Jan-2003 12:31:26'
FileSize: 13849
Format: 'jpg'
FormatVersion: ''
Width: 714
Height: 682
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: ''
Comment: {}
where FileSize is in bytes. The number of bytes in the original image is computed simply by multiplying Width by Height by BitDepth and dividing the result by 8. The result is 486948. Dividing this by the compressed file size gives the compression ratio: 486948/13849 = 35.16. This compression ratio was achieved while maintaining image quality consistent with the requirements of the application. In addition to the obvious advantages in storage space, this reduction allows the transmission of approximately 35 times the amount of uncompressed data per unit time.
The information fields displayed by imfinfo can be captured into a so-called structure variable that can be used for subsequent computations. Using the preceding image as an example and assigning the name K to the structure variable, we use the syntax
>> K = imfinfo('bubbles25.jpg');
to store into variable K all the information generated by command imfinfo. The information generated by imfinfo is appended to the structure variable by means of fields, separated from K by a dot. For example, the image height and width are now stored in structure fields K.Height and K.Width.
As an illustration, consider the following use of structure variable K to compute the compression ratio for bubbles25.jpg:
>> K = imfinfo('bubbles25.jpg');
>> image_bytes = K.Width * K.Height * K.BitDepth / 8;
>> compressed_bytes = K.FileSize;
>> compression_ratio = image_bytes / compressed_bytes
compression_ratio =
   35.1612
Note that imfinfo was used in two different ways. The first was to type imfinfo bubbles25.jpg at the prompt, which resulted in the information being displayed on the screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the information generated by imfinfo being stored in K. These two different ways of calling imfinfo are an example of command-function duality, an important concept that is explained in more detail in the MATLAB online documentation.
A more general imwrite syntax applicable only to TIFF images has the form
imwrite(g, 'filename.tif', 'compression', 'parameter', ..., 'resolution', [colres rowres])
where 'parameter' can have one of the following principal values: 'none' indicates no compression; 'packbits' indicates packbits compression (the default for nonbinary images); and 'ccitt' indicates CCITT compression (the default for binary images). The 1*2 array [colres rowres] contains two integers that give the column resolution and row resolution in dots per unit (the default values are in dots per inch). For example, if the image dimensions are in inches, colres is the number of dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing [res res].
>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])
The values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio 2.25/1.5, which gives 300 dpi. Rather than do the computation manually, we could write
>> res = round(200*2.25/1.5);
>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)
where function round rounds its argument to the nearest integer. It is important to note that the number of pixels was not changed by these commands; only the scale of the image changed. The original 450*450 image at 200 dpi is of size 2.25*2.25 inches. The new 300-dpi image is identical, except that its 450*450 pixels are distributed over a 1.5*1.5-inch area. Processes such as this are useful for controlling the size of an image in a printed document without sacrificing resolution.
Often it is necessary to export images to disk the way they appear on the MATLAB desktop. This is especially true with plots. The contents of a figure window can be exported to disk in two ways. The first is to use the File pull-down menu in the figure window and then choose Export. With this option the user can select a location, filename, and format. More control over export parameters is obtained by using the print command:
print -fno -dfileformat -rresno filename
where no refers to the number of the figure window of interest, fileformat refers to one of the file formats in the table above, resno is the resolution in dpi, and filename is the name we wish to assign the file.
If we simply type print at the prompt, MATLAB prints (to the default printer) the contents of the last figure window displayed. It is also possible to specify other options with print, such as a specific printing device.
DATA CLASSES:
Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries in the table are referred to as numeric data classes. The ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.
All numeric computations in MATLAB are done using double quantities, so this is also a frequent data class encountered in image processing applications. Class uint8 also is encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representations found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT functions, however, support all the data classes listed in the table. Data class double requires 8 bytes to represent a number; uint8 and int8 require one byte each; uint16 and int16 require 2 bytes each; and uint32, int32, and single require 4 bytes each.
Name     Description
double   Double-precision, floating-point numbers (8 bytes per element).
uint8    Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
uint16   Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
uint32   Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
int8     Signed 8-bit integers in the range [-128, 127] (1 byte per element).
int16    Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
int32    Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
single   Single-precision, floating-point numbers (4 bytes per element).
char     Characters (2 bytes per element).
logical  Values are 0 or 1 (1 byte per element).
The char data class holds characters in Unicode representation. A character string is merely a 1*n array of characters. A logical array contains only the values 0 and 1, with each element stored in memory using one byte. Logical arrays are created by using function logical or by using relational operators.
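As a quick illustrative sketch of these storage requirements (the variable names below are arbitrary), the whos command reports the class and the number of bytes used by each variable:

```matlab
% Sketch: comparing storage requirements of several data classes.
a = uint8(200);     % unsigned 8-bit integer, 1 byte
b = int16(-1500);   % signed 16-bit integer, 2 bytes
c = 3.14159;        % double by default, 8 bytes
d = single(c);      % single precision, 4 bytes
whos a b c d        % lists name, size, bytes, and class of each variable
```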
IMAGE TYPES:
The toolbox supports four types of images:
1. Intensity images
2. Binary images
3. Indexed images
4. RGB images
Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types. Indexed and RGB color images are discussed briefly below.
INTENSITY IMAGES:
An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers. Values of scaled, double intensity images are in the range [0, 1] by convention.
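As a brief sketch of these conventions (coins.png is one of the sample images shipped with the toolbox), im2double converts a uint8 intensity image to a scaled double image in the range [0, 1]:

```matlab
f = imread('coins.png');   % intensity image of class uint8, values in [0, 255]
g = im2double(f);          % class double, values scaled to the range [0, 1]
disp(class(f))             % displays: uint8
disp(class(g))             % displays: double
```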
BINARY IMAGES:
Binary images have a very specific meaning in MATLAB. A binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of data class, say, uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement
B = logical(A)
If A contains elements other than 0s and 1s, use of the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s. Using relational and logical operators also creates logical arrays.
To test whether an array is logical, we use the islogical function:
islogical(C)
If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted to numeric arrays using the data class conversion functions.
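A minimal sketch of both ways of creating logical arrays (the array values are illustrative):

```matlab
A = [0 1 2; 3 0 1];   % numeric array (class double)
B = logical(A);       % nonzero entries become logical 1s
C = A > 1;            % relational operators also produce logical arrays
islogical(B)          % returns 1
islogical(C)          % returns 1
```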
INDEXED IMAGES:
An indexed image has two components:
An integer data matrix, x.
A color map matrix, map.
Matrix map is an m*3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green, and blue components of a single color. An indexed image uses "direct mapping" of pixel intensity values to color map values. The color of each pixel is determined by using the corresponding value of the integer matrix x as a pointer into map. If x is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second, and so on.
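The mapping can be sketched with a tiny hypothetical example: a 2*2 integer matrix indexing into a three-row color map of pure red, green, and blue:

```matlab
x   = [1 2; 3 1];              % integer data matrix; values point into map rows
map = [1 0 0; 0 1 0; 0 0 1];   % 3*3 color map: rows are red, green, blue
imshow(x, map)                 % pixel (1,1) is red, (1,2) green, (2,1) blue
```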
RGB IMAGES:
An RGB color image is an M*N*3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green, and blue component images. The data class of the component images determines their range of values. If an RGB image is of class double, the range of values is [0, 1]. Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.
Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
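As an illustrative sketch, an RGB image can be assembled from three component images with cat, stacking them along the third dimension (the component values below are arbitrary):

```matlab
% Hypothetical component images forming a uniform orange 64*64 RGB image.
R = uint8(255*ones(64));   % red component image
G = uint8(128*ones(64));   % green component image
B = uint8(zeros(64));      % blue component image
rgb = cat(3, R, G, B);     % 64*64*3 array; each pixel is an RGB triplet
size(rgb)                  % returns [64 64 3]
imshow(rgb)
```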
CHAPTER – 5
INTRODUCTION TO MATLAB
What Is MATLAB?
MATLAB® is a high-performance language for technical computing. It
integrates computation, visualization, and programming in an easy-to-use
environment where problems and solutions are expressed in familiar mathematical
notation. Typical uses include
Math and computation
Algorithm development
Data acquisition
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that
does not require dimensioning. This allows you to solve many technical computing
problems, especially those with matrix and vector formulations, in a fraction of the
time it would take to write a program in a scalar noninteractive language such as C or
FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally
written to provide easy access to matrix software developed by the LINPACK and
EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS
libraries, embedding the state of the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In
university environments, it is the standard instructional tool for introductory and
advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

[Figure: The main components of the MATLAB system: the MATLAB programming language (user-written and built-in functions); graphics (2-D and 3-D graphics, color and lighting, animation); computation (linear algebra, signal processing, quadrature, etc.); the external interface (interfaces to C and FORTRAN programs); and toolboxes (signal processing, image processing, control systems, neural networks, communications, robust control, statistics).]
MATLAB features a family of add-on application-specific solutions called
toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn
and apply specialized technology. Toolboxes are comprehensive collections of
MATLAB functions (M-files) that extend the MATLAB environment to solve
particular classes of problems. Areas in which toolboxes are available include signal
processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and
many others.
The MATLAB System:
The MATLAB system consists of five main parts:
Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and
files. Many of these tools are graphical user interfaces. It includes the MATLAB
desktop and Command Window, a command history, an editor and debugger, and
browsers for viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function Library:
This is a vast collection of computational algorithms ranging from elementary
functions like sum, sine, cosine, and complex arithmetic, to more sophisticated
functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier
transforms.
The MATLAB Language:
This is a high-level matrix/array language with control flow statements,
functions, data structures, input/output, and object-oriented programming features. It
allows both "programming in the small" to rapidly create quick and dirty throw-away
programs, and "programming in the large" to create complete large and complex
application programs.
Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as
graphs, as well as annotating and printing these graphs. It includes high-level
functions for two-dimensional and three-dimensional data visualization, image
processing, animation, and presentation graphics. It also includes low-level functions
that allow you to fully customize the appearance of graphics as well as to build
complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API):
This is a library that allows you to write C and Fortran programs that interact
with MATLAB. It includes facilities for calling routines from MATLAB (dynamic
linking), calling MATLAB as a computational engine, and for reading and writing
MAT-files.
MATLAB WORKING ENVIRONMENT:
MATLAB DESKTOP:-
Matlab Desktop is the main Matlab application window. The desktop contains
five sub windows, the command window, the workspace browser, the current
directory window, the command history window, and one or more figure windows,
which are shown only when the user displays a graphic.
The command window is where the user types MATLAB commands and
expressions at the prompt (>>) and where the output of those commands is displayed.
MATLAB defines the workspace as the set of variables that the user creates in a work
session. The workspace browser shows these variables and some information about
them. Double clicking on a variable in the workspace browser launches the Array
Editor, which can be used to obtain information and in some instances edit certain properties of the variable.
The Current Directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that directory "work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths. Clicking on the button to the right of the window allows the user to change the current directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify a search path, is to select Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.
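Equivalently, the search path can be inspected and extended from the command line (the directory name here is the hypothetical one used in the example above):

```matlab
% Sketch: inspecting and extending the search path programmatically.
p = path;                     % returns the current search path as a string
addpath('C:\MATLAB\Work');    % prepends a (hypothetical) directory to the path
```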
The Command History Window contains a record of the commands a user has
entered in the command window, including both current and previous MATLAB
sessions. Previously entered MATLAB commands can be selected and re-executed
from the command history window by right clicking on a command or sequence of
commands. This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.
Using the MATLAB Editor to create M-Files:
The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory in the search path.
Getting Help:
The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the command window. The Help Browser is a Web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs on the navigator pane are used to perform a search.
CHAPTER-6
RESULTS
MATLAB CODE:
%%%%%% wavelet based image authentication and recovery %%%
%%%%% by rafiullah chamlawi and asifullah khan%%%%%
clear all;close all;clc
I=imread('bvg.bmp');
I=im2bw(imresize(I,[256 256])); %%%%% generating a binary image eq-(1)
imshow(I);title('Binary image');it=I;
pr=round(rand(size(I))); %%%% generating a random sequence of the secret key eq-(2) Key 1
W=xor(I,pr); %%%%% applying exor operation eq-(3)
%%%%% 2.2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% read an original image and apply the integer wavelet transformation %%%
img=uigetfile('*.jpg;*.bmp','select the image');
img=imread(img);
img=imresize(img,[256 256]);figure,imshow(img);title('original image for watermarking');
[LL1,LH1,HL1,HH1]=inwavtras(img);
figure,imshow(mat2gray(LL1));title('approximation details of IWT');
[LL2,LH2,HL2,HH2]=inwavtras(LL1);
[LL3,LH3,HL3,HH3]=inwavtras(LL2);
figure,imshow(mat2gray(LL2));title('approximation details of IWT');
figure,imshow(mat2gray(LL3));title('approximation details of IWT');
%%%%% select LL1 components and apply DCT %%%%%%%%%%%%%%%%
M=(size(img,1)*size(img,2))/32;
bs=jpg(LL1);
q=bs;
alp=2;
r=-0.5:0.015:0.5;
for i=1:length(q)
qs(i)=q(i)*alp*log(i+2+r(i));
end
V=[q q q q];
z=[1 2 6 7 15 16 28 29
3 5 8 14 17 27 30 43
4 9 13 18 26 31 42 44
10 12 19 25 32 41 45 54
11 20 24 33 40 46 53 55
21 23 34 39 47 52 56 61
22 35 38 48 51 57 60 62
36 37 49 50 58 59 63 64];%%%%% secret key 3
%Vper=[q(z) q(z); q(z) q(z)];
%Vper=[q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z);q(z) q(z) q(z) q(z)];
%%%%%%%%%%%% 2.3 watermark embedding %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%
Vper(1:8,1:8)=qs(z);Vper(9:16,9:16)=qs(z);Vper(17:24,17:24)=qs(z);
Vper(25:32,25:32)=qs(z);Vper(33:40,33:40)=qs(z);Vper(41:48,41:48)=qs(z);
Vper(49:56,49:56)=qs(z);Vper(57:64,57:64)=qs(z);
%%%%% now store W into LL3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
bt=dec2bin(LL3);
s2=size(bt,2);
btl=bt(1:end,3:end);
imbp1=zeros(length(btl),8);
for i=1:length(btl)
for j=1:size(btl,2)
imbp1(i,j)=strread(btl(i,j));
end
end
f=reshape(W,size(W,1)*size(W,2),1);
f1=reshape(imbp1,size(imbp1,1)*size(imbp1,2),1);
for i=1:length(f1)-5
if f(i)==0
if f1(i:i+4)<=[0 1 0 0 0]'
f1(i:i+4)=f1(i:i+4)-[0 1 0 0 0]';
else
            f1(i:i+4)=[1 0 0 0 0]'; %%%%% embedding of watermark 1 eqn(9) and eqn(10)
end
else
if f1(i:i+4)<=[1 1 0 0 0]'
f1(i:i+4)=f1(i:i+4)+[1 0 0 0 0]';
else
f1(i:i+4)=[0 1 0 0 0]';
end
end
end
%%%%% watermarking 2 %%%%% substitute Vper in place of LL2 HL2 %%%%%
HL2=Vper;
LL2=Vper;
re=dec2bin(num2str(f1),8);
sub=bin2dec(re);
rel=imadd(LL3,reshape(sub(1:numel(LL3)),size(LL3)));
I=invinwavtras(rel,LH3,HL3,HH3);
Ir=invinwavtras(I(1:end-1,1:end-1),LH2,HL2,HH2);
Ir2=invinwavtras(Ir(1:end-1,1:end-1),LH1,HL1,HH1);
figure,imshow(mat2gray(Ir2(6:end,6:end)));title('watermarked image');
imwrite(mat2gray(Ir2(6:end,6:end)),'marked.bmp');
% %%%%%%%% difference between the original and the watermarked %%%%%
diff=img-uint8(Ir2(1:end-1,1:end-1));
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
watimg=(Ir2(1:end-1,1:end-1));
[LLW1,LHW1,HLW1,HHW1]=inwavtras(watimg);
[LLW2,LHW2,HLW2,HHW2]=inwavtras(LLW1);
[LLW3,LHW3,HLW3,HHW3]=inwavtras(LLW2);
%%%%%% inverse scrambling %%%%%
lb=dec2bin(LLW3);
for i=1:length(lb)
for j=1:size(lb,2)
if (lb(i,5))~=num2str(0)
lb(i,5)=num2str(0);
else
if strread(lb(i,5))==0
w1(i,j)=1;
else
w1(i,j)=0;
end
end
end
end
m=w1(:,1:8);
m=num2str(m);
rim=bin2dec(m);
r=reshape(rim,[32 32]);
rb=im2bw(r);
pnew=imresize(pr,[32 32]);
wc=xor(rb,pnew);
%%%%%%% DIFFERENCE MARK %%%%%
Wnes=imresize(W,[32 32]);
for i=1:length(wc)
for j=1:length(wc)
D(i,j)=Wnes(i,j)-wc(i,j);
end
end
alp=2;
r=-0.5:0.015:0.5;
for i=1:length(q)
qrc(i)=q(i)*(1/alp*log(i+2+r(i)));
end
f=qrc(z);
for i=1:numel(f);
if f(i)==0
f(i)=128;
end
end
imh=im2bw([f f f f]);
accof=reshape(imh,[1 256]);
acarr=jacdec(accof);
Q =[16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99];
z=[1 2 6 7 15 16 28 29
3 5 8 14 17 27 30 43
4 9 13 18 26 31 42 44
10 12 19 25 32 41 45 54
11 20 24 33 40 46 53 55
21 23 34 39 47 52 56 61
22 35 38 48 51 57 60 62
36 37 49 50 58 59 63 64];
z=z(:);
mb=256/16; nb=256/16; % Number of blocks
Eob=find(acarr==31); %%%% 31 for lena & 127 hat 59 for baboon
kk=1;ind1=1;
n=length(Eob);
for ii=1:mb
for jj=1:nb
ac=acarr(ind1:Eob(n)-1);
ind1=Eob(n)+1;
kk=kk+1;
        ri(8*(ii-1)+1:8*ii,8*(jj-1)+1:8*jj)=dezz([ac zeros(1,64-length(ac))]);
end
end
iFq=round(blkproc(ri,[8 8],'idivq',Q));
iFf=blkproc(iFq,[8 8],'idct2');
iFf=round(iFf+128);
imshow(mat2gray(iFf)),title(' Reconstructed ')
itr=imresize(it,[128 128]);
imshow(imadd(mat2gray(iFf),mat2gray(itr)))
%%%%%%%%%%%%%%%%%%%%% Tamper Detection %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
den=[];
spar=[];
for i=1:length(D)
for j=1:length(D)
if D(i,j)==-1
den=[den 1];
else
spar=[spar 1];
end
end
end
denarea=numel(den);
sparea=numel(spar);
totalarea=denarea+sparea;
del=denarea/sparea;
gamma=totalarea/numel(LLW3);
if gamma==0
msgbox('Image is not tampered');
else
if gamma>0 && del<gamma
msgbox('Tampering is incidental');
else
if del>=gamma
msgbox('Tampering is malicious');
end
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% JPEG compression attack %%%%%
%
jattk
RESULTS:
[Figure: Binary image]
[Figure: Original image for watermarking]
[Figure: Original watermarked image]
[Figure: Reconstructed image]
[Figure: Performance analysis - PSNR vs. JPEG quality factor (quality 60-100, PSNR near 101.33 dB)]
CONCLUSION
Detailed experiments were conducted, and it was found that the proposed scheme is able to distinguish between malicious and incidental attacks and also recovers a good estimate of the original contents. The technique is highly secure because of the inclusion of three private keys at various stages of watermark generation. The proposed scheme also provides efficient authentication even for small-scale transformations of an image. Embedding two watermarks in this scheme makes it more effective at accurately detecting the tampered area and recovering an estimate of the image. Invisible tamper detection is another authentication criterion achieved in this secure semi-fragile watermarking method.