
High capacity steganographic algorithm based on payload adaptation and optimization

Septimiu Fabian Mare, Mircea Vladutiu and Lucian Prodan

Department of Computer Science, "Politehnica" University of Timisoara, Romania

[email protected], [email protected], [email protected]

Abstract-The paper introduces a new and enhanced high capacity steganographic algorithm based on our original smart LSB pixel mapping and data rearrangement design. Throughout our research we discovered that the best solutions for payload adaptation to the carrier image in our initial algorithms were found within the first 30 attempts, of which fewer than 4 were improved during the evaluation and optimization stage. All others became redundant after a certain number of initial breeds (embedding solutions) had been generated. In this paper we introduce a more complex solution generator and evaluator that joins the iterative stages of the previous design for the purpose of optimizing and identifying the best solution in an earlier evaluation stage, while featuring a larger validation region than the original design. By focusing on reducing the image degradation in the embedding process, with the original luminosity of the image as a quality metric, the algorithm is capable of maintaining even more of the original color quality. As outlined in the experimental results, this new approach raises the logical, visual, and statistical imperceptibility of the resulting image, therefore building an even stronger steganographic model.

Keywords: Steganography, LSB Matching, Payload adaptation

I. INTRODUCTION

In recent years, due to increasing computational power, standard cryptographic algorithms have been continuously shown to have weaknesses against statistical or mathematical reverse engineering. Security is not the same as it was ten years ago, because research into reverse engineering methods has been aided by the available processing power, leading to a tight race between research in cryptography and cryptanalysis. Encryption secures data by translating it into an unreadable state. In theory the transcoded data is considered secure because without the proper decryption keys, one cannot extract the original information. In reality, however, encrypted data is easily distinguishable among other non-secured data streams in a process called interception. Once intercepted, it can be only a matter of time until the data is extracted using cryptographic reverse engineering (cryptanalysis). Nowadays, researchers have been giving more attention to hiding the very existence of a secured communication by means of steganography. As opposed to cryptography, steganography is the art of transcoding sensitive data into another type of data that does not arouse any suspicion. Using conventional data formats circulated throughout the Internet, such as images, audio or video streams, steganography embeds the sensitive information into a carrier, in a way that is not easily identifiable by external, third-party listeners (either human or computerized).

As a research domain, steganography is vast and mostly unexplored. When using steganography, there are multiple ways data can be hidden within other data. Since images are among the most popular data types transmitted throughout the Internet, many researchers have opted for this data type for hiding purposes. LSB steganography is one of the most commonly used techniques. It takes the LSBs of every pixel channel and substitutes them with the secret data. This substitution is based on the premise that the human eye cannot perceive the entire color spectrum a typical high-color image can represent; the secret data is usually stored within the overhead color variations. This method usually serves as a reference for more advanced and complex algorithms. The major downside of the LSB method is its reduced storage capacity; a typical 24-bit image offers a capacity that represents 12.5% of the color table's size. This can be sufficient for short texts, but is definitely insufficient for larger files and other data types.
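
As an illustration of the baseline technique described above, the following sketch substitutes one payload bit into the LSB of each carrier byte. It is a minimal Python/NumPy example; the function names and array conventions are ours, not from the paper.

import numpy as np

def embed_lsb(carrier, payload_bits):
    """Classic LSB substitution: one payload bit replaces the LSB of each byte."""
    flat = carrier.astype(np.uint8).flatten()
    n = payload_bits.size
    assert n <= flat.size, "payload exceeds the 1-bit-per-byte capacity"
    flat[:n] = (flat[:n] & 0xFE) | payload_bits  # clear the LSB, OR in the payload bit
    return flat.reshape(carrier.shape)

def extract_lsb(stego, n_bits):
    """Read the payload back from the LSBs."""
    return (stego.flatten()[:n_bits] & 1).astype(np.uint8)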

In recent years, researchers have been increasing the number of LSBs that are changed in the embedding process. By doing so, the perceivable color is changed, implying the use of optimization and filtering techniques to trick the human eye into thinking there are no differences between the original and modified versions of the carrier image. Beyond that, by affecting the visible color spectrum and by replacing the naturally random color information of a natural image with logical data, computerized analysis (steganalysis) can be used to identify a suspicious image. As the number of altered LSBs increases, so does the computational power needed to pre- and post-process the image for quality enhancement, and it is not always possible to restore the visual, statistical, and logical imperceptibility of a natural image. The purpose of every steganographic algorithm is to maintain the naturalness of the original image after the embedding process, so that it is not distinguishable among other images, therefore successfully hiding the traces of a covert communication taking place.

II. RELATED WORK

The process of optimizing the image in order to enhance its resemblance to the original, after the secret payload data has been applied to it, is very difficult and often fails to improve the steganographic result. This is the reason researchers have recently been trying to find alternative methods for making optimizations within the embedding process itself [6][7][8][9]. Since the image has fine-grained color values in the LSBs of each color component, simply ignoring them and applying the payload results in degradation beyond the optimization capabilities of most filters. LSB matching algorithms have therefore been developed that try to find the optimal way to embed the secret data so that the payload does not alter the original information too much.

In [3] we proposed an improved LSB matching algorithm, capable of preserving the original color information to a higher degree compared to other state-of-the-art algorithms (OLSB [7] and OPAP [9]). The initial design functions somewhat similarly to a genetic algorithm, trying to find the payload remapping sequence that produces the minimum amount of change on the original image when embedding. Due to the repetitive manner in which the generator and evaluator functions were designed, the algorithm continuously issues better solutions when not limited by iteration cycles or time. The initial design served not only as a new algorithm; its structure was designed in a way that made it usable as a blueprint for future algorithms deriving from it. The modular way in which it was built allows easy separation of the entire creation, generation, analysis, optimization and embedding process. One of the strongest points of the original design was that the algorithm successfully embedded the data and issued images of very good visual quality without making use of any pre- or post-processing enhancement stages.

In our extended research, we identified that the original design had some isolated flaws with certain specific images. In [4] we introduced a modified version of the original design featuring a new solution generator that used simulated HDR lighting instead of the traditional PSNR approach of the original design. In addition, we introduced a post-processing control-bit correction in order to adjust places in which the colors had saturated due to an erroneous choice in the initial generator stage. Due to repeated wrong choices in the generation stage, the algorithm sometimes accumulated strongly modified pixels into a small region, building "brightness spots". Although the evaluator passed them as being valid in relation to the whole image, these spot regions were distinguishable on the resulting image. The algorithm increased the original design's image quality only by an average of 0.5 dB, which was a relatively small improvement. The strongest point of the extended version, however, was the error correction stage that eliminated the flawed cases in which the algorithm issued images with bright spots.

In our most recent research, we tested both designs ([3] and [4]) repeatedly, letting the algorithms run over a longer timespan. By doing this, we also tested the quality of solutions over time and identified a repetitive pattern in the solution generation process. In over 78% of the cases, the best embedding solution was found among the first 30 solutions generated. Most of the solutions generated afterwards were redundant and repetitive in quality and PSNR. Out of the first 30 solutions, the evaluation and optimization stage was able to enhance only 4. This immediately led to the conclusion that building better solutions relies almost entirely on the generator, and not on the evaluator and optimizer stage as previously stated in [3] and [4]. In a second analysis we identified that the size of the jump table is often a problem. Firstly, the addresses are variable in size, depending on how many image blocks are used for data mapping, leading to a loss in capacity because the table itself must be stored inside the image for data extraction purposes. Secondly, because the immediate optimization space is equal in size to the jump table, the more entries the jump table has, the larger the optimized region of the image gets, but at the cost of losing valuable storage space inside the image.

The original algorithm was designed to be highly configurable through many parameters, varying from the embedding ratio (the number of LSBs to reuse) to the jump table size and algorithm runtime. By testing different combinations of these parameters, our research indicated that our algorithm outperforms the other state-of-the-art algorithms in most cases, with the best results obtained when reusing 3 LSBs from each color component.

The current paper introduces a completely revised algorithm that follows the structure and stage separation first presented in [3], but which successfully optimizes the generation process and builds better solutions in 1/64 of the time needed by its predecessors.

III. THE PROPOSED METHOD

We propose an alternative LSB matching algorithm with a more complex solution generator. This algorithm has been designed to work with high color RGB images (24 bpp and above) with an embedding ratio of 3:8, meaning that 9 LSBs will be used for storage in each image pixel (3 bits in each of the Red, Green and Blue channels). These constraints have been introduced since the original design has proven most effective in this particular configuration.
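
The 3:8 ratio can be made concrete with a small sketch: 9 payload bits replace the 3 LSBs of the R, G and B channels of one pixel. The packing order (high bits to R) is an illustrative assumption of ours; the paper does not fix it here.

def embed_pixel_3lsb(r, g, b, bits9):
    """Write a 9-bit payload group into one pixel: 3 LSBs per 8-bit channel.
    Packing order (high bits to R) is assumed for illustration."""
    return ((r & 0xF8) | ((bits9 >> 6) & 0x07),
            (g & 0xF8) | ((bits9 >> 3) & 0x07),
            (b & 0xF8) | (bits9 & 0x07))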

The algorithm is built in a repetitive manner, generating better embedding strategies with each additional running loop. The embedding strategy consists of a jump table that indicates the exact order in which the data is embedded in order to produce the minimal amount of change. Since the original image already has bit combinations in the last 3 LSBs of each color channel, the data must be embedded in a manner that reduces the difference (delta) between the original and inserted bits. The smaller the delta, the greater the resemblance to the original color variation of the image, and the stronger the algorithm becomes. In a hypothetical ideal case, the secret data (payload) is identical on a binary level with the original color combinations; the image would then already contain the secret data without any need for embedding. This case is practically impossible, but the jump table tries to approach this goal by choosing strategies that minimize the change caused by applying the payload bits.
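
One plausible reading of the delta, sketched below, counts how many of the carried LSBs would actually flip if a 9-bit payload group were written into a pixel; the jump table then favors placements where this count is small. The function name and the bit-counting interpretation are ours, not the paper's exact definition.

def embedding_delta(pixel, bits9):
    """Count carried-LSB bits that would flip if bits9 were written into this pixel;
    zero means the pixel already contains the payload bits (the ideal case above)."""
    flips = 0
    for shift, channel in zip((6, 3, 0), pixel):
        old = channel & 0x07                 # the pixel's current 3 LSBs
        new = (bits9 >> shift) & 0x07        # the payload bits destined for this channel
        flips += bin(old ^ new).count("1")   # differing bit positions
    return flips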

Particular to this design, we have changed the core evaluation metric inside the solution generator to one more oriented toward human vision. Although an increased Peak Signal-to-Noise Ratio (PSNR) value indicates a higher quality conservation level between the original and modified image from a computerized perspective, the brightness spots identified in some cases in the original design are easily spotted by human vision. This observation pushed us into finding a better evaluation metric that satisfies not only computerized analysis but also human-vision-based analysis, by using alternative color space representations for the evaluation process. The human eye is known to be much better at identifying light and color variations over a larger pixel area than at distinguishing light spikes inside a smaller area. This led our research into using YCbCr as an alternative color representation for evaluation purposes. The most important component of this color representation is the Y channel, which represents the luminance of a color value (the amount of light as perceived by the human eye). The Y channel is defined as described below:

Y = 0.299 · R + 0.587 · G + 0.114 · B    (1)

The luminance channel quantifies the light in relation to its perceivable spectrum, with the green channel being the most important color component in terms of perceived brightness change. The generation stage chooses those solutions that produce the maximum Y-channel PSNR (i.e., the minimal luminance degradation) over smaller portions of the image.
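
A minimal sketch of this metric, assuming 8-bit channels and Eq. (1): compute Y for the original and the candidate block, then the PSNR over the Y channel alone. Function names are illustrative.

import numpy as np

def luminance(rgb):
    # Y per Eq. (1); rgb has shape (..., 3) with 8-bit channel values
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def y_psnr(original, modified):
    # PSNR computed on the luminance channel only; higher means less visible change
    mse = np.mean((luminance(original) - luminance(modified)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)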

The algorithm uses a series of 4 separate stages, as did its predecessor in [3], but in a modified manner, as observed in Figure 1.

[Figure: the initial and new workflows side by side; visible stage labels include "3. Solution enhancement" and "4. Solution embedding" in both workflows.]

Figure 1. Initial vs. new workflow

In the next paragraphs we will provide more information on the entire algorithm workflow as it passes through the four stages.

A. Image segmentation

In order to successfully remap the secret data in a less destructive manner over the original color information, the algorithm must have multiple embedding regions available and choose the region where the data produces the minimum amount of change. The image segmentation therefore represents a crucial step in the entire process, as it offers the necessary amount of binary combinations to choose from. The image is cut into smaller portions that can be analyzed and handled more easily in a two-step process, as shown in Figure 2, which involves tiling and partitioning.

First the cover image C (with an internal pixel matrix denoted as Cij) is divided into 4 equal tiles (denoted Sx, where x = 1..4). For faster processing, the pixel values corresponding to each of these tiles are stored within four separate data vectors (Sx) that handle each tile separately.

[Figure: the input image is tiled into four tiles, then each tile is partitioned into 8x8 pixel blocks.]

Figure 2. Image segmentation

Tiles are then cut into smaller portions in a process called partitioning. Just like in JPEG compression, the tiles are split into 8x8 blocks (64 pixels), which represent a relatively small visual group of pixels. Each tile thus consists of 8x8 pixel blocks denoted Sx.B[y], where x represents the tile address and y represents the block number. In Figure 2, an example image is shown having a size of 512x512 pixels. The tiling process cuts the image into four equal tiles, each sized 256x256 pixels. The partitioning step then splits each tile into 8x8 blocks, in this case yielding 1024 blocks per tile.
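
A sketch of this two-step segmentation under the stated parameters (a square input divisible into four tiles of 8x8 blocks); the helper names are ours.

import numpy as np

def segment(image, block=8):
    """Split an (H, W, 3) image into four tiles (S1..S4), each a list of 8x8 blocks."""
    h, w = image.shape[:2]
    tiles = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
             image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    def blocks(tile):
        th, tw = tile.shape[:2]
        return [tile[i:i + block, j:j + block]          # Sx.B[y], row-major order
                for i in range(0, th, block) for j in range(0, tw, block)]
    return [blocks(t) for t in tiles]

# A 512x512 input yields four 256x256 tiles with (256 // 8) ** 2 = 1024 blocks each.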

As opposed to the original design, where the data was inserted one pixel at a time, this algorithm uses a 64-pixel buffer for performance and optimization purposes, as described later in the paper.

This stage of the algorithm is intended to split the image into multiple pixel streams to choose from when trying to find the best embedding path. Since the algorithm uses 3 bits per color channel, the maximum number of available bit combinations is 8: 000, 001, 010, 011, 100, 101, 110 and 111. The tiling process covers a maximum of 4 combinations, which is less than the theoretical minimum needed to statistically assure a balanced distribution of data. For this reason we introduced a third virtual direction, which we call the layer ordering address, that indicates the order in which the data is inserted within a single pixel. Because a color pixel is represented as an RGB triplet, the data can be inserted using different orderings of the color channels. Out of the total number of possible orderings (6) we chose only 3, as shown in Table 1.

Table 1. Channel order

Channel order    Binary Address
RGB              00
GBR              01
BRG              10
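
Expressed as index permutations over an (R, G, B) triplet, the three orderings of Table 1 could look as follows. Note that the binary addresses for GBR and BRG are our inference from the surviving table entry and the 2-bit Level 1 address, not values confirmed by the source.

CHANNEL_ORDERS = {
    0b00: (0, 1, 2),  # RGB
    0b01: (1, 2, 0),  # GBR (inferred address)
    0b10: (2, 0, 1),  # BRG (inferred address)
}

def reorder(pixel, address):
    """Apply a Level 1 channel-reordering address to an (R, G, B) triplet."""
    i, j, k = CHANNEL_ORDERS[address]
    return (pixel[i], pixel[j], pixel[k])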


[Figure: block diagram of the combined solution generation, evaluation and optimization stage, with the four tiles and the secret data as inputs.]

Figure 3. The solution generator, evaluator and optimizer

Using this additional addressing, the algorithm has 12 embedding combinations to choose from at each step of the generation process.

B. Solution generation, evaluation and optimization

This stage is a recursive method that generates the best embedding strategy in the form of jump table entries. A table entry in the solution generator stage represents a two-value address that consists of:

- Level 1 Address (channel reordering address)
- Level 2 Address (tile address)

The four tiles S1..S4 and the secret data D represent the main inputs of this stage, as shown in Figure 3. Each tile internally stores the current active pixel block Sx.bi (active block index). The algorithm starts by reserving a certain number of blocks where the jump table will be stored. The number of blocks is always a multiple of 4 in order to assure a balanced distribution among the four image tiles. The minimum number of blocks required by this algorithm is 4, leaving room for 576 table entries. For the image in Figure 2 this means that by reserving 6.82% of the image's total storage capacity we can optimize up to 54.71% of the entire storage capacity, representing an important improvement over the original design. The secret data is streamed 72 bytes (576 bits) at a time:

576 bits = 64 pixels · 3 channels · 3 bits/channel    (2)

As opposed to the initial design, where the data is streamed and analyzed one bit group at a time, this algorithm uses a larger buffer to increase the overall effectiveness of each choice over a larger area, turning the once near-sighted generator of [3] into a far-sighted one. There are two direct advantages of this approach: the algorithm optimizes more information, because it tests the effect on multiple cascading pixels, and it dramatically reduces the size of the jump table. In the original design, optimizing 64 pixels would require 192 table entries with an address length of 3 bits each, totaling 576 bits for the storage of the jump table alone. By using a single 4-bit address in this case the same effect is achieved, meaning that the algorithm optimizes the same area as its predecessor while utilizing 99.3% less storage space for the jump table. These changes not only improve the general effectiveness of the entire algorithm, but also render the separate evaluation stage of the previous design unnecessary. The current algorithm is capable of finding better solutions more easily and more rapidly, and the jump table becomes negligibly small.
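
The 99.3% figure can be checked directly from the numbers quoted above:

old_bits = 192 * 3   # original design: 192 entries x 3-bit addresses per 64 pixels = 576 bits
new_bits = 4         # new design: one 4-bit entry covers the same 64 pixels
print(f"{1 - new_bits / old_bits:.1%}")  # prints "99.3%", the quoted storage reduction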

1) Level 1 Address Generation

The currently active block (Sx.bi) of each of the four image tiles is divided into its individual color channels R, G and B. Based on these channels, for each active block of every tile, the algorithm determines the lightness component Y as the reference lightness (RL). The RL is used to determine the alteration delta after the data has been embedded. The three color channels are ordered, as indicated above, into RGB, GBR and BRG using three parallel reordering streams. The algorithm then virtually embeds (VE) the secret data onto the currently available block (Sx.bi) of each image tile (Sx). The embedding is simulated in order to determine the modified lightness (ML) of the resulting block in case that specific block is chosen for embedding. This part of the evaluation takes place on each tile and for each reordering sequence, totaling 12 possible choices. Each tile picks out the reordering sequence that produced the minimum loss of luminosity in terms of PSNR (the greater the PSNR, the higher the overall quality), reducing the 12 choices to just 4. The best reordering address (RA) is saved temporarily for the next step of the process in the form of a Level 1 Address.

2) Level 2 Address Generation

In the second part of the process the algorithm picks out the tile with the smallest degradation of lightness Y. From the 4 remaining choices only one is picked and marked as being the solution, forming the Level 2 Address. Together with the RA corresponding to the picked image tile, the Level 1 and Level 2 addresses are joined and added as a jump table entry.

The tile that produces the minimum amount of change advances its state to the next available free block (Sx.bi++), while all other tiles remain unchanged. The data stream also advances to the next 72 bytes of data to be inserted.
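
Putting the two address levels together, one generator iteration could be sketched as below. The structure and names are ours, and the virtual-embedding and Y-PSNR helpers are assumed from the earlier sketches; the 12 candidates per step (4 tiles x 3 orders) are evaluated here in a single nested search that yields the same winner as the two-level selection described above.

def generate_entry(tiles, active, payload_bits, virtual_embed, y_psnr):
    """One generator step: pick the (tile, channel order) pair whose simulated
    embedding of the current 576-bit payload group degrades luminance least."""
    best = None
    for t in range(4):                    # the four tiles S1..S4 (Level 2 candidates)
        block = tiles[t][active[t]]       # currently active block Sx.bi
        for ra in (0b00, 0b01, 0b10):     # RGB, GBR, BRG (Level 1 candidates)
            candidate = virtual_embed(block, payload_bits, ra)  # simulated embedding
            score = y_psnr(block, candidate)                    # luminance PSNR vs. original
            if best is None or score > best[0]:                 # higher PSNR = smaller change
                best = (score, t, ra)
    _, t, ra = best
    active[t] += 1                        # winning tile advances to its next free block
    return (t << 2) | ra                  # 4-bit entry: 2-bit tile address + 2-bit order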

The solution generator stage iterates for new solutions, limited only by execution or iteration time limits, constantly searching for better solutions. One complete solution is considered obtained once the jump table has been filled. Since the size of the jump table has been reduced by 99.3% relative to the original design, we can use more table entries, allowing the algorithm to optimize even larger portions of the image and therefore issue better solutions than its predecessors [3] and [4].

C. Solution Enhancement

In this design, with the introduction of a more advanced solution generator, optimizer and evaluator, there is no need for a separate evaluation step. The enhancement step is kept more as a guideline for future implementations, but has not been explicitly built into the testing algorithm used for the current paper. As later shown in the experimental results, the algorithm is capable of increasing the overall PSNR of the resulting image by up to 3 dB over its predecessors without the need for any additional solution enhancement steps (as in [3]) or post-processing methods (as in [4]).

D. Solution Embedding

Once the algorithm has reached this stage, the optimization process is finished and the best solution, in the form of a full jump table, has been determined. The algorithm first uses the reserved blocks of each image tile to embed the jump table itself, for data extraction purposes. The payload is then embedded sequentially in groups of 72 bytes, in the order indicated by the jump table. The embedding stage extracts a 4-bit address from the jump table representing a full addressing sequence. The first two bits represent the image tile address. Once the tile is selected, the algorithm reads its current state, identifies the currently available image block and selects it. Based on this block, and using the layer ordering address, the block's pixels are read in the right order (RGB, GBR or BRG), as shown in Figure 4.

[Figure: decoding of a jump table entry into the tile address, the current block address and the channel read order.]

Figure 4. Jump table address decoding

The solution embedding stage cycles several times through the jump table, consuming the secret data bits. Once there are no data bits left to embed, the algorithm finishes and outputs a steganographic image.

The extraction process is simple and requires only the image segmentation stage and the solution embedding stage in reverse. First the extraction algorithm must rebuild the four image tiles and the image blocks accordingly. After this step the jump table is extracted from the reserved blocks (assuming the size of the jump table is known; this is the only parameter that needs to be transmitted), after which, following the exact sequence illustrated in Figure 4, the last 3 bits of each color channel are read and put together to reassemble the original secret information.
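
A sketch of the extraction loop under the same assumptions as the earlier helpers (NumPy blocks, the reorder table, 3 LSBs per channel); the names are ours. The receiver mirrors the embedder's per-tile block counters so that both sides visit blocks in the same order.

def extract_groups(tiles, jump_table, active):
    """Reverse of the embedder: walk the jump table, visiting blocks in the same
    order and reading back the 3 LSBs of each channel in the recorded order."""
    groups = []
    for entry in jump_table:
        t, ra = entry >> 2, entry & 0b11       # tile address, channel-order address
        block = tiles[t][active[t]]            # same block the embedder used
        active[t] += 1                         # mirror the embedder's block advance
        for pixel in block.reshape(-1, 3):     # 64 pixels per 8x8 block
            for channel in reorder(tuple(int(c) for c in pixel), ra):
                groups.append(channel & 0x07)  # one 3-bit payload group per channel
    return groups                              # concatenate groups to rebuild the payload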

IV. EXPERIMENTAL RESULTS

In this section we present the experimental results of our improved method. We have conducted two types of tests: basic and thorough.

A. Basic test

The basic test was conducted on the four classic test images (Figure 5) with an embedding ratio of 3 bits per color channel. This test was chosen in order to highlight the performance of the algorithm in relation to other state-of-the-art methods such as Simple LSB (SLSB), Optimal LSB (OLSB), Optimal Pixel Adjustment Process (OPAP) and our previously introduced methods [3][4].

[Figure: (a) Lena, (b) Baboon, (c) Jet, (d) Peppers.]

Figure 5. The four cover images used


The results of this initial test are highlighted in Table 2.

Table 2. Performance comparison (dB)

SLSB      OLSB      OPAP      Smart LSB [3]   Smart HDR [4]   New Method
37.9402   38.2194   40.2401   42.5083         42.9735         44.3701

The new and improved algorithm registered higher average PSNR values on all four test images than our other two algorithms ([3] and [4]), with a gain of +1.39 dB over Smart HDR and +1.86 dB over Smart LSB. The gain in quality translates into higher color conservation in the embedding process, even when the payload remains the same across all algorithms.

B. Thorough test

In our extended test we used 100 full HD images (1920x1080 pixels) with a randomly generated payload of 2050 KB (occupying 90% of the total steganographic capacity) and a jump table size that does not exceed 10% of the total steganographic capacity the carrier medium offers. The tests were conducted over a longer timespan, allowing the algorithms to find their best solutions. For testing we compare our current method with our older algorithms [3] and [4].
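
Under the stated parameters (9 bits per pixel, a 2050 KB payload), the quoted 90% occupancy can be verified:

capacity_bits = 1920 * 1080 * 9      # full HD carrier, 3 LSBs per channel = 9 bits/pixel
payload_bits = 2050 * 1024 * 8       # the 2050 KB random payload
print(payload_bits / capacity_bits)  # ~0.90, matching the quoted 90% occupancy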

Table 3. Performance comparison (dB)

                 Min       Max       Avg
Smart LSB [3]    39.7636   42.7258   41.8694
Smart HDR [4]    40.3120   43.0984   42.9713
New method       42.0175   46.0027   44.4951

Table 3 illustrates the performance of the new method over a larger test basis. The testing was programmed to register the minimum, maximum and average quality obtained after the algorithm finished searching for the best solution on each of the 100 image cases. The new algorithm registered an all-time high PSNR value of 46.0027 dB, meaning a quality increase of over 3 dB compared to the best result of the initial design. The average quality registered by the new method is +2.63 dB over Smart LSB and +1.52 dB over Smart HDR.

Table 4. Performance comparison (dB gain)

                           Min       Max       Avg
Gain over Smart LSB [3]    +2.2539   +3.2769   +2.6267
Gain over Smart HDR [4]    +1.7055   +2.9043   +1.5238

Table 4 shows the gain of the new method over the existing ones. All of the tests were conducted using the same timeframe limitation. Due to the omission of the separate evaluation stage, and because the generator has a wider solution checking range (64-pixel blocks) in our current design, the algorithm usually finished execution in less than half the time needed by [3] and [4].

V. CONCLUSIONS

In this paper we have introduced an improved steganographic algorithm that successfully outperforms some of the best-known state-of-the-art algorithms used in testing. Based on an algorithm structure first introduced by us in [3] and later extended and enhanced in [4], the current design successfully obtains the best results in terms of steganographic image quality and processing speed.

Aside from the quality perspective, the current design reduces the size of the jump table needed for extraction, leaving more room for secret data. With the fusion of the generation and evaluation stages, the algorithm can find better solutions than its predecessors in less execution time and with a smaller capacity penalty.

With even lower noise rates, the current algorithm raises the visual, statistical and logical resistance of the steganographic image against reverse engineering, therefore successfully fulfilling the stealth requirements of a steganographic algorithm.

For our future work we will try to build a 4-LSB version of the current algorithm that yields comparable results in terms of PSNR. We will also try to find alternative ways to store the jump table in the noisy areas of the image (identified by edge and noise detection algorithms).

REFERENCES

[1] C. Patsakis, N. Aroukatos, "A DCT steganographic classifier based on compressive sensing", 7th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2011, pp. 169-172

[2] L. Wang, Y. Zhang, J. Feng, "On the Euclidean Distance of Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, pp. 1334-1339

[3] S. F. Mare, M. Vladutiu, L. Prodan, "Decreasing change impact using smart LSB pixel mapping and data rearrangement", 11th IEEE International Conference on Computer and Information Technology, 2011, pp. 269-276

[4] S. F. Mare, M. Vladutiu, L. Prodan, "HDR based steganographic algorithm", 17th International Symposium for Design and Technology in Electronic Packaging, 2011, pp. 333-338

[5] W. Fraczek, W. Mazurczyk, K. Szczypiorski, "How Hidden Can Be Even More Hidden?", 3rd International Conference on Multimedia Information Networking and Security, 2011, pp. 581-585

[6] C.-C. Chang and H.-W. Tseng, "Data Hiding in Images by Hybrid LSB Substitution", Third International Conference on Multimedia and Ubiquitous Engineering, 2009, pp. 360-363

[7] C.-C. Chang, J.-Y. Hsiao, C.-S. Chan, "Finding optimal least-significant-bit substitution in image hiding by dynamic programming strategy", Pattern Recognition, Vol. 36, pp. 1583-1595, 2003

[8] C.-K. Chan, L. M. Cheng, "Improved hiding data in images by optimal LSB substitution and genetic algorithm", IEE Electronics Letters, Vol. 37, pp. 1017-1018, 2001

[9] R.-Z. Wang, C.-F. Lin and J.-C. Lin, "Image hiding by optimal LSB substitution and genetic algorithm", Pattern Recognition, Vol. 34, pp. 671-683, 2001

[10] K. B. S. Kumar, T. Khasim, K. B. Raja, "Dual Transform Technique for Robust Steganography", International Conference on Computational Intelligence and Communication Systems, 2011, pp. 310-314

[11] W. Li, C. Niam, R. Jinlin, Y. Hongyue, "Histogram-Preserving Steganography Using Maximum Flow Algorithms", 2nd International Conference on Digital Manufacturing & Automation, 2011, pp. 590-593

[12] W. Bender, D. Gruhl, N. Morimoto, A. Lu, "Techniques for data hiding", IBM Systems Journal, 1996, pp. 313-336


