HadamardTransforms
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Bellingham, Washington USA
HadamardTransforms
Sos Agaian
Hakob Sarukhanyan
Karen Egiazarian
Jaakko Astola
To our families for their love, affection, encouragement, and understanding.
Library of Congress Cataloging-in-Publication Data
Hadamard transforms / Sos Agaian ... [et al.].
p. cm. – (Press monograph ; 207)
Includes bibliographical references and index.
ISBN 978-0-8194-8647-9
1. Hadamard matrices. I. Agaian, S. S.
QA166.4.H33 2011
512.9′434–dc22
2011002632
Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org
Copyright © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE)
All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher.

The content of this book reflects the work and thoughts of the author(s). Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon. For the latest updates about this title, please visit the book's page on our website.
Printed in the United States of America.
First printing
Contents
Preface............................................................................................................... xi
Acknowledgments ........................................................................................... xiii
Chapter 1 Classical Hadamard Matrices and Arrays...................................... 1
1.1 Sylvester or Walsh–Hadamard Matrices ................................................. 1
1.2 Walsh–Paley Matrices ................................................................................. 11
1.3 Walsh and Related Systems ....................................................................... 13
1.3.1 Walsh system ................................................................................... 15
1.3.2 Cal–Sal orthogonal system ............................................................ 17
1.3.3 The Haar system ............................................................................. 24
1.3.4 The modified Haar “Hadamard ordering” .................................. 29
1.3.5 Normalized Haar transforms ........................................................ 30
1.3.6 Generalized Haar transforms ....................................................... 32
1.3.7 Complex Haar transform ............................................................... 32
1.3.8 kn-point Haar transforms .............................................................. 32
1.4 Hadamard Matrices and Related Problems ............................................ 34
1.5 Complex Hadamard Matrices ................................................................... 38
1.5.1 Complex Sylvester–Hadamard transform .................................. 39
1.5.2 Complex WHT ................................................................................ 41
1.5.3 Complex Paley–Hadamard transform ......................................... 42
1.5.4 Complex Walsh transform ............................................................ 42
References ....................................................................................................................... 45
Chapter 2 Fast Classical Discrete Orthogonal Transforms............................ 51
2.1 Matrix-Based Fast DOT Algorithms ....................................................... 52
2.2 Fast Walsh–Hadamard Transform ........................................................... 54
2.3 Fast Walsh–Paley Transform .................................................................... 62
2.4 Cal–Sal Fast Transform ............................................................................. 70
2.5 Fast Complex HTs ...................................................................................... 75
2.6 Fast Haar Transform ................................................................................... 79
References ........................................................................................................... 86
Chapter 3 Discrete Orthogonal Transforms and Hadamard Matrices............ 93
3.1 Fast DOTs via the WHT ................................................................................ 94
3.2 FFT Implementation ....................................................................................... 95
3.3 Fast Hartley Transform .................................................................................. 106
3.4 Fast Cosine Transform ................................................................................... 115
3.5 Fast Haar Transform........................................................................................ 122
3.6 Integer Slant Transforms ............................................................................... 129
3.6.1 Slant HTs............................................................................................ 130
3.6.2 Parametric slant HT ........................................................................ 131
3.7 Construction of Sequential Integer Slant HTs........................................ 136
3.7.1 Fast algorithms ................................................................................. 141
3.7.2 Examples of slant-transform matrices ...................................... 142
3.7.3 Iterative parametric slant Haar transform construction....... 143
References ....................................................................................................................... 147
Chapter 4 “Plug-In Template” Method: Williamson–Hadamard Matrices ...... 155
4.1 Williamson–Hadamard Matrices ................................................................ 156
4.2 Construction of 8-Williamson Matrices ................................................... 168
4.3 Williamson Matrices from Regular Sequences...................................... 173
References ....................................................................................................................... 182
Chapter 5 Fast Williamson–Hadamard Transforms........................................ 189
5.1 Construction of Hadamard Matrices Using Williamson Matrices... 189
5.2 Parametric Williamson Matrices and Block Representation of Williamson–Hadamard Matrices ............................................................... 192
5.3 Fast Block Williamson–Hadamard Transform....................................... 195
5.4 Multiplicative-Theorem-Based Williamson–Hadamard Matrices... 199
5.5 Multiplicative-Theorem-Based Fast Williamson–Hadamard Transforms ................................................................................................... 202
5.6 Complexity and Comparison........................................................................ 206
5.6.1 Complexity of block-cyclic, block-symmetric Williamson–Hadamard transform ............................................................................... 206
5.6.2 Complexity of the HT from the multiplicative theorem..... 208
References ....................................................................................................................... 209
Chapter 6 Skew Williamson–Hadamard Transforms ...................................... 213
6.1 Skew Hadamard Matrices ............................................................................. 213
6.1.1 Properties of the skew-symmetric matrices ............................ 213
6.2 Skew-Symmetric Williamson Matrices .................................................... 215
6.3 Block Representation of Skew-Symmetric Williamson–Hadamard Matrices ....................................................................................................... 217
6.4 Fast Block-Cyclic, Skew-Symmetric Williamson–Hadamard Transform ..................................................................................................... 219
6.5 Block-Cyclic, Skew-Symmetric Fast Williamson–Hadamard Transform in Add/Shift Architectures ...................................................... 222
References ....................................................................................................................... 224
Chapter 7 Decomposition of Hadamard Matrices .......................................... 229
7.1 Decomposition of Hadamard Matrices by (+1, −1) Vectors ............... 230
7.2 Decomposition of Hadamard Matrices and Their Classification ........ 237
7.3 Multiplicative Theorems of Orthogonal Arrays and Hadamard Matrix Construction .................................................................................... 243
References ........................................................................................................... 247
Chapter 8 Fast Hadamard Transforms for Arbitrary Orders........................... 249
8.1 Hadamard Matrix Construction Algorithms .......................................... 249
8.2 Hadamard Matrix Vector Representation ............................................... 251
8.3 FHT of Order n ≡ 0 (mod 4) .................................................................... 256
8.4 FHT via Four-Vector Representation ...................................................... 263
8.5 FHT of Order N ≡ 0 (mod 4) on Shift/Add Architectures .................. 266
8.6 Complexities of Developed Algorithms ................................................. 268
8.6.1 Complexity of the general algorithm ......................................... 268
8.6.2 Complexity of the general algorithm with shifts ..................... 270
References ....................................................................................................................... 270
Chapter 9 Orthogonal Arrays .......................................................................... 275
9.1 ODs ............................................................................................................... 275
9.1.1 ODs in the complex domain ........................................................ 278
9.2 Baumert–Hall Arrays ................................................................................. 280
9.3 A Matrices .................................................................................................... 282
9.4 Goethals–Seidel Arrays ............................................................................. 289
9.5 Plotkin Arrays ............................................................................................. 293
9.6 Welch Arrays ............................................................................................... 295
References ........................................................................................................... 301
Chapter 10 Higher-Dimensional Hadamard Matrices ....................................... 309
10.1 3D Hadamard Matrices ........................................................................... 311
10.2 3D Williamson–Hadamard Matrices .................................................... 312
10.3 3D Hadamard Matrices of Order 4n + 2 .............................................. 318
10.4 Fast 3D WHTs .......................................................................................... 325
10.5 Operations with Higher-Dimensional Complex Matrices ................. 329
10.6 3D Complex HTs ..................................................................................... 332
10.7 Construction of (λ, μ) High-Dimensional Generalized Hadamard Matrices ..................................................................................... 335
References ........................................................................................................... 339
Chapter 11 Extended Hadamard Matrices ........................................................ 343
11.1 Generalized Hadamard Matrices ........................................................... 343
11.1.1 Introduction and statement of problems ................................ 343
11.1.2 Some necessary conditions for the existence of generalized Hadamard matrices ........................................................................ 346
11.1.3 Construction of generalized Hadamard matrices of new orders ............................................................................................... 347
11.1.4 Generalized Yang matrices and construction of generalized Hadamard matrices ................................................................ 350
11.2 Chrestenson Transform .................................................................................. 351
11.2.1 Rademacher functions.................................................................... 351
11.2.2 Example of Rademacher matrices ............................................. 353
11.2.2.1 Generalized Rademacher functions .......................... 354
11.2.2.2 The Rademacher–Walsh transforms......................... 355
11.2.2.3 Chrestenson functions and matrices ......................... 357
11.3 Chrestenson Transform Algorithms........................................................... 359
11.3.1 Chrestenson transform of order 3n ............................................. 359
11.3.2 Chrestenson transform of order 5n ............................................. 361
11.4 Fast Generalized Haar Transforms............................................................. 365
11.4.1 Generalized Haar functions.......................................................... 365
11.4.2 2n-point Haar transform................................................................. 367
11.4.3 3n-point generalized Haar transform......................................... 369
11.4.4 4n-point generalized Haar transform......................................... 371
11.4.5 5n-point generalized Haar transform......................................... 374
References ....................................................................................................................... 379
Chapter 12 Jacket Hadamard Matrices ............................................................. 383
12.1 Introduction to Jacket Matrices ................................................................... 383
12.1.1 Example of jacket matrices .......................................................... 383
12.1.2 Properties of jacket matrices........................................................ 385
12.2 Weighted Sylvester–Hadamard Matrices ................................................. 389
12.3 Parametric Reverse Jacket Matrices .......................................................... 392
12.3.1 Properties of parametric reverse jacket matrices................... 394
12.4 Construction of Special-Type Parametric Reverse Jacket Matrices ....................................................................................................... 399
12.5 Fast Parametric Reverse Jacket Transform.............................................. 404
12.5.1 Fast 4 × 4 parametric reverse jacket transform...................... 405
12.5.1.1 One-parameter case........................................................ 405
12.5.1.2 Case of three parameters .............................................. 407
12.5.2 Fast 8 × 8 parametric reverse jacket transform...................... 409
12.5.2.1 Case of two parameters................................................. 409
12.5.2.2 Case of three parameters .............................................. 409
12.5.2.3 Case of four parameters................................................ 411
12.5.2.4 Case of five parameters................................................. 413
12.5.2.5 Case of six parameters .................................................. 414
References ....................................................................................................................... 416
Chapter 13 Applications of Hadamard Matrices in Communication Systems ........ 419
13.1 Hadamard Matrices and Communication Systems............................... 419
13.1.1 Hadamard matrices and error-correction codes..................... 419
13.1.2 Overview of error-correcting codes ........................................ 419
13.1.3 How to create a linear code .......................................................... 425
13.1.4 Hadamard code................................................................................. 427
13.1.5 Graphical representation of the (7, 3, 4) Hadamard code .. 431
13.1.6 Levenshtein constructions............................................................. 431
13.1.7 Uniquely decodable base codes .................................................. 435
13.1.8 Shortened code construction and application to data coding and decoding .................................................................... 438
13.2 Space–Time Codes from Hadamard Matrices........................................ 440
13.2.1 The general wireless system model........................................... 440
13.2.2 Orthogonal array and linear processing design ..................... 442
13.2.3 Design of space–time codes from the Hadamard matrix ... 444
References ....................................................................................................................... 445
Chapter 14 Randomization of Discrete Orthogonal Transforms and Encryption ..... 449
14.1 Preliminaries ...................................................................................................... 450
14.1.1 Matrix forms of DHT, DFT, DCT, and other DOTs............. 450
14.1.2 Cryptography .................................................................................... 452
14.2 Randomization of Discrete Orthogonal Transforms ............................ 453
14.2.1 The theorem of randomization of discrete orthogonal transforms ....................................................................................... 454
14.2.2 Discussions on the square matrices P and Q.......................... 454
14.2.3 Examples of randomized transform matrix Ms..................... 456
14.2.4 Transform properties and features ............................................. 459
14.2.5 Examples of randomized discrete orthogonal transforms.. 459
14.3 Encryption Applications ................................................................................ 460
14.3.1 1D data encryption .......................................................................... 462
14.3.2 2D data encryption and beyond .................................................. 463
14.3.3 Examples of image encryption.................................................... 464
14.3.3.1 Key space analysis.......................................................... 464
14.3.3.2 Confusion property......................................................... 465
14.3.3.3 Diffusion property .......................................................... 466
References ....................................................................................................................... 470
Appendix ........................................................................................................... 475
A.1 Elements of Matrix Theory........................................................................... 475
A.2 First Rows of Cyclic Symmetric Williamson-Type Matrices of Order n, n = 3, 5, . . . , 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 [2] .... 479
A.3 First Block Rows of the Block-Cyclic, Block-Symmetric (BCBS) Williamson–Hadamard Matrices of Order 4n, n = 3, 5, . . . , 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 [2] ........................................................ 484
A.4 First Rows of Cyclic Skew-Symmetric Williamson-Type Matrices of Order n, n = 3, 5, . . . , 33, 35 ............................................................... 487
A.5 First Block Rows of Skew-Symmetric Block Williamson–Hadamard Matrices of Order 4n, n = 3, 5, . . . , 33, 35 ............................. 494
References ....................................................................................................................... 498
Index................................................................................................................... 499
Preface

The Hadamard matrix and Hadamard transform are fundamental problem-solving tools used in a wide spectrum of scientific disciplines and technologies, including communication systems, signal and image processing (signal representation, coding, filtering, recognition, and watermarking), and digital logic (Boolean function analysis and synthesis, and fault-tolerant system design). They are thus a key component of modern information technology. In communication, the most important applications include error-correcting codes, spreading sequences, and cryptography. Other relevant applications include analysis of stock market data, combinatorics, experimental design, quantum computing, environmental monitoring, and many problems in chemistry, physics, optics, and geophysical analysis.
Hadamard matrices have attracted close attention in recent years, owing to their numerous known and new promising applications. In 1893, Jacques Hadamard conjectured that for any integer m divisible by 4, there is a Hadamard matrix of order m. Despite the efforts of a number of individuals, this conjecture remains unproved, even though it is widely believed to be true. Historically, the problem goes back to James Joseph Sylvester in 1867.
The purpose of this book is to bring together different topics concerning current developments in Hadamard matrices, transforms, and their applications. Hadamard Transforms distinguishes itself from other books on the same topic because it achieves the following:
• Explains the state of our knowledge of Hadamard matrices, transforms, and their important generalizations, emphasizing intuitive understanding while providing the mathematical foundations and descriptions of fast transform algorithms.
• Provides a concise introduction to the theory and practice of Hadamard matrices and transforms. The full appearance of this theory has been realized only recently, as the authors have pioneered, for example, multiplication theorems, 4m-point fast Hadamard transform algorithms, and decomposition of Hadamard matrices by vectors.
• Offers a comprehensive and unified coverage of Hadamard matrices with a balance between theory and implementation. Each chapter begins with the basics of the theory, progresses to more advanced topics, and then discusses cutting-edge implementation techniques.
Acknowledgments
This work has been achieved through long-term research collaboration among the following three institutions:

• Department of Electrical Engineering, University of Texas at San Antonio, USA.
• Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia (IIAP NAS RA), Yerevan, Armenia.
• The Tampere International Center for Signal Processing (TICSP), Tampere University of Technology, Tampere, Finland.

This work is partially supported by NSF Grant No. HRD-0932339. The authors are grateful to these organizations.

Special thanks are due to Mrs. Zoya Melkumyan (IIAP) and to Mrs. Pirko Ruotsalainen of the Office for International Affairs of TICSP for their great help in organizing several of S. Agaian's and H. Sarukhanyan's trips and visits to Finland. Thanks go to Mrs. Carol Costello for her careful editing of our manuscript. We also express our appreciation to Tim Lamkins, Dara Burrows, Gwen Weerts, Beth Kelley, and Scott McNeill, members of the SPIE editing team.
Chapter 1
Classical Hadamard Matrices and Arrays
In this chapter, we introduce the primary nonsinusoidal orthogonal transforms, such as Hadamard, Haar, etc., which are extensively reported in the literature.1–81
The basic advantages of the Hadamard transform (HT) are as follows:
• Multiplication by the HT involves only an algebraic sign assignment.
• Digital circuits can generate Hadamard functions because they assume only the values +1 and −1.
• Computer processing can be accomplished using addition, which is very fast, rather than multiplication.
• The convergence of these systems is very good for representing piecewise-constant or continuous functions.
• The simplicity and efficiency of the transforms is found in a variety of practical applications.1–20 These include, for example, digital signal and image processing, such as compact signal representation, filtering, coding, data compression, and recognition;4,14,31,40,54,55,57,60,61,66,69,73–75,77 speech and biomedical signal analysis;1,13,14,17,31,35,46,48,67,68 and digital communication.3,4,11,22,31,45,49,65,70,74,76,78,79 A prime example is the code-division multiple-access (CDMA) cellular standard IS-95, which uses a 64-Hadamard matrix, in addition to experimental and combinatorial designs, digital logic, statistics, error-correcting codes, masks for spectroscopic analysis, encryption, and several other fields.3,5,6,8,9,15,18,55,68 Among other possible applications that can be added to this list are analysis of stock market data, combinatorics, error-correcting codes, spreading sequences, experimental design, quantum computing, environmental monitoring, chemistry, physics, optics, combinatorial designs, and geophysical analysis.1,3,6,7,12,14–19,25,27,33,34,38,48,49 In this chapter, we introduce the commonly used Sylvester, Walsh–Hadamard, Walsh, Walsh–Paley, and complex HTs.
1.1 Sylvester or Walsh–Hadamard Matrices
In this section, we present the Walsh–Hadamard transform (WHT) as an example of a simple, so-called HT. The concepts introduced have close analogs in other
Figure 1.1 James Joseph Sylvester (1814–1897, London, England) is known especially for his work on matrices, determinants, algebraic invariants, and the theory of numbers. In 1878, he founded the American Journal of Mathematics, the first mathematical journal in the United States (from: www.gap-system.org/~history/Biographies).
transforms. The WHT, which is of considerable practical importance, is based on the Sylvester matrix.
In 1867, in his seminal paper, “Thoughts on inverse orthogonal matrices, simultaneous sign-successions and tessellated pavements in two or more colors with application to Newton’s rule, ornamental tile work and the theory of numbers,”20 Sylvester (see Fig. 1.1) constructed special matrices (later called Sylvester matrices). He constructed these matrices recursively as
H_{2^n} = \begin{pmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{pmatrix}, \quad n = 1, 2, 3, \ldots, \quad \text{where } H_1 = [1],   (1.1)

which means that a Hadamard matrix of order 2n may be obtained from a known Hadamard matrix H of order n simply by juxtaposing four copies of H and negating one of them. It is easy to confirm that H_{2^k}, k = 1, 2, 3, \ldots, is a square 2^k \times 2^k matrix whose elements are +1 or −1, and H_{2^k} H_{2^k}^T = 2^k I_{2^k}.
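The recursion of Eq. (1.1) and the orthogonality property above are easy to check numerically. The book gives no code; the following is a minimal NumPy sketch (the function name `sylvester` is ours):

```python
import numpy as np

def sylvester(n):
    """Build the Sylvester-type Hadamard matrix H_{2^n} by the
    recursion of Eq. (1.1): H_{2^n} = [[H, H], [H, -H]], H_1 = [1]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])  # juxtapose four copies, negate one
    return H

H8 = sylvester(3)
# Orthogonality: H_{2^k} H_{2^k}^T = 2^k I_{2^k}
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
```

Each call doubles the order, so `sylvester(n)` returns a 2^n × 2^n matrix of ±1 entries.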
Definition: A square matrix H_n of order n with elements −1 and +1 is called a Hadamard matrix if the following equation holds true:

H_n H_n^T = H_n^T H_n = n I_n,   (1.2)

where H_n^T is the transpose of H_n, and I_n is the identity matrix of order n.
Sometimes, the Walsh–Hadamard system is called the natural-ordered Hadamard system. We present the Sylvester-type matrices of orders 2, 4, 8, and 16, as follows:
H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \quad
H_4 = \begin{pmatrix}
+ & + & + & + \\
+ & - & + & - \\
+ & + & - & - \\
+ & - & - & +
\end{pmatrix},   (1.3a)
Figure 1.2 Sylvester-type Hadamard matrices of orders 2, 4, 8, 16, and 32.
H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix},   (1.3b)
H_{16} = \begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & - \\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & - \\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & + \\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & + \\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & - \\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & + \\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & - \\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & - \\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}.   (1.3c)
The symbols + and − denote +1 and −1, respectively, throughout the book. Figure 1.2 displays the Sylvester-type Hadamard matrices of orders 2, 4, 8, 16, and 32. The black squares correspond to the value +1, and the white squares correspond to the value −1.
Table 1.1 Construction of the (1, 3) element of the Sylvester matrix.

m\k   00   01   10   11
00    •    •    •    •
01    •    •    •    −1
10    •    •    •    •
11    •    •    •    •
Sylvester matrices can be constructed from two Sylvester matrices using the Kronecker product (see the Appendix concerning the Kronecker product of matrices and their properties),
H_4 = H_2 \otimes H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix} = \begin{pmatrix}
+ & + & + & + \\
+ & - & + & - \\
+ & + & - & - \\
+ & - & - & +
\end{pmatrix},   (1.4a)

H_8 = H_2 \otimes H_4 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix}
+ & + & + & + \\
+ & - & + & - \\
+ & + & - & - \\
+ & - & - & +
\end{pmatrix} = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix},   (1.4b)

and so on,

H_{2^n} = H_2 \otimes H_2 \otimes \cdots \otimes H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \cdots \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}.   (1.5)
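The Kronecker-product construction of Eqs. (1.4) and (1.5) maps directly onto NumPy's `np.kron`. A minimal sketch (the helper name `sylvester_kron` is ours):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])

def sylvester_kron(n):
    """H_{2^n} as the n-fold Kronecker product H_2 (x) ... (x) H_2, Eq. (1.5)."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(H2, H)  # one Kronecker factor per step, as in Eq. (1.4)
    return H

# H_2 (x) H_2 reproduces the 4 x 4 matrix of Eq. (1.4a)
assert sylvester_kron(2).tolist() == [[1, 1, 1, 1],
                                      [1, -1, 1, -1],
                                      [1, 1, -1, -1],
                                      [1, -1, -1, 1]]
```

Since `np.kron(H2, H)` produces the block structure [[H, H], [H, −H]], this construction coincides with the recursion of Eq. (1.1).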
We now present another approach to Hadamard matrix construction. We start from a simple example. Let us consider the element walh(1, 3) of the matrix H_4 = [walh(m, k)]_{m,k=0}^{3} at the intersection of its second row (m = 1) and fourth column (k = 3). The binary representations of m = 1 and k = 3 are (01) and (11); hence v = ⟨m, k⟩ = 0 · 1 + 1 · 1 = 1, and walh(1, 3) = (−1)^1 = −1. In other words, the value of the element walh(1, 3) in the Sylvester matrix H_4 can be obtained by raising −1 to the sum of the element-wise products of the binary expansions of 1 and 3 (see Table 1.1). Similarly, we find the remaining elements in Table 1.2.
In general, the element walh(m, k) of the Sylvester matrix can be expressed as

walh(m, k) = (−1)^(∑_{i=0}^{n−1} m_i k_i),   (1.6)
Figure 1.3 The first 16 discrete Walsh–Hadamard basis functions.
Table 1.2 Constructed 4 × 4 Sylvester matrix.

k/m   00   01   10   11
00    1    1    1    1
01    1   −1    1   −1
10    1    1   −1   −1
11    1   −1   −1    1
where m_i, k_i are the bits of the binary expansions of m, k = 0, 1, . . . , 2^n − 1,

m = m_{n−1} 2^{n−1} + m_{n−2} 2^{n−2} + · · · + m_1 2^1 + m_0 2^0,
k = k_{n−1} 2^{n−1} + k_{n−2} 2^{n−2} + · · · + k_1 2^1 + k_0 2^0.   (1.7)
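Equations (1.6)–(1.7) reduce to a parity count of the common one-bits of m and k, so each matrix element can be computed directly; a small sketch (the function name `walh` follows the text):

```python
def walh(m, k):
    """Eq. (1.6): walh(m, k) = (-1)**(sum of m_i * k_i) = parity of m & k."""
    return -1 if bin(m & k).count("1") % 2 else 1

# Reproduces Table 1.2 for N = 4:
table = [[walh(m, k) for k in range(4)] for m in range(4)]
# table == [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
```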
The set of functions {walh(0, k), walh(1, k), . . . , walh(n − 1, k)}, where

walh(0, k) = {walh(0, 0), walh(0, 1), . . . , walh(0, n − 1)},
walh(1, k) = {walh(1, 0), walh(1, 1), . . . , walh(1, n − 1)},
. . .
walh(n − 1, k) = {walh(n − 1, 0), walh(n − 1, 1), . . . , walh(n − 1, n − 1)},   (1.8)

is called the discrete Walsh–Hadamard system, or the discrete Walsh–Hadamard basis. The first 16 Walsh functions are shown in Fig. 1.3.
Figure 1.4 The first eight continuous Walsh–Hadamard functions on the interval [0, 1).
The set of functions {walh(0, t), walh(1, t), . . . , walh(n − 1, t)} is called the continuous Walsh–Hadamard system. The discrete Walsh–Hadamard system can be generated by sampling the continuous Walsh–Hadamard functions {walh(k, t), k = 0, 1, 2, . . . , n − 1} at t = 0, 1/N, 2/N, . . . , (N − 1)/N. The first eight continuous Walsh–Hadamard functions are shown in Fig. 1.4.
The discrete 1D forward and inverse WHTs of the signal x[k], k = 0, 1, . . . , N − 1, are defined as

y = (1/√N) H_N x (forward WHT) and x = (1/√N) H_N y (inverse WHT),   (1.9)

where H_N is a Walsh–Hadamard matrix of order N, and the rows of this matrix represent the discrete Walsh–Hadamard basis functions.
In element form, the discrete 1D forward and inverse WHTs of the signal x[k], k = 0, 1, . . . , N − 1, are defined, respectively, as

y[k] = (1/√N) ∑_{n=0}^{N−1} x[n] walh[n, k],   k = 0, 1, . . . , N − 1,   (1.10)

x[n] = (1/√N) ∑_{k=0}^{N−1} y[k] walh[n, k],   n = 0, 1, . . . , N − 1.   (1.11)
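A hedged sketch of Eqs. (1.10)–(1.11): with the 1/√N normalization, the same routine serves as both forward and inverse transform, since the Hadamard matrix is symmetric (the names `walh` and `wht` are ours):

```python
import math

def walh(m, k):
    return -1 if bin(m & k).count("1") % 2 else 1

def wht(x):
    """Forward/inverse 1D WHT of Eqs. (1.10)-(1.11); H is symmetric."""
    n = len(x)
    s = 1.0 / math.sqrt(n)
    return [s * sum(x[j] * walh(j, k) for j in range(n)) for k in range(n)]

x = [9, 7, 7, 5]
y = wht(x)        # forward: [14.0, 2.0, 2.0, 0.0]
x_back = wht(y)   # applying it again recovers [9.0, 7.0, 7.0, 5.0]
```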
Figure 1.5 Example of the representation of a 2D signal (image) by a 2D Walsh–Hadamard system.
Example of the representation of a signal by the Hadamard system

Let the signal vector x[n] be (9, 7, 7, 5)^T. Then x[n] may be expressed as a combination of the Hadamard basis functions:

( 9 )       ( 1 )       (  1 )       (  1 )       (  1 )
( 7 )  =  7 ( 1 )  +  1 (  1 )  +  0 ( −1 )  +  1 ( −1 )
( 7 )       ( 1 )       ( −1 )       ( −1 )       (  1 )
( 5 )       ( 1 )       ( −1 )       (  1 )       ( −1 )

or

( 9 )     ( 1  1  1  1 ) ( 7 )
( 7 )  =  ( 1  1 −1 −1 ) ( 1 )
( 7 )     ( 1 −1 −1  1 ) ( 0 )
( 5 )     ( 1 −1  1 −1 ) ( 1 ).   (1.12)
The 2D forward and inverse WHTs are defined as

y[k, m] = (1/N²) ∑_{n=0}^{N−1} ∑_{j=0}^{N−1} x[n, j] walh[n, k] walh[j, m],   k, m = 0, 1, . . . , N − 1,

x[n, j] = ∑_{k=0}^{N−1} ∑_{m=0}^{N−1} y[k, m] walh[n, k] walh[j, m],   n, j = 0, 1, . . . , N − 1.   (1.13)

Equivalently, the discrete 2D forward and inverse WHTs of a 2D signal X are defined in matrix form as

Y = (1/N²) H_N X H_N^T,
X = H_N^T Y H_N.   (1.14)
The 2D WHT can be computed via 1D WHTs. In other words, the 1D WHT is evaluated for each column of the input data (array) X to produce a new array A; then the 1D WHT is evaluated for each row of A to produce Y, as in Fig. 1.5.
Let a 2D signal have the form

X = (9 7; 5 3).   (1.15)
We want to show that the signal X may be expressed as a combination of the 2D Walsh–Hadamard basis functions. First, we define

Y = (1/4) H2 X H2^T = (1/4) (1 1; 1 −1) (9 7; 5 3) (1 1; 1 −1) = (6 1; 2 0).   (1.16)
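Eq. (1.16) can be checked numerically; a sketch with plain nested lists (`matmul` is an illustrative helper of ours):

```python
def matmul(a, b):
    """Product of two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

H2 = [[1, 1], [1, -1]]   # H2 equals its own transpose
X = [[9, 7], [5, 3]]

Y = [[v / 4 for v in row] for row in matmul(matmul(H2, X), H2)]
# Y == [[6.0, 1.0], [2.0, 0.0]], as in Eq. (1.16)
```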
Thus, the 2D Walsh–Hadamard discrete basis functions are obtained from the 1D basis functions as follows:

(1; 1)(1 1) = (1 1; 1 1),     (1; 1)(1 −1) = (1 −1; 1 −1),
(1; −1)(1 1) = (1 1; −1 −1),  (1; −1)(1 −1) = (1 −1; −1 1).   (1.17)
Therefore, the signal X may be expressed as

X = (9 7; 5 3) = 6 × (1 1; 1 1) + 1 × (1 −1; 1 −1) + 2 × (1 1; −1 −1) + 0 × (1 −1; −1 1).   (1.18)
Graphically, the representation of the 2D signal by a combination of the Walsh–Hadamard functions may be represented as

[the decomposition of Eq. (1.18) depicted as black/white image blocks]   (1.19)

where +1 is white and −1 is black. The basis function (1 1; 1 1) represents the average intensity level of the four signal elements. The basis function (1 −1; 1 −1) represents the detail consisting of one zero crossing in the horizontal direction. The basis function (1 1; −1 −1) represents one zero crossing in the vertical direction. The basis function (1 −1; −1 1) represents one zero crossing in both the horizontal and vertical directions.
Selected Properties

• The row vectors of H define a complete set of orthogonal functions.
• The elements in the first column and the first row are all equal to one. The elements in all other rows and columns are evenly divided between positive and negative.
• The WHT matrix is orthogonal; this means that the inner product of any two of its distinct rows is equal to zero. This is equivalent to HH^T = N I_N. For example,

H2 H2^T = (+ +; + −)(+ +; + −) = (2 0; 0 2) = 2 I2.   (1.20)
• The Walsh–Hadamard matrix is symmetric (H_N^T = H_N), and H_N^{−1} = (1/N) H_N.
• |det H_N| = N^{N/2}.
• For example, det(+ +; + −) = (1)(−1) − (1)(1) = −2, and expanding det(H4) along its first row,

det(H4) = det(− + −; + − −; − − +) − det(+ + −; + − −; + − +) + det(+ − −; + + −; + − +) − det(+ − +; + + −; + − −) = 16.   (1.21)
• There is a very simple method to generate the Hadamard matrix H_N of order N (N = 2^n) directly.79 Let us use a binary counting matrix B_n that has N = 2^n rows and n columns. For example, the first four counting matrices (shown transposed) are

B1^T = (0 1),

B2^T = (0 0 1 1
        0 1 0 1),

B3^T = (0 0 0 0 1 1 1 1
        0 0 1 1 0 0 1 1
        0 1 0 1 0 1 0 1),

B4^T = (0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
        0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
        0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
        0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1).   (1.22)
It can be shown that if, in the matrix B_n B_n^T (with the products taken modulo 2), we replace each 0 with +1 and each 1 with −1, we obtain the Hadamard matrix H_{2^n} of order 2^n. For n = 1, we obtain

B1 B1^T = (0; 1)(0 1) = (0 0; 0 1)  ⇒  replace 0 → +1, 1 → −1  ⇒  (+1 +1; +1 −1) = H2,   (1.23)

B2 B2^T = (0 0; 0 1; 1 0; 1 1)(0 0 1 1; 0 1 0 1) mod 2 = (0 0 0 0; 0 1 0 1; 0 0 1 1; 0 1 1 0)
⇒  replace [0, 1] → [+1, −1]  ⇒  (+ + + +; + − + −; + + − −; + − − +) = H4.   (1.24)
• The elements of the WHT matrix can be calculated as

walh(m, k) = exp( jπ ∑_{i=0}^{n−1} m_i k_i ) = (−1)^(∑_{i=0}^{n−1} m_i k_i),   j = √−1, since exp( jπr) = cos(πr) + j sin(πr).   (1.25)
• The discrete system {walh(m, k)}, m, k = 0, 1, . . . , N − 1, is called the Walsh–Hadamard system. It can be shown that

walh(n, r + N) = walh(n, r).   (1.26)

Also, the Walsh–Hadamard system forms a complete system of orthogonal functions, i.e.,

(1/N) ∑_{n=0}^{N−1} walh(k, n) walh(s, n) = { 1, if k = s; 0, otherwise },   k, s = 0, 1, 2, . . . , N − 1.   (1.27)
In order to verify the orthogonality of a matrix experimentally, multiply every row of the matrix by every other row, element by element, and examine the sum. For two distinct rows the sum is zero (the rows have nothing in common, because everything cancels out).
• The eigenvalues of the matrix H2 = (+ +; + −) are +√2 and −√2; then, using the properties of the Kronecker product of matrices, we conclude that the eigenvalues of a Walsh–Hadamard matrix of order N equal ±√N. The eigendecomposition of the matrix H2 is given by

H2 = U D U^{−1}, where U = (cos(π/8)  −sin(π/8); sin(π/8)  cos(π/8)),   D = (√2  0; 0  −√2),   (1.28)

or

U = (1/2) (√(2 + √2)  −√(2 − √2); √(2 − √2)  √(2 + √2)).   (1.29)
It has been shown (see the proof in the Appendix) that if A is an N × N matrix with A x_n = a_n x_n, n = 1, 2, . . . , N, and B is an M × M matrix with B y_m = b_m y_m, m = 1, 2, . . . , M, then

(A ⊗ B)(x_n ⊗ y_m) = a_n b_m (x_n ⊗ y_m).   (1.30)

This means that if {x_n} is a Karhunen–Loeve transform (KLT)46 for A and {y_m} is a KLT for B, then {x_n ⊗ y_m} is a KLT for A ⊗ B. Using this fact, we may find the eigenvalues and the eigendecomposition of the matrix H_N.
• If Hf and Hg are WHTs of vectors f and g, respectively, then H( f ∗ g) = Hf · Hg, where ∗ is the dyadic convolution of the two vectors f and g, defined by v(m) = ∑_{k=0}^{N−1} f (k) g(m ⊕ k); here m ⊕ k is the decimal number whose binary expansion is [(m0 + k0) mod 2, (m1 + k1) mod 2, . . . , (m_{n−1} + k_{n−1}) mod 2], and m, k are given by Eq. (1.7).
An example serves to illustrate these relationships. Let f^T = ( f0, f1, f2, f3) and g^T = (g0, g1, g2, g3). Compute

F = H4 f = (+ + + +; + − + −; + + − −; + − − +)( f0; f1; f2; f3)
  = ( f0 + f1 + f2 + f3;  f0 − f1 + f2 − f3;  f0 + f1 − f2 − f3;  f0 − f1 − f2 + f3) = (F0; F1; F2; F3),   (1.31a)
G = H4 g = (+ + + +; + − + −; + + − −; + − − +)(g0; g1; g2; g3)
  = (g0 + g1 + g2 + g3;  g0 − g1 + g2 − g3;  g0 + g1 − g2 − g3;  g0 − g1 − g2 + g3) = (G0; G1; G2; G3).   (1.31b)
Now, compute v_m = ∑_{k=0}^{3} f_k g(m ⊕ k). We find that

v0 = f0 g0 + f1 g1 + f2 g2 + f3 g3,   v1 = f0 g1 + f1 g0 + f2 g3 + f3 g2,
v2 = f0 g2 + f1 g3 + f2 g0 + f3 g1,   v3 = f0 g3 + f1 g2 + f2 g1 + f3 g0.   (1.32)
Now, we can check that

H4 v = (+ + + +; + − + −; + + − −; + − − +)(v0; v1; v2; v3)
  = (v0 + v1 + v2 + v3;  v0 − v1 + v2 − v3;  v0 + v1 − v2 − v3;  v0 − v1 − v2 + v3) = (F0 G0; F1 G1; F2 G2; F3 G3).   (1.33)
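The dyadic-convolution theorem of Eqs. (1.31)–(1.33) is easy to verify numerically with the unnormalized H4; f and g below are arbitrary test vectors of our choosing:

```python
def walh(m, k):
    return -1 if bin(m & k).count("1") % 2 else 1

def hada(x):
    """Unnormalized WHT, as used in Eqs. (1.31)-(1.33)."""
    n = len(x)
    return [sum(x[j] * walh(j, k) for j in range(n)) for k in range(n)]

f, g = [1, 2, 3, 4], [5, 6, 7, 8]
v = [sum(f[k] * g[m ^ k] for k in range(4)) for m in range(4)]  # dyadic convolution
F, G = hada(f), hada(g)
assert hada(v) == [a * b for a, b in zip(F, G)]  # H(f*g) = Hf . Hg, Eq. (1.33)
```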
Let x be an integer vector. Then y = (y0, y1, . . . , y_{N−1}) = H_N x is also an integer vector. Moreover, if y0 is odd (even), then all y_i (i = 1, 2, . . . , N − 1) are odd (even).
1.2 Walsh–Paley Matrices
The Walsh–Paley system (sometimes called the dyadic-ordered Walsh–Hadamard matrix), introduced by Walsh in 1923,21 is constructed recursively by

P_N = (P_{N/2} ⊗ (1 1); P_{N/2} ⊗ (1 −1)),   where P1 = (1), N = 2^n, n = 1, 2, . . . .   (1.34)
Below, we present the Paley matrices of orders 2, 4, 8, and 16 (see Fig. 1.6).
For n = 1, we have

P2 = (P1 ⊗ (+ +); P1 ⊗ (+ −)) = ((+) ⊗ (+ +); (+) ⊗ (+ −)) = (+ +; + −).   (1.35)
Figure 1.6 Walsh–Paley matrices of orders 2, 4, 8, 16, and 32.
For n = 2, from the definition of the Kronecker product, we obtain

P4 = (P2 ⊗ (+ +); P2 ⊗ (+ −)) = ((+ +; + −) ⊗ (+ +); (+ +; + −) ⊗ (+ −)) =
( + + + + )
( + + − − )
( + − + − )
( + − − + ).   (1.36)
For n = 3, we have

P8 = (P4 ⊗ (+ +); P4 ⊗ (+ −)) =
( + + + + + + + + )
( + + + + − − − − )
( + + − − + + − − )
( + + − − − − + + )
( + − + − + − + − )
( + − + − − + − + )
( + − − + + − − + )
( + − − + − + + − ).   (1.37)
Similarly, for n = 4, we obtain

P16 = (P8 ⊗ (+ +); P8 ⊗ (+ −)) =
( + + + + + + + + + + + + + + + + )
( + + + + + + + + − − − − − − − − )
( + + + + − − − − + + + + − − − − )
( + + + + − − − − − − − − + + + + )
( + + − − + + − − + + − − + + − − )
( + + − − + + − − − − + + − − + + )
( + + − − − − + + + + − − − − + + )
( + + − − − − + + − − + + + + − − )
( + − + − + − + − + − + − + − + − )
( + − + − + − + − − + − + − + − + )
( + − + − − + − + + − + − − + − + )
( + − + − − + − + − + − + + − + − )
( + − − + + − − + + − − + + − − + )
( + − − + + − − + − + + − − + + − )
( + − − + − + + − + − − + − + + − )
( + − − + − + + − − + + − + − − + ).   (1.38)
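The recursion of Eq. (1.34) can be sketched in a few lines of Python: P_N stacks P_{N/2} ⊗ (1 1) on top of P_{N/2} ⊗ (1 −1) (the name `paley` is ours):

```python
def paley(n):
    """Walsh-Paley matrix of order 2**n by the recursion of Eq. (1.34)."""
    p = [[1]]
    for _ in range(n):
        top = [[v for x in row for v in (x, x)] for row in p]    # row (x) (1 1)
        bot = [[v for x in row for v in (x, -x)] for row in p]   # row (x) (1 -1)
        p = top + bot
    return p

P4 = paley(2)
# P4 == [[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]], Eq. (1.36)
```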
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Classical Hadamard Matrices and Arrays 13
The elements of the Walsh–Paley matrix of order N = 2^n can be expressed as

walp( j, k) = (−1)^(∑_{m=0}^{n−1} (k_{n−m} + k_{n−m−1}) j_m),   (1.39)

where j_m, k_m are the m’th bits in the binary representations of j and k (with k_n = 0).

Let us consider an example. Let n = 3; then, from Eq. (1.39), we obtain

walp( j, k) = (−1)^(k2 j0 + (k2 + k1) j1 + (k1 + k0) j2).   (1.40)

Because 3 = 0 · 2² + 1 · 2¹ + 1 · 2⁰, i.e., 3 = (011) with j2 = 0, j1 = 1, j0 = 1, and 5 = (101), the exponent is 1 · 1 + (1 + 0) · 1 + (0 + 1) · 0 = 2, so walp(3, 5) = (−1)² = 1. Similarly, we can generate the other elements of a Walsh–Paley matrix of order 8.
Walsh–Paley matrices have properties similar to those of Walsh–Hadamard matrices. The set of functions {walp(0, k), walp(1, k), . . . , walp(n − 1, k)}, where

walp(0, k) = {walp(0, 0), walp(0, 1), . . . , walp(0, n − 1)},
walp(1, k) = {walp(1, 0), walp(1, 1), . . . , walp(1, n − 1)},
. . .
walp(n − 1, k) = {walp(n − 1, 0), walp(n − 1, 1), . . . , walp(n − 1, n − 1)},   (1.41)

is called the discrete Walsh–Paley function system, or the discrete Walsh–Paley basis. The set of functions {walp(0, t), walp(1, t), . . . , walp(n − 1, t)} is called the continuous Walsh–Paley system.46
The 16-point discrete Walsh–Paley basis functions and the first eight continuous Walsh–Paley functions are shown in Figs. 1.7 and 1.8, respectively. The 16 discrete Walsh–Paley basis functions given in Fig. 1.7 can be generated by sampling the continuous Walsh–Paley functions at t = 0, 1/16, 2/16, 3/16, . . . , 15/16.
Comparing Figs. 1.8 and 1.4, we can find the following relationship between the Walsh–Hadamard and the Walsh–Paley basis functions:

walh(0, t) = walp(0, t),   walh(4, t) = walp(1, t),
walh(1, t) = walp(4, t),   walh(5, t) = walp(5, t),
walh(2, t) = walp(2, t),   walh(6, t) = walp(3, t),
walh(3, t) = walp(6, t),   walh(7, t) = walp(7, t).   (1.42)

This means that most properties of the Walsh–Hadamard matrices and functions carry over to the Walsh–Paley basis functions.
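One reading of the table in Eq. (1.42) is that the two orderings are related by bit reversal of the row index; the sketch below checks this for order 8, building the natural-order rows from Eq. (1.6) and the Paley rows from Eq. (1.34):

```python
def walh_row(m, N):
    """Row m of the natural-order (Sylvester) matrix, Eq. (1.6)."""
    return [-1 if bin(m & k).count("1") % 2 else 1 for k in range(N)]

def paley(n):
    p = [[1]]
    for _ in range(n):
        p = ([[v for x in r for v in (x, x)] for r in p] +
             [[v for x in r for v in (x, -x)] for r in p])
    return p

n, N = 3, 8
P = paley(n)
for m in range(N):
    r = int(format(m, "0%db" % n)[::-1], 2)  # bit-reversed index
    assert walh_row(m, N) == P[r]            # e.g. walh(1, t) = walp(4, t)
```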
1.3 Walsh and Related Systems
The Walsh system differs from the Walsh–Hadamard system in the order of the rows. Below, we present the construction of the Walsh matrix and
Figure 1.7 16-point discrete Walsh–Paley basis functions.
Figure 1.8 The first eight continuous Walsh–Paley functions in the interval [0, 1).
the Walsh system. On the basis of this system, we derive two important orthogonal systems, namely the Cal–Sal and Haar systems, to be discussed in the following sections. Both of these systems have applications in signal/image processing, communications, and digital logic.1–79 The Walsh function was introduced in 1923 by Walsh.21
1.3.1 Walsh system

Walsh matrices are often described as discrete analogues of the cosine and sine functions. The Walsh matrix is constructed recursively by
W_N = [W2 ⊗ A1, (W2 R) ⊗ A2, . . . , W2 ⊗ A_{(N/2)−1}, (W2 R) ⊗ A_{N/2}],   (1.43)

where W2 = (+ +; + −), R = (0 1; 1 0), and A_i is the i’th column of the Walsh matrix of order N/2, N = 2^n.
Example: Walsh matrices of orders 4 and 8 have the following structures:

W4 =
( + + + + )
( + + − − )
( + − − + )
( + − + − ),

W8 =
( + + + + + + + + )
( + + + + − − − − )
( + + − − − − + + )
( + + − − + + − − )
( + − − + + − − + )
( + − − + − + + − )
( + − + − − + − + )
( + − + − + − + − ).   (1.44)
Indeed,

W4 = (W2 ⊗ (+; +), (W2 (0 1; 1 0)) ⊗ (+; −))
   = ((+ +; + −) ⊗ (+; +), ((+ +; + −)(0 1; 1 0)) ⊗ (+; −))
   = ( + + + + )
     ( + + − − )
     ( + − − + )
     ( + − + − ),   (1.45a)
W8 = (W2 ⊗ (+; +; +; +), (W2 R) ⊗ (+; +; −; −), W2 ⊗ (+; −; −; +), (W2 R) ⊗ (+; −; +; −))
   = ((+ +; + −) ⊗ (+; +; +; +), (+ +; − +) ⊗ (+; +; −; −), (+ +; + −) ⊗ (+; −; −; +), (+ +; − +) ⊗ (+; −; +; −)).   (1.45b)
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
16 Chapter 1
Figure 1.9 The first eight continuous Walsh functions in the interval [0, 1).
The elements of Walsh matrices can also be expressed as

walw( j, k) = (−1)^(∑_{i=0}^{n−1} ( j_{n−i−1} + j_{n−i}) k_i),   (1.46)

where N = 2^n, j_n = 0, and j_m, k_m are the m’th bits in the binary representations of j and k, respectively.
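Eq. (1.46) turned into code (a sketch; `walw` follows the text, with j_n = 0): the exponent pairs adjacent bits of j, and row j of the resulting matrix has exactly j sign changes, confirming the sequency ordering.

```python
def walw(j, k, n):
    """Eq. (1.46): pairs of adjacent bits of j against the bits of k."""
    jb = [(j >> i) & 1 for i in range(n)] + [0]   # j_0 .. j_{n-1}, j_n = 0
    kb = [(k >> i) & 1 for i in range(n)]         # k_0 .. k_{n-1}
    e = sum((jb[n - i - 1] + jb[n - i]) * kb[i] for i in range(n))
    return -1 if e % 2 else 1

n, N = 3, 8
W8 = [[walw(j, k, n) for k in range(N)] for j in range(N)]
changes = [sum(r[i] != r[i + 1] for i in range(N - 1)) for r in W8]
# changes == [0, 1, 2, 3, 4, 5, 6, 7]: row j has j sign changes
```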
The set of functions {walw(0, k), walw(1, k), . . . , walw(n − 1, k)}, where

walw(0, k) = {walw(0, 0), walw(0, 1), walw(0, 2), . . . , walw(0, n − 1)},
walw(1, k) = {walw(1, 0), walw(1, 1), walw(1, 2), . . . , walw(1, n − 1)},
. . .
walw(n − 1, k) = {walw(n − 1, 0), walw(n − 1, 1), walw(n − 1, 2), . . . , walw(n − 1, n − 1)},   (1.47)

is called a discrete Walsh system, or discrete Walsh basis functions. The set of functions {walw(0, t), walw(1, t), . . . , walw(n − 1, t)}, t ∈ [0, 1), is called the continuous Walsh functions (Fig. 1.9).
The continuous Walsh functions can be defined by the recursion

walw(2m + p, t) = walw[m, 2(t + 1/2)] + (−1)^{m+p} walw[m, 2(t − 1/2)],   t ∈ [0, 1),   (1.48)

where p = 0, 1, m = 0, 1, 2, . . ., and walw(0, t) = 1 for all t ∈ [0, 1).
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Classical Hadamard Matrices and Arrays 17
Note that the Walsh functions may also be constructed by

walw(n ⊕ m, t) = walw(n, t) walw(m, t),   t ∈ [0, 1),   (1.49)

where the symbol ⊕ denotes the bitwise exclusive-OR, i.e., 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0.

For example, let n = 3 = 0 · 2² + 1 · 2¹ + 1 · 2⁰ and m = 5 = 1 · 2² + 0 · 2¹ + 1 · 2⁰; then

3 ⊕ 5 = (011) ⊕ (101) = (0 ⊕ 1)(1 ⊕ 0)(1 ⊕ 1) = (110) = 1 · 2² + 1 · 2¹ + 0 · 2⁰ = 6.   (1.50)

Hence, we obtain

walw(3, t) walw(5, t) = walw(3 ⊕ 5, t) = walw(6, t).   (1.51)
1.3.2 Cal–Sal orthogonal system
A Cal–Sal function system can be defined as

cal( j, k) = walw(2 j, k),   j = 0, 1, 2, . . . ,
sal( j, k) = walw(2 j − 1, k),   j = 1, 2, . . . ,   (1.52)

where walw( j, k) is the ( j, k)’th element of the Walsh matrix defined in Eq. (1.46).

The Cal–Sal matrix elements can be calculated by T( j, k) = (−1)^(∑_{i=0}^{n−1} p_i k_i), where j, k = 0, 1, . . . , 2^n − 1 and p0 = j_{n−1}, p1 = j_{n−2} + j_{n−1}, . . . , p_{n−2} = j1 + j2, p_{n−1} = j0 + j1.
Cal–Sal Hadamard matrices of orders 4 and 8 have the following form:

T2 = (+ +; + −),

T4 =
( + + + + )
( + − − + )
( + − + − )
( + + − − ),

T8 =
( + + + + + + + + )
( + + − − − − + + )
( + − − + + − − + )
( + − + − − + − + )
( + − + − + − + − )
( + − − + − + + − )
( + + − − + + − − )
( + + + + − − − − ).   (1.53)
Cal–Sal matrices of order 2, 4, 8, 16, and 32 are shown in Fig. 1.10, and the firsteight continuous Cal–Sal functions are shown in Fig. 1.11.
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
18 Chapter 1
Figure 1.10 Cal–Sal matrices of order 2, 4, 8, 16, and 32.
Figure 1.11 The first eight continuous Cal–Sal functions in the interval [0, 1).
The following example shows the relationship between a 4 × 4 Cal–Sal Hadamard matrix and continuous Cal–Sal functions:

( + + + + )   → walw(0, t)
( + − − + )   → cal(1, t)
( + − + − )   → sal(2, t)
( + + − − )   → sal(1, t).   (1.54)
The Cal–Sal system has many useful properties. The Walsh functions can be constructed by

walw(n ⊕ m, t) = walw(n, t) walw(m, t),   (1.55)
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Classical Hadamard Matrices and Arrays 19
where the symbol ⊕ denotes the bitwise exclusive-OR, i.e.,

0 ⊕ 0 = 0,   0 ⊕ 1 = 1,   1 ⊕ 0 = 1,   and 1 ⊕ 1 = 0.   (1.56)
For example, let

n = 3 = 0 · 2² + 1 · 2¹ + 1 · 2⁰   and   m = 5 = 1 · 2² + 0 · 2¹ + 1 · 2⁰;   (1.57)

then

n ⊕ m = (0 ⊕ 1) · 2² + (1 ⊕ 0) · 2¹ + (1 ⊕ 1) · 2⁰ = 1 · 2² + 1 · 2¹ + 0 · 2⁰ = 6;   (1.58)

thus,

walw(n ⊕ m, t) = walw(3 ⊕ 5, t) = walw(3, t) walw(5, t) = walw(6, t).   (1.59)

In particular, from this expression we have

walw(m ⊕ m, t) = walw(m, t) walw(m, t) = walw(0, t).   (1.60)
Furthermore,

cal(0, k) = walw(0, k),
cal( j, k) cal(m, k) = cal( j ⊕ m, k),
sal( j, k) sal(m, k) = cal(( j − 1) ⊕ (m − 1), k),
sal( j, k) cal(m, k) = sal(m ⊕ ( j − 1) + 1, k),
cal( j, k) sal(m, k) = sal( j ⊕ (m − 1) + 1, k),
cal(2^m, k − 2^{−m−2}) = sal(2^m, k),
sal(2^m, k − 2^{−m−2}) = −cal(2^m, k).   (1.61)
Also note the sequency-ordered Hadamard functions. In Fourier analysis, “frequency” can be interpreted physically as the number of cycles per unit of time, which may also be interpreted as one half of the number of zero crossings per unit time. In analogy to the relationship of frequency to the number of zero crossings or sign changes in periodic functions, Harmuth22 defines sequency as one half of the average number of zero crossings. The sequency s_i of the Walsh basis system is given by

s0 = 0,   and   s_i = { i/2, if i is even; (i + 1)/2, if i is odd. }   (1.62)
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
20 Chapter 1
We may classify the Hadamard systems with respect to sequency as follows:

• Sequency (Walsh) order is directly related to frequency and is best suited for communication and signal-processing applications such as filtering, spectral analysis, recognition, and others.
• Paley (dyadic) order has analytical and computational advantages and is used for most mathematical investigations.
• Hadamard (natural) order has computational benefits and is simple to generate and understand.

Sometimes the number of sign changes along a column/row of a Hadamard matrix is called the sequency of that column/row. Below, we present examples of different Walsh–Hadamard matrices with the corresponding sequencies listed on the right side.
(a) Natural-ordered Walsh–Hadamard matrix
Hh(8) =
( + + + + + + + + )   0
( + − + − + − + − )   7
( + + − − + + − − )   3
( + − − + + − − + )   4
( + + + + − − − − )   1
( + − + − − + − + )   6
( + + − − − − + + )   2
( + − − + − + + − )   5    (1.63a)
(b) Sequency-ordered Walsh matrix
Hw(8) =
( + + + + + + + + )   0
( + + + + − − − − )   1
( + + − − − − + + )   1
( + + − − + + − − )   2
( + − − + + − − + )   2
( + − − + − + + − )   3
( + − + − − + − + )   3
( + − + − + − + − )   4    (1.63b)
(c) Dyadic-ordered Paley matrix
Hp(8) =
( + + + + + + + + )   0
( + + + + − − − − )   1
( + + − − + + − − )   2
( + + − − − − + + )   1
( + − + − + − + − )   4
( + − + − − + − + )   3
( + − − + + − − + )   2
( + − − + − + + − )   3    (1.63c)
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Classical Hadamard Matrices and Arrays 21
(d) Cal–Sal-ordered Hadamard matrix
Hcs(8) =
( + + + + + + + + )   wal(0, t)   0
( + + − − − − + + )   cal(1, t)   1
( + − − + + − − + )   cal(2, t)   2
( + − + − − + − + )   cal(3, t)   3
( + − + − + − + − )   sal(4, t)   4
( + − − + − + + − )   sal(3, t)   3
( + + − − + + − − )   sal(2, t)   2
( + + + + − − − − )   sal(1, t)   1    (1.63d)
where + and − indicate +1 and −1, respectively. The relationships among the different orderings of Hadamard systems have been discussed in the literature; see, particularly, Refs. 11 and 14. We will show that any of the above Hadamard matrices is the same as the Walsh–Hadamard matrix with shuffled rows. The relations among the orderings of Hadamard systems are given schematically in Fig. 1.12, where

• BGC = binary-to-Gray code conversion
• GBC = Gray-to-binary code conversion
• GIC = Gray-to-binary-inverse code conversion
• IGC = binary-inverse-to-Gray code conversion
• IBC = binary-inverse-to-binary code conversion
• BIC = binary-to-binary-inverse code conversion
The Gray code is a binary numeral system in which two successive values differ in only one bit. Gray codes were applied to mathematical puzzles before they became known to engineers. The French engineer Émile Baudot used Gray codes in telegraphy in 1878; he received the French Legion of Honor medal for his work.80 Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using a vacuum-tube-based apparatus. The method and apparatus were patented in 1953, and Gray's name stuck to the codes.79,80
Figure 1.12 Schematic of Hadamard systems.
Table 1.3 Binary Gray codes.

Decimal   Binary   Gray
0         0000     0000
1         0001     0001
2         0010     0011
3         0011     0010
4         0100     0110
5         0101     0111
6         0110     0101
7         0111     0100
8         1000     1100
9         1001     1101
10        1010     1111
11        1011     1110
12        1100     1010
13        1101     1011
14        1110     1001
15        1111     1000
Mathematically, these relations are given as follows. Let b = (b_{n−1}, b_{n−2}, . . . , b0), c = (c_{n−1}, c_{n−2}, . . . , c0), and g = (g_{n−1}, g_{n−2}, . . . , g0) denote code words in the n-bit binary, binary-inverse, and Gray code representations, respectively. Below, we give a more detailed description of the conversion operations shown in the above scheme.
(a) Binary-to-Gray code conversion (BGC) is given by

g_{n−1} = b_{n−1},
g_i = b_i ⊕ b_{i+1},   i = 0, 1, . . . , n − 2,   (1.64)

where the symbol ⊕ denotes addition modulo 2.
Example: Let b = (1, 0, 1, 0, 0, 1); then g = (1, 1, 1, 1, 0, 1). The schematic presentation of this conversion is

Binary code:   1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1
               ↓   ↓   ↓   ↓   ↓   ↓
Gray code:     1   1   1   1   0   1     (1.65)

Thus, the binary code b = (1, 0, 1, 0, 0, 1) corresponds to the Gray code g = (1, 1, 1, 1, 0, 1). Table 1.3 gives the Gray codes for the binary codes of the decimal numbers 0, 1, 2, . . . , 15.
(b) As can be seen from Table 1.3, each Gray code word differs from the one above/below it by only one bit.
(c) Conversion from Gray code to natural binary: Let {g_k, k = 0, 1, . . . , n − 1} be an n-bit Gray code and {b_k, k = 0, 1, . . . , n − 1} the corresponding binary code word. Gray-to-binary code conversion (GBC) can be done by

b_{n−1} = g_{n−1},
b_{n−i} = g_{n−1} ⊕ g_{n−2} ⊕ · · · ⊕ g_{n−i},   i = 2, 3, . . . , n.   (1.66)
The conversion from a Gray-coded number to binary can also be achieved by the following scheme:

• To find the binary next-to-MSB (most significant bit), add (modulo 2) the binary MSB and the Gray-code next-to-MSB.
• Record the sum, discarding any carry.
• Continue this computation bit by bit down to the least significant bit.

Note that the binary and the Gray-coded numbers have the same number of bits, and the binary MSB (left-hand bit) and the Gray-code MSB are always the same.
Example: Let g = (1, 0, 1, 0, 1, 1).
b5 = g5 = 1,
b4 = g4 ⊕ b5 = 0 ⊕ 1 = 1,
b3 = g3 ⊕ b4 = 1 ⊕ 1 = 0,
b2 = g2 ⊕ b3 = 0 ⊕ 0 = 0,
b1 = g1 ⊕ b2 = 1 ⊕ 0 = 1,
b0 = g0 ⊕ b1 = 1 ⊕ 1 = 0.
(1.67)
Thus, b = (1, 1, 0, 0, 1, 0).
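On integers, Eqs. (1.64) and (1.66) take a particularly compact form: g = b ⊕ (b >> 1), and the inverse folds the shifted bits back in. A sketch (the function names are ours):

```python
def binary_to_gray(b):
    """Eq. (1.64): each Gray bit is the XOR of adjacent binary bits."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Eq. (1.66): each binary bit is the XOR of all higher Gray bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Matches Table 1.3, e.g. decimal 10 = binary 1010 -> Gray 1111:
assert binary_to_gray(0b1010) == 0b1111
assert all(gray_to_binary(binary_to_gray(b)) == b for b in range(16))
```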
Binary-to-binary-inverse code conversion (BIC): the formation of c_i is given by

c_i = b_{n−i−1},   i = 0, 1, . . . , n − 1.   (1.68)

Example: Let b = (1, 0, 1, 0, 0, 1); then c0 = b5 = 1, c1 = b4 = 0, c2 = b3 = 1, c3 = b2 = 0, c4 = b1 = 0, c5 = b0 = 1. Hence, we obtain c = (1, 0, 0, 1, 0, 1).
(d) Binary-inverse-to-binary code conversion (IBC): the formation of b_i is given by

b_i = c_{n−i−1},   i = 0, 1, . . . , n − 1.   (1.69)

Example: Let c = (1, 0, 0, 1, 0, 1); then b = (1, 0, 1, 0, 0, 1).

Gray-to-binary-inverse code conversion (GIC): the formation of c_i, i = 0, 1, . . . , n − 1, starts from the most significant bit:

c0 = g_{n−1},
c_{i−1} = g_{n−1} ⊕ g_{n−2} ⊕ · · · ⊕ g_{n−i},   i = 2, 3, . . . , n.   (1.70)
Example: Let g = (1, 0, 1, 1, 0, 1), then we have
c0 = g5 = 1,
c1 = g5 ⊕ g4 = 1 ⊕ 0 = 1,
c2 = g5 ⊕ g4 ⊕ g3 = 1 ⊕ 0 ⊕ 1 = 0,
c3 = g5 ⊕ g4 ⊕ g3 ⊕ g2 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1,
c4 = g5 ⊕ g4 ⊕ g3 ⊕ g2 ⊕ g1 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1,
c5 = g5 ⊕ g4 ⊕ g3 ⊕ g2 ⊕ g1 ⊕ g0 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 1 = 0.
(1.71)
Hence, we have c = (0, 1, 1, 0, 1, 1).
Binary-inverse-to-Gray code conversion (IGC): the formation of g_i, i = 0, 1, . . . , n − 1, is given by

g_{n−1} = c0,
g_i = c_{n−i−1} ⊕ c_{n−i−2},   i = 0, 1, . . . , n − 2.   (1.72)
Example: Let c = (1, 0, 1, 1, 1, 0). A schematic presentation of this conversion is

Binary inverse:  1    0    1    1    1    0
                 ↓⊕   ↓⊕   ↓⊕   ↓⊕   ↓⊕   ↓
Output code:     1    1    0    0    1    0
                 g0   g1   g2   g3   g4   g5     (1.73)

Thus, we obtain g = (0, 1, 0, 0, 1, 1).
Concerning signal representation/decomposition, the theory of communication systems has traditionally been based on orthogonal systems such as the sine and cosine systems. The Cal–Sal system is similar to the Fourier system: the sinusoids in the Fourier system are characterized by their frequency of oscillation, i.e., the number of complete cycles they make.22,46
Because Walsh functions form an orthogonal system, the Walsh series is defined by

f (x) = ∑_{m=0}^{∞} c_m wal(m, x),   (1.74)

where c_m = ∫_0^1 f (x) wal(m, x) dx, m = 0, 1, 2, . . ..
For the Cal–Sal system, the analogy with the Fourier series motivates the following representation:

f (x) = a0 + ∑_{m=1}^{∞} [a_m cal(m, x) + b_m sal(m, x)],   (1.75)

where a_m = ∫_{−1/2}^{1/2} f (x) cal(m, x) dx and b_m = ∫_{−1/2}^{1/2} f (x) sal(m, x) dx, m = 1, 2, . . ..

Defining c_m = √(a_m² + b_m²) and α_m = tan^{−1}(b_m/a_m), and plotting them versus the sequency m, yields plots similar to the Fourier spectrum and phase. Here, c_m provides an analogy to the modulus, while the artificial phase α_m is analogous to a classical phase.
It can be shown that any signal f (x) that is square integrable over [0, 1] can be represented by a Walsh–Fourier series. The Parseval identity is also valid.
1.3.3 The Haar system
The Haar transform, almost 100 years old, was introduced by the Hungarian mathematician Alfred Haar in 1910 (see Fig. 1.13).42,50–53 In the discrete Fourier transform (DFT) and WHT, each transform coefficient is a function of all coordinates in the original data space (it is global), whereas this is true only for
Figure 1.13 Alfréd Haar (1885–1933), Hungarian mathematician (http://www.gap-system.org/~history/Mathematicians/Haar.html).
the first two Haar coefficients. The Haar transform is real, allowing simple implementation as well as simple visualization and interpretation. The advantages of these basis functions are that they are well localized in time, may be very easily implemented, and are by far the fastest among unitary transforms. The Haar transform provides a transform domain in which a type of differential energy is concentrated in localized regions. This property is very useful in image-processing applications such as edge detection and contour extraction. The Haar transform is the simplest example of an orthonormal wavelet transform. The orthogonal Haar functions are defined as follows:42,46
H00(k) = 1.
Hqi (k) =
⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩
2(i−1)/2, ifq
2(i−1)≤ k <
q + 0.52(i−1)
,
−2(i−1)/2, ifq + 0.52(i−1)
≤ k <q + 12(i−1)
,
0, at all other points,
(1.76)
where $i = 1, 2, \ldots, n$ and $q = 0, 1, \ldots, 2^{i-1} - 1$. Note that for any $n$ there are $2^n$ Haar functions. Discrete sampling of the set of Haar functions gives an orthogonal matrix of order $2^n$. The Haar transform matrix is defined as
$$[\mathrm{Haar}]_{2^n} = H(2^n) = \begin{pmatrix} H(2^{n-1}) \otimes (+1\;\;+1) \\ \sqrt{2^{n-1}}\, I(2^{n-1}) \otimes (+1\;\;-1) \end{pmatrix}, \quad n = 2, 3, \ldots, \quad (1.77)$$
where $H(2) = \begin{pmatrix} +1 & +1 \\ +1 & -1 \end{pmatrix}$, $\otimes$ is the Kronecker product, and $I(2^n)$ is the identity matrix of order $2^n$.
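The recursion of Eq. (1.77) is easy to check numerically. Below is a minimal numpy sketch (the function name `haar_matrix` is ours, not the book's) that builds $[\mathrm{Haar}]_{2^n}$ and verifies that its rows are mutually orthogonal:

```python
import numpy as np

def haar_matrix(n):
    """Haar matrix of order 2**n, built by the recursion of Eq. (1.77)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]])          # H(2)
    for k in range(1, n):
        top = np.kron(H, [1.0, 1.0])                 # H(2**k) x (+1 +1)
        bottom = np.sqrt(2.0**k) * np.kron(np.eye(2**k), [1.0, -1.0])
        H = np.vstack([top, bottom])
    return H

H8 = haar_matrix(3)
# Rows are mutually orthogonal with squared norm 8: H8 H8^T = 8 I
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))
```

Note that each row has squared norm $2^n$, so the orthonormal version is obtained by dividing by $\sqrt{2^n}$.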
Below are the Haar matrices of orders 2, 4, 8, and 16 (here $s = \sqrt{2}$):
$$[\mathrm{Haar}]_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad (1.78\text{a})$$
$$[\mathrm{Haar}]_4 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ s & -s & 0 & 0 \\ 0 & 0 & s & -s \end{pmatrix}, \quad (1.78\text{b})$$
$$[\mathrm{Haar}]_8 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
s & s & -s & -s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & s & s & -s & -s \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}, \quad (1.78\text{c})$$
$$[\mathrm{Haar}]_{16} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\
s & s & s & s & -s & -s & -s & -s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & s & s & s & s & -s & -s & -s & -s \\
2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 \\
2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s
\end{pmatrix}. \quad (1.78\text{d})$$
Figure 1.14 shows the structure of Haar matrices of different orders, and Fig. 1.15 shows the structure of continuous Haar functions. The discrete Haar basis system can be generated by sampling the Haar system at $t = 0, 1/N, 2/N, \ldots, (N-1)/N$. The 16-point discrete Haar functions are shown in Fig. 1.16.
Properties:
(1) The Haar transform $Y = [\mathrm{Haar}]_{2^n} X$ (where $X$ is an input signal) provides a domain that is both globally and locally sensitive. The first two functions reflect the global character of the input signal; the rest of the functions reflect the local characteristics of the input signal. A local change in the data signal results in a local change in the Haar transform coefficients.
(2) The Haar transform is real (not complex like the Fourier transform), so real data give real Haar transform coefficients.
Figure 1.14 The structure of Haar matrices of orders 2, 4, 8, 16, 32, 64, 128, and 256.
Figure 1.15 The first eight continuous Haar functions in the interval [0, 1).
Figure 1.16 The first 16 discrete Haar functions.
(3) The Haar matrix is orthogonal: $H_N H_N^T = H_N^T H_N = I_N$, where $I_N$ is the $N \times N$ identity matrix. Its rows are sequentially ordered. Whereas the trigonometric basis functions differ only in frequency, the Haar functions vary in both scale (width) and position.
(4) The Haar transform is one of the fastest orthogonal transforms. To define the standard 2D Haar decomposition in terms of the 1D transform, first apply the 1D Haar transform to each row, then apply the 1D Haar transform to each column of the result. (See Fig. 1.17 for Haar-transformed images of 2D input images.) In other words, the 2D Haar function is defined from the 1D Haar functions as follows:
$$h_{m,n}(x, y) = h_m(x)\,h_n(y), \quad m, n = 0, 1, 2, \ldots. \quad (1.79)$$
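As a small illustration of this row–column procedure, here is a numpy sketch using the order-4 Haar matrix of Eq. (1.78b); since $H_4 H_4^T = 4I$, the 2D transform is inverted by $X = H_4^T Y H_4 / 16$ (the toy 4×4 "image" is our own example data):

```python
import numpy as np

s = np.sqrt(2.0)
H4 = np.array([[1, 1, 1, 1],
               [1, 1, -1, -1],
               [s, -s, 0, 0],
               [0, 0, s, -s]])          # [Haar]_4 of Eq. (1.78b)

X = np.arange(16.0).reshape(4, 4)      # a toy 4x4 "image"
Y = H4 @ X @ H4.T                      # 1D transform of rows, then columns

# Since H4 @ H4.T = 4 I, the inverse transform is H4.T @ Y @ H4 / 16:
assert np.allclose(H4.T @ Y @ H4 / 16, X)
```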
(5) Similar to the Hadamard functions, the Haar system can be presented in different ways, for example, in sequency (Haar) ordering (below, $s = \sqrt{2}$),
$$[\mathrm{Haar}]_8 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
s & s & -s & -s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & s & s & -s & -s \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}, \quad (1.80)$$
Figure 1.17 Two images (left) and their 2D Haar transform images (right).
and the natural ordering
$$[\mathrm{Haar}]_8 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
s & s & -s & -s & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & s & s & -s & -s \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}. \quad (1.81)$$
1.3.4 The modified Haar “Hadamard ordering”
The modified Haar “Hadamard ordering” matrix of order $2^n \times 2^n$ is generated recursively as
$$H_{2^n} = H(2^n) = \begin{pmatrix} H(2^{n-1}) & H(2^{n-1}) \\ \sqrt{2^{n-1}}\, I(2^{n-1}) & -\sqrt{2^{n-1}}\, I(2^{n-1}) \end{pmatrix}, \quad n = 2, 3, \ldots, \quad (1.82)$$
where $H(2) = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$, and $I(2^n)$ is the identity matrix of order $2^n$.
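A minimal numpy sketch of the recursion of Eq. (1.82) (the function name is ours), with a numerical check that the rows stay orthogonal:

```python
import numpy as np

def modified_haar(n):
    """Modified Haar matrix of order 2**n in Hadamard ordering, Eq. (1.82)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]])          # H(2)
    for k in range(1, n):
        r = np.sqrt(2.0**k)
        I = np.eye(2**k)
        H = np.vstack([np.hstack([H, H]),
                       np.hstack([r * I, -r * I])])
    return H

H8 = modified_haar(3)
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))         # rows orthogonal
```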
Example: For orders 4 and 8, we have (recall that $s = \sqrt{2}$)
$$H(4) = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
s & 0 & -s & 0 \\
0 & s & 0 & -s
\end{pmatrix}, \quad (1.83\text{a})$$
$$H(8) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
s & 0 & -s & 0 & s & 0 & -s & 0 \\
0 & s & 0 & -s & 0 & s & 0 & -s \\
2 & 0 & 0 & 0 & -2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & -2 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 & -2 & 0 \\
0 & 0 & 0 & 2 & 0 & 0 & 0 & -2
\end{pmatrix}. \quad (1.83\text{b})$$
1.3.5 Normalized Haar transforms
The unnormalized Haar binary spectrum has been used as a tool for detection and diagnosis of physical faults in practical MOS (metal-oxide semiconductor) digital circuits and for self-test purposes.64,77 The normalized forward and inverse Haar transform matrices of order $N = 2^n$ can be generated recursively by
$$[\mathrm{Haar}]_{2^n} = \begin{pmatrix} [\mathrm{Haar}]_{2^{n-1}} \otimes (+1\;\;+1) \\ I_{2^{n-1}} \otimes (+1\;\;-1) \end{pmatrix}, \quad (1.84\text{a})$$
$$[\mathrm{Haar}]_{2^n}^{-1} = \frac{1}{2}\left( [\mathrm{Haar}]_{2^{n-1}}^{-1} \otimes \begin{pmatrix} +1 \\ +1 \end{pmatrix},\; I_{2^{n-1}} \otimes \begin{pmatrix} +1 \\ -1 \end{pmatrix} \right), \quad (1.84\text{b})$$
where $[\mathrm{Haar}]_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$ and $[\mathrm{Haar}]_2^{-1} = \frac{1}{2}\begin{pmatrix} + & + \\ + & - \end{pmatrix}$. Figure 1.18 shows basis vectors for a modified Haar matrix.
The normalized forward and inverse Haar orthogonal transform matrices of order 4 and 8 are given as follows:
$$[\mathrm{Haar}]_4 = \begin{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \otimes (+1\;\;+1) \\[4pt] \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes (+1\;\;-1) \end{pmatrix} = \begin{pmatrix} + & + & + & + \\ + & + & - & - \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix},$$
$$[\mathrm{Haar}]_4^{-1} = \frac{1}{2}\left[ \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \otimes \begin{pmatrix} +1 \\ +1 \end{pmatrix};\; \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} +1 \\ -1 \end{pmatrix} \right] = \frac{1}{4}\begin{pmatrix} 1 & 1 & 2 & 0 \\ 1 & 1 & -2 & 0 \\ 1 & -1 & 0 & 2 \\ 1 & -1 & 0 & -2 \end{pmatrix}, \quad (1.85\text{a})$$
Figure 1.18 The modified Haar 8 × 8 basis vectors.
$$[\mathrm{Haar}]_8 = \begin{pmatrix} \begin{pmatrix} + & + & + & + \\ + & + & - & - \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix} \otimes (+\;\;+) \\[4pt] \begin{pmatrix} + & 0 & 0 & 0 \\ 0 & + & 0 & 0 \\ 0 & 0 & + & 0 \\ 0 & 0 & 0 & + \end{pmatrix} \otimes (+\;\;-) \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{pmatrix},$$
$$[\mathrm{Haar}]_8^{-1} = \frac{1}{2}\left[ \frac{1}{4}\begin{pmatrix} 1 & 1 & 2 & 0 \\ 1 & 1 & -2 & 0 \\ 1 & -1 & 0 & 2 \\ 1 & -1 & 0 & -2 \end{pmatrix} \otimes \begin{pmatrix} +1 \\ +1 \end{pmatrix},\; \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} +1 \\ -1 \end{pmatrix} \right] = \frac{1}{8}\begin{pmatrix}
1 & 1 & 2 & 0 & 4 & 0 & 0 & 0 \\
1 & 1 & 2 & 0 & -4 & 0 & 0 & 0 \\
1 & 1 & -2 & 0 & 0 & 4 & 0 & 0 \\
1 & 1 & -2 & 0 & 0 & -4 & 0 & 0 \\
1 & -1 & 0 & 2 & 0 & 0 & 4 & 0 \\
1 & -1 & 0 & 2 & 0 & 0 & -4 & 0 \\
1 & -1 & 0 & -2 & 0 & 0 & 0 & 4 \\
1 & -1 & 0 & -2 & 0 & 0 & 0 & -4
\end{pmatrix}. \quad (1.85\text{b})$$
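The forward and inverse recursions of Eqs. (1.84a) and (1.84b) can be checked against each other numerically; the sketch below (with our own function name) builds both and verifies that they really are inverses:

```python
import numpy as np

def normalized_haar(n):
    """Forward and inverse Haar matrices per Eqs. (1.84a) and (1.84b)."""
    F = np.array([[1.0, 1.0], [1.0, -1.0]])            # [Haar]_2
    Finv = F / 2.0                                     # [Haar]_2^{-1}
    for k in range(1, n):
        I = np.eye(2**k)
        F_next = np.vstack([np.kron(F, [1.0, 1.0]),    # Eq. (1.84a)
                            np.kron(I, [1.0, -1.0])])
        Finv_next = 0.5 * np.hstack([np.kron(Finv, [[1.0], [1.0]]),  # Eq. (1.84b)
                                     np.kron(I, [[1.0], [-1.0]])])
        F, Finv = F_next, Finv_next
    return F, Finv

F4, Finv4 = normalized_haar(2)
assert np.allclose(Finv4 @ F4, np.eye(4))      # they really are inverses
```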
1.3.6 Generalized Haar transforms
The procedure of Eq. (1.77) presents an elegant method for obtaining the Haar transform matrix of order $2^n$. Various applications have motivated modifications and generalizations of the Haar transform. In particular, an interesting variation on the Haar system uses more than one mother function.
1.3.7 Complex Haar transform
In Ref. 44, the authors developed a so-called complex Haar transform,
$$[HC]_{2^n} = [HC](2^n) = \begin{pmatrix} [HC](2^{n-1}) \otimes (+1\;\;-j) \\ \sqrt{2^{n-1}}\, I(2^{n-1}) \otimes (+1\;\;+j) \end{pmatrix}, \quad n = 2, 3, \ldots, \quad (1.86)$$
where $[HC](2) = \begin{pmatrix} +1 & -j \\ +1 & +j \end{pmatrix}$, $\otimes$ is the Kronecker product, and $I(2^n)$ is the identity matrix of order $2^n$. Note that instead of the initial matrix $[HC](2)$ in the above recursion, we can also use the matrix $\begin{pmatrix} 1 & j \\ j & 1 \end{pmatrix}$.
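A quick numerical check of Eq. (1.86) (the function name is ours): the resulting matrices satisfy $[HC]\,[HC]^{*} = 2^n I$, the complex analogue of the Haar orthogonality above.

```python
import numpy as np

def complex_haar(n):
    """Complex Haar matrix [HC](2**n), per the recursion of Eq. (1.86)."""
    HC = np.array([[1, -1j], [1, 1j]])               # [HC](2)
    for k in range(1, n):
        top = np.kron(HC, [1, -1j])
        bottom = np.sqrt(2.0**k) * np.kron(np.eye(2**k), [1, 1j])
        HC = np.vstack([top, bottom])
    return HC

HC4 = complex_haar(2)
assert np.allclose(HC4 @ HC4.conj().T, 4 * np.eye(4))
```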
1.3.8 $k^n$-point Haar transforms

In Ref. 76, Agaian and Matevosyan developed a class of $k^n$-point Haar transforms, using $k-1$ mother functions, for an arbitrary integer $k$. The $k^n$-point Haar transform matrix can be recursively constructed by
$$[AH](k^n) = \begin{pmatrix} [AH](k^{n-1}) \otimes e_k \\ \sqrt{k^{n-1}}\, I(k^{n-1}) \otimes A_k^1 \end{pmatrix}, \quad n = 2, 3, 4, \ldots, \quad (1.87)$$
where $e_k$ is the all-one row vector of length $k$, $I(m)$ is the identity matrix of order $m$, $\otimes$ is the Kronecker product, and $[AH](k) = A(k)$ is an orthogonal matrix of order $k$ of the following form:
$$A(k) = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ & & A_k^1 & & \end{pmatrix}. \quad (1.88)$$
In particular, $A(k)$ can be any orthogonal matrix (such as Fourier, Hadamard, cosine, and others) whose first row is a constant discrete function and whose remaining rows are either sinusoidal or nonsinusoidal functions. This method opens possibilities for constructing a new class of Haar-like orthogonal transforms.
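The construction of Eq. (1.87) can be exercised with the order-3 matrix of Eq. (1.90a). The sketch below (function name ours) builds $[AH](9)$ and confirms that its rows stay orthogonal, with squared norm $k^n = 9$:

```python
import numpy as np

def kn_haar(A, n):
    """k**n-point Haar-like matrix [AH](k**n) per Eq. (1.87), from an
    orthogonal A(k) whose first row is constant."""
    k = A.shape[0]
    A1 = A[1:, :]                          # A(k) without its first row
    AH = np.array(A, dtype=float)
    for m in range(1, n):
        top = np.kron(AH, np.ones(k))      # [AH](k**m) x e_k
        bottom = np.sqrt(float(k**m)) * np.kron(np.eye(k**m), A1)
        AH = np.vstack([top, bottom])
    return AH

s2, s6 = np.sqrt(2.0), np.sqrt(6.0)
A3 = np.array([[1, 1, 1],
               [s2 / 2, s2 / 2, -s2],
               [s6 / 2, -s6 / 2, 0]])      # [AH](3) of Eq. (1.90a)
AH9 = kn_haar(A3, 2)
assert np.allclose(AH9 @ AH9.T, 9 * np.eye(9))
```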
Some examples are provided as follows:
(1) If $k = 2$ and $A_2^1 = (+1\;\;-1)$, we obtain the classical Haar transform matrices of order 4, 8, and so on.
(2) If $k = 2$ and $A_2^1 = [\exp(j\alpha)\;\;-\exp(j\alpha)]$, we can generate new complex Haar transform matrices of order $2^n$. The $[AH](2)$, $[AH](4)$, and $[AH](8)$ matrices are given as follows:
$$[AH](2) = \begin{pmatrix} 1 & 1 \\ e^{j\alpha} & -e^{j\alpha} \end{pmatrix}, \quad (1.89\text{a})$$
$$[AH](4) = \begin{pmatrix} [AH](2) \otimes (1\;\;1) \\ \sqrt{2}\, I(2) \otimes (e^{j\alpha}\;\;-e^{j\alpha}) \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
e^{j\alpha} & e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} \\
\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & 0 & 0 \\
0 & 0 & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha}
\end{pmatrix}, \quad (1.89\text{b})$$
$$[AH](8) = \begin{pmatrix} [AH](4) \otimes (1\;\;1) \\ 2\, I(4) \otimes (e^{j\alpha}\;\;-e^{j\alpha}) \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
e^{j\alpha} & e^{j\alpha} & e^{j\alpha} & e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} \\
\sqrt{2}e^{j\alpha} & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{2}e^{j\alpha} & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} \\
2e^{j\alpha} & -2e^{j\alpha} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2e^{j\alpha} & -2e^{j\alpha} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2e^{j\alpha} & -2e^{j\alpha} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2e^{j\alpha} & -2e^{j\alpha}
\end{pmatrix}. \quad (1.89\text{c})$$
(3) New Haar-like matrices of orders 3 and 9 are generated in the same manner:
$$[AH](3) = \begin{pmatrix}
1 & 1 & 1 \\
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & -\sqrt{2} \\
\frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & 0
\end{pmatrix}, \quad (1.90\text{a})$$
$$[AH](9) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & -\sqrt{2} & -\sqrt{2} & -\sqrt{2} \\
\frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & 0 & 0 & 0 \\
\frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6} \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0
\end{pmatrix}. \quad (1.90\text{b})$$
1.4 Hadamard Matrices and Related Problems
More than a hundred years ago, in 1893, the French mathematician Jacques Hadamard (see Fig. 1.19) constructed orthogonal matrices of orders 12 and 20 with entries ±1. In Ref. 30, it was shown that for any real matrix $B = (b_{i,j})_{i,j=1}^{n}$ of order $n$ with $-1 \le b_{i,j} \le +1$, the following inequality holds:
$$(\det B)^2 \le \prod_{i=1}^{n} \sum_{j=1}^{n} b_{i,j}^2, \quad (1.91)$$
where equality is achieved when $B$ is an orthogonal matrix. In the case $b_{i,j} = \pm 1$, the determinant attains its maximum absolute value, and $B$ is a Hadamard matrix; equality in this bound is attained for a real ±1 matrix if and only if it is a Hadamard matrix. A square matrix $H_n$ of order $n$ with elements −1 and +1 having a maximal determinant is known as a Hadamard matrix.72 The geometric interpretation of the maximum-determinant problem is to look for $n$ vectors from the origin contained within the cube $-1 \le b_{i,j} \le +1$, $i, j = 1, 2, \ldots, n$, forming a rectangular parallelepiped of maximum volume.
Definition: A square matrix $H_n$ of order $n$ with elements −1 and +1 is called a Hadamard matrix if the following equation holds:
$$H_n H_n^T = H_n^T H_n = n I_n, \quad (1.92)$$
where $H^T$ is the transpose of $H$, and $I_n$ is the identity matrix of order $n$. Equivalently, a Hadamard matrix is a square matrix with elements −1 and +1 in which any two distinct rows agree in exactly $n/2$ positions (and thus disagree in exactly $n/2$ positions).
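Both parts of the definition are easy to verify for a concrete matrix. The sketch below builds the order-8 Sylvester matrix as a Kronecker power of $H_2$ and checks Eq. (1.92) together with the row-agreement characterization:

```python
import numpy as np

# Sylvester's construction: H(2**n) as an n-fold Kronecker power of H(2).
H2 = np.array([[1, 1], [1, -1]])
H = H2
for _ in range(2):
    H = np.kron(H, H2)                     # now the 8x8 Sylvester matrix
n = H.shape[0]

assert np.allclose(H @ H.T, n * np.eye(n))           # Eq. (1.92)
# Any two distinct rows agree in exactly n/2 positions:
for i in range(n):
    for k in range(i + 1, n):
        assert (H[i] == H[k]).sum() == n // 2
```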
Figure 1.19 Jacques Salomon Hadamard: 1865–1963.35
We have seen that the origin of the Hadamard matrix goes back to 1867, when Sylvester constructed Hadamard matrices of order $2^n$. It is obvious that the Sylvester, Walsh–Hadamard, Cal–Sal, and Walsh matrices are classical examples of equivalent Hadamard matrices. Now we provide an example of a Hadamard matrix of order 12 that cannot be constructed from the above-defined classical Hadamard matrices.
$$H_{12} = \begin{pmatrix}
+ & + & + & + & + & - & - & - & + & - & - & - \\
- & + & - & + & + & + & + & - & + & + & + & - \\
- & + & + & - & + & - & + & + & + & - & + & + \\
- & - & + & + & + & + & - & + & + & + & - & + \\
+ & - & - & - & + & + & + & + & + & - & - & - \\
+ & + & + & - & - & + & - & + & + & + & + & - \\
+ & - & + & + & - & + & + & - & + & - & + & + \\
+ & + & - & + & - & - & + & + & + & + & - & + \\
+ & - & - & - & + & - & - & - & + & + & + & + \\
+ & + & + & - & + & + & + & - & - & + & - & + \\
+ & - & + & + & + & - & + & + & - & + & + & - \\
+ & + & - & + & + & + & - & + & - & - & + & +
\end{pmatrix}. \quad (1.93)$$
The expression in Eq. (1.92) is equivalent to the statement that any two distinct rows (columns) of a matrix $H_n$ are orthogonal. Clearly, rearrangement of rows (columns) of $H_n$ and/or their multiplication by −1 preserves this property.

Definition of Equivalent Hadamard Matrices: Two matrices $H_1$ and $H_2$ are called equivalent if $H_2 = P H_1 Q$, where $P$ and $Q$ are permutation matrices. These matrices have exactly one nonzero element in each row and column, and this nonzero element is equal to +1 or −1.

It is not difficult to show that for a given Hadamard matrix it is always possible to find an equivalent matrix having only +1 in the first row and column. Such a matrix is called a normalized Hadamard matrix. On the other hand, it has been a considerable challenge to classify Hadamard matrices by equivalence.
Note that there are five known equivalence classes of Hadamard matrices of order 16,23 three of order 20,24 60 of order 24,25,26 486 of order 28,27 and 109 of order 36.28 Lower bounds for the number of equivalence classes are at least 500 for n = 44 and at least 638 for n = 52. Below, we present two nonequivalent Hadamard matrices of order 16:
$$\begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & - \\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & - \\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & + \\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & + \\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & - \\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & + \\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & - \\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & - \\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}
\quad
\begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & - \\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & - \\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & + \\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & - \\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & + \\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & + \\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & - \\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & + \\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & + \\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & - \\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & + \\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & - \\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & - \\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}. \quad (1.94)$$
A list of nonequivalent Hadamard matrices can be found, for example, in Ref. 29. In particular, it is known that there is only one equivalence class of Hadamard matrices for each of the orders n = 4, 8, and 12.
Let us prove that if $H_n$ is a normalized Hadamard matrix of order $n$, $n \ge 4$, then $n = 4t$, where $t$ is a positive integer. Three rows of this matrix can be represented as
$$\begin{array}{cccc}
+ + \cdots + & + + \cdots + & + + \cdots + & + + \cdots + \\
+ + \cdots + & + + \cdots + & - - \cdots - & - - \cdots - \\
+ + \cdots + & - - \cdots - & + + \cdots + & - - \cdots -
\end{array}.$$
Denoting the number of columns of each type by $t_1$, $t_2$, $t_3$, and $t_4$, respectively, the orthogonality condition gives
$$\begin{aligned}
t_1 + t_2 + t_3 + t_4 &= n, \\
t_1 + t_2 - t_3 - t_4 &= 0, \\
t_1 - t_2 + t_3 - t_4 &= 0, \\
t_1 - t_2 - t_3 + t_4 &= 0.
\end{aligned} \quad (1.95)$$
The solution gives $t_1 = t_2 = t_3 = t_4 = n/4$. This implies that if $H_n$ is a Hadamard matrix of order $n$, then $n = 4t$, i.e., $n \equiv 0 \pmod 4$. Furthermore, the inverse problem is stated.36
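The counting argument can be replayed numerically: Eq. (1.95) is a nonsingular linear system, and its unique solution forces each $t_i$ to equal $n/4$. A sketch for a hypothetical order $n = 12$:

```python
import numpy as np

n = 12                                     # a hypothetical Hadamard order
M = np.array([[1, 1, 1, 1],                # coefficient matrix of Eq. (1.95)
              [1, 1, -1, -1],
              [1, -1, 1, -1],
              [1, -1, -1, 1]], dtype=float)
t = np.linalg.solve(M, np.array([n, 0, 0, 0], dtype=float))
assert np.allclose(t, n / 4)               # t1 = t2 = t3 = t4 = n/4, so 4 | n
```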
The Hadamard–Paley conjecture: Construct a Hadamard matrix of order $n$ for every natural number $n$ with $n \equiv 0 \pmod 4$. Despite the efforts of many mathematicians, this conjecture remains unproved, even though it is widely believed to be true. It is one of the longest-standing open problems in mathematics and computer science.
There are many approaches to the construction of Hadamard matrices.31,33 The simplest is the direct product construction: the Kronecker product of two Hadamard matrices of orders m and n is a Hadamard matrix of order mn.
The survey by Seberry and Yamada33 indicates the progress that has been made during the last 100 years. Here, we give a brief survey of the construction of Hadamard matrices, presented by Seberry.35 At present, the smallest unknown orders are n = 4 · 167 and n = 4 · 179. Currently, several basic infinite classes of Hadamard matrix constructions are known, as follows:
• “Plug-in template” methods:31,33,39,47 The basic idea is the construction of a class of “special-component” matrices that can be plugged into arrays (templates) of variables to generate Hadamard matrices. This is an extremely productive method of construction. Several approaches for the construction of special components and templates have been developed. In 1944, Williamson32 first constructed “suitable matrices” (Williamson matrices) that were used to replace the variables in a formally orthogonal matrix. Generally, the templates into which suitable matrices are plugged are orthogonal designs. They have formally orthogonal rows (and columns), but may have variations such as Goethals–Seidel arrays, Wallis–Whiteman arrays, Spence arrays, generalized quaternion arrays, and Agayan (Agaian) families.31
• Paley's methods: Paley's “direct” construction, presented in 1933,36 gives Hadamard matrices of orders $\prod_{i,j}(p_i + 1)(q_j + 1)$, where $p_i \equiv 3 \pmod 4$ and $q_j \equiv 1 \pmod 4$ are prime powers. Paley's theorem states that Hadamard matrices can be constructed for all positive orders divisible by 4 except those in the following sequence: multiples of 4 not equal to a power of 2 multiplied by q + 1, for some power q of an odd prime.
• Multiplicative methods:31,47 Hadamard's original construction seems to be a “multiplication theorem” because it uses the fact that the Kronecker product of Hadamard matrices of orders $2^a m$ and $2^b n$ is a Hadamard matrix of order $2^{a+b} mn$.47 In Ref. 31, Agayan (Agaian) shows how to multiply these Hadamard matrices in order to get a Hadamard matrix of order $2^{a+b-1} mn$ (a result that lowers the curve in our graph except for q, a prime). This result has been extended by Craigen et al.,39 who showed that this astonishing ability to reduce the powers of 2 in multiplication could also be extended to the multiplication of four matrices at a time.39
• Sequences approach:31,33,45,77 Several Hadamard matrix construction methods have been developed based on Turyn, Golay, and m-sequences, and also on δ and generalized δ codes. For instance, it has been shown that there exist Hadamard matrices of orders $4 \cdot 3^m$, $4 \cdot 13^m$, $4 \cdot 17^m$, $4 \cdot 29^m$, $4 \cdot 37^m$, $4 \cdot 53^m$, and $4q^m$, where $q = 2^a 10^b 26^c + 1$, and a, b, and c are nonnegative integers.
• Other methods: Kharaghani's methods, or regular s-sets of regular matrices that generate new matrices. In 1976, Wallis,37 in her classic paper, “On the existence of Hadamard matrices,” showed that for any given odd number q, there exists a $t \le [2 \log_2(q - 3)]$ such that there is a Hadamard matrix of order $2^t q$ (and hence for all orders $2^s q$, $s \ge t$). That was the first time a general bound had been given for Hadamard matrices of all orders. This result has been improved by Craigen and Kharaghani.38,39 In fact, as shown by Seberry and Yamada,33 Hadamard matrices of order 4q are known to exist for most q < 3000 (there are similar results up to 40,000). In many other cases, there exist Hadamard matrices of order $2^3 q$ or $2^4 q$. A quick look shows that the most difficult cases are for $q \equiv 3 \pmod 4$.
• Computer approach: Seberry and her students have made extensive use of computers to construct Hadamard matrices of various types.78
Problems for Exploration
(1) Show that if $H_1$ and $H_2$ are Hadamard matrices of orders n and m, then there exist Hadamard matrices of order mn/4.
(2) For any natural number n, how many equivalence classes of Hadamard matrices of order n exist?
(3) For any natural number n, how many equivalence classes of specialized (for example, Williamson) Hadamard matrices of order n exist?
1.5 Complex Hadamard Matrices
In this section, we present some generalizations of the Sylvester matrix. We define and present three recursive algorithms to construct complex HTs: the complex Sylvester–Hadamard, complex Paley, and complex Walsh transforms.

Definition: A matrix $C$ of order $n$ with elements $\{\pm 1, \pm j\}$, $j = \sqrt{-1}$, that satisfies $CC^* = nI_n$ is called a complex Hadamard matrix, where $C^*$ is the conjugate transpose of $C$.
Hadamard's inequality applies to complex matrices with elements in the unit disk, and thus also to matrices with entries ±1, ±j and pairwise orthogonal rows (and columns). Complex Hadamard matrices were first studied by Turyn.41 In general, a complex Hadamard matrix may be obtained from another by one or several of the following procedures: (1) multiply a row or a column by an element of unit modulus, (2) replace a row or a column by its (elementwise) conjugate, and (3) permute rows or columns.

It has been shown that if H is a complex Hadamard matrix of order N, then N = 2t; i.e., N is even. The problem of constructing complex Hadamard matrices of all even orders is still open.

Complex Hadamard Matrix Problem: Show that for any even number n a complex Hadamard matrix of order n exists, or construct one.
Problems for exploration:
(1) Show that if $H_1$ and $H_2$ are complex Hadamard matrices of orders n and m, then there exists a Hadamard matrix of order mn/2.
(2) For any natural number n, how many equivalence classes of complex Hadamard matrices of order n exist?

Properties of complex Hadamard matrices:
• Any two columns or rows of a complex Hadamard matrix are pairwise orthogonal.
• A complex Hadamard matrix can be reduced to a normalized form (i.e., the first row and the first column contain only elements equal to +1) via elementary operations.
• The sum of the elements in every row and column, except the first ones, of a normalized complex Hadamard matrix is zero.
1.5.1 Complex Sylvester–Hadamard transform
First, we define a parametric Sylvester (PS) matrix recursively by
$$[PS]_{2^k}(a, b) = \begin{pmatrix} [PS]_{2^{k-1}}(a, b) & [PS]_{2^{k-1}}(a, b) \\ [PS]_{2^{k-1}}(a, b) & -[PS]_{2^{k-1}}(a, b) \end{pmatrix}, \quad (1.96)$$
where
$$[PS]_2(a, b) = \begin{pmatrix} 1 & a \\ b & -1 \end{pmatrix}.$$
Note that if $a = b = 1$, then the parametric Sylvester matrix $[PS]_{2^k}(a, b)$ is a classical Sylvester matrix. Also, if $a = j$, $b = -j$, with $j = \sqrt{-1}$, the parametric matrix $[PS]_{2^k}(j, -j)$ becomes a so-called complex Sylvester–Hadamard matrix.
Complex Sylvester–Hadamard matrices of orders 2, 4, and 8 are given as follows:
$$[PS]_2 = \begin{pmatrix} 1 & 1 \\ j & -j \end{pmatrix}, \quad (1.97\text{a})$$
$$[PS]_4 = \begin{pmatrix} [PS]_2 & [PS]_2 \\ [PS]_2 & -[PS]_2 \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
j & -j & j & -j \\
1 & 1 & -1 & -1 \\
j & -j & -j & j
\end{pmatrix}, \quad (1.97\text{b})$$
$$[PS]_8 = \begin{pmatrix} [PS]_4 & [PS]_4 \\ [PS]_4 & -[PS]_4 \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
j & -j & j & -j & j & -j & j & -j \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
j & -j & -j & j & j & -j & -j & j \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
j & -j & j & -j & -j & j & -j & j \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
j & -j & -j & j & -j & j & j & -j
\end{pmatrix}. \quad (1.97\text{c})$$
In analogy with the Sylvester matrix, $[PS]_{2^k}(j, -j)$ can be represented as a Kronecker product of $k$ Sylvester matrices of order 2:
$$[PS]_{2^k}(j, -j) = [PS]_2(j, -j) \otimes [PS]_2(j, -j) \otimes \cdots \otimes [PS]_2(j, -j). \quad (1.98)$$
The $(i, k)$'th element of the complex Sylvester–Hadamard matrix $[PS]_{2^n}$ may be defined by
$$h(i, k) = (-1)^{\sum_{t=0}^{n-1} i_t + (i_t \oplus k_t)/2}, \quad (1.99)$$
where $(i_{n-1}, i_{n-2}, \ldots, i_0)$ and $(k_{n-1}, k_{n-2}, \ldots, k_0)$ are binary representations of $i$ and $k$, respectively.40
For instance, from Eq. (1.98), for $n = 2$, we obtain
$$[PS]_4 = \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix} \otimes \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix} = \begin{pmatrix}
1 & j & j & -1 \\
-j & -1 & 1 & -j \\
-j & 1 & -1 & -j \\
-1 & j & j & 1
\end{pmatrix}. \quad (1.100)$$
The element $h_{1,3}$ of $[PS]_4$ in the second row [$i = (01)$] and in the fourth column [$k = (11)$] is equal to $h_{1,3} = (-1)^{1+(1\oplus 1)/2+0+(0\oplus 1)/2} = (-1)^{1+1/2} = -j$.
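A minimal check, assuming the Kronecker form of Eq. (1.98) with the $2 \times 2$ factor used in Eq. (1.100): its Kronecker powers satisfy the complex Hadamard condition $CC^* = nI$.

```python
import numpy as np

P2 = np.array([[1, 1j], [-1j, -1]])        # the 2x2 factor of Eq. (1.100)
C = P2
for _ in range(2):
    C = np.kron(C, P2)                     # now an 8x8 matrix
n = C.shape[0]
assert np.allclose(C @ C.conj().T, n * np.eye(n))    # C C* = 8 I
```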
The following matrix is also a complex Hadamard matrix:
$$F_4 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & -j & -1 & j \\
1 & j & -1 & -j
\end{pmatrix}. \quad (1.101)$$
1.5.2 Complex WHT

Consider the recurrence
$$[WH]^c_m = \begin{pmatrix} [WH]^c_{m-1} & [WH]^c_1 \otimes [WH]^c_{m-2} \\ [WH]^c_{m-1} & -[WH]^c_1 \otimes [WH]^c_{m-2} \end{pmatrix}, \quad m \ge 3, \quad (1.102)$$
where
$$[WH]^c_1 = \begin{pmatrix} 1 & 1 \\ -j & j \end{pmatrix}, \quad [WH]^c_2 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & -j & j \\
1 & 1 & -1 & -1 \\
1 & -1 & j & -j
\end{pmatrix}.$$
This recurrence relation gives a complex Hadamard matrix of order $2^m$. (It is also called a complex Walsh–Hadamard matrix.)

Note that if $H$, $Q_1 = (A_1, A_2)$, and $Q_2 = (B_1, B_2, B_3, B_4)$ are complex Hadamard matrices of orders m and n, respectively (with $Q_1$ and $Q_2$ partitioned into column blocks), then the matrices $C_1$ and $C_2$ are complex Hadamard matrices of order mn:
$$C_1 = [H \otimes A_1, (HR) \otimes A_2],$$
$$C_2 = [H \otimes B_1, (HR) \otimes B_2, H \otimes B_3, (HR) \otimes B_4], \quad (1.103)$$
where $R$ is the back-diagonal identity matrix.
Example: Let
$$H = [WH]_1 = \begin{pmatrix} 1 & 1 \\ j & -j \end{pmatrix} \quad \text{and} \quad Q_2 = [WH]_2 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & -j & j \\
1 & 1 & -1 & -1 \\
1 & -1 & j & -j
\end{pmatrix}. \quad (1.104)$$
Then, the following matrix is a complex Walsh–Hadamard matrix of order 8:
$$C_2 = \left[ \begin{pmatrix} 1 & 1 \\ j & -j \end{pmatrix} \otimes \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},\;
\begin{pmatrix} 1 & 1 \\ -j & j \end{pmatrix} \otimes \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix},\;
\begin{pmatrix} 1 & 1 \\ j & -j \end{pmatrix} \otimes \begin{pmatrix} 1 \\ -j \\ -1 \\ j \end{pmatrix},\;
\begin{pmatrix} 1 & 1 \\ -j & j \end{pmatrix} \otimes \begin{pmatrix} 1 \\ j \\ -1 \\ -j \end{pmatrix} \right], \quad (1.105)$$
or
$$[WH]_3 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 & -j & -j & j & j \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & j & j & -j & -j \\
j & -j & -j & j & j & -j & -j & j \\
j & -j & j & -j & 1 & -1 & 1 & -1 \\
j & -j & -j & j & -j & j & j & -j \\
j & -j & j & -j & -1 & 1 & -1 & 1
\end{pmatrix}. \quad (1.106)$$
Figure 1.20 illustrates parts of discrete complex Hadamard functions.
1.5.3 Complex Paley–Hadamard transform
The following matrix is a complex Hadamard matrix of order 8 and is called a Paley complex Hadamard matrix:
$$W^p_3 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & -j & -j & -1 & -1 & j & j \\
1 & -j & -1 & j & 1 & -j & -1 & j \\
1 & 1 & j & j & -1 & -1 & -j & -j \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & -1 & -j & j & -1 & 1 & j & -j \\
1 & j & -1 & -j & 1 & j & -1 & -j \\
1 & -1 & j & -j & -1 & 1 & -j & j
\end{pmatrix}. \quad (1.107)$$
Figure 1.21 illustrates parts of continuous complex Hadamard functions.
1.5.4 Complex Walsh transform
The complex Walsh transform matrix can be generated as
$$W_m = \begin{pmatrix} (1\;\;1) \otimes W_{m-1} \\ (1\;\;-1) \otimes H(m-1)\,\mathrm{diag}\{I_{m-2},\; jI_{m-2}\} \end{pmatrix}, \quad (1.108)$$
where $m > 1$, $H(1) = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$, and $\mathrm{diag}\{A, B\} = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$.
This recurrence relation gives a complex Hadamard matrix of order $2^m$. (It is also called the complex Walsh matrix.) For $m = 3$, we obtain
$$W_3 = \begin{pmatrix} (1\;\;1) \otimes W_2 \\ (1\;\;-1) \otimes H(2)\,\mathrm{diag}\{I_1,\; jI_1\} \end{pmatrix}, \quad (1.109)$$
Figure 1.20 The first eight real (left) and imaginary (right) parts of discrete complex Hadamard functions corresponding to the matrix $[WH]_3$.
Figure 1.21 The first eight real (left) and imaginary (right) parts of continuous complex Hadamard functions corresponding to the Paley complex Hadamard matrix $W^p_3$.
Figure 1.22 The first eight real (left) and imaginary (right) parts of continuous complex Walsh–Hadamard functions corresponding to the complex Walsh matrix $W_3$.
or
$$W_3 = \begin{pmatrix}
(1\;\;1) \otimes \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & -j & -1 & j \\ 1 & j & -1 & -j \end{pmatrix} \\[6pt]
(1\;\;-1) \otimes \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & j & 0 \\ 0 & 0 & 0 & j \end{pmatrix}
\end{pmatrix}
= \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & -j & -1 & j & 1 & -j & -1 & j \\
1 & j & -1 & -j & 1 & j & -1 & -j \\
1 & 1 & j & j & -1 & -1 & -j & -j \\
1 & -1 & j & -j & -1 & 1 & -j & j \\
1 & 1 & -j & -j & -1 & -1 & j & j \\
1 & -1 & -j & j & -1 & 1 & j & -j
\end{pmatrix}. \quad (1.110)$$
Figure 1.22 illustrates parts of continuous complex Walsh–Hadamard functions.
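Since the complex Walsh matrix $W_3$ of Eq. (1.110) is fully written out, its complex Hadamard property $W_3 W_3^* = 8I$ can be checked directly:

```python
import numpy as np

j = 1j
W3 = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],
    [1, -1,  1, -1,  1, -1,  1, -1],
    [1, -j, -1,  j,  1, -j, -1,  j],
    [1,  j, -1, -j,  1,  j, -1, -j],
    [1,  1,  j,  j, -1, -1, -j, -j],
    [1, -1,  j, -j, -1,  1, -j,  j],
    [1,  1, -j, -j, -1, -1,  j,  j],
    [1, -1, -j,  j, -1,  1,  j, -j],
])                                          # W3 of Eq. (1.110)
assert np.allclose(W3 @ W3.conj().T, 8 * np.eye(8))
```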
References
1. K. J. Horadam, Hadamard Matrices and Their Applications, Princeton University Press, Princeton (2006).
2. W. D. Wallis, A. P. Street, and J. S. Wallis, Combinatorics: Room Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics 292, Springer, New York (1972).
3. Y. X. Yang, Theory and Applications of Higher-Dimensional Hadamard Matrices, Kluwer Academic and Science Press, Beijing/New York (2001).
4. R. Damaschini, “Binary encoding image based on original Hadamard matrices,” Opt. Commun. 90, 218–220 (1992).
5. D. C. Tilotta, R. M. Hammaker, and W. G. Fateley, “A visible near-infrared Hadamard transform spectrometer based on a liquid crystal spatial light modulator array: a new approach in spectrometry,” Appl. Spectrosc. 41, 727–734 (1987).
6. M. Harwit and N. J. A. Sloane, Hadamard Transform Optics, Academic Press, New York (1979).
7. W. D. Wallis, Ed., Designs 2002: Further Computational and Constructive Design Theory, 2nd ed., Kluwer Academic, Dordrecht (2003).
8. J. A. Decker Jr. and M. Harwit, “Experimental operation of a Hadamard spectrometer,” Appl. Opt. 8, 2552–2554 (1969).
9. E. Nelson and M. Fredman, “Hadamard spectroscopy,” J. Opt. Soc. Am. 60, 1664–1669 (1970).
10. C. Koukouvinos and J. Seberry, “Hadamard matrices, orthogonal designs and construction algorithms,” available at Research Online, http://ro.uow.edu.au/infopapers/308.
11. K. Beauchamp, Applications of Walsh and Related Functions, Academic Press, New York (1984).
12. W. D. Wallis, Introduction to Combinatorial Designs, 2nd ed., Chapman & Hall/CRC, Boca Raton (2007).
13. G. Rees, The Remote Sensing Data Book, Cambridge University Press, Cambridge, England (1999).
14. S. Agaian, J. Astola, and K. Egiazarian, Binary Polynomial Transforms and Nonlinear Digital Filters, Marcel Dekker, New York (1995).
15. M. Nakahara and T. Ohmi, Quantum Computing, CRC Press, Boca Raton (2008).
16. M. Nakahara, R. Rahimi, and A. SaiToh, Eds., Mathematical Aspects of Quantum Computing, Kinki University Series on Quantum Computing, Japan (2008).
17. B. Rezaul, T. Daniel, H. Lai, and M. Palaniswami, Computational Intelligence in Biomedical Engineering, CRC Press, Boca Raton (2007).
18. Y. J. Kim and U. Platt, Advanced Environmental Monitoring, Springer, New York (2008).
19. M. C. Hemmer, Expert Systems in Chemistry Research, CRC Press, Boca Raton (2007).
20. J. J. Sylvester, “Thoughts on inverse orthogonal matrices, simultaneous sign successions and tessellated pavements in two or more colors, with applications to Newton’s rule, ornamental tile-work, and the theory of numbers,” Phil. Mag. 34, 461–475 (1867).
21. J. L. Walsh, “A closed set of normal orthogonal functions,” Am. J. Math. 45, 5–24 (1923).
22. H. F. Harmuth, Sequency Theory: Functions and Applications, Academic Press, New York (1977).
23. M. Hall Jr., “Hadamard matrices of order 16,” Res. Summary No. 36-10, pp. 21–26 (1961).
24. M. Hall Jr., “Hadamard matrices of order 20,” Res. Summary No. 36-12, pp. 27–35 (1961).
25. N. Ito, J. S. Leon, and J. Q. Longyear, “Classification of 3-(24,12,5) designs and 24-dimensional Hadamard matrices,” J. Comb. Theory, Ser. A 31, 66–93 (1981).
26. H. Kimura, “New Hadamard matrix of order 24,” Graphs Combin. 5, 235–242 (1989).
27. H. Kimura, “Classification of Hadamard matrices of order 28 with Hall sets,” Discrete Math. 128 (1–3), 257–268 (1994).
28. J. Cooper, J. Milas, and W. D. Wallis, “Hadamard equivalence,” in Combinatorial Mathematics, Lecture Notes in Mathematics 686, Springer, Berlin/Heidelberg, 126–135 (1978).
29. N. J. A. Sloane, A Library of Hadamard Matrices, www.research.att.com/~njas/hadamard.
30. J. Hadamard, “Résolution d’une question relative aux déterminants,” Bull. Sci. Math. 17, 240–246 (1893).
31. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics 1168, Springer, New York (1985).
32. J. Williamson, “Hadamard determinant theorem and sum of four squares,” Duke Math. J. 11, 65–81 (1944).
33. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block designs,” in Surveys in Contemporary Design Theory, John Wiley & Sons, Hoboken, NJ (1992).
34. J. Seberry and A. L. Whiteman, “A new construction for conference matrices,” Ars Combinatoria 16, 119–127 (1983).
35. http://www.cs.uow.edu.au/people/jennie/lifework.html.
36. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12 (3), 311–320 (1933).
37. J. S. Wallis, “On the existence of Hadamard matrices,” J. Combin. Theory, Ser. A 21, 188–195 (1976).
38. H. Kharaghani, “A construction for block circulant orthogonal designs,” J. Combin. Designs 4 (6), 389–395 (1998).
39. R. Craigen, J. Seberry, and X. Zhang, “Product of four Hadamard matrices,” J. Comb. Theory, Ser. A 59, 318–320 (1992).
40. S. Rahardja and B. J. Falkowski, “Digital signal processing with complex Hadamard transform,” in Proc. of Fourth ICSP-98, pp. 533–536 (1998).
41. R. J. Turyn, “Complex Hadamard matrices,” in Combinatorial Structures and Applications, pp. 435–437, Gordon and Breach, London (1970).
42. A. Haar, “Zur Theorie der orthogonalen Funktionensysteme,” Math. Ann. 69, 331–371 (1910).
43. K. R. Rao, M. Narasimhan, and K. Reveluni, “A family of discrete Haar transforms,” Comput. Elect. Eng. 2, 367–368 (1975).
44. K. Rao, K. Reveluni, M. Narasimhan, and N. Ahmed, “Complex Haar transform,” IEEE Trans. Acoust. Speech Signal Process. 2 (1), 102–104 (1976).
45. H. G. Sarukhanyan, “Hadamard matrices: construction methods and applications,” in Proc. 1st Int. Workshop on Transforms and Filter Banks, TICSP Ser. 1, Tampere University, Finland, pp. 95–130 (1998).
46. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, New York (1975).
47. S. S. Agaian and H. G. Sarukhanyan, “Recurrent formulae for the construction of Williamson-type matrices,” Math. Notes 30 (4), 603–617 (1981).
48. S. S. Agaian, “Advances and problems of fast orthogonal transforms for signal/image processing applications (Parts 1 and 2),” in Ser. Pattern Recognition, Classification, Forecasting Yearbook, Russian Academy of Sciences, Nauka, Moscow, pp. 146–215 (1990).
49. S. Agaian, J. Astola, and K. Egiazarian, Polynomial Transforms and Applications (Combinatorics, Digital Logic, Nonlinear Signal Processing), Tampere University, Finland (1993).
50. A. Haar, “Zur Theorie der orthogonalen Funktionensysteme,” Math. Ann. 69, 331–371 (1910).
51. B. S. Nagy, Alfréd Haar: Gesammelte Arbeiten, Budapest, Hungary (1959).
52. A. B. Németh, “On Alfréd Haar’s original proof of his theorem on best approximation,” in Proc. A. Haar Memorial Conf. I, II, Amsterdam, New York, pp. 651–659 (1987).
53. B. S. Nagy, “Alfréd Haar (1885–1933),” Resultate Math. 8 (2), 194–196 (1985).
54. K. J. L. Ray, “VLSI computing architectures for Haar transform,” Electron. Lett. 26 (23), 1962–1963 (1990).
55. T. J. Davis, “Fast decomposition of digital curves into polygons using the Haar transform,” IEEE Trans. Pattern Anal. Mach. Intell. 21 (8), 786–790 (1999).
56. B. J. Falkowski and S. Rahardja, “Sign Haar transform,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’94 2, 161–164 (1994).
57. K.-W. Cheung, C.-H. Cheung, and L.-M. Po, “A novel multiwavelet-based integer transform for lossless image coding,” in Proc. Int. Conf. on Image Processing, ICIP 99 1, 444–447, Kobe (1999).
58. B. J. Falkowski and S. Rahardja, “Properties of Boolean functions in spectral domain of sign Haar transform,” Inf. Commun. Signal Process. 1, 64–68 (1997).
59. B. J. Falkowski and C.-H. Chang, “Properties and applications of paired Haar transform,” in Proc. Int. Conf. on Information, Communications and Signal Processing, ICICS 1997 1, 48–51 (1997).
60. S. Yu and R. Liu, “A new edge detection algorithm: fast and localizing to a single pixel,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’93 1, 539–542 (1993).
61. T. Lonnestad, “A new set of texture features based on the Haar transform,” in Proc. 11th IAPR Int. Conf. on Pattern Recognition, Image, Speech and Signal Analysis (The Hague, 30 Aug.–3 Sept. 1992) 3, 676–679 (1992).
62. G. M. Megson, “Systolic arrays for the Haar transform,” in IEE Proc. of Computers and Digital Techniques 145, 403–410 (1998).
63. G. A. Ruiz and J. A. Michell, “Memory efficient programmable processor chip for inverse Haar transform,” IEEE Trans. Signal Process. 46 (1), 263–268 (1998).
64. M. A. Thornton, “Modified Haar transform calculation using digital circuit output probabilities,” in Proc. of IEEE Int. Conf. on Information, Communications and Signal Processing 1, 52–58 (1997).
65. J. P. Hansen and M. Sekine, “Decision diagram based techniques for the Haar wavelet transform,” in Proc. of Int. Conf. on Information, Communications and Signal Processing 1, pp. 59–63 (1997).
66. Y.-D. Wang and M. J. Paulik, “A discrete wavelet model for target recognition,” in Proc. of IEEE 39th Midwest Symp. on Circuits and Systems 2, 835–838 (1996).
67. K. Egiazarian and J. Astola, “Generalized Fibonacci cubes and trees for DSP applications,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’96 2, 445–448 (1996).
68. L. M. Kaplan and J. C.-C. Kuo, “Signal modeling using increments of extended self-similar processes,” in Proc. of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, ICASSP-94 4, 125–128 (1994).
69. L. Prasad, “Multiresolutional Fault Tolerant Sensor Integration and Object Recognition in Images,” Ph.D. dissertation, Louisiana State University (1995).
70. B. J. Falkowski and C. H. Chang, “Forward and inverse transformations between Haar spectra and ordered binary decision diagrams of Boolean functions,” IEEE Trans. Comput. 46 (11), 1272–1279 (1997).
71. G. Ruiz, J. A. Michell, and A. Buron, “Fault detection and diagnosis for MOS circuits from Haar and Walsh spectrum analysis: on the fault coverage of Haar reduced analysis,” in Theory and Application of Spectral Techniques, C. Moraga, Ed., Dortmund University Press, pp. 97–106 (1988).
72. J. Brenner and L. Cummings, “The Hadamard maximum determinant problem,” Am. Math. Mon. 79, 626–630 (1972).
73. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320 (1933).
74. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image coding,” Proc. IEEE 57, 58–68 (1969).
75. R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis, Kluwer Academic Publishers, Boston (1996).
76. S. Agaian and A. Matevosian, “Haar transforms and automatic quality test of printed circuit boards,” Acta Cybernet. 5 (3), 315–362 (1981).
77. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and Hadamard matrices,” Prob. Inf. Transmission 16 (3), 203–211 (1980).
78. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogonal designs and construction algorithms,” available at Research Online, http://ro.uow.edu.au/infopapers/308.
79. G. Ruiz, J. A. Michell, and A. Buron, “Fault detection and diagnosis for MOS circuits from Haar and Walsh spectrum analysis: on the fault coverage of Haar reduced analysis,” in Theory and Application of Spectral Techniques, C. Moraga, Ed., Dortmund University Press, pp. 97–106 (1988).
80. http://www.websters-online-dictionary.org/Gr/Gray+code.html.
81. F. Gray, “Pulse code communication,” U.S. Patent No. 2,632,058 (March 17, 1953).
Chapter 2
Fast Classical Discrete Orthogonal Transforms
The computation of unitary transforms is a complicated and time-consuming task. However, it would not be possible to use orthogonal transforms in signal and image processing applications without effective algorithms to calculate them. Note that both complexity issues—efficient software and circuit implementations—are at the heart of most applications. An important question in many applications is how to achieve the highest computational efficiency of the discrete orthogonal transforms (DOTs).1 The suitability of unitary transforms in each of the above applications depends on the properties of their basis functions as well as on the existence of fast algorithms, including parallel ones. A fast DOT is an efficient algorithm for computing the DOT and its inverse with an essentially smaller number of operations than direct matrix multiplication. The problem of computing a transform has been extensively studied.2–45
Historically, the first efficient DFT algorithm, for length 2^M, was described by Gauss in 1805 and developed by Cooley and Tukey in 1965.45–64 Since the introduction of the fast Fourier transform (FFT), Fourier analysis has become one of the most frequently used tools in signal/image processing and communication systems; other discrete transforms and different fast algorithms for computing transforms have been introduced as well. In the past decade, fast DOTs have been widely used in many areas such as data compression, pattern recognition and image reconstruction, interpolation, linear filtering, spectral analysis, watermarking, cryptography, and communication systems. The HTs, such as the WHT and the Walsh–Paley transform, are important members of the class of DOTs.1 These matrices are known as nonsinusoidal orthogonal transform matrices and have found applications in digital signal processing and communication systems1–3,7–11,34,36,39,65 because they do not require any multiplication operations in their computation. A survey of the literature on fast HTs (FHTs) and their hardware implementations is found in Refs. 2, 4, 14–22, and 66–74. There are many other practical problems where one needs to have an N-point FHT algorithm, where N = 4t for an arbitrary integer t. We have seen that, despite the efforts of several mathematicians, the Hadamard conjecture remains unproved even though it is widely believed to be true.
This chapter describes efficient (in terms of space and time) computational procedures for a commonly used class of 2^n-point HTs and Haar transforms. There are many distinct fast HT algorithms involving a wide range of mathematics. We will focus mostly on a matrix approach. Section 2.1 describes a general concept of matrix-based fast DOT algorithms. Section 2.2 presents the 2^n-point WHT. Section 2.3 presents the fast Walsh–Paley transform. Section 2.4 presents fast Cal-Sal transforms. Sections 2.5 and 2.6 describe the complex HTs and the fast Haar transform algorithm.
2.1 Matrix-Based Fast DOT Algorithms
Recall that the DOT of the sequence f(n) is given by

Y[k] = (1/√N) ∑_{n=0}^{N−1} f[n] φ_n[k],  k = 0, 1, ..., N − 1,  (2.1)

where {φ_n[k]} is an orthogonal system, or, in matrix form, Y = (1/√N) H_N f. Eq. (2.1) can be written as

Y[k] = f[0]φ_0[k] + f[1]φ_1[k] + ⋯ + f[N − 1]φ_{N−1}[k],  k = 0, 1, ..., N − 1.  (2.2)
It follows that the determination of each Y[k] requires N multiplications and N − 1 additions. Because we have to evaluate Y[k] for k = 0, 1, ..., N − 1, the direct determination of the DOT requires N^2 multiplications and N(N − 1) additions; that is, the number of multiplications and additions/subtractions is proportional to N^2, i.e., the complexity of the direct DOT is O(N^2).
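As an illustration (not from the book), a direct evaluation of Eq. (2.1) makes the O(N^2) cost visible; numpy is assumed, and `direct_dot` and the argument names are hypothetical:

```python
import numpy as np

def direct_dot(f, phi):
    """Direct evaluation of Eq. (2.1): Y[k] = (1/sqrt(N)) * sum_n f[n] * phi_n[k].

    `phi` is an N x N matrix whose n-th row is the basis function phi_n.
    Each output Y[k] costs N multiplications and N - 1 additions,
    so the whole transform takes O(N^2) operations.
    """
    N = len(f)
    Y = np.zeros(N)
    for k in range(N):
        for n in range(N):
            Y[k] += f[n] * phi[n, k]
    return Y / np.sqrt(N)
```

For an orthogonal system such as a Hadamard matrix, this agrees with the matrix form Y = (1/√N) H_N f.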
How can one reduce the computational complexity of an orthogonal transform? The choice of a particular algorithm depends on a number of factors, namely, complexity, memory/space, very large scale integration (VLSI) implementation, and other considerations.
Complexity: It is obvious that any practical algorithm for the DOT depends on the transform length N. There are many measures of the efficiency of an implementation. We will use the linear combination of the number of arithmetic operations [multiplications C_×(N) and additions/subtractions C_+(N)] needed to compute the DOT as a measure of computational complexity,

C(N) = μ_+ C_+(N) + μ_× C_×(N),  (2.3)

where μ_+ and μ_× are weight constants. C_+(N) is called the additive complexity, and C_×(N) is called the multiplicative complexity. These weights are very important for VLSI implementation, where the implementation cost of a multiplication is much higher than the implementation cost of an addition; this is one of the basic reasons the weights are used.
The idea of a fast algorithm is to map the given computational problem into several subproblems, which leads to a reduction of the order of complexity of the problem:

Cost(problem) = Sum{cost(mapping)} + Sum{cost(subproblems)}.

Usually, the fast DOT algorithm is based on decomposition of the computation of the DOT of a signal into successively smaller DOTs. A procedure that reduces the computational complexity of the orthogonal transform in this way is known as a fast discrete unitary (orthogonal) transform algorithm, or fast transform.
The main problem when calculating the DOT relates to the construction of the decomposition, namely, the transition to the short DOTs with minimal computational complexity. There are several algorithms for efficient evaluation of the DOT.2,38–64 The efficiencies of these algorithms are related to the following question: How close are they to the respective lower bound? Realizable lower bounds (knowledge of a lower bound tells us that it is impossible to develop an algorithm with better performance than that bound) are not so easily obtained. Another point in the comparison of algorithms is the memory requirement.
General Concept in the Design of Fast DOT Algorithms: A fast transform T_N f may be achieved by factoring the transform matrix T_N into the product of k sparse matrices. Typically, N = 2^n, k = log_2 N = n, and

T_{2^n} = F_n F_{n−1} ⋯ F_2 F_1,  (2.4)

where the F_i are very sparse matrices, so that the complexity of multiplying by F_i is O(N), i = 1, 2, ..., n.
An N = 2^n-point inverse transform matrix can be represented as

T_{2^n}^{−1} = T_{2^n}^T = (F_n F_{n−1} ⋯ F_2 F_1)^T = F_1^T F_2^T ⋯ F_{n−1}^T F_n^T.  (2.5)
Thus, one can implement the transform T_N f via the following consecutive computations:

f → F_1 f → F_2(F_1 f) → ⋯ → F_n[⋯ F_2(F_1 f)].  (2.6)

On the basis of the factorization of Eq. (2.4), the computational complexity is reduced from O(N^2) to O(N log N). Because each F_i contains only a few nonzero terms per row, the transformation can be efficiently accomplished by operating on f n times. For the Fourier, Hadamard, and slant transforms, F_i contains only two nonzero terms in each row. Thus, an N-point 1D transform with the decomposition of Eq. (2.4) can be implemented in O(N log N) operations, which is far fewer than N^2 operations.
Figure 2.1 2D transform flow chart.
The general algorithm for the fast DOT is given as follows:

• Input: a signal f of length N.
• Precompute the normalization constant 1/√N associated with the transform T_N f.
• For i = 1, 2, ..., n, compute f_i = F_i f_{i−1}, with f_0 = f.
• Multiply f_n by 1/√N.
• Output: T_N f = (1/√N) f_n.
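The steps above can be sketched directly; the following is a minimal illustration (numpy assumed, `fast_dot` is a hypothetical name), with the factor matrices F_i passed as a list:

```python
import numpy as np

def fast_dot(f, factors):
    """Apply Eq. (2.6): f -> F1 f -> F2(F1 f) -> ..., then scale by 1/sqrt(N).

    `factors` is the list [F1, F2, ..., Fn] from the factorization of
    Eq. (2.4); when every Fi is sparse, each product costs only O(N).
    """
    y = np.asarray(f, dtype=float)
    for F in factors:
        y = F @ y
    return y / np.sqrt(len(y))
```

For example, with the three sparse factors of the 8-point Hadamard matrix described in Section 2.2, `fast_dot(f, [B1, B2, B3])` reproduces (1/√8) H_8 f.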
2D DOTs: The simplest and most common 2D DOT algorithm, known as the row-column algorithm, corresponds to first performing 1D fast DOTs (by any of the 1D DOT algorithms) on all of the rows and then on all of the columns, or vice versa. 2D transforms can be performed in two steps, as follows:

Step 1. Compute the N-point 1D DOT on the columns of the data.
Step 2. Compute the N-point 1D DOT on the rows of the intermediate result.

This idea can be very easily extended to the multidimensional case (see Fig. 2.1).
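A minimal sketch of the row-column algorithm (numpy assumed; `dot_2d` and `transform_1d` are hypothetical names for any 1D DOT routine):

```python
import numpy as np

def dot_2d(X, transform_1d):
    """Row-column algorithm: apply a 1D DOT to every column of X, then to
    every row of the intermediate result (the two steps may be swapped)."""
    step1 = np.apply_along_axis(transform_1d, 0, X)      # Step 1: columns
    return np.apply_along_axis(transform_1d, 1, step1)   # Step 2: rows
```

For the WHT, taking `transform_1d = lambda v: H @ v / np.sqrt(len(v))` yields the usual separable result (1/N) H X H^T.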
2.2 Fast Walsh–Hadamard Transform
The HT, known primarily as the WHT, is one of the most widely used transforms in signal and image processing. Nevertheless, the WHT is just a particular case of a general class of transforms based on Hadamard matrices.1,2,66,67 Fast algorithms have been developed1,3–11,65–67,75–79 for efficient computation of these transforms. It is known that the discrete HT (DHT) is computationally advantageous over the FFT. Being orthonormal and taking values +1 or −1 at each point, the Hadamard functions can be used for series expansions of the signal. Because the Walsh–Hadamard matrix consists of ±1s, the computation of the WHT does not require any multiplication operations, and consequently requires no floating-point operations at all. The WHT is useful in signal and image processing, communication systems, image coding, image enhancement, pattern recognition, etc. The traditional fast N = 2^n-point DHT needs N log_2 N = n2^n addition operations. Note that the implementation of the WHT with straightforward matrix multiplication requires N(N − 1) = 2^{2n} − 2^n additions.
Now, let X = (x_0, x_1, ..., x_{N−1})^T be an input signal. The forward and inverse 1D WHTs of a vector X are defined as1,2,11

Y = (1/√N) H_N X,  (2.7)

X = (1/√N) H_N Y.  (2.8)
It has been shown that a fast WHT algorithm exists with C(N) = N log_2 N addition/subtraction operations.1 To understand the concept of the construction of fast transform algorithms, we start with the 8-point WHT

F = (1/√8) H_8 f,  (2.9)

where f = (a, b, c, d, e, f, g, h)^T is the input signal/vector. Recall that the HT matrix of order 8 has the following form:
H_8 = [ + + + + + + + +
        + − + − + − + −
        + + − − + + − −
        + − − + + − − +
        + + + + − − − −
        + − + − − + − +
        + + − − − − + +
        + − − + − + + − ],  (2.10)

where “+” denotes 1 and “−” denotes −1.

It is easy to check the following:
(1) The direct evaluation of the HT F = H_8 f requires 7 × 8 = 56 additions:

F = H_8 f = ( a + e + c + g + b + f + d + h
              a + e + c + g − b − f − d − h
              a + e − c − g + b + f − d − h
              a + e − c − g − b − f + d + h
              a − e + c − g + b − f + d − h
              a − e + c − g − b + f − d + h
              a − e − c + g + b − f − d + h
              a − e − c + g − b + f + d − h ).  (2.11)
(2) The Hadamard matrix H_8 can be expressed as the product of the following three matrices:
H8 = B3B2B1, (2.12)
where
B1 = H2 ⊗ I4, (2.13)
B2 = (H2 ⊗ I2) ⊕ (H2 ⊗ I2), (2.14)
B3 = I4 ⊗ H2, (2.15)
or

B_1 = [ + 0 0 0 + 0 0 0
        0 + 0 0 0 + 0 0
        0 0 + 0 0 0 + 0
        0 0 0 + 0 0 0 +
        + 0 0 0 − 0 0 0
        0 + 0 0 0 − 0 0
        0 0 + 0 0 0 − 0
        0 0 0 + 0 0 0 − ],  (2.16)

B_2 = [ + 0 + 0 0 0 0 0
        0 + 0 + 0 0 0 0
        + 0 − 0 0 0 0 0
        0 + 0 − 0 0 0 0
        0 0 0 0 + 0 + 0
        0 0 0 0 0 + 0 +
        0 0 0 0 + 0 − 0
        0 0 0 0 0 + 0 − ],  (2.17)

B_3 = [ + + 0 0 0 0 0 0
        + − 0 0 0 0 0 0
        0 0 + + 0 0 0 0
        0 0 + − 0 0 0 0
        0 0 0 0 + + 0 0
        0 0 0 0 + − 0 0
        0 0 0 0 0 0 + +
        0 0 0 0 0 0 + − ].  (2.18)
(3) Using this factorization, the 8-point 1D HT can be implemented in 24 = 8 log_2 8 operations. The proof follows.
The FHT algorithm can be realized via the following three steps:
Step 1. Calculate B_1 f [with B_1 from Eq. (2.16)]:

B_1 f = (a + e, b + f, c + g, d + h, a − e, b − f, c − g, d − h)^T.  (2.19)
Step 2. Calculate B_2(B_1 f) [with B_2 from Eq. (2.17)]:

B_2(B_1 f) = ( (a + e) + (c + g), (b + f) + (d + h), (a + e) − (c + g), (b + f) − (d + h),
               (a − e) + (c − g), (b − f) + (d − h), (a − e) − (c − g), (b − f) − (d − h) )^T.  (2.20)
Step 3. Calculate B_3[B_2(B_1 f)] [with B_3 from Eq. (2.18)]:

B_3[B_2(B_1 f)] = ( (a + e) + (c + g) + [(b + f) + (d + h)]
                    (a + e) + (c + g) − [(b + f) + (d + h)]
                    (a + e) − (c + g) + [(b + f) − (d + h)]
                    (a + e) − (c + g) − [(b + f) − (d + h)]
                    (a − e) + (c − g) + [(b − f) + (d − h)]
                    (a − e) + (c − g) − [(b − f) + (d − h)]
                    (a − e) − (c − g) + [(b − f) − (d − h)]
                    (a − e) − (c − g) − [(b − f) − (d − h)] ).  (2.21)
The comparison of Eq. (2.11) with Eq. (2.12) shows the following:

• The expression of Eq. (2.12), which computes the DHT, produces exactly the same result as evaluating the DHT definition directly [see Eq. (2.11)].
• The direct calculation of an 8-point HT H_8 f requires 56 operations. However, for the calculation of H_8 f via the fast algorithm, only 24 operations are required. This is because each product of a sparse matrix and a vector requires only eight additions or subtractions, since each sparse matrix has only two nonzero elements in each row. Thus, the total number of operations (additions and subtractions) required for the H_8 f calculation equals 24 = 8 log_2 8. The difference in speed can be significant, especially for long data sets, where N may be in the thousands or millions.
• Performing the 8-point HT requires only eight storage locations.
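The factorization of Eqs. (2.12)–(2.15) is easy to check numerically. The following sketch (numpy assumed, not the authors' code) builds the three sparse factors from their Kronecker definitions and compares their product with H_8:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
I2, I4 = np.eye(2, dtype=int), np.eye(4, dtype=int)

B1 = np.kron(H2, I4)               # Eq. (2.13): H2 (x) I4
B2 = np.kron(I2, np.kron(H2, I2))  # Eq. (2.14): (H2 (x) I2) (+) (H2 (x) I2)
B3 = np.kron(I4, H2)               # Eq. (2.15): I4 (x) H2

H8 = np.kron(H2, np.kron(H2, H2))  # Sylvester Hadamard matrix of order 8
assert np.array_equal(B3 @ B2 @ B1, H8)  # Eq. (2.12)

# Each factor has two nonzero entries per row: 8 additions/subtractions
# per stage, 3 stages, so 24 = 8 log2(8) operations in total.
assert all((np.count_nonzero(B, axis=1) == 2).all() for B in (B1, B2, B3))
```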
Figure 2.2 All steps of the 8-point WHT shown simultaneously.
Figure 2.3 The flow graph of the 8-point fast WHT.
• The inverse 8-point HT matrix can be expressed by the product of the same three matrices in reverse order: H_8 = B_1 B_2 B_3.

The fast WHT algorithms are best explained using signal flow diagrams, as shown in Fig. 2.2. These diagrams consist of a series of nodes, each representing a variable, which is itself expressed as the sum of other variables originating from the left of the diagram, with the node blocks connected by means of solid lines. A dashed connecting line indicates a term to be subtracted. Figure 2.2 shows the signal flow graph illustrating the computation of the WHT coefficients for N = 8 and shows all steps of the flow graph simultaneously.
In general, the flow graph is used without the node block (Fig. 2.3).
The matrices B_1, B_2, and B_3 [see Eqs. (2.16)–(2.18)] can be expressed as

B_1 = H_2 ⊗ I_4 = I_1 ⊗ H_2 ⊗ I_4,  (2.22)

B_2 = (H_2 ⊗ I_2) ⊕ (H_2 ⊗ I_2) = I_2 ⊗ H_2 ⊗ I_2,  (2.23)

B_3 = I_4 ⊗ H_2 = I_4 ⊗ H_2 ⊗ I_1.  (2.24)
The 2^n-point WHT matrix H_{2^n} can be factored as follows:

H_{2^n} = F_n F_{n−1} ⋯ F_2 F_1,  (2.25)

where F_i = I_{2^{i−1}} ⊗ (H_2 ⊗ I_{2^{n−i}}), i = 1, 2, ..., n. For instance, the 16-point WHT matrix can be factored as H_16 = F_4 F_3 F_2 F_1,
where
F1 = H2 ⊗ I8, (2.26)
F2 = I2 ⊗ (H2 ⊗ I4) = (H2 ⊗ I4) ⊕ (H2 ⊗ I4), (2.27)
F3 = I4 ⊗ (H2 ⊗ I2) = (H2 ⊗ I2) ⊕ (H2 ⊗ I2) ⊕ (H2 ⊗ I2) ⊕ (H2 ⊗ I2), (2.28)
F4 = I8 ⊗ H2. (2.29)
In Fig. 2.4, the flow graph of a 1D WHT for N = 16 is given.
Figure 2.4 Flow graph of the fast WHT.
Now, using the properties of the Kronecker product, we obtain the desired results. From this, it is not difficult to show that the WHT matrix of order 2^n can be factored as

H_{2^n} = ∏_{m=1}^{n} (I_{2^{m−1}} ⊗ H_2 ⊗ I_{2^{n−m}}).  (2.30)
Lemma 2.2.1: The Walsh–Hadamard matrix of order N = 2^n can be represented as

H_{2^n} = (H_2 ⊗ I_{2^{n−1}}) [ H_{2^{n−1}}  O_{2^{n−1}}
                                O_{2^{n−1}}  H_{2^{n−1}} ] = (H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_{2^{n−1}}),  (2.31)

where O_{2^{n−1}} is the zero matrix of order 2^{n−1}.

Proof: Indeed, because

H_{2^n} = [ H_{2^{n−1}}   H_{2^{n−1}}
            H_{2^{n−1}}  −H_{2^{n−1}} ] = H_2 ⊗ H_{2^{n−1}} = (H_2 I_2) ⊗ (I_{2^{n−1}} H_{2^{n−1}}),  (2.32)

the mixed-product property of the Kronecker product gives

H_{2^n} = (H_2 I_2) ⊗ (I_{2^{n−1}} H_{2^{n−1}}) = (H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_{2^{n−1}})
        = (H_2 ⊗ I_{2^{n−1}}) [ H_{2^{n−1}}  O_{2^{n−1}}
                                O_{2^{n−1}}  H_{2^{n−1}} ].  (2.33)
Using the Kronecker product property, we obtain the following:

Theorem 2.2.1: Let f be a signal of length N = 2^n. Then,

(1) The Walsh–Hadamard matrix of order N = 2^n can be factored as

H_{2^n} = ∏_{m=1}^{n} (I_{2^{m−1}} ⊗ H_2 ⊗ I_{2^{n−m}}).  (2.34)

(2) The WHT of the signal f can be computed with n2^n addition/subtraction operations.
Proof: From the definition of the Walsh–Hadamard matrix, we have

H_{2^n} = [ H_{2^{n−1}}   H_{2^{n−1}}
            H_{2^{n−1}}  −H_{2^{n−1}} ].  (2.35)

Using Lemma 2.2.1, we may rewrite this equation in the following form:

H_{2^n} = (H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_{2^{n−1}}) = (I_{2^0} ⊗ H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_{2^{n−1}}),  I_{2^0} = 1.  (2.36)

Using the same procedure with the Walsh–Hadamard matrix of order 2^{n−1}, we obtain

H_{2^{n−1}} = [ H_{2^{n−2}}   H_{2^{n−2}}
                H_{2^{n−2}}  −H_{2^{n−2}} ] = (I_{2^0} ⊗ H_2 ⊗ I_{2^{n−2}})(I_2 ⊗ H_{2^{n−2}}).  (2.37)

Thus, from the above two relations, we get

H_{2^n} = (I_{2^0} ⊗ H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_{2^{n−1}})
        = (I_{2^0} ⊗ H_2 ⊗ I_{2^{n−1}}) {I_2 ⊗ [(I_{2^0} ⊗ H_2 ⊗ I_{2^{n−2}})(I_2 ⊗ H_{2^{n−2}})]}.  (2.38)
Table 2.1 Additions/subtractions of 1D WHTs.

Order N    Fast transform (N log_2 N)    Direct transform N(N − 1)    (N − 1)/(log_2 N)
4          8                             12                           3/2
8          24                            56                           7/3
16         64                            240                          15/4
32         160                           992                          31/5
64         384                           4,032                        63/6
128        896                           16,256                       127/7
256        2,048                         65,280                       255/8
512        4,608                         261,632                      511/9
1024       10,240                        1,047,552                    1023/10
Thus, we have

H_{2^n} = (I_{2^0} ⊗ H_2 ⊗ I_{2^{n−1}})(I_2 ⊗ H_2 ⊗ I_{2^{n−2}})(I_{2^2} ⊗ H_{2^{n−2}}).  (2.39)

After n iterations, we obtain the desired result. The theorem is proved.
In Table 2.1, the number of additions/subtractions of 1D WHTs is given.
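The factorization of Eq. (2.34) maps directly onto an in-place butterfly computation. The sketch below (numpy assumed; `fwht` is a hypothetical name, not the authors' code) performs one pass of N additions/subtractions for each of the n = log_2 N sparse factors:

```python
import numpy as np

def fwht(x):
    """Unnormalized fast WHT: computes H_{2^n} x via the factorization
    H_{2^n} = prod_m (I (x) H2 (x) I) of Eq. (2.34).  Each of the
    n = log2(N) stages performs N additions/subtractions, for a total
    of n * 2^n operations, as stated in Theorem 2.2.1."""
    y = np.asarray(x, dtype=float).copy()
    N = len(y)
    h = 1
    while h < N:  # one pass per sparse factor
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b  # 2x2 butterfly (H2)
        h *= 2
    return y
```

Dividing the result by √N gives the normalized transform of Eq. (2.7); since the Kronecker factors act on disjoint index positions, the order of the stages does not affect the product.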
2.3 Fast Walsh–Paley Transform
In this section, we present factorizations of Walsh–Paley transform matrices.
Theorem 2.3.1: The Walsh–Paley matrix of order N = 2^n can be factored as

[WP]_{2^n} = (I_2 ⊗ [WP]_{2^{n−1}}) [ I_{2^{n−1}} ⊗ (+ +)
                                      I_{2^{n−1}} ⊗ (+ −) ].  (2.40)
Proof: From the definition of a Walsh–Paley matrix of order N = 2^n, we have

[WP]_{2^n} = [ [WP]_{2^{n−1}} ⊗ (+ +)
               [WP]_{2^{n−1}} ⊗ (+ −) ].  (2.41)
Note that [WP]_1 = (1). Using the properties of the Kronecker product, we obtain

[WP]_{2^n} = [ [WP]_{2^{n−1}} ⊗ (+ +)
               [WP]_{2^{n−1}} ⊗ (+ −) ]
           = [ ([WP]_{2^{n−1}} I_{2^{n−1}}) ⊗ (I_1 (+ +))
               ([WP]_{2^{n−1}} I_{2^{n−1}}) ⊗ (I_1 (+ −)) ]
           = [ ([WP]_{2^{n−1}} ⊗ I_1)(I_{2^{n−1}} ⊗ (+ +))
               ([WP]_{2^{n−1}} ⊗ I_1)(I_{2^{n−1}} ⊗ (+ −)) ].  (2.42)
Then, from Eq. (2.42) and from the following identity,

[ AB
  CD ] = [ A  0
           0  C ] [ B
                    D ],  (2.43)
we obtain

[WP]_{2^n} = [ [WP]_{2^{n−1}}        0
               0        [WP]_{2^{n−1}} ] [ I_{2^{n−1}} ⊗ (+ +)
                                           I_{2^{n−1}} ⊗ (+ −) ].  (2.44)

Thus,

[WP]_{2^n} = (I_2 ⊗ [WP]_{2^{n−1}}) [ I_{2^{n−1}} ⊗ (+ +)
                                      I_{2^{n−1}} ⊗ (+ −) ].  (2.45)
Example 2.3.1: The Walsh–Paley matrices of orders 4, 8, and 16 can be factored as follows.

N = 4:

[WP]_4 = [ I_2 ⊗ ( + +
                   + − ) ] P_1 = [ + + 0 0
                                   + − 0 0
                                   0 0 + +
                                   0 0 + − ] [ + + 0 0
                                               0 0 + +
                                               + − 0 0
                                               0 0 + − ],  (2.46)
N = 8:

[WP]_8 = [ I_4 ⊗ ( + +
                   + − ) ] (I_2 ⊗ P_1) P_2,  (2.47)

where

P_1 = [ + + 0 0
        0 0 + +
        + − 0 0
        0 0 + − ],   P_2 = [ + + 0 0 0 0 0 0
                             0 0 + + 0 0 0 0
                             0 0 0 0 + + 0 0
                             0 0 0 0 0 0 + +
                             + − 0 0 0 0 0 0
                             0 0 + − 0 0 0 0
                             0 0 0 0 + − 0 0
                             0 0 0 0 0 0 + − ].  (2.48)
N = 16:

[WP]_16 = [ I_8 ⊗ ( + +
                    + − ) ] (I_4 ⊗ P_1)(I_2 ⊗ P_2) P_3,  (2.49)

where

P_3 = [ + + 0 0 0 0 0 0 0 0 0 0 0 0 0 0
        0 0 + + 0 0 0 0 0 0 0 0 0 0 0 0
        0 0 0 0 + + 0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 + + 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 + + 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0 + + 0 0 0 0
        0 0 0 0 0 0 0 0 0 0 0 0 + + 0 0
        0 0 0 0 0 0 0 0 0 0 0 0 0 0 + +
        + − 0 0 0 0 0 0 0 0 0 0 0 0 0 0
        0 0 + − 0 0 0 0 0 0 0 0 0 0 0 0
        0 0 0 0 + − 0 0 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 + − 0 0 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 + − 0 0 0 0 0 0
        0 0 0 0 0 0 0 0 0 0 + − 0 0 0 0
        0 0 0 0 0 0 0 0 0 0 0 0 + − 0 0
        0 0 0 0 0 0 0 0 0 0 0 0 0 0 + − ].  (2.50)
Figure 2.5 Flow graph of a fast 8-point Walsh–Paley transform (inputs x_0, …, x_7; outputs in the order y_0, y_4, y_2, y_6, y_1, y_5, y_3, y_7).
Note that an 8-point Walsh–Paley transform of the vector x = (x_0, x_1, …, x_7)^T,

$$y = [WP]_8\, x = \Bigl[I_4 \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}\Bigr](I_2 \otimes P_1)\, P_2\, x, \tag{2.51}$$

can be calculated using the graph in Fig. 2.5.
Theorem 2.3.2: The Walsh–Paley matrix of order N = 2^n can be factored as

$$[WP]_{2^n} = \prod_{m=0}^{n-1} \Bigl[I_{2^{n-1-m}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\ +) \\ I_{2^m} \otimes (+\ -) \end{pmatrix}\Bigr]. \tag{2.52}$$

Proof: From Theorem 2.3.1, we have

$$[WP]_{2^n} = \bigl(I_2 \otimes [WP]_{2^{n-1}}\bigr)\begin{pmatrix} I_{2^{n-1}} \otimes (+\ +) \\ I_{2^{n-1}} \otimes (+\ -) \end{pmatrix}. \tag{2.53}$$

Using Theorem 2.3.1 once again, we obtain

$$[WP]_{2^{n-1}} = \bigl(I_2 \otimes [WP]_{2^{n-2}}\bigr)\begin{pmatrix} I_{2^{n-2}} \otimes (+\ +) \\ I_{2^{n-2}} \otimes (+\ -) \end{pmatrix}. \tag{2.54}$$
From Eqs. (2.53) and (2.54), the result is

$$[WP]_{2^n} = \bigl(I_2 \otimes [WP]_{2^{n-1}}\bigr)\begin{pmatrix} I_{2^{n-1}} \otimes (+\ +) \\ I_{2^{n-1}} \otimes (+\ -) \end{pmatrix}
= \Bigl\{I_2 \otimes \Bigl[\bigl(I_2 \otimes [WP]_{2^{n-2}}\bigr)\begin{pmatrix} I_{2^{n-2}} \otimes (+\ +) \\ I_{2^{n-2}} \otimes (+\ -) \end{pmatrix}\Bigr]\Bigr\}
\begin{pmatrix} I_{2^{n-1}} \otimes (+\ +) \\ I_{2^{n-1}} \otimes (+\ -) \end{pmatrix}$$
$$= \bigl(I_4 \otimes [WP]_{2^{n-2}}\bigr)
\Bigl[I_2 \otimes \begin{pmatrix} I_{2^{n-2}} \otimes (+\ +) \\ I_{2^{n-2}} \otimes (+\ -) \end{pmatrix}\Bigr]
\Bigl[I_{2^0} \otimes \begin{pmatrix} I_{2^{n-1}} \otimes (+\ +) \\ I_{2^{n-1}} \otimes (+\ -) \end{pmatrix}\Bigr]. \tag{2.55}$$
After performing n iterations, we obtain Eq. (2.52).
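The product of Eq. (2.52) translates directly into a fast butterfly algorithm: each sparse factor costs only N additions or subtractions. The sketch below (NumPy; the function names are ours) builds [WP]_{2^n} from the recursion of Eq. (2.41) and applies the factors of Eq. (2.52) rightmost first; it is a minimal illustration, not an optimized implementation.

```python
import numpy as np

def walsh_paley(n):
    """[WP] of order 2**n via the recursion of Eq. (2.41)."""
    wp = np.array([[1]])
    for _ in range(n):
        wp = np.vstack([np.kron(wp, [[1, 1]]),     # [WP] ⊗ (+ +)
                        np.kron(wp, [[1, -1]])])   # [WP] ⊗ (+ -)
    return wp

def fast_wp(x):
    """Apply the sparse factors of Eq. (2.52), rightmost (m = n-1) first."""
    x = np.asarray(x, dtype=float)
    N = x.size
    n = N.bit_length() - 1
    for m in range(n - 1, -1, -1):
        blk = 2 ** (m + 1)                 # size of each diagonal block
        y = np.empty_like(x)
        for s in range(0, N, blk):
            v = x[s:s + blk]
            y[s:s + blk // 2] = v[0::2] + v[1::2]        # I ⊗ (+ +) rows
            y[s + blk // 2:s + blk] = v[0::2] - v[1::2]  # I ⊗ (+ -) rows
        x = y
    return x

x = np.arange(8.0)
assert np.allclose(fast_wp(x), walsh_paley(3) @ x)
```

The fast path uses n·N additions/subtractions versus N² for the direct matrix–vector product.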
Theorem 2.3.3: The Walsh matrix of order N = 2^n can be expressed as

$$W_{2^n} = G_{2^n}[WP]_{2^n}, \tag{2.56}$$

where G_{2^n} is the Gray code permutation matrix, i.e.,

$$G_{2^n} = \prod_{m=0}^{n-1} I_{2^m} \otimes \mathrm{diag}\{I_{2^{n-m-1}}, R_{2^{n-m-1}}\} \tag{2.57}$$

and R_{2^n} is the reversal (anti-diagonal) permutation matrix

$$R_{2^n} = \begin{pmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{pmatrix}. \tag{2.58}$$

This matrix can also be expressed as

$$W_{2^n} = Q_{2^n} H_{2^n}, \tag{2.59}$$

where H_{2^n} is a Walsh–Hadamard matrix, and

$$Q_{2^n} = \prod_{m=0}^{n-1} \Bigl[I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\ 0) \\ R_{2^m} \otimes (0\ +) \end{pmatrix}\Bigr]. \tag{2.60}$$
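Relation (2.56) can be checked numerically: applying G_{2^n} of Eq. (2.57) to the Walsh–Paley matrix should reorder its rows into sequency (sign-change) order. A sketch (NumPy; helper names are ours), verified here for order 8:

```python
import numpy as np

def walsh_paley(n):
    """[WP] of order 2**n via the recursion of Eq. (2.41)."""
    wp = np.array([[1]])
    for _ in range(n):
        wp = np.vstack([np.kron(wp, [[1, 1]]), np.kron(wp, [[1, -1]])])
    return wp

def gray_perm(n):
    """G of Eq. (2.57): product over m of I_{2^m} ⊗ diag{I, R} factors."""
    N = 2 ** n
    G = np.eye(N)
    for m in range(n):
        half = 2 ** (n - m - 1)
        R = np.eye(half)[::-1]                 # reversal matrix R of Eq. (2.58)
        blk = np.block([[np.eye(half), np.zeros((half, half))],
                        [np.zeros((half, half)), R]])
        G = G @ np.kron(np.eye(2 ** m), blk)
    return G

def sequency(row):
    """Number of sign changes along a ±1 row."""
    return int(np.count_nonzero(np.diff(np.sign(row)) != 0))

W = gray_perm(3) @ walsh_paley(3)
assert [sequency(r) for r in W] == list(range(8))   # rows in sequency order
```

For n = 2 this reproduces Eq. (2.63): gray_perm(2) is diag{I_2, R_2}.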
Example 2.3.2: A factorization of the Walsh matrices of orders 4, 8, and 16, using the relation of Eq. (2.56), is obtained as follows:

(1) For N = 4, since [see Eq. (2.52)]

$$[WP]_2 = H_2, \tag{2.61}$$

$$[WP]_4 = \begin{pmatrix} H_2 & 0 \\ 0 & H_2 \end{pmatrix}\begin{pmatrix} I_2 \otimes (+\ +) \\ I_2 \otimes (+\ -) \end{pmatrix} \tag{2.62}$$

and

$$G_4 = \Bigl[I_1 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}\Bigr]\Bigl[I_2 \otimes \begin{pmatrix} I_1 & 0 \\ 0 & R_1 \end{pmatrix}\Bigr] = \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}, \tag{2.63}$$
then we obtain

$$W_4 = \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}\begin{pmatrix} H_2 & 0 \\ 0 & H_2 \end{pmatrix}\begin{pmatrix} I_2 \otimes (+\ +) \\ I_2 \otimes (+\ -) \end{pmatrix}. \tag{2.64}$$

In more detail, W_4 = A_0A_1A_2, where

$$A_0 = \begin{pmatrix} + & 0 & 0 & 0 \\ 0 & + & 0 & 0 \\ 0 & 0 & 0 & + \\ 0 & 0 & + & 0 \end{pmatrix}, \tag{2.65}$$

$$A_1 = \begin{pmatrix} + & + & 0 & 0 \\ + & - & 0 & 0 \\ 0 & 0 & + & + \\ 0 & 0 & + & - \end{pmatrix}, \tag{2.66}$$

$$A_2 = \begin{pmatrix} + & + & 0 & 0 \\ 0 & 0 & + & + \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix}. \tag{2.67}$$
(2) For N = 8, from Eq. (2.52) we have

$$[WP]_8 = (I_4 \otimes H_2)\Bigl[I_2 \otimes \begin{pmatrix} I_2 \otimes (+\ +) \\ I_2 \otimes (+\ -) \end{pmatrix}\Bigr]\begin{pmatrix} I_4 \otimes (+\ +) \\ I_4 \otimes (+\ -) \end{pmatrix} \tag{2.68}$$

and

$$G_8 = \begin{pmatrix} I_4 & 0 \\ 0 & R_4 \end{pmatrix}\Bigl[I_2 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}\Bigr](I_4 \otimes I_2). \tag{2.69}$$
Then, we obtain W_8 = A_0A_1A_2A_3, where

$$A_0 = \begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0
\end{pmatrix}, \tag{2.70a}$$

$$A_1 = \begin{pmatrix}
+ & + & 0 & 0 & 0 & 0 & 0 & 0 \\
+ & - & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & + & 0 & 0 & 0 & 0 \\
0 & 0 & + & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & + & 0 & 0 \\
0 & 0 & 0 & 0 & + & - & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & + \\
0 & 0 & 0 & 0 & 0 & 0 & + & -
\end{pmatrix}, \tag{2.70b}$$
$$A_2 = I_2 \otimes P_1 = \begin{pmatrix}
+ & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & + & 0 & 0 & 0 & 0 \\
+ & - & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & + & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & + \\
0 & 0 & 0 & 0 & + & - & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & -
\end{pmatrix}, \tag{2.70c}$$

$$A_3 = P_2 = \begin{pmatrix}
+ & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & + & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & + \\
+ & - & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & - & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & -
\end{pmatrix}. \tag{2.70d}$$
(3) For N = 16, from Eq. (2.52) we obtain

$$[WP]_{16} = (I_8 \otimes H_2)\Bigl[I_4 \otimes \begin{pmatrix} I_2 \otimes (+\ +) \\ I_2 \otimes (+\ -) \end{pmatrix}\Bigr]\Bigl[I_2 \otimes \begin{pmatrix} I_4 \otimes (+\ +) \\ I_4 \otimes (+\ -) \end{pmatrix}\Bigr]\begin{pmatrix} I_8 \otimes (+\ +) \\ I_8 \otimes (+\ -) \end{pmatrix}, \tag{2.71}$$

because

$$G_{16} = \begin{pmatrix} I_8 & 0 \\ 0 & R_8 \end{pmatrix}\Bigl[I_2 \otimes \begin{pmatrix} I_4 & 0 \\ 0 & R_4 \end{pmatrix}\Bigr]\Bigl[I_4 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}\Bigr](I_8 \otimes I_2). \tag{2.72}$$

Then we have W_16 = A_0A_1A_2A_3A_4, where

$$A_0 = G_{16} = \begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & + & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} \tag{2.73}$$
and, with P_1 and P_2 as in Eq. (2.48),

$$A_1 = I_8 \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \tag{2.74}$$

$$A_2 = I_4 \otimes P_1, \tag{2.75}$$

$$A_3 = I_2 \otimes P_2, \tag{2.76}$$
$$A_4 = \begin{pmatrix} I_8 \otimes (+\ +) \\ I_8 \otimes (+\ -) \end{pmatrix} = P_3\ \text{[see Eq. (2.50)]}. \tag{2.77}$$
Example 2.3.3: A factorization of the Walsh matrices of orders 4, 8, and 16, using the relation of Eq. (2.59), is obtained as follows. Because

$$H_{2^n} = \prod_{m=0}^{n-1} I_{2^m} \otimes (H_2 \otimes I_{2^{n-m-1}}), \tag{2.78}$$

$$Q_{2^n} = \prod_{m=0}^{n-1} \Bigl[I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\ 0) \\ R_{2^m} \otimes (0\ +) \end{pmatrix}\Bigr], \tag{2.79}$$

then, using Eq. (2.59), the Walsh matrix W_{2^n} can be factored as

$$W_{2^n} = B_0B_1 \cdots B_{n-1}\, A_0A_1 \cdots A_{n-1}, \tag{2.80}$$

where A_m = I_{2^m} ⊗ (H_2 ⊗ I_{2^{n-m-1}}) and

$$B_m = I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\ 0) \\ R_{2^m} \otimes (0\ +) \end{pmatrix}, \quad m = 0, 1, 2, \ldots, n-1.$$

The factorizations of the Walsh matrices of orders 4 and 8 are given as follows:

$$W_4 = (I_2 \otimes I_2)\begin{pmatrix} I_2 \otimes (+\ 0) \\ R_2 \otimes (0\ +) \end{pmatrix}(H_2 \otimes I_2)(I_2 \otimes H_2)$$
$$= \begin{pmatrix} + & 0 & 0 & 0 \\ 0 & 0 & + & 0 \\ 0 & 0 & 0 & + \\ 0 & + & 0 & 0 \end{pmatrix}
\begin{pmatrix} + & 0 & + & 0 \\ 0 & + & 0 & + \\ + & 0 & - & 0 \\ 0 & + & 0 & - \end{pmatrix}
\begin{pmatrix} + & + & 0 & 0 \\ + & - & 0 & 0 \\ 0 & 0 & + & + \\ 0 & 0 & + & - \end{pmatrix}. \tag{2.81}$$
$$W_8 = (I_4 \otimes I_2)\Bigl[I_2 \otimes \begin{pmatrix} I_2 \otimes (+\ 0) \\ R_2 \otimes (0\ +) \end{pmatrix}\Bigr]\begin{pmatrix} I_4 \otimes (+\ 0) \\ R_4 \otimes (0\ +) \end{pmatrix}(H_2 \otimes I_4)\bigl[I_2 \otimes (H_2 \otimes I_2)\bigr](I_4 \otimes H_2)$$
$$= \begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0
\end{pmatrix}
\begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
+ & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & + \\
+ & 0 & 0 & 0 & - & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & - & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & - & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & -
\end{pmatrix}$$
$$\times \begin{pmatrix}
+ & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & + & 0 & 0 & 0 & 0 \\
+ & 0 & - & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & + \\
0 & 0 & 0 & 0 & + & 0 & - & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & -
\end{pmatrix}
\begin{pmatrix}
+ & + & 0 & 0 & 0 & 0 & 0 & 0 \\
+ & - & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & + & + & 0 & 0 & 0 & 0 \\
0 & 0 & + & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & + & 0 & 0 \\
0 & 0 & 0 & 0 & + & - & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & + \\
0 & 0 & 0 & 0 & 0 & 0 & + & -
\end{pmatrix}. \tag{2.82}$$
2.4 Cal–Sal Fast Transform

The elements of a Cal–Sal transform matrix of order N (N = 2^n), H_cs = (h_{u,v})_{u,v=0}^{N-1}, can be defined as^74

$$h_{u,v} = (-1)^{p_0v_0 + p_1v_1 + \cdots + p_{n-1}v_{n-1}}, \tag{2.83}$$

where u = 2^{n-1}u_{n-1} + 2^{n-2}u_{n-2} + … + u_0, v = 2^{n-1}v_{n-1} + 2^{n-2}v_{n-2} + … + v_0, and p_{n-1} = u_0, p_i = u_{n-i-1} + u_{n-i-2}, i = 0, 1, …, n − 2.

Let x = (x_0, x_1, …, x_{N-1})^T be an input signal vector; then the forward and inverse Cal–Sal transforms can be expressed as

$$y = \frac{1}{N}H_{cs}\,x, \qquad x = H_{cs}\,y. \tag{2.84}$$
The Cal–Sal matrices of orders 4 and 8 are given as follows:

$$H_{cs}(4) = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & -1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1
\end{pmatrix}; \tag{2.85}$$

its rows are, from top to bottom, wal(0, t), cal(1, t), sal(2, t), and sal(1, t), with sequencies 0, 1, 2, 1.

$$H_{cs}(8) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1
\end{pmatrix}; \tag{2.86}$$

its rows are wal(0, t), cal(1, t), cal(2, t), cal(3, t), sal(4, t), sal(3, t), sal(2, t), and sal(1, t), with sequencies 0, 1, 2, 3, 4, 3, 2, 1.
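Definition (2.83) is straightforward to implement directly. A sketch (NumPy; the function name is ours) that builds H_cs from the bit formula and checks it against the order-4 matrix of Eq. (2.85) and against orthogonality:

```python
import numpy as np

def cal_sal(n):
    """H_cs of order 2**n from the bit formula of Eq. (2.83)."""
    N = 2 ** n
    H = np.empty((N, N), dtype=int)
    for u in range(N):
        ub = [(u >> k) & 1 for k in range(n)]       # bits u_0 ... u_{n-1}
        p = [0] * n
        p[n - 1] = ub[0]                            # p_{n-1} = u_0
        for i in range(n - 1):                      # p_i = u_{n-i-1} + u_{n-i-2}
            p[i] = ub[n - i - 1] + ub[n - i - 2]
        for v in range(N):
            vb = [(v >> k) & 1 for k in range(n)]   # bits v_0 ... v_{n-1}
            H[u, v] = (-1) ** sum(pi * vi for pi, vi in zip(p, vb))
    return H

H4 = cal_sal(2)
assert np.array_equal(H4, np.array([[1, 1, 1, 1],
                                    [1, -1, -1, 1],
                                    [1, -1, 1, -1],
                                    [1, 1, -1, -1]]))   # Eq. (2.85)
H8 = cal_sal(3)
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
```

The second assertion confirms that H_cs is (up to the 1/N factor) its own inverse, as Eq. (2.84) requires.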
Similar to other HT matrices, the Cal–Sal matrix H_cs(N) of order N can be factored into sparse matrices, leading to a fast algorithm. For example, we have

$$H_{cs}(4) = \begin{pmatrix} \begin{pmatrix} + & 0 \\ 0 & + \end{pmatrix} & \begin{pmatrix} + & 0 \\ 0 & + \end{pmatrix} \\ \begin{pmatrix} 0 & + \\ + & 0 \end{pmatrix} & \begin{pmatrix} 0 & - \\ - & 0 \end{pmatrix} \end{pmatrix}
\begin{pmatrix} \begin{pmatrix} + & + \\ + & - \end{pmatrix} & O_2 \\ O_2 & \begin{pmatrix} + & + \\ - & + \end{pmatrix} \end{pmatrix}
= \begin{pmatrix} I_2 & I_2 \\ R_2 & -R_2 \end{pmatrix}\begin{pmatrix} H_2 & O_2 \\ O_2 & H_2R_2 \end{pmatrix}, \tag{2.87}$$
$$H_{cs}(8) = \begin{pmatrix}
\begin{pmatrix} + & 0 & 0 & 0 \\ 0 & 0 & + & 0 \\ 0 & + & 0 & 0 \\ 0 & 0 & 0 & + \end{pmatrix} &
\begin{pmatrix} + & 0 & 0 & 0 \\ 0 & 0 & + & 0 \\ 0 & + & 0 & 0 \\ 0 & 0 & 0 & + \end{pmatrix} \\
\begin{pmatrix} 0 & 0 & 0 & + \\ 0 & + & 0 & 0 \\ 0 & 0 & + & 0 \\ + & 0 & 0 & 0 \end{pmatrix} &
\begin{pmatrix} 0 & 0 & 0 & - \\ 0 & - & 0 & 0 \\ 0 & 0 & - & 0 \\ - & 0 & 0 & 0 \end{pmatrix}
\end{pmatrix}
\begin{pmatrix} \begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix} & O_4 \\ O_4 & \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix} \end{pmatrix}
\Bigl[I_2 \otimes \mathrm{diag}\Bigl\{\begin{pmatrix} + & + \\ + & - \end{pmatrix}, \begin{pmatrix} + & + \\ - & + \end{pmatrix}\Bigr\}\Bigr], \tag{2.88}$$
$$H_{cs}(16) = B_1B_2B_3B_4, \tag{2.89}$$
where

$$B_1 = \begin{pmatrix}
\begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & +
\end{pmatrix} &
\begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & +
\end{pmatrix} \\
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & 0 & 0 \\
+ & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} &
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & - \\
0 & 0 & 0 & - & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & - & 0 & 0 \\
0 & - & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & - & 0 \\
0 & 0 & - & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & - & 0 & 0 & 0 \\
- & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\end{pmatrix}, \tag{2.90}$$
$$B_2 = \begin{pmatrix} \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix} & O_8 \\ O_8 & \begin{pmatrix} I_4 & I_4 \\ -I_4 & I_4 \end{pmatrix} \end{pmatrix}, \tag{2.91}$$
$$B_3 = \mathrm{diag}\Bigl\{\begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix},\ \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix},\ \begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix},\ \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix}\Bigr\}, \tag{2.92}$$
$$B_4 = I_4 \otimes \mathrm{diag}\Bigl\{\begin{pmatrix} + & + \\ + & - \end{pmatrix},\ \begin{pmatrix} + & + \\ - & + \end{pmatrix}\Bigr\}
= \mathrm{diag}\Bigl\{\begin{pmatrix} + & + \\ + & - \end{pmatrix}, \begin{pmatrix} + & + \\ - & + \end{pmatrix}, \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \begin{pmatrix} + & + \\ - & + \end{pmatrix}, \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \begin{pmatrix} + & + \\ - & + \end{pmatrix}, \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \begin{pmatrix} + & + \\ - & + \end{pmatrix}\Bigr\}. \tag{2.93}$$
We will now introduce the column bit reversal (CBR) operation. Let A be an m × m matrix (m a power of 2). [CBR](A) is the m × m matrix obtained from A by rearranging its columns in bit-reversed order. For example, consider the following 4 × 4 matrix:

$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{pmatrix}, \quad\text{then}\quad
[CBR](A) = \begin{pmatrix}
a_{11} & a_{13} & a_{12} & a_{14} \\
a_{21} & a_{23} & a_{22} & a_{24} \\
a_{31} & a_{33} & a_{32} & a_{34} \\
a_{41} & a_{43} & a_{42} & a_{44}
\end{pmatrix}. \tag{2.94}$$

The horizontal reflection (HR) operation, defined for a matrix of any size, reverses the order of the columns:

$$[HR](A) = \begin{pmatrix}
a_{14} & a_{13} & a_{12} & a_{11} \\
a_{24} & a_{23} & a_{22} & a_{21} \\
a_{34} & a_{33} & a_{32} & a_{31} \\
a_{44} & a_{43} & a_{42} & a_{41}
\end{pmatrix}. \tag{2.95}$$
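Both operations are simple column permutations. A sketch (NumPy; the function names are ours), checked on the index pattern of Eqs. (2.94) and (2.95):

```python
import numpy as np

def cbr(A):
    """Column bit reversal: column j of the result is column bitrev(j) of A."""
    m = A.shape[1]
    bits = m.bit_length() - 1
    perm = [int(format(j, f'0{bits}b')[::-1], 2) for j in range(m)]
    return A[:, perm]

def hr(A):
    """Horizontal reflection: reverse the column order."""
    return A[:, ::-1]

A = np.array([[11, 12, 13, 14],
              [21, 22, 23, 24]])
assert np.array_equal(cbr(A), np.array([[11, 13, 12, 14],
                                        [21, 23, 22, 24]]))   # cf. Eq. (2.94)
assert np.array_equal(hr(A), np.array([[14, 13, 12, 11],
                                       [24, 23, 22, 21]]))    # cf. Eq. (2.95)
```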
Similarly, we can define the block horizontal reflection (BHR) operation, which reverses the order of the column blocks of a matrix. Using these notations, we can represent the Cal–Sal matrices Hcs(4), Hcs(8), and Hcs(16) as follows.

For N = 4, we have

$$H_{cs}(4) = B_1B_2, \tag{2.96}$$

where

$$B_1 = \begin{pmatrix} [CBR](I_2) & [CBR](I_2) \\ [HR]\{[CBR](I_2)\} & -[HR]\{[CBR](I_2)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 & O_2 \\ O_2 & [HR](H_2) \end{pmatrix}. \tag{2.97}$$
For N = 8, we have

$$H_{cs}(8) = B_1B_2B_3, \tag{2.98}$$

where

$$B_1 = \begin{pmatrix} [CBR](I_4) & [CBR](I_4) \\ [HR]\{[CBR](I_4)\} & -[HR]\{[CBR](I_4)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & [BHR](H_2 \otimes I_2) \end{pmatrix}, \quad
B_3 = I_2 \otimes \begin{pmatrix} H_2 & O_2 \\ O_2 & [HR](H_2) \end{pmatrix}. \tag{2.99}$$
For N = 16, we have

$$H_{cs}(16) = B_1B_2B_3B_4, \tag{2.100}$$

where

$$B_1 = \begin{pmatrix} [CBR](I_8) & [CBR](I_8) \\ [HR]\{[CBR](I_8)\} & -[HR]\{[CBR](I_8)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 \otimes I_4 & O_8 \\ O_8 & [BHR](H_2 \otimes I_4) \end{pmatrix},$$
$$B_3 = I_2 \otimes \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & [BHR](H_2 \otimes I_2) \end{pmatrix}, \quad
B_4 = I_4 \otimes \begin{pmatrix} H_2 & O_2 \\ O_2 & [HR](H_2) \end{pmatrix}. \tag{2.101}$$
It can be shown that a Cal–Sal matrix of order N = 2^n, n ≥ 2, can be factored as

$$H_{cs}(2^n) = B_1B_2 \cdots B_n, \tag{2.102}$$

where

$$B_1 = \begin{pmatrix} [CBR](I_{2^{n-1}}) & [CBR](I_{2^{n-1}}) \\ [HR]\{[CBR](I_{2^{n-1}})\} & -[HR]\{[CBR](I_{2^{n-1}})\} \end{pmatrix}, \tag{2.103}$$

$$B_i = I_{2^{i-2}} \otimes \begin{pmatrix} H_2 \otimes I_{2^{n-i}} & O_{2^{n-i+1}} \\ O_{2^{n-i+1}} & [BHR](H_2 \otimes I_{2^{n-i}}) \end{pmatrix}, \quad i = 2, 3, \ldots, n. \tag{2.104}$$

For i = n, the diagonal blocks are 2 × 2 and [BHR] reduces to [HR], so that

$$B_n = I_{2^{n-2}} \otimes \mathrm{diag}\{H_2, [HR](H_2)\}. \tag{2.105}$$
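This factorization can be verified numerically against the direct construction of Eq. (2.83). A self-contained sketch (NumPy; helper names are ours, and the CBR/HR definitions are repeated for completeness), assuming the same diag{H₂⊗I, [BHR](H₂⊗I)} block rule for every factor i = 2, …, n; it is checked here for orders 4 and 8:

```python
import numpy as np

def cbr(A):
    m = A.shape[1]
    b = m.bit_length() - 1
    return A[:, [int(format(j, f'0{b}b')[::-1], 2) for j in range(m)]]

def hr(A):
    return A[:, ::-1]

def bhr(A, w):
    """Block horizontal reflection: reverse the order of width-w column blocks."""
    return np.hstack([A[:, k:k + w] for k in range(0, A.shape[1], w)][::-1])

def cal_sal(n):
    """Direct construction from the bit formula of Eq. (2.83)."""
    N = 2 ** n
    H = np.empty((N, N), dtype=float)
    for u in range(N):
        ub = [(u >> k) & 1 for k in range(n)]
        p = [ub[n - i - 1] + ub[n - i - 2] for i in range(n - 1)] + [ub[0]]
        for v in range(N):
            vb = [(v >> k) & 1 for k in range(n)]
            H[u, v] = (-1) ** sum(a * b for a, b in zip(p, vb))
    return H

def cal_sal_factored(n):
    """Product B_1 B_2 ... B_n of the sparse factors."""
    H2 = np.array([[1., 1.], [1., -1.]])
    I = np.eye(2 ** (n - 1))
    M = np.block([[cbr(I), cbr(I)], [hr(cbr(I)), -hr(cbr(I))]])   # B_1
    for i in range(2, n + 1):
        core = np.kron(H2, np.eye(2 ** (n - i)))
        Z = np.zeros_like(core)
        blk = np.block([[core, Z], [Z, bhr(core, 2 ** (n - i))]])
        M = M @ np.kron(np.eye(2 ** (i - 2)), blk)                # B_i
    return M

for n in (2, 3):
    assert np.allclose(cal_sal_factored(n), cal_sal(n))
```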
2.5 Fast Complex HTs

In this section, we present the factorization of complex Hadamard matrices. As mentioned above, a complex Hadamard matrix H is a matrix with elements ±1, ±j satisfying

$$HH^{*} = H^{*}H = NI_N, \tag{2.106}$$

where H^{*} denotes the complex conjugate transpose of H, and j = √−1. It can be proved that if H is a complex Hadamard matrix of order N, then N is even. The matrix

$$[CS]_2 = \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix}$$

is an example of a complex Hadamard matrix of order 2. Complex Hadamard matrices of higher orders can be generated recursively using the Kronecker product, i.e.,

$$[CS]_{2^n} = H_2 \otimes [CS]_{2^{n-1}}, \quad n = 2, 3, \ldots. \tag{2.107}$$
Theorem 2.5.1: The complex Sylvester matrix of order 2^n [see Eq. (2.107)] can be factored as

$$[CS]_{2^n} = \Bigl[\prod_{m=1}^{n-1} \bigl(I_{2^{m-1}} \otimes H_2 \otimes I_{2^{n-m}}\bigr)\Bigr]\bigl(I_{2^{n-1}} \otimes [CS]_2\bigr). \tag{2.108}$$

Proof: Indeed, from the definition of the complex Sylvester matrix in Eq. (2.107), we have

$$[CS]_{2^n} = \begin{pmatrix} [CS]_{2^{n-1}} & [CS]_{2^{n-1}} \\ [CS]_{2^{n-1}} & -[CS]_{2^{n-1}} \end{pmatrix} = H_2 \otimes [CS]_{2^{n-1}}. \tag{2.109}$$

Rewriting Eq. (2.109) in the form

$$[CS]_{2^n} = (H_2 I_2) \otimes (I_{2^{n-1}} [CS]_{2^{n-1}}) \tag{2.110}$$

and using the mixed-product property of the Kronecker product, we obtain

$$[CS]_{2^n} = (H_2 \otimes I_{2^{n-1}})(I_2 \otimes [CS]_{2^{n-1}}). \tag{2.111}$$

Using Eq. (2.109) once again, we obtain

$$[CS]_{2^n} = (H_2 \otimes I_{2^{n-1}})\bigl[I_2 \otimes (H_2 \otimes [CS]_{2^{n-2}})\bigr]. \tag{2.112}$$

After performing n − 1 iterations, we obtain the required result.

Note that [CS]_{2^n} is a Hermitian matrix, i.e., [CS]^{*}_{2^n} = [CS]_{2^n}.
Because $[CS]_2 = \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix}$, it follows from Eq. (2.109) that the complex Sylvester–Hadamard matrices of orders 4 and 8 have the form

$$[CS]_4 = \begin{pmatrix}
1 & j & 1 & j \\
-j & -1 & -j & -1 \\
1 & j & -1 & -j \\
-j & -1 & j & 1
\end{pmatrix}, \tag{2.113}$$

$$[CS]_8 = \begin{pmatrix}
1 & j & 1 & j & 1 & j & 1 & j \\
-j & -1 & -j & -1 & -j & -1 & -j & -1 \\
1 & j & -1 & -j & 1 & j & -1 & -j \\
-j & -1 & j & 1 & -j & -1 & j & 1 \\
1 & j & 1 & j & -1 & -j & -1 & -j \\
-j & -1 & -j & -1 & j & 1 & j & 1 \\
1 & j & -1 & -j & -1 & -j & 1 & j \\
-j & -1 & j & 1 & j & 1 & -j & -1
\end{pmatrix}. \tag{2.114}$$
Now, according to Eq. (2.112), the matrix in Eq. (2.114) can be expressed as the product of two matrices,

$$[CS]_8 = AB = A(B_1 + jB_2), \tag{2.115}$$

where A = H_2 ⊗ I_4, B = I_2 ⊗ [CS]_4, and B_1 and B_2 are the real and imaginary parts of B:

$$A = \begin{pmatrix}
+ & 0 & 0 & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & + \\
+ & 0 & 0 & 0 & - & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & - & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 & - & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & -
\end{pmatrix},$$

$$B_1 = \begin{pmatrix}
+ & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & - & 0 & - & 0 & 0 & 0 & 0 \\
+ & 0 & - & 0 & 0 & 0 & 0 & 0 \\
0 & - & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & - & 0 & - \\
0 & 0 & 0 & 0 & + & 0 & - & 0 \\
0 & 0 & 0 & 0 & 0 & - & 0 & +
\end{pmatrix}, \quad
B_2 = \begin{pmatrix}
0 & + & 0 & + & 0 & 0 & 0 & 0 \\
- & 0 & - & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & - & 0 & 0 & 0 & 0 \\
- & 0 & + & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & + \\
0 & 0 & 0 & 0 & - & 0 & - & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & - \\
0 & 0 & 0 & 0 & - & 0 & + & 0
\end{pmatrix}. \tag{2.116}$$
Let F = (a, b, c, d, e, f, g, h)^T be a column vector of length 8. The fast complex Sylvester–Hadamard transform algorithm can be realized via the following steps:
Step 1. Calculate B_1F [with B_1 as in Eq. (2.116)]:

$$B_1F = \bigl(a+c,\ -b-d,\ a-c,\ -b+d,\ e+g,\ -f-h,\ e-g,\ -f+h\bigr)^T. \tag{2.117}$$
Step 2. Calculate A(B_1F):

$$A(B_1F) = \begin{pmatrix}
(a+c) + (e+g) \\
-(b+d) - (f+h) \\
(a-c) + (e-g) \\
-(b-d) - (f-h) \\
(a+c) - (e+g) \\
-(b+d) + (f+h) \\
(a-c) - (e-g) \\
-(b-d) + (f-h)
\end{pmatrix}. \tag{2.118}$$
Step 3. Calculate B_2F [with B_2 as in Eq. (2.116)]:

$$B_2F = \bigl(b+d,\ -a-c,\ b-d,\ -a+c,\ f+h,\ -e-g,\ f-h,\ -e+g\bigr)^T. \tag{2.119}$$
Step 4. Calculate A(B_2F):

$$A(B_2F) = \begin{pmatrix}
(b+d) + (f+h) \\
-(a+c) - (e+g) \\
(b-d) + (f-h) \\
-(a-c) - (e-g) \\
(b+d) - (f+h) \\
-(a+c) + (e+g) \\
(b-d) - (f-h) \\
-(a-c) + (e-g)
\end{pmatrix}. \tag{2.120}$$
Figure 2.6 Flow graph of the fast 8-point complex Sylvester–Hadamard transform: (a) real part; (b) imaginary part.
Step 5. Output the 8-point complex Sylvester–Hadamard transform coefficients, real and imaginary parts (see Fig. 2.6):

$$A(B_1F) + jA(B_2F) = \begin{pmatrix}
(a+c) + (e+g) \\
-(b+d) - (f+h) \\
(a-c) + (e-g) \\
-(b-d) - (f-h) \\
(a+c) - (e+g) \\
-(b+d) + (f+h) \\
(a-c) - (e-g) \\
-(b-d) + (f-h)
\end{pmatrix}
+ j\begin{pmatrix}
(b+d) + (f+h) \\
-(a+c) - (e+g) \\
(b-d) + (f-h) \\
-(a-c) - (e-g) \\
(b+d) - (f+h) \\
-(a+c) + (e+g) \\
(b-d) - (f-h) \\
-(a-c) + (e-g)
\end{pmatrix}. \tag{2.121}$$
The flow graph of the 8-point complex Sylvester–Hadamard transform of thevector (a, b, c, d, e, f , g, h)T , with split real and imaginary parts, is given in Fig. 2.6.
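Both the recursion (2.107) and the factorization of Theorem 2.5.1 are easy to check numerically. A sketch (NumPy; the function names are ours) that builds [CS] both ways and verifies Eqs. (2.106) and (2.108), plus the Hermitian property noted above:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]], dtype=complex)
CS2 = np.array([[1, 1j], [-1j, -1]])

def cs_recursive(n):
    """[CS] of order 2**n via the Kronecker recursion of Eq. (2.107)."""
    cs = CS2
    for _ in range(n - 1):
        cs = np.kron(H2, cs)
    return cs

def cs_factored(n):
    """Theorem 2.5.1: real butterfly factors times I ⊗ [CS]_2."""
    N = 2 ** n
    M = np.eye(N, dtype=complex)
    for m in range(1, n):
        M = M @ np.kron(np.kron(np.eye(2 ** (m - 1)), H2), np.eye(2 ** (n - m)))
    return M @ np.kron(np.eye(2 ** (n - 1)), CS2)

C = cs_recursive(3)
assert np.allclose(C, cs_factored(3))                # Eq. (2.108)
assert np.allclose(C @ C.conj().T, 8 * np.eye(8))    # Eq. (2.106)
assert np.allclose(C, C.conj().T)                    # [CS] is Hermitian
```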
From Eq. (2.109), it follows that to perform a [CS]_N (N = 2^n) transform, it is necessary to perform two N/2-point complex Sylvester–Hadamard transforms. Hence, the complexity of the complex Sylvester–Hadamard transform is

$$C^{+}([CS]_N) = N \log_2(N/2) = (n-1)2^n. \tag{2.122}$$

For example, C^{+}([CS]_4) = 4, C^{+}([CS]_8) = 16, and C^{+}([CS]_{16}) = 48. From Theorem 2.5.1, it follows that the complex Hadamard matrix of order 16 can be represented as

$$[CS]_{16} = A_1A_2A_3B_1 + jA_1A_2A_3B_2, \tag{2.123}$$
where
A1 = H2 ⊗ I8,
Figure 2.7 Flow graph of the fast 16-point complex Sylvester–Hadamard transform (real part); inputs x_0, …, x_15, outputs y_0, …, y_15.
$$A_2 = (H_2 \otimes I_4) \oplus (H_2 \otimes I_4),$$
$$A_3 = (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2),$$
$$B_1 = I_8 \otimes T_1, \quad B_2 = I_8 \otimes T_2, \quad\text{where}\quad
T_1 = \begin{pmatrix} + & 0 \\ 0 & - \end{pmatrix}, \quad T_2 = \begin{pmatrix} 0 & + \\ - & 0 \end{pmatrix}. \tag{2.124}$$
Flow graphs of the 16-point complex Sylvester–Hadamard transform of thevector (x0, x1, . . . , x15), with split real and imaginary parts, are given in Figs. 2.7and 2.8.
2.6 Fast Haar Transform

This section presents a fast Haar transform computation algorithm,

$$X = \frac{1}{N}[\mathrm{Haar}]_N f = \frac{1}{N}H(N)f, \tag{2.125}$$

where [Haar]_N = H(N) is the Haar transform matrix of order N, and f is the signal vector of length N.
Figure 2.8 Flow graph of the fast 16-point complex Sylvester–Hadamard transform (imaginary part); inputs x_0, …, x_15, outputs y_0, …, y_15.
First, consider an example. Let N = 8, and let the input data vector be f = (a, b, c, d, e, f, g, h)^T. It is easy to check that the direct evaluation of the Haar transform (below, s = √2),

$$H(8)f = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
s & s & -s & -s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & s & s & -s & -s \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{pmatrix}
= \begin{pmatrix}
a+b+c+d+e+f+g+h \\
a+b+c+d-e-f-g-h \\
s(a+b-c-d) \\
s(e+f-g-h) \\
2(a-b) \\
2(c-d) \\
2(e-f) \\
2(g-h)
\end{pmatrix}, \tag{2.126}$$

requires 56 operations. The Haar matrix H(8) of order N = 8 may be expressed as the product of three matrices,

$$H(8) = H_1H_2H_3, \tag{2.127}$$
where

$$H_1 = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & s & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}, \quad
H_2 = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 2
\end{pmatrix},$$

$$H_3 = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{pmatrix}. \tag{2.128}$$
Consider the fast Haar transform algorithm step by step.
Step 1. Calculate

$$H_3f = \bigl(a+b,\ c+d,\ e+f,\ g+h,\ a-b,\ c-d,\ e-f,\ g-h\bigr)^T. \tag{2.129}$$
Step 2. Calculate

$$H_2(H_3f) = \bigl(a+b+(c+d),\ e+f+(g+h),\ a+b-(c+d),\ e+f-(g+h),\ 2(a-b),\ 2(c-d),\ 2(e-f),\ 2(g-h)\bigr)^T. \tag{2.130}$$
Figure 2.9 Signal flow diagram of the fast 8-point 1D Haar transform (inputs A, …, H; output scaling factors 1/8, √2/8, and 2/8).
Step 3. Calculate

$$H_1\bigl[H_2(H_3f)\bigr] = \begin{pmatrix}
a+b+c+d+(e+f+g+h) \\
a+b+c+d-(e+f+g+h) \\
s[a+b-(c+d)] \\
s[e+f-(g+h)] \\
2(a-b) \\
2(c-d) \\
2(e-f) \\
2(g-h)
\end{pmatrix}. \tag{2.131}$$
Thus, the 8-point Haar transform may be performed via 14 = 8 + 4 + 2 additions and subtractions, two multiplications by √2, and four multiplications by 2; the latter can be done via binary shift operations. By analogy with the HT, the Haar transform can be represented by the flow diagram in Fig. 2.9.
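The three steps generalize to any N = 2^n: at every stage, take adjacent pairwise sums and differences, keep recursing on the sums, and scale the differences of level ℓ by 2^{(ℓ−1)/2}. A sketch (NumPy; the function names are ours), tested against the Haar matrix built from the recursion of Eq. (2.135) below:

```python
import numpy as np

def haar_matrix(n):
    """H(2**n) via the recursion of Eq. (2.135)."""
    H = np.array([[1., 1.], [1., -1.]])
    for k in range(2, n + 1):
        m = 2 ** (k - 1)
        shuffle = np.vstack([np.kron(np.eye(m), [1., 1.]),
                             np.kron(np.eye(m), [1., -1.])])
        Z = np.zeros((m, m))
        H = np.block([[H, Z],
                      [Z, np.sqrt(2.0) ** (k - 1) * np.eye(m)]]) @ shuffle
    return H

def fast_haar(f):
    """O(N) evaluation of H(2**n) f by pairwise sums and scaled differences."""
    s = np.asarray(f, dtype=float)
    N = s.size
    n = N.bit_length() - 1
    out = np.empty(N)
    for level in range(n, 0, -1):
        sums = s[0::2] + s[1::2]
        diffs = s[0::2] - s[1::2]
        out[2 ** (level - 1):2 ** level] = np.sqrt(2.0) ** (level - 1) * diffs
        s = sums                      # recurse on the running sums
    out[0] = s[0]
    return out

f = np.arange(8.0)
assert np.allclose(fast_haar(f), haar_matrix(3) @ f)
```

For N = 8 this reproduces exactly the vector of Eq. (2.131).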
We can see that the matrices H_1, H_2, and H_3 can be expressed as

$$H_1 = \mathrm{diag}\Bigl\{\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\ \sqrt{2}I_2,\ 2I_4\Bigr\}, \tag{2.132}$$

$$H_2 = \mathrm{diag}\Bigl\{\begin{pmatrix} I_2 \otimes (1\ \ 1) \\ I_2 \otimes (1\ -1) \end{pmatrix},\ I_4\Bigr\}, \tag{2.133}$$

$$H_3 = \begin{pmatrix} I_4 \otimes (1\ \ 1) \\ I_4 \otimes (1\ -1) \end{pmatrix}. \tag{2.134}$$
Theorem 2.6.1: The Haar matrix of order 2^n can be generated recursively as

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) & 0 \\ 0 & \sqrt{2^{n-1}}\,I(2^{n-1}) \end{pmatrix}
\begin{pmatrix} I(2^{n-1}) \otimes (+1\ +1) \\ I(2^{n-1}) \otimes (+1\ -1) \end{pmatrix}, \quad n = 2, 3, \ldots, \tag{2.135}$$

where

$$H = H(2) = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \tag{2.136}$$

⊗ is the Kronecker product, and I(2^{n−1}) is the identity matrix of order 2^{n−1}.
Proof: From the definition of the Haar matrix, we have

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) \otimes (+1\ +1) \\ \sqrt{2^{n-1}}\,I(2^{n-1}) \otimes (+1\ -1) \end{pmatrix}, \quad n = 2, 3, \ldots. \tag{2.137}$$

Using the property of the Kronecker product, from Eq. (2.137) we obtain

$$H(2^n) = \begin{pmatrix} \bigl[H(2^{n-1}) \otimes I(2^0)\bigr]\bigl[I(2^{n-1}) \otimes (+1\ +1)\bigr] \\ \bigl[\sqrt{2^{n-1}}\,I(2^{n-1}) \otimes I(2^0)\bigr]\bigl[I(2^{n-1}) \otimes (+1\ -1)\bigr] \end{pmatrix}, \quad n = 2, 3, \ldots. \tag{2.138}$$

Then, from Eq. (2.138) and from the following property of matrix algebra:

$$\begin{pmatrix} AB \\ CD \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}\begin{pmatrix} B \\ D \end{pmatrix},$$

we obtain

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) & 0 \\ 0 & \sqrt{2^{n-1}}\,I(2^{n-1}) \end{pmatrix}
\begin{pmatrix} I(2^{n-1}) \otimes (+1\ +1) \\ I(2^{n-1}) \otimes (+1\ -1) \end{pmatrix}, \quad n = 2, 3, \ldots. \tag{2.139}$$
Examples:

(1) Let n = 2; then the Haar matrix of order 4 can be represented as a product of two matrices:

$$H(4) = \begin{pmatrix} H(2) & 0 \\ 0 & \sqrt{2}\,I(2) \end{pmatrix}\begin{pmatrix} I(2) \otimes (+1\ +1) \\ I(2) \otimes (+1\ -1) \end{pmatrix} = H_1H_2, \tag{2.140}$$
where (s = √2)

$$H_1 = \begin{pmatrix}
1 & 1 & 0 & 0 \\
1 & -1 & 0 & 0 \\
0 & 0 & s & 0 \\
0 & 0 & 0 & s
\end{pmatrix}, \quad
H_2 = \begin{pmatrix}
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
1 & -1 & 0 & 0 \\
0 & 0 & 1 & -1
\end{pmatrix}. \tag{2.141}$$
(2) Let n = 3; then the Haar matrix of order 8 can be expressed as a product of three matrices,

$$H(8) = H_1H_2H_3, \tag{2.142}$$

where

$$H_1 = \mathrm{diag}\Bigl\{\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\ \sqrt{2}I_2,\ 2I_4\Bigr\}, \quad
H_2 = \mathrm{diag}\Bigl\{\begin{pmatrix} I_2 \otimes (1\ \ 1) \\ I_2 \otimes (1\ -1) \end{pmatrix},\ I_4\Bigr\}, \quad
H_3 = \begin{pmatrix} I_4 \otimes (1\ \ 1) \\ I_4 \otimes (1\ -1) \end{pmatrix}. \tag{2.143}$$
To prove this statement, from Eq. (2.139) we have

$$H(8) = \begin{pmatrix} H(4) & 0 \\ 0 & 2I(4) \end{pmatrix}\begin{pmatrix} I(4) \otimes (+1\ +1) \\ I(4) \otimes (+1\ -1) \end{pmatrix}. \tag{2.144}$$

Now, using Eq. (2.140), this can be presented as

$$H(8) = \begin{pmatrix} \begin{pmatrix} H(2) & 0 \\ 0 & \sqrt{2}\,I(2) \end{pmatrix}\begin{pmatrix} I(2) \otimes (+1\ +1) \\ I(2) \otimes (+1\ -1) \end{pmatrix} & 0 \\ 0 & 2I(4) \end{pmatrix}\begin{pmatrix} I(4) \otimes (+1\ +1) \\ I(4) \otimes (+1\ -1) \end{pmatrix}. \tag{2.145}$$
Now, from Eq. (2.145), and following the property of matrix algebra,

$$\begin{pmatrix} AB & 0 \\ 0 & \alpha I(M) \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & \alpha I(M) \end{pmatrix}\begin{pmatrix} B & 0 \\ 0 & I(M) \end{pmatrix},$$

we obtain

$$H(8) = \begin{pmatrix} H(2) & 0 & 0 \\ 0 & \sqrt{2}\,I(2) & 0 \\ 0 & 0 & 2I(4) \end{pmatrix}
\begin{pmatrix} I(2) \otimes (+1\ +1) & 0 \\ I(2) \otimes (+1\ -1) & 0 \\ 0 & I(4) \end{pmatrix}
\begin{pmatrix} I(4) \otimes (+1\ +1) \\ I(4) \otimes (+1\ -1) \end{pmatrix} = H_1H_2H_3. \tag{2.146}$$
Now we can formulate the general theorem.

Theorem 2.6.2: Let H(N) = H(2^n) be a Haar transform matrix of order N = 2^n. Then,

(1) The Haar matrix of order N = 2^n can be represented as a product of n sparse matrices:

$$H(2^n) = H_nH_{n-1} \cdots H_1, \tag{2.147}$$

where

$$H_n = \mathrm{diag}\bigl\{H(2),\ 2^{1/2}I(2),\ 2I(4),\ 2^{3/2}I(8),\ \ldots,\ 2^{(n-1)/2}I(2^{n-1})\bigr\}, \tag{2.148}$$

$$H_1 = \begin{pmatrix} I(2^{n-1}) \otimes (1\ \ 1) \\ I(2^{n-1}) \otimes (1\ -1) \end{pmatrix}, \tag{2.149}$$

$$H_m = \begin{pmatrix} I(2^{m-1}) \otimes (1\ \ 1) & 0 \\ I(2^{m-1}) \otimes (1\ -1) & 0 \\ 0 & I(2^n - 2^m) \end{pmatrix}, \quad m = 2, 3, \ldots, n-1. \tag{2.150}$$

(2) The Haar transform may be calculated via 2(2^n − 1) operations, i.e., via O(N) operations.

(3) Only 2^n storage locations are required to perform the 2^n-point Haar transform.

(4) The inverse 2^n-point Haar transform matrix may be represented as

$$H^{-1}(2^n) = H^T(2^n) = H_1^TH_2^T \cdots H_n^T. \tag{2.151}$$

Note that each H_m [see Eq. (2.150)] has 2^m rows with only two nonzero elements and 2^n − 2^m rows with only one nonzero element, so the products by the matrices H_m, m = 2, 3, …, n − 1, together require only 2^n − 4 additions; the H_1 factor [see Eq. (2.149)] requires 2^n additions, and the H_n factor requires 2 additions and 2^n − 2 multiplications.

So a 2^n-point Haar transform requires 2 · 2^n − 2 addition and 2^n − 2 multiplication operations.
From Eqs. (2.148)–(2.150), we obtain the following factors of the Haar transform matrix of order 16:

H_1 = \begin{pmatrix} I_8 \otimes (+\;+) \\ I_8 \otimes (+\;-) \end{pmatrix}, \quad H_2 = \begin{pmatrix} I_2 \otimes (+\;+) & O_{2\times 12} \\ I_2 \otimes (+\;-) & O_{2\times 12} \\ O_{12\times 4} & I_{12} \end{pmatrix}, \quad H_3 = \begin{pmatrix} I_4 \otimes (+\;+) & O_{4\times 8} \\ I_4 \otimes (+\;-) & O_{4\times 8} \\ O_{8\times 8} & I_8 \end{pmatrix},

H_4 = \mathrm{diag}\left\{\begin{pmatrix} + & + \\ + & - \end{pmatrix}, \sqrt{2}\,I_2, 2 I_4, \sqrt{8}\,I_8\right\}, (2.152)

where O_{m\times n} is the zero matrix of size m × n.
References
1. S. Agaian, "Advances and problems of the fast orthogonal transforms for signal-image processing applications (Part 1)," Pattern Recognition, Classification, Forecasting, Yearbook, 3, Russian Academy of Sciences, Nauka, Moscow, 146–215 (1990) (in Russian).
2. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, New York (1975).
3. G. R. Reddy and P. Satyanarayana, "Interpolation algorithm using Walsh–Hadamard and discrete Fourier/Hartley transforms," Circuits and Systems 1, 545–547 (1991).
4. C.-F. Chan, "Efficient implementation of a class of isotropic quadratic filters by using Walsh–Hadamard transform," in Proc. of IEEE Int. Symp. on Circuits and Systems, June 9–12, Hong Kong, 2601–2604 (1997).
5. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis with Applications and Signal/Image Processing, Kluwer Academic Publishers, Boston (1996).
6. L. Chang and M. Wu, "A bit level systolic array for Walsh–Hadamard transforms," IEEE Trans. Signal Process. 31, 341–347 (1993).
7. P. M. Amira and A. Bouridane, "Novel FPGA implementations of Walsh–Hadamard transforms for signal processing," IEE Proc. of Vision, Image and Signal Processing 148, 377–383 (2001).
8. S. K. Bahl, "Design and prototyping a fast Hadamard transformer for WCDMA," in Proc. of 14th IEEE Int. Workshop on Rapid Systems Prototyping, 134–140 (2003).
9. S. V. J. C. R. Hashemian, "A new gate image encoder; algorithm, design and implementation," in Proc. of 42nd IEEE Midwest Symp. Circuits and Systems 1, 418–421 (1999).
10. B. J. Falkowski and T. Sasao, "Unified algorithm to generate Walsh functions in four different orderings and its programmable hardware implementations," IEE Proc.-Vis. Image Signal Process. 152 (6), 819–826 (2005).
11. S. Agaian, "Advances and problems of the fast orthogonal transforms for signal-image processing applications (Part 2)," Pattern Recognition, Classification, Forecasting, Yearbook, 4, Russian Academy of Sciences, Nauka, Moscow, 156–246 (1991) (in Russian).
12. S. Agaian, K. Tourshan, and J. Noonan, "Generalized parametric slant-Hadamard transforms," Signal Process. 84, 1299–1307 (2004).
13. S. Agaian, H. Sarukhanyan, and J. Astola, "Skew Williamson–Hadamard transforms," Multiple Valued Logic Soft Comput. J. 10 (2), 173–187 (2004).
14. S. Agaian, K. Tourshan, and J. Noonan, "Performance of parametric Slant-Haar transforms," J. Electron. Imaging 12 (3), 539–551 (2003) [doi:10.1117/1.1580494].
15. S. Agaian, K. P. Panetta, and A. M. Grigoryan, "Transform based image enhancement algorithms with performance measure," IEEE Trans. Image Process. 10 (3), 367–380 (2001).
16. A. M. Grigoryan and S. Agaian, "Method of fast 1-D paired transforms for computing the 2-D discrete Hadamard transform," IEEE Trans. Circuits Syst. II 47 (10), 1098–1104 (2000).
17. S. Agaian and A. Grigorian, "Discrete unitary transforms and their relation to coverings of fundamental periods. Part 1," Pattern Recogn. Image Anal. 1, 16–24 (1994).
18. S. Agaian and A. Grigorian, "Discrete unitary transforms and their relation to coverings of fundamental periods. Part 2," Pattern Recogn. Image Anal. 4 (1), 25–31 (1994).
19. S. Agaian and D. Gevorkian, "Synthesis of a class of orthogonal transforms, parallel SIMD algorithms and specialized processors," Pattern Recogn. Image Anal. 2 (4), 396–408 (1992).
20. S. Agaian, D. Gevorkian, and H. Bajadian, "Stability of orthogonal series," Kibernetika VT (Cybernet. Comput. Technol.) 6, 132–170 (1991).
21. S. Agaian and D. Gevorkian, "Complexity and parallel algorithms of the discrete orthogonal transforms," Kibernetika VT (Cybernet. Comput. Technol.) 5, 124–171 (1990).
22. S. Agaian and A. Petrosian, Optimal Zonal Compression Method Using Orthogonal Transforms, Armenian National Academy Publisher, 3–27 (1989).
23. S. Agaian and H. Bajadian, "Stable summation of Fourier–Haar series with approximate coefficients," Mat. Zametky (Math. Note) 39 (1), 136–146 (1986).
24. S. Agaian, "Adaptive image compression via orthogonal transforms," in Proc. of Colloquium on Coding Theory between Armenian Academy of Sciences and Osaka University, Yerevan, 3–9 (1986).
25. S. Agaian and D. Gevorkian, "Parallel algorithms for orthogonal transforms," Colloquium Math. Soc. Janos Bolyai, Theory of Algorithms (Hungary) 44, 15–26 (1984).
26. S. Agaian, A. Matevosian, and A. Muradian, "Digital filters with respect to a family of Haar systems," Akad. Nauk. Arm. SSR. Dokl. 77, 117–121 (1983).
27. S. Agaian and A. Matevosian, "Fast Hadamard transform," Math. Prob. Cybernet. Comput. Technol. 10, 73–90 (1982).
28. S. Agaian and A. Matevosian, "Haar transforms and automatic quality test of printed circuit boards," Acta Cybernetica 5 (3), 315–362 (1981).
29. S. S. Agaian, C. L. Philip, and C. Mei-Ching, "Fibonacci Fourier transform and sliding window filtering," in Proc. of IEEE Int. Conf. on System of Systems Engineering, SoSE'07, April 16–18, 1–5 (2007).
30. S. S. Agaian and O. Caglayan, "Super fast Fourier transform," presented at IS&T/SPIE 18th Annual Symp. on Electronic Imaging Science and Technology, Jan. 15–19, San Jose, CA (2006).
31. S. S. Agaian and O. Caglayan, "Fast encryption method based on new FFT representation for the multimedia data system security," presented at IEEE SMC, Taiwan (Oct. 2006).
32. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, "Reversible Hadamard transforms," in Proc. of 2005 Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, June 20–22, Riga, Latvia, 33–40 (2005).
33. S. S. Agaian and O. Caglayan, "New fast Fourier transform with linear multiplicative complexity I," in IEEE 39th Asilomar Conf. on Signals, Systems and Computers, Oct. 30–Nov. 2, Pacific Grove, CA (2005).
34. S. Agaian, K. Tourshan, and J. Noonan, "Parametric Slant-Hadamard transforms with applications," IEEE Trans. Signal Process. Lett. 9 (11), 375–377 (2002).
35. N. Brenner and C. Rader, "A new principle for fast Fourier transformation," IEEE Acoust. Speech Signal Process. 24, 264–266 (1976).
36. E. O. Brigham, The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs, NJ (2002).
37. J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Math. Comput. 19, 297–301 (1965).
38. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed., MIT Press, Cambridge, MA and McGraw-Hill, New York (especially Ch. 30, Polynomials and the FFT) (2001).
39. P. Duhamel, "Algorithms meeting the lower bounds on the multiplicative complexity of length-2^n DFTs and their connection with practical algorithms," IEEE Trans. Acoust. Speech Signal Process. 38, 1504–1511 (1990).
40. P. Duhamel and M. Vetterli, "Fast Fourier transforms: a tutorial review and a state of the art," Signal Process. 19, 259–299 (1990).
41. A. Edelman, P. McCorquodale, and S. Toledo, "The future fast Fourier transforms," SIAM J. Sci. Comput. 20, 1094–1114 (1999).
42. M. Frigo and S. G. Johnson, "The design and implementation of FFTW3," Proc. of IEEE 93 (2), 216–231 (2005).
43. W. M. Gentleman and G. Sande, "Fast Fourier transforms: for fun and profit," Proc. AFIPS 29 (ACM), 563–578 (1966).
44. H. Guo and C. S. Burrus, "Fast approximate Fourier transform via wavelets transform," Proc. SPIE 2825, 250–259 (1996) [doi:10.1117/12.255236].
45. H. Guo and G. A. Sitton, "The quick discrete Fourier transform," in Proc. of IEEE Conf. Acoust. Speech and Signal Processing (ICASSP) 3, 445–448 (1994).
46. M. T. Heideman, D. H. Johnson, and C. S. Burrus, "Gauss and the history of the fast Fourier transform," IEEE ASSP Mag. 1 (4), 14–21 (1984).
47. M. T. Heideman and C. S. Burrus, "On the number of multiplications necessary to compute a length-2^n DFT," IEEE Trans. Acoust. Speech Signal Process. 34, 91–95 (1986).
48. S. G. Johnson and M. Frigo, "A modified split-radix FFT with fewer arithmetic operations," IEEE Trans. Signal Process. 55 (1), 111–119 (2007).
49. T. Lundy and J. Van Buskirk, "A new matrix approach to real FFTs and convolutions of length 2^k," Computing 80 (1), 23–45 (2007).
50. J. Morgenstern, "Note on a lower bound of the linear complexity of the fast Fourier transform," J. ACM 20, 305–306 (1973).
51. C. H. Papadimitriou, "Optimality of the fast Fourier transform," J. ACM 26, 95–102 (1979).
52. D. Potts, G. Steidl, and M. Tasche, "Fast Fourier transforms for nonequispaced data: a tutorial," in Modern Sampling Theory: Mathematics and Applications, J. J. Benedetto and P. Ferreira, Eds., 247–270, Birkhauser, Boston (2001).
53. V. Rokhlin and M. Tygert, "Fast algorithms for spherical harmonic expansions," SIAM J. Sci. Comput. 27 (6), 1903–1928 (2006).
54. J. C. Schatzman, "Accuracy of the discrete Fourier transform and the fast Fourier transform," SIAM J. Sci. Comput. 17, 1150–1166 (1996).
55. O. V. Shentov, S. K. Mitra, U. Heute, and A. N. Hossen, "Subband DFT. I. Definition, interpretations and extensions," Signal Process. 41, 261–277 (1995).
56. S. Winograd, "On computing the discrete Fourier transform," Math. Comput. 32, 175–199 (1978).
57. H. G. Sarukhanyan, "Hadamard matrices: construction methods and applications," in Proc. of Workshop on Transforms and Filter Banks, Feb. 21–27, Tampere, Finland, 95–130 (1998).
58. H. G. Sarukhanyan, "Decomposition of the Hadamard matrices and fast Hadamard transform," in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science 1296 (1997).
59. J. Seberry and M. Yamada, "Hadamard matrices, sequences and block designs," in Surveys in Contemporary Design Theory, John Wiley & Sons, Hoboken, NJ (1992).
60. S. S. Agaian, "Apparatus for Walsh–Hadamard Transform," in cooperation with D. Gevorkian and A. Galanterian, USSR Patent No. SU 1832303 A1 (1992).
61. S. S. Agaian, "Parallel Haar Processor," in cooperation with D. Gevorkian, A. Galanterian, and A. Melkumian, USSR Patent No. SU 1667103 A1 (1991).
62. S. S. Agaian, "Hadamard Processor for Signal Processing," Certificate of Authorship No. 1098005, USSR (1983).
63. S. S. Agaian, "Haar Type Processor," in cooperation with K. Abgarian, Certificate of Authorship No. 1169866, USSR (1985).
64. S. S. Agaian, "Haar Processor for Signal Processing," in cooperation with A. Sukasian, Certificate of Authorship No. 1187176, USSR (1985).
65. S. S. Agaian, "Generalized Haar Processor for Signal Processing," in cooperation with A. Matevosian and A. Melkumian, Certificate of Authorship No. 1116435, USSR (1984).
66. K. R. Rao, V. Devarajan, V. Vlasenko, and M. A. Arasimhan, "CalSal Walsh–Hadamard transform," IEEE Transactions on ASSP ASSP-26, 605–607 (1978).
67. J. J. Sylvester, "Thoughts on inverse orthogonal matrices, simultaneous sign successions and tessellated pavements in two or more colors, with applications to Newton's rule, ornamental tile-work, and the theory of numbers," Phil. Mag. 34, 461–475 (1867).
68. Z. Li, H. V. Sorensen, and C. S. Burrus, "FFT and convolution algorithms on DSP microprocessors," in Proc. of IEEE Int. Conf. Acoust., Speech, Signal Processing, 289–294 (1986).
69. R. K. Montoye, E. Hokenek, and S. L. Runyon, "Design of the IBM RISC System/6000 floating point execution unit," IBM J. Res. Dev. 34, 71–77 (1990).
70. S. S. Agaian and H. G. Sarukhanyan, "Hadamard matrices representation by (−1,+1)-vectors," in Proc. of Int. Conf. Dedicated to Hadamard Problem's Centenary, Australia (1993).
71. S. Y. Kung, VLSI Array Processors, Prentice-Hall, Englewood Cliffs, NJ (1988).
72. D. Coppersmith, E. Feig, and E. Linzer, "Hadamard transforms on multiply/add architectures," IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
73. S. Samadi, Y. Suzukake, and H. Iwakura, "On automatic derivation of fast Hadamard transform using generic programming," in Proc. of 1998 IEEE Asia–Pacific Conf. on Circuits and Systems, Thailand, 327–330 (1998).
74. M. Barazande-Pour and J. W. Mark, "Adaptive MHDCT coding of images," in Proc. IEEE Image Process. Conf., ICIP-94 1, Austin, TX, 90–94 (Nov. 1994).
75. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics, 1168, Springer-Verlag, New York (1985).
76. R. Stasinski and J. Konrad, "A new class of fast shape-adaptive orthogonal transforms and their application to region-based image compression," IEEE Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
77. B. K. Harms, J. B. Park, and S. A. Dyer, "Optimal measurement techniques utilizing Hadamard transforms," IEEE Trans. Instrum. Meas. 43 (3), 397–402 (June 1994).
78. C. Anshi, Li Di, and Zh. Renzhong, "A research on fast Hadamard transform (FHT) digital systems," in Proc. of IEEE TENCON 93, Beijing, 541–546 (1993).
79. S. Agaian, "Optimal algorithms for fast orthogonal transforms and their realization," Cybernetics and Computer Technology, Yearbook, 2, Nauka, Moscow, 231–319 (1986).
Chapter 3Discrete Orthogonal Transformsand Hadamard Matrices
The increasing importance of processing large vectors and of parallel computing in many scientific and engineering applications requires new ideas for designing superefficient transform algorithms and their implementations. In the past decade, fast orthogonal transforms have been widely used in areas such as data compression, pattern recognition and image reconstruction, interpolation, linear filtering, spectral analysis, watermarking, cryptography, and communication systems. The computation of unitary transforms is complicated and time consuming. However, it would not be possible to use orthogonal transforms in signal and image processing applications without effective algorithms to calculate them. The increasing requirements of speed and cost in many applications have stimulated the development of new fast unitary transforms such as the Fourier, cosine, sine, Hartley, Hadamard, and slant transforms.1–100
A class of HTs (such as the Hadamard matrices ordered by Walsh and Paley) plays an important role among these orthogonal transforms. These matrices are known as nonsinusoidal orthogonal transform matrices and have been applied in digital signal processing.1–9,12,14,20–23,25–27,31,32,38,39,41,43,50,54
Recently, HTs and their variations have been widely used in audio and video processing.2,10,12,19,33,70,73,74,80,82,83,85,87,89,100 For efficient computation of these transforms, fast algorithms were developed.3,7,9,11,15,28,41,42,45,51,53,54,59–61,79,81
These algorithms require only N log2 N addition and subtraction operations (for N = 2^k, N = 12 · 2^k, N = 4^k, and several others). In addition, the success of commonly used transforms has motivated many researchers in recent years to generalize and parameterize these transforms in order to expand the range of their applications and provide more flexibility in representing, encrypting, interpreting, and processing signals.
Many of today's advanced workstations (for example, the IBM RISC System/6000, model 530) and other signal processors are designed for efficient fused multiply/add operations,15–17 in which the primitive operation is a multiply/add operation ±a ± bc, where a, b, and c are real numbers.
In Ref. 17, the decimation-in-time "radix-4" HT was developed with support of the multiply/add instruction. The authors have shown that the routine of the new "radix-4" algorithm is 5.6–7.4% faster than a regular "radix-4" algorithm routine.15
In this chapter, we present WHT-based fast algorithms for discrete orthogonal transforms such as the Fourier, cosine, sine, slant, and other transforms. The basic idea of these algorithms is the following: first, we compute the WHT coefficients; then, using a so-called correction matrix, we convert these coefficients into the transform-domain coefficients. These algorithms are useful for the development of integer-to-integer DOTs and for new applications, such as data hiding and signal/image encryption.
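Since every algorithm in this chapter starts from the WHT coefficients, it is worth recalling that those coefficients can be computed with the standard in-place butterfly in N log2 N additions and subtractions. The sketch below is our own illustration (the function name is ours, not from the text) of the WHT in Sylvester (natural) ordering:

```python
import numpy as np

def fwht(x):
    """Walsh-Hadamard transform of a length-2^n vector via butterflies:
    N log2(N) additions/subtractions, no multiplications."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                # butterfly: (a, b) -> (a + b, a - b)
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x
```

For example, applying `fwht` to the first unit vector returns the all-ones sequence (the first column of the Sylvester–Hadamard matrix), and applying it to the all-ones vector returns (N, 0, ..., 0).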
3.1 Fast DOTs via the WHT
An N-point DOT can be defined as

X = F_N x, (3.1)

where x = (x_0, x_1, \ldots, x_{N-1}) and X = (X_0, X_1, \ldots, X_{N-1}) denote the input and output column vectors, respectively, and F_N is an arbitrary DOT matrix of order N.
We can represent Eq. (3.1) in the following form:

X = F_N x = \frac{1}{N} F_N H_N H_N^T x, (3.2)

where H_N is an HT matrix of order N = 2^n. Denote A_N = (1/N) F_N H_N, or F_N = A_N H_N (recall that H_N is a symmetric matrix). Then, Eq. (3.2) takes the form

X = A_N H_N x. (3.3)
In other words, the HT coefficients are computed first, and then they are used to obtain the coefficients of the discrete transform F_N. This is achieved by the transform matrix A_N, which is orthonormal and has a block-diagonal structure. We will call A_N a correction transform. Thus, any such transform can be decomposed into two orthogonal transforms, namely, (1) an HT and (2) a correction transform.
Lemma 3.1.1: Let the orthogonal transform matrix F_N = F_{2^n} have the following representation:

\bar{F}_{2^n} = \begin{pmatrix} \bar{F}_{2^{n-1}} & \bar{F}_{2^{n-1}} \\ B_{2^{n-1}} & -B_{2^{n-1}} \end{pmatrix}, (3.4)

where \bar{F}_{2^{n-1}} stands for an appropriate permutation of F_{2^{n-1}} and B_{2^{n-1}} is a 2^{n-1} \times 2^{n-1} submatrix of F_{2^n}. Then,

A_{2^n} = \frac{1}{2^n}\left(2^n I_2 \oplus 2^{n-1} B_2 H_2 \oplus 2^{n-2} B_4 H_4 \oplus \cdots \oplus 4 B_{2^{n-2}} H_{2^{n-2}} \oplus 2 B_{2^{n-1}} H_{2^{n-1}}\right), (3.5)

that is, the A_N matrix has a block-diagonal structure, where \oplus denotes the direct sum of matrices.
Proof: Clearly, this is true for n = 1. Let us assume that Eq. (3.5) is valid for N = 2^{k-1}, i.e.,

A_{2^{k-1}} = \frac{1}{2^{k-1}}\left(2^{k-1} I_2 \oplus 2^{k-2} B_2 H_2 \oplus \cdots \oplus 4 B_{2^{k-3}} H_{2^{k-3}} \oplus 2 B_{2^{k-2}} H_{2^{k-2}}\right), (3.6)

and show that it also holds for N = 2^k. From the definition of the correction transform matrix A_N, we have

A_{2^k} = \frac{1}{2^k}\bar{F}_{2^k} H_{2^k} = \frac{1}{2^k}\begin{pmatrix} \bar{F}_{2^{k-1}} & \bar{F}_{2^{k-1}} \\ B_{2^{k-1}} & -B_{2^{k-1}} \end{pmatrix} \begin{pmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{pmatrix} = \frac{1}{2^k}\left(2\bar{F}_{2^{k-1}} H_{2^{k-1}} \oplus 2 B_{2^{k-1}} H_{2^{k-1}}\right). (3.7)

Using the definitions of \bar{F}_{2^{k-1}} and H_{2^{k-1}} once again, we can rewrite Eq. (3.7) as

A_{2^k} = \frac{1}{2^k}\left(4\bar{F}_{2^{k-2}} H_{2^{k-2}} \oplus 4 B_{2^{k-2}} H_{2^{k-2}} \oplus 2 B_{2^{k-1}} H_{2^{k-1}}\right). (3.8)

From Eq. (3.6), we conclude that

A_{2^k} = \frac{1}{2^k}\left(2^k I_2 \oplus 2^{k-1} B_2 H_2 \oplus \cdots \oplus 4 B_{2^{k-2}} H_{2^{k-2}} \oplus 2 B_{2^{k-1}} H_{2^{k-1}}\right). (3.9)
For example, Hadamard-based discrete transforms of order 16 can be represented as the following (see Refs. 18 and 79–82), where X denotes the nonzero elements:

\begin{pmatrix}
X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & X & X & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & X & X & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & X & X & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & X & X & X & X & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X & X & X & X & X & X & X & X
\end{pmatrix}.
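This block-diagonal pattern can be reproduced numerically. The sketch below is our own illustration (the names `evenodd_perm` and `hadamard` are ours): it applies the recursive even/odd row reordering behind Eq. (3.4) to the 16-point DFT matrix and checks that A_N = (1/N) F̄_N H_N vanishes outside diagonal blocks of sizes 2, 2, 4, and 8.

```python
import numpy as np

def evenodd_perm(N):
    """Recursive even/odd row reordering of Eq. (3.4):
    even rows (themselves recursively reordered), then odd rows."""
    if N == 1:
        return [0]
    evens = list(range(0, N, 2))
    return [evens[i] for i in evenodd_perm(N // 2)] + list(range(1, N, 2))

def hadamard(N):
    """Sylvester-Hadamard matrix of order N = 2^n."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)   # DFT matrix
A = F[evenodd_perm(N)] @ hadamard(N) / N    # correction matrix for the permuted DFT
mask = np.zeros((N, N), dtype=bool)         # allowed positions: blocks of sizes 2, 2, 4, 8
i = 0
for size in (2, 2, 4, 8):
    mask[i:i + size, i:i + size] = True
    i += size
assert np.allclose(A[~mask], 0)             # A is block diagonal, as in the pattern above
```

The first 2×2 block comes out as the identity, in line with the proof of Lemma 3.1.1.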
3.2 FFT Implementation
Now we want to compute the Fourier transform9,33,40,80–82 using the HT. The N-point DFT can be defined as

X = F_N x, (3.10)
where x = (x_0, x_1, \ldots, x_{N-1})^T and X = (X_0, X_1, \ldots, X_{N-1})^T denote the input and output column vectors, respectively, and

F_N = \left(W_N^{km}\right)_{k,m=0}^{N-1} (3.11)

is the Fourier transform matrix of order N = 2^n, where

W_N = \exp\left(-j\frac{2\pi}{N}\right) = \cos\frac{2\pi}{N} - j\sin\frac{2\pi}{N}, \quad j = \sqrt{-1}. (3.12)
We can check that for any integers k and m,

W_N^{k+N} = W_N^k, \quad W_N^{k+N/2} = -W_N^k, (3.13)

and, for k, m = 0, 1, \ldots, N/2 - 1,

W_N^{2km} = W_N^{2k(m+N/2)}, \quad W_N^{(2k+1)m} = -W_N^{(2k+1)(m+N/2)}. (3.14)
Now we represent Eq. (3.10) in the following form:

X = F_N I_N x = \frac{1}{N} F_N H_N H_N^T x, (3.15)

where H_N in Eq. (3.15) is a Sylvester–Hadamard matrix of order N = 2^n, i.e.,

H_N^T = H_N, \quad H_N H_N^T = N I_N, (3.16)

and

H_{2^n} = \begin{pmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{pmatrix}, \quad H_1 = (1). (3.17)
Denoting A_N = (1/N) F_N H_N, or F_N = A_N H_N, and using Eq. (3.15), we obtain

X = A_N H_N x = A_N (H_N x). (3.18)

This means that first the HT coefficients are computed, and then they are used to obtain the DFT coefficients. Using Eqs. (3.13) and (3.14), we can represent the DFT matrix in the form of Eq. (3.4). Hence, according to Lemma 3.1.1, the matrix

A_N = (1/N) F_N H_N (3.19)

can be represented in a block-diagonal structure [see Eq. (3.5)]. We show the procedure in Fig. 3.1 as a generalized block diagram.
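As a quick numerical illustration of this two-stage procedure (our own sketch with illustrative names, not code from the text), the DFT of a vector is obtained by a WHT followed by the correction transform of Eq. (3.18):

```python
import numpy as np

def hadamard(N):
    """Sylvester-Hadamard matrix of order N = 2^n."""
    H = np.array([[1.0 + 0j]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
A = F @ hadamard(N) / N          # correction matrix A_N = (1/N) F_N H_N, Eq. (3.19)
x = np.arange(N, dtype=float)
y = hadamard(N).real @ x         # step 1: HT coefficients (a fast WHT in practice)
X = A @ y                        # step 2: correction transform, Eq. (3.18)
assert np.allclose(X, np.fft.fft(x))
```

Because H_N H_N = N I_N, the identity A_N H_N = F_N holds exactly; only the row-permuted version of F_N used in Lemma 3.1.1 additionally makes A_N block diagonal.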
Remark: The expression in Eq. (3.15) is true for any orthogonal transform K_N with

K_{2^n} = \begin{pmatrix} B_{2^{n-1}} & B_{2^{n-1}} \\ S_{2^{n-1}} & -S_{2^{n-1}} \end{pmatrix}, \quad K_1 = (1). (3.20)

(See, for example, the modified Haar transform.)
Figure 3.1 Generalized block diagram of the procedure for obtaining HT coefficients.
Without loss of generality, we prove it for the cases N = 4, 8, and 16.
Case N = 4: The Fourier matrix of order 4 is

F_4 = \left(W_4^{km}\right)_{k,m=0}^{3} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -j & -1 & j \\ 1 & -1 & 1 & -1 \\ 1 & j & -1 & -j \end{pmatrix}. (3.21)

Using the permutation matrix

P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, (3.22)

we can represent the matrix F_4 in the following equivalent form:

\bar{F}_4 = P_1 F_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -j & -1 & j \\ 1 & -1 & 1 & -1 \\ 1 & j & -1 & -j \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & -j & -1 & j \\ 1 & j & -1 & -j \end{pmatrix} = \begin{pmatrix} H_2 & H_2 \\ B_2 & -B_2 \end{pmatrix}. (3.23)
Then, we obtain

A_4 = \frac{1}{4}\bar{F}_4 H_4 = \frac{1}{4}\left(2H_2 H_2 \oplus 2B_2 H_2\right) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \oplus \frac{1}{2}\begin{pmatrix} 1-j & 1+j \\ 1+j & 1-j \end{pmatrix}, (3.24)

i.e., A_4 is a block-diagonal matrix.
Case N = 8: The Fourier matrix of order 8 is
F_8 = \begin{pmatrix}
W_8^0 & W_8^0 & W_8^0 & W_8^0 & W_8^0 & W_8^0 & W_8^0 & W_8^0 \\
W_8^0 & W_8^1 & W_8^2 & W_8^3 & -W_8^0 & -W_8^1 & -W_8^2 & -W_8^3 \\
W_8^0 & W_8^2 & -W_8^0 & -W_8^2 & W_8^0 & W_8^2 & -W_8^0 & -W_8^2 \\
W_8^0 & W_8^3 & -W_8^2 & W_8^1 & -W_8^0 & -W_8^3 & W_8^2 & -W_8^1 \\
W_8^0 & -W_8^0 & W_8^0 & -W_8^0 & W_8^0 & -W_8^0 & W_8^0 & -W_8^0 \\
W_8^0 & -W_8^1 & W_8^2 & -W_8^3 & -W_8^0 & W_8^1 & -W_8^2 & W_8^3 \\
W_8^0 & -W_8^2 & -W_8^0 & W_8^2 & W_8^0 & -W_8^2 & -W_8^0 & W_8^2 \\
W_8^0 & -W_8^3 & -W_8^2 & -W_8^1 & -W_8^0 & W_8^3 & W_8^2 & W_8^1
\end{pmatrix}. (3.25)
Using the permutation matrix

Q_1 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}, (3.26)

we represent the matrix F_8 in the following equivalent form:

Q_1 F_8 = \begin{pmatrix} F_4 & F_4 \\ B_4 & -B_4 \end{pmatrix}, (3.27)
where

F_4 = \begin{pmatrix} W_8^0 & W_8^0 & W_8^0 & W_8^0 \\ W_8^0 & W_8^2 & -W_8^0 & -W_8^2 \\ W_8^0 & -W_8^0 & W_8^0 & -W_8^0 \\ W_8^0 & -W_8^2 & -W_8^0 & W_8^2 \end{pmatrix}, \quad B_4 = \begin{pmatrix} W_8^0 & W_8^1 & W_8^2 & W_8^3 \\ W_8^0 & W_8^3 & -W_8^2 & W_8^1 \\ W_8^0 & -W_8^1 & W_8^2 & -W_8^3 \\ W_8^0 & -W_8^3 & -W_8^2 & -W_8^1 \end{pmatrix}. (3.28)
Now, using the permutation matrix Q_2 = P_1 \oplus I_4, we represent the matrix F_8 in the following equivalent form:

\bar{F}_8 = \begin{pmatrix} F_2 & F_2 & F_2 & F_2 \\ B_2 & -B_2 & B_2 & -B_2 \\ & B_4 & & -B_4 \end{pmatrix}, (3.29)

where

F_2 = H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 1 & -j \\ 1 & j \end{pmatrix}, \quad B_4 = \begin{pmatrix} 1 & a & -j & -a^* \\ 1 & -a^* & j & a \\ 1 & -a & -j & a^* \\ 1 & a^* & j & -a \end{pmatrix}, \quad a = \frac{\sqrt{2}}{2}(1-j). (3.30)
We can show that the correction matrix of order 8 has the following form:

A_8 = \frac{1}{8}\left(D_0 \oplus D_1 \oplus D_2\right), (3.31)
where

D_0 = 8I_2, \quad D_1 = 4\begin{pmatrix} 1-j & 1+j \\ 1+j & 1-j \end{pmatrix},

D_2 = 2\begin{pmatrix}
(1-j)+(a-a^*) & (1-j)-(a-a^*) & (1+j)+(a+a^*) & (1+j)-(a+a^*) \\
(1+j)+(a-a^*) & (1+j)-(a-a^*) & (1-j)-(a+a^*) & (1-j)+(a+a^*) \\
(1-j)-(a-a^*) & (1-j)+(a-a^*) & (1+j)-(a+a^*) & (1+j)+(a+a^*) \\
(1+j)-(a-a^*) & (1+j)+(a-a^*) & (1-j)+(a+a^*) & (1-j)-(a+a^*)
\end{pmatrix}. (3.32)
Because a - a^* = -j\sqrt{2} and a + a^* = \sqrt{2},

D_2 = 2\begin{pmatrix}
1 & 1 & 1+\sqrt{2} & 1-\sqrt{2} \\
1 & 1 & 1-\sqrt{2} & 1+\sqrt{2} \\
1 & 1 & 1-\sqrt{2} & 1+\sqrt{2} \\
1 & 1 & 1+\sqrt{2} & 1-\sqrt{2}
\end{pmatrix} - 2j\begin{pmatrix}
1+\sqrt{2} & 1-\sqrt{2} & -1 & -1 \\
-1+\sqrt{2} & -1-\sqrt{2} & 1 & 1 \\
1-\sqrt{2} & 1+\sqrt{2} & -1 & -1 \\
-1-\sqrt{2} & -1+\sqrt{2} & 1 & 1
\end{pmatrix}. (3.33)
We introduce the notations b = (1/4) + (\sqrt{2}/4) and c = (1/4) - (\sqrt{2}/4). Now the correction matrix A_8 = A_8^r + jA_8^i can be written as

A_8^r = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \oplus \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} \oplus \begin{pmatrix} 1/4 & 1/4 & b & c \\ 1/4 & 1/4 & c & b \\ 1/4 & 1/4 & c & b \\ 1/4 & 1/4 & b & c \end{pmatrix}, (3.34)

A_8^i = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \oplus \begin{pmatrix} -1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix} \oplus \begin{pmatrix} -b & -c & 1/4 & 1/4 \\ c & b & -1/4 & -1/4 \\ -c & -b & 1/4 & 1/4 \\ b & c & -1/4 & -1/4 \end{pmatrix}. (3.35)
Now, the 8-point Fourier transform z = \bar{F}_8 x can be realized as follows. First, we perform the 8-point HT y = H_8 x; then we compute the 8-point correction transform z = A_8^r y + jA_8^i y. The flow graphs corresponding to the real and imaginary parts of the correction transform are given in Fig. 3.2.
From Fig. 3.2, we see that an 8-point correction transform needs 14 real addition, 8 real multiplication, and 4 shift operations.
Case N = 16: Let F_{16} = \left(W_{16}^{mn}\right)_{m,n=0}^{15} be a Fourier transform matrix of order 16. Numbering the rows of the Fourier matrix F_{16} by 0, 1, \ldots, 15 and reordering them as 0, 2, \ldots, 12, 14; 1, 3, \ldots, 13, 15, we can represent the matrix F_{16} in the following equivalent form:

\bar{F}_{16} = \begin{pmatrix} F_8 & F_8 \\ B_8 & -B_8 \end{pmatrix}, (3.36)
Figure 3.2 Flow graph (real and imaginary parts) of an 8-point correction transform.
where

F_8 = \begin{pmatrix}
W_{16}^0 & W_{16}^0 & W_{16}^0 & W_{16}^0 & W_{16}^0 & W_{16}^0 & W_{16}^0 & W_{16}^0 \\
W_{16}^0 & W_{16}^2 & W_{16}^4 & W_{16}^6 & -W_{16}^0 & -W_{16}^2 & -W_{16}^4 & -W_{16}^6 \\
W_{16}^0 & W_{16}^4 & -W_{16}^0 & -W_{16}^4 & W_{16}^0 & W_{16}^4 & -W_{16}^0 & -W_{16}^4 \\
W_{16}^0 & W_{16}^6 & -W_{16}^4 & W_{16}^2 & -W_{16}^0 & -W_{16}^6 & W_{16}^4 & -W_{16}^2 \\
W_{16}^0 & -W_{16}^0 & W_{16}^0 & -W_{16}^0 & W_{16}^0 & -W_{16}^0 & W_{16}^0 & -W_{16}^0 \\
W_{16}^0 & -W_{16}^2 & W_{16}^4 & -W_{16}^6 & -W_{16}^0 & W_{16}^2 & -W_{16}^4 & W_{16}^6 \\
W_{16}^0 & -W_{16}^4 & -W_{16}^0 & W_{16}^4 & W_{16}^0 & -W_{16}^4 & -W_{16}^0 & W_{16}^4 \\
W_{16}^0 & -W_{16}^6 & -W_{16}^4 & -W_{16}^2 & -W_{16}^0 & W_{16}^6 & W_{16}^4 & W_{16}^2
\end{pmatrix},

B_8 = \begin{pmatrix}
W_{16}^0 & W_{16}^1 & W_{16}^2 & W_{16}^3 & W_{16}^4 & W_{16}^5 & W_{16}^6 & W_{16}^7 \\
W_{16}^0 & W_{16}^3 & W_{16}^6 & -W_{16}^1 & -W_{16}^4 & -W_{16}^7 & W_{16}^2 & W_{16}^5 \\
W_{16}^0 & W_{16}^5 & -W_{16}^2 & -W_{16}^7 & W_{16}^4 & -W_{16}^1 & -W_{16}^6 & W_{16}^3 \\
W_{16}^0 & W_{16}^7 & -W_{16}^6 & W_{16}^5 & -W_{16}^4 & W_{16}^3 & -W_{16}^2 & W_{16}^1 \\
W_{16}^0 & -W_{16}^1 & W_{16}^2 & -W_{16}^3 & W_{16}^4 & -W_{16}^5 & W_{16}^6 & -W_{16}^7 \\
W_{16}^0 & -W_{16}^3 & W_{16}^6 & W_{16}^1 & -W_{16}^4 & W_{16}^7 & W_{16}^2 & -W_{16}^5 \\
W_{16}^0 & -W_{16}^5 & -W_{16}^2 & W_{16}^7 & W_{16}^4 & W_{16}^1 & -W_{16}^6 & -W_{16}^3 \\
W_{16}^0 & -W_{16}^7 & -W_{16}^6 & -W_{16}^5 & -W_{16}^4 & -W_{16}^3 & -W_{16}^2 & -W_{16}^1
\end{pmatrix}. (3.37)
Similarly, we obtain

F_8 = \begin{pmatrix} F_4 & F_4 \\ B_4 & -B_4 \end{pmatrix}, \quad F_4 = \begin{pmatrix} F_2 & F_2 \\ B_2 & -B_2 \end{pmatrix}, \quad F_2 = \begin{pmatrix} W_{16}^0 & W_{16}^0 \\ W_{16}^0 & -W_{16}^0 \end{pmatrix},

B_2 = \begin{pmatrix} W_{16}^0 & W_{16}^4 \\ W_{16}^0 & -W_{16}^4 \end{pmatrix}, \quad B_4 = \begin{pmatrix} W_{16}^0 & W_{16}^2 & W_{16}^4 & W_{16}^6 \\ W_{16}^0 & W_{16}^6 & -W_{16}^4 & W_{16}^2 \\ W_{16}^0 & -W_{16}^2 & W_{16}^4 & -W_{16}^6 \\ W_{16}^0 & -W_{16}^6 & -W_{16}^4 & -W_{16}^2 \end{pmatrix}. (3.38)
Therefore, the Fourier transform matrix of order 16 from Eq. (3.36) can be represented in the following equivalent form:

\bar{F}_{16} = \begin{pmatrix}
F_2 & F_2 & F_2 & F_2 & F_2 & F_2 & F_2 & F_2 \\
B_2 & -B_2 & B_2 & -B_2 & B_2 & -B_2 & B_2 & -B_2 \\
& B_4 & & -B_4 & & B_4 & & -B_4 \\
& & B_8 & & & & -B_8 &
\end{pmatrix}. (3.39)
Using the properties of the exponential function W, we obtain

W_{16}^1 = \cos\frac{\pi}{8} - j\sin\frac{\pi}{8} = c - js = b,
W_{16}^2 = \cos\frac{\pi}{4} - j\sin\frac{\pi}{4} = \frac{\sqrt{2}}{2}(1-j) = a,
W_{16}^3 = \cos\frac{3\pi}{8} - j\sin\frac{3\pi}{8} = s - jc = -jb^*,
W_{16}^4 = \cos\frac{\pi}{2} - j\sin\frac{\pi}{2} = -j,
W_{16}^5 = \cos\frac{5\pi}{8} - j\sin\frac{5\pi}{8} = s + jc = jb,
W_{16}^6 = \cos\frac{3\pi}{4} - j\sin\frac{3\pi}{4} = \frac{\sqrt{2}}{2}(1+j) = a^*,
W_{16}^7 = \cos\frac{7\pi}{8} - j\sin\frac{7\pi}{8} = -c - js = -b^*. (3.40)
Using Eq. (3.40), we obtain

B_2 = B_2^1 + jB_2^2, (3.41)

where

B_2^1 = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \quad B_2^2 = \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}, \quad F_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. (3.42)
We can also check that

B_4 = B_4^1 + jB_4^2, (3.43)

where

B_4^1 = \begin{pmatrix}
1 & \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \\
1 & \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \\
1 & -\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2} \\
1 & -\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2}
\end{pmatrix}, \quad B_4^2 = \begin{pmatrix}
0 & -\frac{\sqrt{2}}{2} & -1 & \frac{\sqrt{2}}{2} \\
0 & \frac{\sqrt{2}}{2} & 1 & -\frac{\sqrt{2}}{2} \\
0 & \frac{\sqrt{2}}{2} & -1 & -\frac{\sqrt{2}}{2} \\
0 & -\frac{\sqrt{2}}{2} & 1 & \frac{\sqrt{2}}{2}
\end{pmatrix}. (3.44)

B_8 = \begin{pmatrix}
1 & b & a & -jb^* & -j & jb & a^* & -b^* \\
1 & -jb^* & a^* & -b & j & b^* & a & jb \\
1 & jb & -a & b^* & -j & -b & -a^* & -jb^* \\
1 & -b^* & -a^* & jb & j & -jb^* & -a & b \\
1 & -b & a & jb^* & -j & -jb & a^* & b^* \\
1 & jb^* & a^* & b & j & -b^* & a & -jb \\
1 & -jb & -a & -b^* & -j & b & -a^* & jb^* \\
1 & b^* & -a^* & -jb & j & jb^* & -a & -b
\end{pmatrix}. (3.45)
We can see that

B_8 = B_8^1 + jB_8^2, (3.46)

where

B_8^1 = \begin{pmatrix}
1 & c & \frac{\sqrt{2}}{2} & s & 0 & s & \frac{\sqrt{2}}{2} & -c \\
1 & s & \frac{\sqrt{2}}{2} & -c & 0 & c & \frac{\sqrt{2}}{2} & s \\
1 & s & -\frac{\sqrt{2}}{2} & c & 0 & -c & -\frac{\sqrt{2}}{2} & s \\
1 & -c & -\frac{\sqrt{2}}{2} & s & 0 & s & -\frac{\sqrt{2}}{2} & c \\
1 & -c & \frac{\sqrt{2}}{2} & -s & 0 & -s & \frac{\sqrt{2}}{2} & c \\
1 & -s & \frac{\sqrt{2}}{2} & c & 0 & -c & \frac{\sqrt{2}}{2} & -s \\
1 & -s & -\frac{\sqrt{2}}{2} & -c & 0 & c & -\frac{\sqrt{2}}{2} & -s \\
1 & c & -\frac{\sqrt{2}}{2} & -s & 0 & -s & -\frac{\sqrt{2}}{2} & -c
\end{pmatrix},
B_8^2 = \begin{pmatrix}
0 & -s & -\frac{\sqrt{2}}{2} & c & -1 & c & \frac{\sqrt{2}}{2} & -s \\
0 & -c & \frac{\sqrt{2}}{2} & s & 1 & s & -\frac{\sqrt{2}}{2} & c \\
0 & c & \frac{\sqrt{2}}{2} & s & -1 & s & -\frac{\sqrt{2}}{2} & -c \\
0 & -s & -\frac{\sqrt{2}}{2} & c & 1 & -c & \frac{\sqrt{2}}{2} & -s \\
0 & s & -\frac{\sqrt{2}}{2} & c & -1 & -c & \frac{\sqrt{2}}{2} & s \\
0 & c & \frac{\sqrt{2}}{2} & -s & 1 & -s & -\frac{\sqrt{2}}{2} & -c \\
0 & -c & \frac{\sqrt{2}}{2} & -s & -1 & -s & -\frac{\sqrt{2}}{2} & c \\
0 & s & -\frac{\sqrt{2}}{2} & -c & 1 & c & \frac{\sqrt{2}}{2} & s
\end{pmatrix}. (3.47)
Now, using Eq. (3.39), the Fourier transform matrix can be represented in the following equivalent form:

F_{16} = F_{16}^1 + jF_{16}^2, (3.48)

where

F_{16}^1 = \begin{pmatrix}
H_2 & H_2 & H_2 & H_2 & H_2 & H_2 & H_2 & H_2 \\
B_2^1 & -B_2^1 & B_2^1 & -B_2^1 & B_2^1 & -B_2^1 & B_2^1 & -B_2^1 \\
& B_4^1 & & -B_4^1 & & B_4^1 & & -B_4^1 \\
& & B_8^1 & & & & -B_8^1 &
\end{pmatrix},

F_{16}^2 = \begin{pmatrix}
O_2 & O_2 & O_2 & O_2 & O_2 & O_2 & O_2 & O_2 \\
B_2^2 & -B_2^2 & B_2^2 & -B_2^2 & B_2^2 & -B_2^2 & B_2^2 & -B_2^2 \\
& B_4^2 & & -B_4^2 & & B_4^2 & & -B_4^2 \\
& & B_8^2 & & & & -B_8^2 &
\end{pmatrix}, (3.49)
(3.49)
where O2 is the zero matrix of order 2.According to Lemma 3.1.1, the correction transform matrix takes the following
form:
A16 = A116 + jA2
16, (3.50)
where

A_{16}^1 = \frac{1}{16}\left(16I_2 \oplus 8B_2^1 H_2 \oplus 4B_4^1 H_4 \oplus 2B_8^1 H_8\right),
A_{16}^2 = \frac{1}{16}\left(O_2 \oplus 8B_2^2 H_2 \oplus 4B_4^2 H_4 \oplus 2B_8^2 H_8\right). (3.51)
Now, using the following notations:

u = 1 + \sqrt{2}, \quad v = 1 - \sqrt{2}, \quad s_1 = 1 + 2s, \quad s_2 = 1 - 2s,
c_1 = 1 + 2c, \quad c_2 = 1 - 2c, \quad e = u + 2s, \quad f = v + 2s,
g = u - 2s, \quad h = v - 2s, \quad r = u + 2c, \quad t = v + 2c,
p = u - 2c, \quad q = v - 2c, (3.52)
we can represent the blocks of the correction matrix as

B_2^1 H_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad B_2^2 H_2 = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix},

B_4^1 H_4 = \begin{pmatrix} u & v & 1 & 1 \\ u & v & 1 & 1 \\ v & u & 1 & 1 \\ v & u & 1 & 1 \end{pmatrix}, \quad B_4^2 H_4 = \begin{pmatrix} -1 & -1 & v & u \\ 1 & 1 & -v & -u \\ -1 & -1 & u & v \\ 1 & 1 & -u & -v \end{pmatrix},

B_8^1 H_8 = \begin{pmatrix}
e & g & t & q & c_1 & c_2 & s_2 & s_1 \\
e & g & t & q & c_2 & c_1 & s_1 & s_2 \\
f & h & p & r & c_1 & c_2 & s_1 & s_2 \\
f & h & p & r & c_2 & c_1 & s_2 & s_1 \\
g & e & q & t & c_2 & c_1 & s_1 & s_2 \\
g & e & q & t & c_1 & c_2 & s_2 & s_1 \\
h & f & r & p & c_2 & c_1 & s_2 & s_1 \\
h & f & r & p & c_1 & c_2 & s_1 & s_2
\end{pmatrix},

B_8^2 H_8 = \begin{pmatrix}
-s_1 & -s_2 & -c_2 & -c_1 & q & t & g & e \\
s_1 & s_2 & c_2 & c_1 & -t & -q & -f & -h \\
-s_2 & -s_1 & -c_2 & -c_1 & r & p & h & f \\
s_2 & s_1 & c_2 & c_1 & -p & -r & -f & -h \\
-s_2 & -s_1 & -c_1 & -c_2 & t & q & e & g \\
s_2 & s_1 & c_1 & c_2 & -q & -t & -g & -e \\
-s_1 & -s_2 & -c_1 & -c_2 & p & r & f & h \\
s_1 & s_2 & c_1 & c_2 & -r & -p & -h & -f
\end{pmatrix}. (3.53)
Now we want to show that the transform can be realized via a fast algorithm. We denote y = H16x; then X = (1/16)A16y. We perform the correction transform as

A16y = A¹16y + jA²16y. (3.54)
Discrete Orthogonal Transforms and Hadamard Matrices 105
Let z = (z0, z1, . . . , z15) and y = (y0, y1, . . . , y15). First we compute the real part of this transform. Using the following notations:
A1 = y2 + y3,  B1 = y6 + y7,  B2 = uy4 + vy5,  B3 = vy4 + uy5,
C1 = c1y12 + c2y13,  C2 = c2y12 + c1y13,  S1 = s1y14 + s2y15,  S2 = s2y14 + s1y15,
(3.55)
E = ey8 + gy9 + ty10 + qy11,  F = fy8 + hy9 + py10 + ry11,
G = gy8 + ey9 + qy10 + ty11,  H = hy8 + fy9 + ry10 + py11,
we obtain
zʳ0 = y0,  zʳ1 = y1,  zʳ2 = A1,  zʳ3 = A1,
zʳ4 = B1 + B2,  zʳ5 = zʳ4,  zʳ6 = B1 + B3,  zʳ7 = zʳ6,
zʳ8 = E + C1 + S2,  zʳ9 = E + C2 + S1,  zʳ10 = F + C1 + S2,  zʳ11 = F + C2 + S2,
zʳ12 = G + C2 + S1,  zʳ13 = G + C1 + S2,  zʳ14 = H + C2 + S2,  zʳ15 = H + C1 + S1.
(3.56)
The imaginary part of a 16-point Fourier correction transform can be realized as follows. Denoting
Aⁱ1 = y2 − y3,  Bⁱ1 = y4 + y5,
Bⁱ2 = uy6 + vy7,  Bⁱ3 = vy6 + uy7,
Cⁱ1 = c1y10 + c2y11,  Cⁱ2 = c2y10 + c1y11,
Sⁱ1 = s1y8 + s2y9,  Sⁱ2 = s2y8 + s1y9,
(3.57)
Q = qy12 + ty13 + gy14 + ey15,
T = ty12 + qy13 + hy14 + fy15,
R = ry12 + py13 + hy14 + fy15,
P = py12 + ry13 + fy14 + hy15,
(3.58)
we obtain
zⁱ0 = 0,  zⁱ1 = 0,  zⁱ2 = −Aⁱ1,  zⁱ3 = Aⁱ1,
zⁱ4 = −Aⁱ1 + Bⁱ2,  zⁱ5 = −zⁱ4,  zⁱ6 = −Aⁱ1 + Bⁱ1,  zⁱ7 = −zⁱ6,
zⁱ8 = Q − Sⁱ1 − Cⁱ2,  zⁱ9 = −T + Sⁱ1 + Cⁱ2,  zⁱ10 = R − Sⁱ2 − Cⁱ2,  zⁱ11 = −P + Sⁱ2 + Cⁱ2,
zⁱ12 = T − Sⁱ2 − Cⁱ1,  zⁱ13 = −Q + Sⁱ2 + Cⁱ1,  zⁱ14 = P + Sⁱ1 − Cⁱ1,  zⁱ15 = −R + Sⁱ1 + Cⁱ1.
(3.59)
It can be shown that the complexity of a 16-point correction transform is 68 real addition and 56 real multiplication operations. Therefore, the 16-point Fourier
transform, if using the correction transform, needs only 68 + 64 = 132 real addition and 56 real multiplication operations (see Figs. 3.3 and 3.4).

Figure 3.3 Flow graph of the real part of a 16-point Fourier correction transform.
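The two-step computation above (a Hadamard transform followed by a correction transform) is easy to check numerically. The sketch below is an illustration, not the book's optimized 16-point flow graph: it forms the correction matrix A = (1/N)·FN·HN for N = 4 and verifies that applying it to the Hadamard spectrum reproduces the DFT.

```python
import cmath

def sylvester(n):
    # Sylvester-Hadamard matrix of order 2**n
    H = [[1]]
    for _ in range(n):
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

N = 4
F = [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)] for k in range(N)]
H = sylvester(2)
# Correction matrix A = (1/N) F H; then F x = A (H x), since (1/N) H H = I
A = [[sum(F[i][k] * H[k][j] for k in range(N)) / N for j in range(N)] for i in range(N)]

x = [1.0, 2.0, -1.0, 0.5]
y = matvec(H, x)               # step 1: Hadamard spectrum
X_corr = matvec(A, y)          # step 2: correction transform
X_direct = matvec(F, x)        # reference: direct DFT
err = max(abs(a - b) for a, b in zip(X_corr, X_direct))
```

The same identity underlies the 16-point algorithm; only the factored, block-diagonal form of A changes the operation count.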
3.3 Fast Hartley Transform
Let [Hart]N = (ak,n), k, n = 0, 1, . . . , N − 1, be a discrete Hartley transform9,33,66,75,81,82 matrix of order N = 2^r, with

ak,n = (1/√N) Cas((2π/N)kn), (3.60)
where
Cas(α) = cos(α) + sin(α). (3.61)
Figure 3.4 Flow graph of the imaginary part of a 16-point Fourier correction transform.
The N-point forward discrete Hartley transform of vector x can be expressed as
z = [Hart]N x = (1/N)[Hart]N HN HN x = BN HN x, (3.62)
where
BN = (1/N)[Hart]N HN , (3.63)
and HN is the Sylvester–Hadamard matrix of order N,
H2ⁿ = ( H2ⁿ⁻¹   H2ⁿ⁻¹
        H2ⁿ⁻¹  −H2ⁿ⁻¹ ),  H1 = (1),  n ≥ 1. (3.64)
Thus, an N-point Hartley transform can be calculated in two steps as follows:
Step 1: y = H2ⁿ x.
Step 2: z = B2ⁿ y.
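The two-step procedure can be sketched directly from the definitions above (the Cas kernel and the Sylvester–Hadamard matrix). For N = 8, the following checks that z = BN(HN x) reproduces the Hartley transform; the variable names are illustrative.

```python
import math

def sylvester(n):
    # Sylvester-Hadamard matrix of order 2**n
    H = [[1]]
    for _ in range(n):
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

def cas(a):
    return math.cos(a) + math.sin(a)

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

N = 8
Hart = [[cas(2 * math.pi * k * n / N) / math.sqrt(N) for n in range(N)] for k in range(N)]
H = sylvester(3)
# B_N = (1/N) [Hart]_N H_N, as in Eq. (3.63)
B = [[sum(Hart[i][k] * H[k][j] for k in range(N)) / N for j in range(N)] for i in range(N)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
y = matvec(H, x)            # step 1: y = H_N x
z = matvec(B, y)            # step 2: z = B_N y
z_direct = matvec(Hart, x)  # reference: direct Hartley transform
err_hart = max(abs(a - b) for a, b in zip(z, z_direct))
```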
Denote
[Hart]N = CN + SN,  CN = (ck,n),  SN = (sk,n),  k, n = 0, 1, . . . , N − 1, (3.65)
where
ck,n = cos[(2π/N)kn],  sk,n = sin[(2π/N)kn],  k, n = 0, 1, . . . , N − 1. (3.66)
We can check that
c2k,N/2+n = c2k,n,  c2k+1,N/2+n = −c2k+1,n,
s2k,N/2+n = s2k,n,  s2k+1,N/2+n = −s2k+1,n. (3.67)
Using Eq. (3.67), we can represent a discrete Hartley transform matrix by Eq. (3.4). Hence, according to Lemma 3.1.1, the matrix
BN = (1/N)[Hart]N HN (3.68)
can be represented as a block-diagonal structure [see Eq. (3.5)]. Without loss of generality, we can prove it for the cases N = 4, 8, and 16.

Case N = 4: The discrete Hartley transform matrix of order 4 is
[Hart]4 =
( c0,0 + s0,0   c0,1 + s0,1   c0,2 + s0,2   c0,3 + s0,3
  c1,0 + s1,0   c1,1 + s1,1   c1,2 + s1,2   c1,3 + s1,3
  c2,0 + s2,0   c2,1 + s2,1   c2,2 + s2,2   c2,3 + s2,3
  c3,0 + s3,0   c3,1 + s3,1   c3,2 + s3,2   c3,3 + s3,3 ). (3.69)
By using the relations in Eq. (3.67) and ordering the rows of [Hart]4 as 0, 2, 1, 3, we obtain
[Hart]4 =
( c0,0 + s0,0   c0,1 + s0,1     c0,0 + s0,0      c0,1 + s0,1
  c2,0 + s2,0   c2,1 + s2,1     c2,0 + s2,0      c2,1 + s2,1
  c1,0 + s1,0   c1,1 + s1,1   −(c1,0 + s1,0)   −(c1,1 + s1,1)
  c3,0 + s3,0   c3,1 + s3,1   −(c3,0 + s3,0)   −(c3,1 + s3,1) )
= ( A2   A2
    P2  −P2 ), (3.70)
where
A2 = P2 = H2, (3.71)
i.e., [Hart]4 is a Hadamard matrix; therefore, the correction transform in this case (B4) is the identity matrix.
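This claim is simple to confirm numerically: dropping the 1/√N factor, the order-4 Hartley matrix has only ±1 entries and satisfies M Mᵀ = 4I4, i.e., it is a Hadamard matrix. A minimal check:

```python
import math

N = 4
cas = lambda a: math.cos(a) + math.sin(a)
# unnormalized order-4 Hartley matrix
M = [[cas(2 * math.pi * k * n / N) for n in range(N)] for k in range(N)]

# every entry is +1 or -1
entries_pm1 = all(abs(abs(v) - 1.0) < 1e-12 for row in M for v in row)
# rows are mutually orthogonal with norm N: M M^T = N I
gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]
is_hadamard = all(abs(gram[i][j] - (N if i == j else 0.0)) < 1e-9
                  for i in range(N) for j in range(N))
```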
Case N = 8: The discrete Hartley transform matrix of order 8 can be represented as
[Hart]8 =
( H2   H2   H2   H2
  H2  −H2   H2  −H2
  P4            −P4  ), (3.72)
where
H2 = ( 1   1
       1  −1 ),  P4 = ( 1   √2    1    0
                        1    0   −1    √2
                        1  −√2    1    0
                        1    0   −1   −√2 ). (3.73)
Note that the matrix in Eq. (3.72) is obtained as

Q2Q1[Hart]8, (3.74)
where
Q1 =
( 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0
  0 0 0 0 1 0 0 0
  0 0 0 0 0 0 1 0
  0 1 0 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 0 1 0 0
  0 0 0 0 0 0 0 1 ),  Q2 =
( 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0
  0 1 0 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 1 0 0 0
  0 0 0 0 0 1 0 0
  0 0 0 0 0 0 1 0
  0 0 0 0 0 0 0 1 ). (3.75)
The correction matrix in this case will be B8 = (1/8)[4I2 ⊕ 4I2 ⊕ 2P4H4], i.e.,
B8 = (1/8) [ 4I4 ⊕ 2 ( b   a   s  −s
                       s  −s   a   b
                       a   b  −s   s
                      −s   s   b   a ) ], (3.76)
where
s = √2,  a = 2 − s,  b = 2 + s. (3.77)
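The block-diagonal structure behind Eq. (3.76) can be verified numerically. In the sketch below (an illustration using the unnormalized cas kernel; the row order [0, 4, 2, 6, 1, 3, 5, 7] is one reordering consistent with Eq. (3.72) and is an assumption of this sketch), the matrix (1/8)·[Hart]8·H8 comes out block diagonal with blocks of sizes 2, 2, and 4.

```python
import math

def sylvester(n):
    H = [[1]]
    for _ in range(n):
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

N = 8
cas = lambda a: math.cos(a) + math.sin(a)
order = [0, 4, 2, 6, 1, 3, 5, 7]          # assumed net effect of Q2 Q1
Hart = [[cas(2 * math.pi * k * n / N) for n in range(N)] for k in order]
H = sylvester(3)
B = [[sum(Hart[i][k] * H[k][j] for k in range(N)) / N for j in range(N)] for i in range(N)]

# blocks occupy index ranges [0,2), [2,4), [4,8)
def block_of(i):
    return 0 if i < 2 else (1 if i < 4 else 2)

off_block = max(abs(B[i][j]) for i in range(N) for j in range(N)
                if block_of(i) != block_of(j))
```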
Figure 3.5 Flow graph of the 8-point Hartley correction transform.
We can see that the third block of matrix B8 may be factorized as (see Fig. 3.5)
( b   a   s  −s
  s  −s   a   b
  a   b  −s   s
 −s   s   b   a ) =
( 1   1   0   1
  0   1   1  −1
  1  −1   0  −1
  0  −1   1   1 )
( 2   0   0   0
  0   s   0   0
  0   0   2   0
  0   0   0   s )
( 1   1   0   0
  1  −1   0   0
  0   0   1   1
  0   0   1  −1 ). (3.78)
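The factorization can be confirmed by multiplying out the three factors, with s = √2, a = 2 − s, b = 2 + s as in Eq. (3.77):

```python
import math

s = math.sqrt(2.0)
a, b = 2.0 - s, 2.0 + s

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M1 = [[1, 1, 0, 1], [0, 1, 1, -1], [1, -1, 0, -1], [0, -1, 1, 1]]
D  = [[2, 0, 0, 0], [0, s, 0, 0], [0, 0, 2, 0], [0, 0, 0, s]]
M3 = [[1, 1, 0, 0], [1, -1, 0, 0], [0, 0, 1, 1], [0, 0, 1, -1]]

target = [[b, a, s, -s], [s, -s, a, b], [a, b, -s, s], [-s, s, b, a]]
product = matmul(matmul(M1, D), M3)
err_fact = max(abs(product[i][j] - target[i][j]) for i in range(4) for j in range(4))
```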
Case N = 16: Using the properties of the elements of a Hartley matrix [see Eq. (3.67)], the Hartley transform matrix of order 16 can be represented as
A16 =
( A2   A2   A2   A2   A2   A2   A2   A2
  P2  −P2   P2  −P2   P2  −P2   P2  −P2
  P4       −P4        P4       −P4
  P8                 −P8              ), (3.79)
where
A2 = C2 + S2 = ( 1   1
                 1  −1 ),

P2 = Pᶜ2 + Pˢ2 = ( 1   1
                   1  −1 ),

P4 = Pᶜ4 + Pˢ4 = ( 1   √2    1    0
                   1    0   −1    √2
                   1  −√2    1    0
                   1    0   −1   −√2 ),
(3.80)
and P8 = Pᶜ8 + Pˢ8 [here we use the notations ci = cos(iπ/8) and si = sin(iπ/8)]:
Pᶜ8 =
( 1   c1   c2   c3   0  −c3  −c2  −c1
  1   c3  −c2  −c1   0   c1   c2  −c3
  1  −c3  −c2   c1   0  −c1   c2   c3
  1  −c1   c2  −c3   0   c3  −c2   c1
  1  −c1   c2  −c3   0   c3  −c2   c1
  1  −c3  −c2   c1   0  −c1   c2   c3
  1   c3  −c2  −c1   0   c1   c2  −c3
  1   c1   c2   c3   0  −c3  −c2  −c1 ), (3.81)
Pˢ8 =
( 0   s1   s2   s3   1   s3   s2   s1
  0   s3   s2  −s1   1  −s1   s2   s3
  0   s3  −s2  −s1   1  −s1  −s2   s3
  0   s1  −s2   s3   1   s3  −s2   s1
  0  −s1   s2  −s3   1  −s3   s2  −s1
  0  −s3   s2   s1   1   s1   s2  −s3
  0  −s3  −s2   s1   1   s1  −s2  −s3
  0  −s1  −s2  −s3   1  −s3  −s2  −s1 ). (3.82)
From Eq. (3.79) and Lemma 3.1.1, we obtain the Hartley correction matrix as
B16 = (1/16)(8A2H2 ⊕ 8P2H2 ⊕ 4P4H4 ⊕ 2P8H8); (3.83)
here we denote

s = √2,  a = 2 − s,  b = 2 + s,
e = 1 − s,  f = 1 + s,
c+ = 2[cos(π/8) + cos(3π/8)],  c− = 2[cos(π/8) − cos(3π/8)],
s+ = 2[sin(π/8) + sin(3π/8)],  s− = 2[sin(π/8) − sin(3π/8)].
(3.84)
And using Eqs. (3.80)–(3.82), we can compute
A2H2 = P2H2 = 2I2,
P4H4 = ( b   a   s  −s
         s  −s   a   b
         a   b  −s   s
        −s   s   b   a ),  P8H8 = Pᶜ8H8 + Pˢ8H8.
(3.85)
After several mathematical manipulations, we obtain
Pᶜ8H8 =
( 1  1  1 + c−  1 − c−  f + c+  f − c+  e  e
  1  1  1 + c+  1 − c+  e − c−  e + c−  f  f
  1  1  1 − c+  1 + c+  e + c−  e − c−  f  f
  1  1  1 − c−  1 + c−  f − c+  f + c+  e  e
  1  1  1 − c−  1 + c−  f − c+  f + c+  e  e
  1  1  1 − c+  1 + c+  e + c−  e − c−  f  f
  1  1  1 + c+  1 − c+  e − c−  e + c−  f  f
  1  1  1 + c−  1 − c−  f + c+  f − c+  e  e ), (3.86)
Pˢ8H8 =
( f + s+  f − s+  e  e  −1  −1  −1 + s−  −1 − s−
  f − s−  f + s−  e  e  −1  −1  −1 + s+  −1 − s+
  e − s−  e + s−  f  f  −1  −1  −1 + s+  −1 − s+
  e + s+  e − s+  f  f  −1  −1  −1 + s−  −1 − s−
  f − s+  f + s+  e  e  −1  −1  −1 − s−  −1 + s−
  f + s−  f − s−  e  e  −1  −1  −1 − s+  −1 + s+
  e + s−  e − s−  f  f  −1  −1  −1 − s+  −1 + s+
  e − s+  e + s+  f  f  −1  −1  −1 − s−  −1 + s− ), (3.87)
P8H8 =
( b + s+  b − s+  a + c−  a − c−   s + c+   s − c+  −s + s−  −s − s−
  b − s−  b + s−  a + c+  a − c+  −s − c−  −s + c−   s + s+   s − s+
  a − s−  a + s−  b − c+  b + c+  −s + c−  −s − c−   s + s+   s − s+
  a + s+  a − s+  b − c−  b + c−   s − c+   s + c+  −s + s−  −s − s−
  b − s+  b + s+  a − c−  a + c−   s − c+   s + c+  −s − s−  −s + s−
  b + s−  b − s−  a − c+  a + c+  −s + c−  −s − c−   s − s+   s + s+
  a + s−  a − s−  b + c+  b − c+  −s − c−  −s + c+   s − s+   s + s+
  a − s+  a + s+  b + c−  b − c−   s + c+   s − c+  −s − s−  −s + s− ).
(3.88)
Now, we wish to show that the Hartley transform can be realized via fast algorithms. The 16-point Hartley transform z = [Hart]16x can be realized as follows. First, we perform the 16-point HT y = H16x; then we compute the 16-point correction transform. Using Eq. (3.83), we find that
z = 8A2H2 (y0, y1)ᵀ ⊕ 8P2H2 (y2, y3)ᵀ ⊕ 4P4H4 (y4, y5, y6, y7)ᵀ ⊕ 2P8H8 (y8, y9, . . . , y15)ᵀ. (3.89)
From Eq. (3.22), we obtain

(z0, z1)ᵀ = A2H2 (y0, y1)ᵀ = (2y0, 2y1)ᵀ,
(z2, z3)ᵀ = P2H2 (y2, y3)ᵀ = (2y2, 2y3)ᵀ,
(z4, z5, z6, z7)ᵀ = P4H4 (y4, y5, y6, y7)ᵀ
= ( 2(y4 + y5) + s(y4 − y5 + y6 − y7)
    2(y6 + y7) + s(y4 − y5 − y6 + y7)
    2(y4 + y5) − s(y4 − y5 + y6 − y7)
    2(y6 + y7) − s(y4 − y5 − y6 + y7) ).
(3.90)
The coefficients

(z8, z9, . . . , z15)ᵀ = P8H8 (y8, y9, . . . , y15)ᵀ (3.91)
can be calculated by the following formulas:
z8 = A1 + B1 + C1 + D,
z9 = A3 + B3 − C3 + D,
z10 = A5 + B5 + C3 − D,
z11 = A7 + B7 − C4 + D,
z12 = A2 + B2 − C2 + D,
z13 = A4 + B4 + C2 − D,
z14 = A6 + B6 − C3 − D,
z15 = A8 + B8 + C4 + D,
(3.92)
where
A1 = b(y8 + y9) + s+(y8 − y9),  A2 = b(y8 + y9) − s+(y8 − y9),
A3 = b(y8 + y9) − s−(y8 − y9),  A4 = b(y8 + y9) + s−(y8 − y9),
A5 = b(y10 + y11) − c+(y10 − y11),  A6 = b(y10 + y11) + c+(y10 − y11),
A7 = b(y10 + y11) − c−(y10 − y11),  A8 = b(y10 + y11) + c−(y10 − y11),
B1 = a(y10 + y11) + c−(y10 − y11),  B2 = a(y10 + y11) − c−(y10 − y11),
B3 = a(y10 + y11) + c+(y10 − y11),  B4 = a(y10 + y11) − c+(y10 − y11),
B5 = a(y8 + y9) − s−(y8 − y9),  B6 = a(y8 + y9) + s−(y8 − y9),
B7 = a(y8 + y9) + s+(y8 − y9),  B8 = a(y8 + y9) − s+(y8 − y9),
C1 = c+(y12 − y13) + s−(y14 − y15),  C2 = c−(y12 − y13) − s+(y14 − y15),
C3 = c−(y12 − y13) + s+(y14 − y15),  C4 = c+(y12 − y13) − s−(y14 − y15),
D = s(y12 + y13 − y14 − y15).
(3.93)
Figure 3.6 Flow graph for P4H4y transform and computation of Ai, Bi+4, i = 1, 4.
Figure 3.7 Flow graphs for computation of Ai+4, Bi, D, and Ci, i = 1, 4.
Algorithm: The 16-point Hartley transform algorithm using HT is formulated as follows:

Step 1. Input column vector x = (x0, x1, . . . , x15).
Step 2. Perform 16-point HT y = H16x.
Step 3. Compute zi, i = 0, 1, . . . , 15, using Eqs. (3.90)–(3.93).
Step 4. Output spectral coefficients zi, i = 0, 1, . . . , 15.
It can be shown that the complexity of a 16-point correction transform is 61 additions, 15 multiplications, and 6 shifts. Therefore, the 16-point Hartley transform using a correction transform needs only 61 + 64 = 125 addition, 15 multiplication, and 6 shift operations. Figures 3.6–3.8 are the flow graphs for the Hartley correction coefficients calculation.
Figure 3.8 Flow graphs of the 16-point Hartley correction transform.
3.4 Fast Cosine Transform
Let CN be the N × N transform matrix of a discrete cosine transform of type 2 (DCT-2), i.e.,
CN = ( ak cos[(2n + 1)kπ/(2N)] ),  k, n = 0, 1, . . . , N − 1, (3.94)

where

a0 = √2/2,  ak = 1,  k ≠ 0. (3.95)
For more detail on DCT transforms, see also Refs. 9, 19, 32, 33, 40, 49, 80–82,and 98.
We can check that CN is an orthogonal matrix, i.e., CN CᵀN = (N/2)IN. We denote the elements of the DCT-2 matrix (without normalizing coefficients ak) by

ck,n = cos[(2n + 1)kπ/(2N)],  k, n = 0, 1, . . . , N − 1. (3.96)
One can show that
c2k,n = c2k,N−n−1,  c2k+1,n = −c2k+1,N−n−1,  k, n = 0, 1, . . . , N/2 − 1. (3.97)
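Both the orthogonality relation CN CᵀN = (N/2)IN and the symmetry relations in Eq. (3.97) can be confirmed numerically, e.g., for N = 8:

```python
import math

N = 8
c = [[math.cos((2 * n + 1) * k * math.pi / (2 * N)) for n in range(N)] for k in range(N)]

# symmetry (3.97): even rows symmetric, odd rows antisymmetric about the middle
sym_err = max(
    max(abs(c[2 * k][N - n - 1] - c[2 * k][n]) for k in range(N // 2) for n in range(N // 2)),
    max(abs(c[2 * k + 1][N - n - 1] + c[2 * k + 1][n]) for k in range(N // 2) for n in range(N // 2)),
)

# orthogonality of the normalized DCT-2 matrix: C C^T = (N/2) I
a = [math.sqrt(2.0) / 2.0] + [1.0] * (N - 1)
C = [[a[k] * c[k][n] for n in range(N)] for k in range(N)]
gram = [[sum(x * y for x, y in zip(r1, r2)) for r2 in C] for r1 in C]
orth_err = max(abs(gram[i][j] - (N / 2 if i == j else 0.0))
               for i in range(N) for j in range(N))
```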
From Eq. (3.97), it follows that the matrix CN can be represented as
CN ≡ ( CN/2   CN/2
       DN/2  −DN/2 ). (3.98)
Hence, according to Lemma 3.1.1, the matrix AN = (1/N)CN HN has a block-diagonal structure, where HN is a Sylvester–Hadamard matrix of order N. Without loss of generality, we can prove it for the cases N = 4, 8, and 16.
Case N = 4: The DCT matrix of order 4 has the form [here we use the notation ci = cos(iπ/8)]
C4 = ( 1    1    1    1
       c1   c3  −c3  −c1
       c2  −c2  −c2   c2
       c3   c1  −c1  −c3 ). (3.99)
Using the following permutation matrices,
P1 = ( 1 0 0 0
       0 0 1 0
       0 1 0 0
       0 0 0 1 ),  P2 = ( 1 0 0 0
                          0 1 0 0
                          0 0 0 1
                          0 0 1 0 ), (3.100)
we obtain
C4 = P1C4P2 = ( 1    1    1    1
                c2  −c2   c2  −c2
                c1   c3  −c1  −c3
                c3   c1  −c3  −c1 ) = ( C2   C2
                                        D2  −D2 ). (3.101)
Therefore, the correction matrix in this case takes the form
A4 = (1/4)(2C2H2 ⊕ 2D2H2) = (1/2) [ ( 2   0
                                      0   2c2 ) ⊕ ( 2(c1 + c3)   2(c1 − c3)
                                                    2(c1 + c3)  −2(c1 − c3) ) ]. (3.102)
A flow graph of a 4-point cosine correction transform is given in Fig. 3.9.
Figure 3.9 Flow graph of the 4-point cosine correction transform (r1 = c1 + c3, r2 = c1 − c3, s = √2).
Case N = 8: The DCT matrix of order 8 has the form [here we use the notation ci = cos(iπ/16)]
C8 =
( 1    1    1    1    1    1    1    1
  c1   c3   c5   c7  −c7  −c5  −c3  −c1
  c2   c6  −c6  −c2  −c2  −c6   c6   c2
  c3  −c7  −c1  −c5   c5   c1   c7  −c3
  c4  −c4  −c4   c4   c4  −c4  −c4   c4
  c5  −c1   c7   c3  −c3  −c7   c1  −c5
  c6  −c2   c2  −c6  −c6   c2  −c2   c6
  c7  −c5   c3  −c1   c1  −c3   c5  −c7 ). (3.103)
Let
P1 =
( 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0
  0 0 0 0 1 0 0 0
  0 0 0 0 0 0 1 0
  0 1 0 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 0 1 0 0
  0 0 0 0 0 0 0 1 ),  P2 =
( 1 0 0 0 0 0 0 0
  0 1 0 0 0 0 0 0
  0 0 1 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 0 0 0 1
  0 0 0 0 0 0 1 0
  0 0 0 0 0 1 0 0
  0 0 0 0 1 0 0 0 ),

P3 =
( 1 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0
  0 1 0 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 1 0 0 0
  0 0 0 0 0 1 0 0
  0 0 0 0 0 0 1 0
  0 0 0 0 0 0 0 1 ),  Q = ( 1 0 0 0
                            0 1 0 0
                            0 0 0 1
                            0 0 1 0 ),  P4 = ( Q   0
                                               0   Q ). (3.104)
Using the above-given matrices, we obtain the block representation for the DCT matrix of order 8 as
C8 = P3P1C8P2P4 = ( C2   C2   C2   C2
                    B2  −B2   B2  −B2
                    D4Q           −D4Q ), (3.105)
where
C2 = ( 1    1
       c4  −c4 ),  B2 = ( c2   c6
                          c6  −c2 ),  D4Q = ( c1   c3   c7   c5
                                              c3  −c7  −c5  −c1
                                              c5  −c1   c3   c7
                                              c7  −c5  −c1   c3 ). (3.106)
Therefore, the correction matrix can take the following block-diagonal form:
A8 = (1/8) [ 8 ( 1   0
                 0   c4 ) ⊕ 4 ( r1   r2
                               −r2   r1 ) ⊕ ( a1   a2   a3   a4
                                             −b1   b2   b3  −b4
                                             −b4   b3  −b2   b1
                                             −a4  −a3   a2   a1 ) ], (3.107)
where
a1 = c1 + c3 + c5 + c7, a2 = c1 − c3 − c5 + c7,
a3 = c1 + c3 − c5 − c7, a4 = c1 − c3 + c5 − c7,
b1 = c1 − c3 + c5 + c7, b2 = c1 + c3 − c5 + c7,
b3 = c1 + c3 + c5 − c7, b4 = c1 − c3 − c5 − c7,
r1 = c2 + c6, r2 = c2 − c6.
(3.108)
Case N = 16: Denote rk = cos(kπ/32). From the cosine transform matrix C16 of order 16 we generate a new matrix by the following operations:

(1) Rewrite the rows of the matrix C16 in the following order: 0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15.
(2) Rewrite the first eight rows of the new matrix in the order 0, 2, 4, 6, 1, 3, 5, 7.
(3) Reorder the columns of this matrix as follows: 0, 1, 3, 2, 4, 5, 7, 6, 8, 9, 11, 10, 12, 13, 15, 14.
Finally, the DCT matrix of order 16 can be represented by the equivalent block matrix as
C16 =
( C2    C2    C2    C2    C2    C2    C2    C2
  A2   −A2    A2   −A2    A2   −A2    A2   −A2
  B11   B12  −B11  −B12   B11   B12  −B11  −B12
  B21   B22  −B21  −B22   B21   B22  −B21  −B22
  B31   B32   B34   B33  −B31  −B32  −B34  −B33
  B41   B42   B44   B43  −B41  −B42  −B44  −B43
  B51   B52   B54   B53  −B51  −B52  −B54  −B53
  B61   B62   B64   B63  −B61  −B62  −B64  −B63 ), (3.109)
where
C2 = (1 1; r8 −r8),  A2 = (r4 r12; r12 −r4);

B11 = (r2 r6; r6 −r14),  B12 = (r14 r10; −r10 −r2),  B21 = (r10 −r2; r14 −r10),  B22 = (r6 r14; −r2 r6),

B31 = (r1 r3; r3 r9),  B32 = (r7 r5; −r11 r15),  B33 = (r9 r11; −r5 −r1),  B34 = (r15 r13; −r13 −r7),

B41 = (r5 r15; r7 −r11),  B42 = (−r3 −r7; r15 −r3),  B43 = (−r13 r9; r1 r13),  B44 = (r11 r1; −r9 −r5), (3.110)

B51 = (r9 −r5; r11 −r1),  B52 = (r1 −r13; r13 r9),  B53 = (−r15 −r3; −r3 r7),  B54 = (r7 r11; −r5 r15),

B61 = (r13 −r7; r15 −r13),  B62 = (−r5 r1; −r9 r11),  B63 = (r11 r15; r7 −r5),  B64 = (r3 −r9; −r1 r3),

where each 2 × 2 block is written as (row 1; row 2).
Now, the matrix in Eq. (3.109) can be presented as
C16 = ( C2   C2   C2   C2   C2   C2   C2   C2
        A2  −A2   A2  −A2   A2  −A2   A2  −A2
        D4       −D4        D4       −D4
        D8                 −D8             ), (3.111)
where
D4 = ( B11   B12
       B21   B22 ),  D8 = ( B31   B32   B34   B33
                            B41   B42   B44   B43
                            B51   B52   B54   B53
                            B61   B62   B64   B63 ). (3.112)
Now, according to Lemma 3.1.1, we have
A16 = (1/16)[8C2H2 ⊕ 8A2H2 ⊕ 4D4H4 ⊕ 2D8H8]. (3.113)
We introduce the notations
q1 = r2 + r14, q2 = r6 + r10, t1 = r2 − r14, t2 = r6 − r10;
a1 = r1 + r15, a2 = r3 + r13, a3 = r5 + r11, a4 = r7 + r9;
b1 = r1 − r15, b2 = r3 − r13, b3 = r5 − r11, b4 = r7 − r9.
(3.114)
Using Eq. (3.112) and the above-given notations, we find that
C2H2 = ( 2   0
         0   2r8 ),  A2H2 = ( r4 + r12    r4 − r12
                             −r4 + r12    r4 + r12 ),

D4H4 = ( d1,1   d1,2   d1,3   d1,4
         d2,1   d2,2   d2,3   d2,4
         d2,4   d2,3  −d2,2  −d2,1
        −d1,4  −d1,3   d1,2   d1,1 ),
(3.115)
where
d1,1 = q1 + q2,  d1,2 = q1 − q2,  d1,3 = t1 + t2,  d1,4 = t1 − t2,
d2,1 = −q1 + t2,  d2,2 = q1 + t2,  d2,3 = t1 + q2,  d2,4 = −t1 + q2. (3.116)
The elements of matrix D8H8 can be presented as
P1,1 = a1 + a2 + a3 + a4, P1,2 = a1 − a2 − a3 + a4,
P1,3 = a1 + a2 − a3 − a4, P1,4 = a1 − a2 + a3 − a4,
P1,5 = b1 + b2 + b3 + b4, P1,6 = b1 − b2 − b3 + b4,
P1,7 = b1 + b2 − b3 − b4, P1,8 = b1 − b2 + b3 − b4;
P2,1 = −b1 + b2 − b4 − a3, P2,2 = b1 + b2 + b4 − a3,
P2,3 = b1 + b2 − b4 + a3, P2,4 = −b1 + b2 + b4 + a3,
P2,5 = a1 + a2 + a4 + b3, P2,6 = −a1 + a2 − a4 + b3,
P2,7 = −a1 + a2 + a4 − b3, P2,8 = a1 + a2 − a4 − b3;
P3,1 = a1 − a2 + a3 − b4, P3,2 = −a1 − a2 + a3 + b4,
P3,3 = a1 + a2 + a3 + b4, P3,4 = −a1 + a2 + a3 − b4,
P3,5 = −b1 − b2 + b3 − a4, P3,6 = b1 − b2 + b3 + a4,
P3,7 = −b1 + b2 + b3 + a4, P3,8 = b1 + b2 + b3 − a4;
P4,1 = a1 − a3 − b2 + b4, P4,2 = a1 + a2 + b2 + b4,
P4,3 = −a1 − a3 + b2 + b4, P4,4 = −a1 + a3 − b2 + b4,
P4,5 = −b1 + b3 − a2 + a4, P4,6 = −b1 − b3 + a2 + a4,
P4,7 = b1 + b3 + a2 + a4, P4,8 = b1 − b3 − a2 + a4;
P5,1 = P4,8, P5,2 = P4,7, P5,3 = P4,6, P5,4 = P4,5,
P5,5 = −P4,4, P5,6 = −P4,3, P5,7 = −P4,2, P5,8 = −P4,1;
P6,1 = −P3,8, P6,2 = −P3,7, P6,3 = −P3,6, P6,4 = −P3,5,
P6,5 = P3,4, P6,6 = P3,3, P6,7 = P3,2, P6,8 = P3,1;
P7,1 = P2,8, P7,2 = P2,7, P7,3 = P2,6, P7,4 = P2,5,
P7,5 = −P2,4, P7,6 = −P2,3, P7,7 = −P2,2, P7,8 = −P2,1;
P8,1 = −P1,8, P8,2 = −P1,7, P8,3 = −P1,6, P8,4 = −P1,5,
P8,5 = P1,4, P8,6 = P1,3, P8,7 = P1,2, P8,8 = P1,1.
(3.117)
Therefore, the matrix P = D8H8 is given by
P =
( P1,1   P1,2   P1,3   P1,4   P1,5   P1,6   P1,7   P1,8
  P2,1   P2,2   P2,3   P2,4   P2,5   P2,6   P2,7   P2,8
  P3,1   P3,2   P3,3   P3,4   P3,5   P3,6   P3,7   P3,8
  P4,1   P4,2   P4,3   P4,4   P4,5   P4,6   P4,7   P4,8
  P4,8   P4,7   P4,6   P4,5  −P4,4  −P4,3  −P4,2  −P4,1
 −P3,8  −P3,7  −P3,6  −P3,5   P3,4   P3,3   P3,2   P3,1
  P2,8   P2,7   P2,6   P2,5  −P2,4  −P2,3  −P2,2  −P2,1
 −P1,8  −P1,7  −P1,6  −P1,5   P1,4   P1,3   P1,2   P1,1 ). (3.118)
The following shows that the cosine transform can be done via a fast algorithm. Denote y = H16x. Then, z = A16y. Using Eq. (3.113), we find that
z = 8C2H2 (y0, y1)ᵀ ⊕ 8A2H2 (y2, y3)ᵀ ⊕ 4D4H4 (y4, y5, y6, y7)ᵀ ⊕ 2D8H8 (y8, y9, . . . , y15)ᵀ. (3.119)
From Eqs. (3.115) and (3.116), we obtain (here s = √2)
z0 = 2y0,
z1 = sy1,
z2 = r4(y2 + y3) + r12(y2 − y3),
z3 = r4(y2 − y3) + r12(y2 + y3),
z4 = q1(y4 + y5) + q2(y4 − y5) + t1(y6 + y7) + t2(y6 − y7),
z5 = −q1(y4 − y5) + t2(y4 + y5) + t1(y6 − y7) + q2(y6 + y7),
z6 = −t1(y4 − y5) + q2(y4 + y5) − q1(y6 − y7) − t2(y6 + y7),
z7 = −t1(y4 + y5) + t2(y4 − y5) + q1(y6 + y7) − q2(y6 − y7).
(3.120)
Now, using the following notations:
n1 = y8 + y9, n2 = y10 + y11, n3 = y12 + y13, n4 = y14 + y15,
m1 = y8 − y9,  m2 = y10 − y11,  m3 = y12 − y13,  m4 = y14 − y15, (3.121)
we obtain
z8 = a1(n1 + n2) + a2(m1 + m2) + a3(m1 − m2) + a4(n1 − n2)
+ b1(n3 + n4) + b2(m3 + m4) + b3(m3 − m4) + b4(n3 − n4),
z9 = −b1(m1 − m2) + b2(n1 + n2) − a3(n1 − n2) − b4(m1 + m2)
+ a1(m3 − m4) + a2(n3 + n4) + b3(n3 − n4) + a4(m3 + m4),
z10 = a1(m1 + m2) − a2(n1 − n2) + a3(n1 + n2) − b4(m1 − m2)
− b1(m3 + m4) − b2(n3 − n4) + b3(n3 + n4) − a4(m3 − m4),
z11 = a1(n1 − n2) + b4(n1 + n2) − a3(m1 + m2) − b2(m1 − m2)
− b1(n3 − n4) + a4(n3 + n4) + b3(m3 + m4) − a2(m3 − m4),
z12 = b1(n1 − n2) − b3(m1 + m2) − a2(m1 − m2) + a4(n1 + n2)
+ a1(n3 − n4) − a3(m3 + m4) + b2(m3 − m4) − b4(n3 + n4),
z13 = −b1(m1 + m2) − b2(n1 − n2) − b3(n1 + n2) + a4(m1 − m2)
− a1(m3 + m4) + a2(n3 − n4) + a3(n3 + n4) − b4(m3 − m4),
z14 = a1(m1 − m2) + a2(n1 + n2) − a4(m1 + m2) − b3(n1 − n2)
+ b1(m3 − m4) − b2(n3 + n4) − b4(m3 + m4) − a3(n3 − n4),
z15 = −b1(n1 + n2) + b2(m1 + m2) − b3(m1 − m2) + b4(n1 − n2)
+ a1(n3 + n4) − a2(m3 + m4) + a3(m3 − m4) − a4(n3 − n4).
(3.122)
Algorithm: The 16-point cosine transform algorithm using the HT is formulated as follows:

Step 1. Input column vector x = (x0, x1, . . . , x15).
Step 2. Perform a 16-point HT, y = H16x.
Step 3. Compute the coefficients zi, i = 0, 1, . . . , 15.
Step 4. Perform shift operations (three bits for z0, . . . , z3, two bits for z4, . . . , z7, and one bit for z8, . . . , z15).
Step 5. Output the results of Step 4.
The flow graphs of the cosine correction transform are given in Figs. 3.10–3.13.
3.5 Fast Haar Transform
Let H2 be a Hadamard matrix of order 2. The Haar matrix9,33,81,82,84 of order 2ⁿ⁺¹ can be represented as
X2ⁿ⁺¹ = ( X2ⁿ ⊗ (1  1)
          2^(n/2) I2ⁿ ⊗ (1  −1) ). (3.123)
We can check that XN is an orthogonal matrix of order N (N = 2ⁿ), i.e.,

XN XᵀN = N IN. (3.124)
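The recursion (3.123) and the orthogonality relation (3.124) can be checked together. The sketch below builds XN recursively in pure Python (floats are used for √2) and verifies XN XNᵀ = N·IN for N = 2, 4, 8:

```python
import math  # noqa: F401  (2.0 ** (m / 2.0) covers the square roots)

def haar(n):
    # recursion (3.123): X_{2^(m+1)} = ( X_{2^m} kron (1,1) ; 2^(m/2) I kron (1,-1) )
    X = [[1.0]]
    for m in range(n):
        size = 1 << m
        top = [[v for v in row for _ in (0, 1)] for row in X]       # X kron (1, 1)
        scale = 2.0 ** (m / 2.0)
        bottom = [[scale * (1 if j == 2 * i else -1 if j == 2 * i + 1 else 0)
                   for j in range(2 * size)] for i in range(size)]   # scaled I kron (1, -1)
        X = top + bottom
    return X

def max_orth_err(X):
    N = len(X)
    return max(abs(sum(a * b for a, b in zip(X[i], X[j])) - (N if i == j else 0.0))
               for i in range(N) for j in range(N))

haar_err = max(max_orth_err(haar(n)) for n in (1, 2, 3))
```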
Figure 3.10 Flow graph of the computation of components zi, i = 0, 1, . . . , 7.
From Eq. (3.123), it follows that the matrix X2ⁿ can be represented as

X2ⁿ ≡ ( X2ⁿ⁻¹                X2ⁿ⁻¹
        2^((n−1)/2) I2ⁿ⁻¹   −2^((n−1)/2) I2ⁿ⁻¹ ). (3.125)
Hence, according to Lemma 3.1.1, the correction matrix
AN = (1/N) XN HN (3.126)

has a block-diagonal structure, where HN is a Sylvester–Hadamard matrix of order N.
Without loss of generality, we can prove it for the cases N = 4, 8, and 16.
Case N = 4: The discrete Haar transform matrix of order 4 has the form
X4 = ( X2      X2
       √2I2   −√2I2 ), (3.127)
Figure 3.11 Flow graph of the computation of Ai, Bi, i = 1, 2, 3, 4, and components z8 and z9.
Figure 3.12 Flow graph of the computation of components zi, i = 10, 11, 12, 13.
Figure 3.13 Flow graph of the computation of components z14 and z15.
where X2 = H2. We can see that the correction matrix, i.e., A4 = (1/4)X4H4, has a block-diagonal form:
A4 = (1/4)(2I2 ⊕ 2√2H2) = (1/2) ( 1   0    0     0
                                  0   1    0     0
                                  0   0    √2    √2
                                  0   0    √2   −√2 ). (3.128)
Case N = 8: From Eq. (3.125), we obtain
X8 = ( H2      H2      H2      H2
       √2I2   −√2I2    √2I2   −√2I2
       2I4                    −2I4  ). (3.129)
In this case, the correction matrix is represented as
A8 = (1/8)(4I2 ⊕ 4√2H2 ⊕ 4H4) =
(1/2)
( 1   0    0     0   0   0   0   0
  0   1    0     0   0   0   0   0
  0   0    √2    √2  0   0   0   0
  0   0    √2   −√2  0   0   0   0
  0   0    0     0   1   1   1   1
  0   0    0     0   1  −1   1  −1
  0   0    0     0   1   1  −1  −1
  0   0    0     0   1  −1  −1   1 ). (3.130)
Case N = 16: Consider a Haar matrix of order 16. For n = 4, from Eq. (3.125), we obtain
X16 = ( X8           X8
        2^(3/2)I8   −2^(3/2)I8 ),  X8 = ( X4    X4
                                          2I4  −2I4 ),  X4 = ( X2     X2
                                                               √2I2  −√2I2 ). (3.131)
Note that
X2 = H2 = ( +   +
            +   − ). (3.132)
Hence, using Eq. (3.131), the Haar transform matrix X16 of order 16 is represented as
X16 =
( H2      H2      H2      H2      H2      H2      H2      H2
  √2I2   −√2I2    √2I2   −√2I2    √2I2   −√2I2    √2I2   −√2I2
  2I4            −2I4             2I4            −2I4
  2^(3/2)I8                      −2^(3/2)I8                   ),
(3.133)

or as (here s = √2)

( 1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
  1  −1   1  −1   1  −1   1  −1   1  −1   1  −1   1  −1   1  −1
  s   0  −s   0   s   0  −s   0   s   0  −s   0   s   0  −s   0
  0   s   0  −s   0   s   0  −s   0   s   0  −s   0   s   0  −s
  2   0   0   0  −2   0   0   0   2   0   0   0  −2   0   0   0
  0   2   0   0   0  −2   0   0   0   2   0   0   0  −2   0   0
  0   0   2   0   0   0  −2   0   0   0   2   0   0   0  −2   0
  0   0   0   2   0   0   0  −2   0   0   0   2   0   0   0  −2
  2s  0   0   0   0   0   0   0  −2s  0   0   0   0   0   0   0
  0   2s  0   0   0   0   0   0   0  −2s  0   0   0   0   0   0
  0   0   2s  0   0   0   0   0   0   0  −2s  0   0   0   0   0
  0   0   0   2s  0   0   0   0   0   0   0  −2s  0   0   0   0
  0   0   0   0   2s  0   0   0   0   0   0   0  −2s  0   0   0
  0   0   0   0   0   2s  0   0   0   0   0   0   0  −2s  0   0
  0   0   0   0   0   0   2s  0   0   0   0   0   0   0  −2s  0
  0   0   0   0   0   0   0   2s  0   0   0   0   0   0   0  −2s ).
Figure 3.14 Flow graph of a 16-point Haar transform algorithm.
The corresponding correction matrix takes the following block-diagonal form:
A16 = (1/4)(4I2 ⊕ 2sH2 ⊕ 2H4 ⊕ sH8). (3.134)
Now we want to show that the Haar transform can be realized via a fast algorithm. Denote y = H16x and z = A16y. Using Eq. (3.134), we find that
z = (1/4) [ 4 (y0, y1)ᵀ ⊕ 2sH2 (y2, y3)ᵀ ⊕ 2H4 (y4, . . . , y7)ᵀ ⊕ sH8 (y8, . . . , y15)ᵀ ]. (3.135)
Algorithm: A 16-point Haar transform algorithm using an HT is formulated as follows:

Step 1. Input the column vector x = (x0, x1, . . . , x15).
Step 2. Perform the 16-point HT y = H16x.
Step 3. Perform the 2-, 4-, and 8-point HT of vectors (y2, y3), (y4, . . . , y7), and (y8, . . . , y15), respectively.
Step 4. Output the results of step 3 (see Fig. 3.14).
3.6 Integer Slant Transforms
In the past decade, fast orthogonal transforms have been widely used in such areas as data compression, pattern recognition and image reconstruction, interpolation, linear filtering, and spectral analysis. The increasing speed and cost requirements of many applications have stimulated the development of new fast unitary transforms such as HT and slant transforms. We can observe a considerable interest in many applications of the slant transform.
A phenomenon characteristic of digital images is the presence of approximately constant or uniformly changing gray levels over a considerable distance or area. The slant transform is specifically defined for efficient representation of such images. Intel uses the slant transform in their "Indeo" video compression algorithm.
Historically, Enomoto and Shibata conceived the first 8-point slant transform in 1971. The slant vector, which can properly follow gradual changes in the brightness of natural images, was a major innovation.19 Since its development, the slant transform has been generalized by Pratt, Kane, and Welch,20 who presented the procedure for computing the slant transform matrix of order 2ⁿ. The slant transform has the best compaction performance among the nonsinusoidal fast orthogonal transforms (Haar, Walsh–Hadamard, and slant), but its performance is not optimal when measured against the sinusoidal transforms such as the DCT and the KLT9 for first-order Markov models. In general, there is a tradeoff between the performance of an orthogonal transform and its computational complexity. The KLT is an optimal transform but has a high computational complexity. Therefore, the need arises for slant transform improvement schemes that yield performance comparable to that of the KLT and the DCT, without incurring their computational complexity.
To improve the performance of the slant HT, we introduce in this chapter a construction concept for a class of parametric slant transforms that includes, as a special case, the commonly used slant transforms and HTs. Many applications have motivated modifications and generalizations of the slant transform. Agaian and Duvalian21 developed two new classes of kⁿ-point slant transforms for an arbitrary integer k. The first class was constructed via HTs and the second one via Haar transforms. The same authors have also investigated Walsh–Hadamard, dyadic-ordered, and sequency-ordered slant transforms. Recently, Agaian27 introduced a new class of transforms called the multiple β slant-HTs. This class of transforms includes, as a special case, the Walsh–Hadamard and the classical slant HTs. Agaian and Duvalian have shown that this new technique outperforms the classical slant-HT.
In the application of wavelet bases to denoising and image compression, for example, the time localization and the approximation order (number of vanishing moments) of the basis are both important. Good time-localization properties lead to the good representation of edges. The slant transform is the prototype of the slantlet wavelet transform, which could be based on the design of wavelet filter banks with regard to both time localization and approximation order.22
130 Chapter 3
Most linear transforms, however, yield noninteger outputs even when the inputs are integers, making them unsuitable for many applications such as lossless compression. In general, the transform coefficients theoretically require infinitely many bits for perfect representation. In such cases, the transform coefficients must be rounded or truncated to a finite precision that depends on the number of bits available for their representation. This, of course, introduces an error, which in general degrades the performance of the transform. Recently, reversible integer-to-integer wavelet transforms have been introduced.23 An integer-to-integer transform is an attractive approach to solving the rounding problem, and it offers easier hardware implementation. This is because integer transforms can be exactly represented by finite bits.
The purpose of Section 3.6 is to show how to construct an integer slant transform and to reduce the computational complexity of the algorithm for computing the 2D slant transform. An effective algorithm for computing the 1D slant transform via the Hadamard transform is also introduced.
3.6.1 Slant HTs

This subsection briefly reviews slant HTs. Currently, slant transforms are usually constructed via Hadamard or Haar transforms.9,33–35,90 The slant transform satisfies the following properties:9,86

• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• Its basis vectors are orthonormal.
• It can be calculated using a fast algorithm.
The forward and inverse slant HTs of order N = 2^n, n = 1, 2, . . ., are defined as

X = S_{2^n} x,   x = S_{2^n}^T X,   (3.136)

where S_{2^n} is generated recursively:20

S_{2^n} = (1/√2) Q_{2^n} [ S_{2^{n−1}}   O_{2^{n−1}}
                           O_{2^{n−1}}   S_{2^{n−1}} ] = (1/√2) Q_{2^n} (I_2 ⊗ S_{2^{n−1}}),   (3.137)

where O_m denotes the m × m zero matrix,

S_2 = (1/√2) [ 1   1
               1  −1 ],

and Q_{2^n} is the 2^n × 2^n matrix defined as

Q_{2^n} = [ 1          0          O_0              1          0          O_0
            a_{2^n}    b_{2^n}    O_0             −a_{2^n}    b_{2^n}    O_0
            O_0^T      O_0^T      I_{2^{n−1}−2}    O_0^T      O_0^T      I_{2^{n−1}−2}
            0          1          O_0              0         −1          O_0
           −b_{2^n}    a_{2^n}    O_0              b_{2^n}    a_{2^n}    O_0
            O_0^T      O_0^T      I_{2^{n−1}−2}    O_0^T      O_0^T     −I_{2^{n−1}−2} ],   (3.138)
Discrete Orthogonal Transforms and Hadamard Matrices 131
where O_0 and O_0^T denote the zero row and zero column vectors, respectively, and the parameters a_{2^n} and b_{2^n} are defined recursively by

b_{2^n} = (1 + 4a_{2^{n−1}}^2)^{−1/2},   a_{2^n} = 2 b_{2^n} a_{2^{n−1}},   a_2 = 1.   (3.139)

From Eq. (3.138), it follows that Q_{2^n} Q_{2^n}^T is a diagonal matrix, i.e.,

Q_{2^n} Q_{2^n}^T = diag{2, 2(a_{2^n}^2 + b_{2^n}^2), 2I_{2^{n−1}−2}, 2, 2(a_{2^n}^2 + b_{2^n}^2), 2I_{2^{n−1}−2}}.   (3.140)

Because a_{2^n}^2 + b_{2^n}^2 = 1, Q_{2^n} is an orthogonal matrix and Q_{2^n} Q_{2^n}^T = 2I_{2^n}.
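The recursion in Eqs. (3.137)–(3.139) translates directly into code. The following sketch is our own NumPy illustration (the function name `slant_matrix` is not from the text); it builds S_{2^n} and confirms the orthogonality just derived:

```python
import numpy as np

def slant_matrix(n):
    """Slant matrix S_{2^n} via the recursion of Eqs. (3.137)-(3.139)."""
    S = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # S_2
    a = 1.0                                                   # a_2 = 1
    for k in range(2, n + 1):
        N, h = 2 ** k, 2 ** (k - 1)
        b = 1.0 / np.sqrt(1.0 + 4.0 * a * a)                  # Eq. (3.139)
        a = 2.0 * b * a
        Q = np.zeros((N, N))                                  # Eq. (3.138)
        Q[0, 0] = Q[0, h] = 1.0
        Q[1, 0], Q[1, 1], Q[1, h], Q[1, h + 1] = a, b, -a, b
        Q[h, 1], Q[h, h + 1] = 1.0, -1.0
        Q[h + 1, 0], Q[h + 1, 1], Q[h + 1, h], Q[h + 1, h + 1] = -b, a, b, a
        for j in range(2, h):                                 # identity blocks
            Q[j, j] = Q[j, h + j] = 1.0
            Q[h + j, j], Q[h + j, h + j] = 1.0, -1.0
        S = Q @ np.kron(np.eye(2), S) / np.sqrt(2.0)          # Eq. (3.137)
    return S

S8 = slant_matrix(3)
print(np.allclose(S8 @ S8.T, np.eye(8)))   # True: the rows are orthonormal
```

For n = 2 this reproduces the matrix S_4 of Eq. (3.141), with second row (1/(2√5))(3, 1, −1, −3).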
Example 3.6.1: Sequency-ordered slant-Hadamard matrices of orders 4 and 8:

(1) Slant Hadamard matrix of order 4:

S_4 = (1/2) diag(1, 1/√5, 1, 1/√5) ×
    [ 1   1   1   1
      3   1  −1  −3
      1  −1  −1   1
      1  −3   3  −1 ].   (3.141)
(2) Slant Hadamard matrix of order 8:

S_8 = (1/√8) diag(1, 1/√21, 1/√5, 1/√105, 1, 1, 1/√5, 1/√5) ×
    [ 1   1   1    1     1    1   1   1
      7   5   3    1    −1   −3  −5  −7
      3   1  −1   −3    −3   −1   1   3
      7  −1  −9  −17    17    9   1  −7
      1  −1  −1    1     1   −1  −1   1
      1  −1  −1    1    −1    1   1  −1
      1  −3   3   −1    −1    3  −3   1
      1  −3   3   −1     1   −3   3  −1 ].   (3.142)
3.6.2 Parametric slant HT

Construction 1:91–93 We introduce the following matrices:

[PS]_4(a, b) = [ 1   1   1   1
                 a   b  −b  −a
                 1  −1  −1   1
                 b  −a   a  −b ],   (3.143)
[PS]_8(a, b, c, d, e, f) = [ 1   1   1   1   1   1   1   1
                             a   b   c   d  −d  −c  −b  −a
                             e   f  −f  −e  −e  −f   f   e
                             b  −d  −a  −c   c   a   d  −b
                             1  −1  −1   1   1  −1  −1   1
                             c  −a   d   b  −b  −d   a  −c
                             f  −e   e  −f  −f   e  −e   f
                             d  −c   b  −a   a  −b   c  −d ].   (3.144)
We call the matrices in Eqs. (3.143) and (3.144) parametric slant Hadamard matrices. Note that [PS]_4(1, 1) and [PS]_8(1, 1, 1, 1, 1, 1) are Hadamard matrices of order 4 and 8, respectively. Note also that the matrix in Eq. (3.144) is a slant-type matrix if it satisfies the following conditions:

a ≥ b ≥ c ≥ d,   e ≥ f,   and   ab = ac + bd + cd.

We can check that

[PS]_4(a, b) [PS]_4^T(a, b) = I_2 ⊗ [4 ⊕ 2(a² + b²)],
[PS]_8(a, . . . , f) [PS]_8^T(a, . . . , f) = I_2 ⊗ [8 ⊕ 2(a² + · · · + d²) ⊕ 4(e² + f²) ⊕ 2(a² + · · · + d²)].   (3.145)
Note that a parametric orthonormal slant Hadamard matrix of order 8 can be defined as

[PS]_8(a, b, c, d, e, f) = (1/√8) diag(1, t, s, t, 1, t, s, t) ×
    [ 1   1   1   1   1   1   1   1
      a   b   c   d  −d  −c  −b  −a
      e   f  −f  −e  −e  −f   f   e
      b  −d  −a  −c   c   a   d  −b
      1  −1  −1   1   1  −1  −1   1
      c  −a   d   b  −b  −d   a  −c
      f  −e   e  −f  −f   e  −e   f
      d  −c   b  −a   a  −b   c  −d ],   (3.146)

where t = 2/√(a² + · · · + d²) and s = √(2/(e² + f²)).

It is not difficult to verify that if a = 4, b = c = e = 2, f = 1, and d = 0, then the orthonormal slant Hadamard matrix obtained from Eq. (3.146) has the following form:
(1/√8) diag(1, 1/√6, √(2/5), 1/√6, 1, 1/√6, √(2/5), 1/√6) ×
    [ 1   1   1   1   1   1   1   1
      4   2   2   0   0  −2  −2  −4
      2   1  −1  −2  −2  −1   1   2
      2   0  −4  −2   2   4   0  −2
      1  −1  −1   1   1  −1  −1   1
      2  −4   0   2  −2   0   4  −2
      1  −2   2  −1  −1   2  −2   1
      0  −2   2  −4   4  −2   2   0 ].   (3.147)
Construction 2:25–27 Introduce the following expressions for a_{2^n} and b_{2^n} [see Eq. (3.139)] to construct parametric slant HTs of order 2^n:

a_{2^n} = √( 3 · 2^{2n−2} / (4 · 2^{2n−2} − β_{2^n}) ),   b_{2^n} = √( (2^{2n−2} − β_{2^n}) / (4 · 2^{2n−2} − β_{2^n}) ),   (3.148)

where

a_2 = 1   and   −2^{2n−2} ≤ β_{2^n} ≤ 2^{2n−2}.   (3.149)
It can be shown that for |β_{2^n}| > 2^{2n−2}, slant HTs lose their orthogonality. The parametric slant-transform matrices fulfill the requirements of the classical slant-transform matrix outlined in previous sections (see also Ref. 27). However, the parametric slant-transform matrix is a parametric matrix with parameters β_4, β_8, . . . , β_{2^n}.
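As a quick check on Eq. (3.148), the parameters can be computed for any admissible β_{2^n}; note that a_{2^n}² + b_{2^n}² = 1 automatically, and β = 1 reproduces the classical values a_4 = 2/√5, b_4 = 1/√5. A small sketch (our own helper, not from the book):

```python
import math

def slant_ab(n, beta):
    """a_{2^n} and b_{2^n} from Eq. (3.148); valid for |beta| <= 2^(2n-2)."""
    p = 2 ** (2 * n - 2)
    if abs(beta) > p:
        raise ValueError("transform loses orthogonality outside this range")
    a = math.sqrt(3 * p / (4 * p - beta))
    b = math.sqrt((p - beta) / (4 * p - beta))
    return a, b

a4, b4 = slant_ab(2, 1)               # classical slant: a_4 = 2/sqrt(5), b_4 = 1/sqrt(5)
print(round(a4 * a4 + b4 * b4, 12))   # 1.0
```

Taking β_{2^n} = 2^{2n−2} gives b_{2^n} = 0 and a_{2^n} = 1, the Walsh–Hadamard case listed below.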
Properties:
(1) The parametric slant transform falls into one of at least four different categories, depending on the β_{2^n} values chosen, and these include as special cases the slant HT and the HT of order 2^n. In particular,

• For β_4 = β_8 = · · · = β_{2^n} = 1, we obtain the classical slant transform.20,26,27
• For β_{2^n} = 2^{2n−2} for all n ≥ 2, we obtain the WHT.9
• For β_4 = β_8 = · · · = β_{2^n} = β, |β| ≤ 4, we refer to this case as the constant-β slant transform.27
• For β_4 ≠ β_8 ≠ · · · ≠ β_{2^n}, −2^{2n−2} ≤ β_{2^n} ≤ 2^{2n−2}, n = 2, 3, 4, . . ., we refer to this case as the multiple-β slant transform. In this case, some of the β_{2^n} values can be equal, but not all of them.

(2) Parametric slant-transform matrices fulfill the following requirements of the classical slant transform:

• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• Its basis vectors are orthonormal.
• It has a fast algorithm (see Section 1.2).
(3) It is easily verified that the parametric slant-transform matrix can be represented as

S_{2^n} = { M_4 H_4            for n = 2,
          { C_{2^n} H_{2^n}    for n > 2,   (3.150)

with

C_{2^n} = M_{2^n} [ C_{2^{n−1}}   O_{2^{n−1}}
                    O_{2^{n−1}}   C_{2^{n−1}} ],

and

M_{2^n} = [ 1        0          O_0              0          0        O_0
            0        b_{2^n}    O_0              a_{2^n}    0        O_0
            O_0^T    O_0^T      I_{2^{n−1}−2}    O_0^T      O_0^T    O_{2^{n−1}−2}
            0        0          O_0              0          1        O_0
            0        a_{2^n}    O_0             −b_{2^n}    0        O_0
            O_0^T    O_0^T      O_{2^{n−1}−2}    O_0^T      O_0^T    I_{2^{n−1}−2} ],   (3.151)

where M_2 = I_2, O_m denotes a zero matrix of order m, I_m denotes an identity matrix of order m, H_{2^n} is the Hadamard-ordered Walsh–Hadamard matrix of size 2^n, the parameters a_{2^n} and b_{2^n} are given in Eq. (3.148), and O_0 and O_0^T denote the zero row and zero column, both of length 2^{n−1} − 2, respectively.
Example: For 2^n = 8 we have, respectively, the classical case (β_{2^n} = 1), the constant-β case (β_{2^n} = 1.7), the multiple-β case (β_4 = 1.7 and β_8 = 8.1), and the Hadamard case (β_4 = 4, β_8 = 16). Figure 3.15 shows the basis vectors for this example.
(1) Classical case (β_{2^n} = 1):

S_Classical = (1/√8) ×
    [ 1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
      1.5275   1.0911   0.6547   0.2182  −0.2182  −0.6547  −1.0911  −1.5275
      1.0000  −1.0000  −1.0000   1.0000   1.0000  −1.0000  −1.0000   1.0000
      0.4472  −1.3416   1.3416  −0.4472   0.4472  −1.3416   1.3416  −0.4472
      1.3416   0.4472  −0.4472  −1.3416  −1.3416  −0.4472   0.4472   1.3416
      0.6831  −0.0976  −0.8783  −1.6590   1.6590   0.8783   0.0976  −0.6831
      1.0000  −1.0000  −1.0000   1.0000  −1.0000   1.0000   1.0000  −1.0000
      0.4472  −1.3416   1.3416  −0.4472  −0.4472   1.3416  −1.3416   0.4472 ].   (3.152)
(2) Constant-β case (β_{2^n} = 1.7):

S_Const = (1/√8) ×
    [ 1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
      1.5088   1.1245   0.6310   0.2467  −0.2467  −0.6310  −1.1245  −1.5088
      1.0000  −1.0000  −1.0000   1.0000   1.0000  −1.0000  −1.0000   1.0000
      0.5150  −1.3171   1.3171  −0.5150   0.5150  −1.3171   1.3171  −0.5150
      1.3171   0.5150  −0.5150  −1.3171  −1.3171  −0.5150   0.5150   1.3171
      0.6770  −0.0270  −0.9312  −1.6352   1.6352   0.9312   0.0270  −0.6770
      1.0000  −1.0000  −1.0000   1.0000  −1.0000   1.0000   1.0000  −1.0000
      0.5150  −1.3171   1.3171  −0.5150  −0.5150   1.3171  −1.3171   0.5150 ].   (3.153)
Figure 3.15 Parametric slant-transform basis vectors for (2n = 8): (a) classical case,(b) constant-β case (β2n = 1.7), (c) multiple-β case (β4 = 1.7 and β8 = 8.1), and (d) Hadamardcase (β4 = 4, β8 = 16).
(3) Multiple-β case (β_4 = 1.7, β_8 = 8.1):

S_Multiple = (1/√8) ×
    [ 1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
      1.4218   1.1203   0.7330   0.4315  −0.4315  −0.7330  −1.1203  −1.4218
      1.0000  −1.0000  −1.0000   1.0000   1.0000  −1.0000  −1.0000   1.0000
      0.5150  −1.3171   1.3171  −0.5150   0.5150  −1.3171   1.3171  −0.5150
      1.3171   0.5150  −0.5150  −1.3171  −1.3171  −0.5150   0.5150   1.3171
      0.8446   0.1013  −0.8532  −1.5964   1.5964   0.8532  −0.1013  −0.8446
      1.0000  −1.0000  −1.0000   1.0000  −1.0000   1.0000   1.0000  −1.0000
      0.5150  −1.3171   1.3171  −0.5150  −0.5150   1.3171  −1.3171   0.5150 ].   (3.154)
(4) Hadamard case (β_4 = 4, β_8 = 16):

S_Had = (1/√8) ×
    [ +  +  +  +  +  +  +  +
      +  +  +  +  −  −  −  −
      +  +  −  −  −  −  +  +
      +  +  −  −  +  +  −  −
      +  −  −  +  +  −  −  +
      +  −  −  +  −  +  +  −
      +  −  +  −  −  +  −  +
      +  −  +  −  +  −  +  − ].   (3.155)
3.7 Construction of Sequential Integer Slant HTs

This section presents a new class of sequential integer slant HTs. The sequential number of a function is the number of sign inversions or “zero crossings” on the interval of definition. A matrix is said to be sequential if the sequential number of its rows grows with the row index.

Sequency ordering, used originally in Ref. 23, is the most popular ordering in signal theory because it ranks the transform coefficients roughly according to their variances for signal statistics commonly encountered in practice, and because of the analogies with the frequency ordering of the Fourier functions.
Lemma 3.7.1:25,26 Let S_N and S_N^{−1} be forward and inverse sequential slant-transform matrices. Then

S_{2N} = [H_2 ⊗ A_1, H_1 ⊗ A_2, . . . , H_2 ⊗ A_{N−1}, H_1 ⊗ A_N],

S_{2N}^{−1} = [ H_2^{−1} ⊗ B_1
                H_1^{−1} ⊗ B_2
                . . .
                H_2^{−1} ⊗ B_{N−1}
                H_1^{−1} ⊗ B_N ]   (3.156)

are the forward and inverse sequential slant HT matrices of order 2N, where A_i and B_i are the i’th column and i’th row of the S_N and S_N^{−1} matrices, respectively, and

H_2 = [ +  +
        +  − ],   H_1 = [ +  +
                          −  + ].
The construction will be based on the parametric sequential integer slant matrices and Lemma 3.7.1. Examples of parametric sequential slant matrices and their inverse matrices of orders 3 and 5 are given below:

[PS]_3(a, b) = [ 1    1    1
                 a    0   −a
                 b  −2b    b ],

[PS]_3^{−1}(a, b) = [ 1/3    1/(2a)    1/(6b)
                      1/3    0        −1/(3b)
                      1/3   −1/(2a)    1/(6b) ],

[PS]_5(a, b, c) = [ 1     1     1     1     1
                    2b    b     0    −b   −2b
                    a     0   −2a     0     a
                   −b    2b     0   −2b     b
                    2c  −3c    2c   −3c    2c ],

[PS]_5^{−1}(a, b, c) = [ 1/5    1/(5b)     1/(6a)   −1/(10b)    1/(15c)
                         1/5    1/(10b)    0         1/(5b)    −1/(10c)
                         1/5    0         −1/(3a)    0          1/(15c)
                         1/5   −1/(10b)    0        −1/(5b)    −1/(10c)
                         1/5   −1/(5b)     1/(6a)    1/(10b)    1/(15c) ].   (3.157)
Remark 1: The slant transform matrices in Eqs. (3.143), (3.144), and (3.157)possess the sequency property in ordered form.
Remark 2: One can construct a class of slant HTs of order 3 · 2n, 4 · 2n, 5 · 2n,and 8 · 2n, for n = 1, 2, . . ., by utilizing Lemma 3.7.1 and the parametric integerslant-transform matrices in Eqs. (3.143), (3.144), (3.156), and (3.157).
Example 3.7.1: (a) Using Eqs. (3.156) and (3.157), for N = 3 and n = 1, we have the forward integer slant HT matrix [PS]_6 and inverse slant HT matrix [PS]_6^{−1} of order 6:

[PS]_6(a, b) = [ H_2 ⊗ (1, a, b)^T,  H_1 ⊗ (1, 0, −2b)^T,  H_2 ⊗ (1, −a, b)^T ]

             = [ 1    1    1     1     1    1
                 a    a    0     0    −a   −a
                 b    b   −2b   −2b    b    b
                 1   −1   −1     1     1   −1
                 a   −a    0     0    −a    a
                 b   −b    2b   −2b    b   −b ],   (3.158)

[PS]_6^{−1}(a, b) = [ H_2^{−1} ⊗ (1/3, 1/(2a), 1/(6b))
                      H_1^{−1} ⊗ (1/3, 0, −1/(3b))
                      H_2^{−1} ⊗ (1/3, −1/(2a), 1/(6b)) ]

= (1/2) [ 1/3    1/(2a)    1/(6b)     1/3     1/(2a)     1/(6b)
          1/3    1/(2a)    1/(6b)    −1/3    −1/(2a)    −1/(6b)
          1/3    0        −1/(3b)    −1/3     0          1/(3b)
          1/3    0        −1/(3b)     1/3     0         −1/(3b)
          1/3   −1/(2a)    1/(6b)     1/3    −1/(2a)     1/(6b)
          1/3   −1/(2a)    1/(6b)    −1/3     1/(2a)    −1/(6b) ].   (3.159)
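The pair in Eqs. (3.158) and (3.159) can be checked with exact rational arithmetic. A sketch (our own helpers `ps6`, `ps6_inv`, `matmul`; any nonzero rational a, b will do):

```python
from fractions import Fraction as F

def ps6(a, b):
    """Forward matrix of Eq. (3.158)."""
    return [[1, 1, 1, 1, 1, 1],
            [a, a, 0, 0, -a, -a],
            [b, b, -2*b, -2*b, b, b],
            [1, -1, -1, 1, 1, -1],
            [a, -a, 0, 0, -a, a],
            [b, -b, 2*b, -2*b, b, -b]]

def ps6_inv(a, b):
    """Inverse matrix of Eq. (3.159), built from the rows of [PS]_3^{-1}."""
    B1 = [F(1, 3), F(1, 2) / a, F(1, 6) / b]
    B2 = [F(1, 3), F(0), F(-1, 3) / b]
    B3 = [F(1, 3), F(-1, 2) / a, F(1, 6) / b]
    neg = lambda r: [-x for x in r]
    rows = [B1 + B1, B1 + neg(B1),      # H2^{-1} pattern
            B2 + neg(B2), B2 + B2,      # H1^{-1} pattern
            B3 + B3, B3 + neg(B3)]      # H2^{-1} pattern
    return [[F(1, 2) * x for x in row] for row in rows]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b = F(3), F(2)
P = matmul(ps6_inv(a, b), ps6(a, b))
print(all(P[i][j] == (1 if i == j else 0) for i in range(6) for j in range(6)))  # True
```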
(b) Using Eqs. (3.143) and (3.157), for N = 4 and n = 1, we obtain, using the notation c = 2(a² + b²),

[PS]_8(a, b) = [ H_2 ⊗ (1, a, 1, b)^T,  H_1 ⊗ (1, b, −1, −a)^T,  H_2 ⊗ (1, −b, −1, a)^T,  H_1 ⊗ (1, −a, 1, −b)^T ]

             = [ 1    1    1    1    1    1    1    1
                 a    a    b    b   −b   −b   −a   −a
                 1    1   −1   −1   −1   −1    1    1
                 b    b   −a   −a    a    a   −b   −b
                 1   −1   −1    1    1   −1   −1    1
                 a   −a   −b    b   −b    b    a   −a
                 1   −1    1   −1   −1    1   −1    1
                 b   −b    a   −a    a   −a    b   −b ].   (3.160)
[PS]_8^{−1}(a, b) = (1/2) [ ( +  + ; +  − ) ⊗ (1/4, a/c, 1/4, b/c)
                            ( +  − ; +  + ) ⊗ (1/4, b/c, −1/4, −a/c)
                            ( +  + ; +  − ) ⊗ (1/4, −b/c, −1/4, a/c)
                            ( +  − ; +  + ) ⊗ (1/4, −a/c, 1/4, −b/c) ]

= (1/2) [ 1/4    a/c    1/4    b/c     1/4    a/c    1/4    b/c
          1/4    a/c    1/4    b/c    −1/4   −a/c   −1/4   −b/c
          1/4    b/c   −1/4   −a/c    −1/4   −b/c    1/4    a/c
          1/4    b/c   −1/4   −a/c     1/4    b/c   −1/4   −a/c
          1/4   −b/c   −1/4    a/c     1/4   −b/c   −1/4    a/c
          1/4   −b/c   −1/4    a/c    −1/4    b/c    1/4   −a/c
          1/4   −a/c    1/4   −b/c    −1/4    a/c   −1/4    b/c
          1/4   −a/c    1/4   −b/c     1/4   −a/c    1/4   −b/c ].   (3.161)
(c) For N = 5 and n = 1, we have the integer slant HT matrix [PS]_10 of order 10:

[PS]_10(a, b, c) = [ 1     1     1     1     1     1     1     1     1     1
                     2b    2b    b     b     0     0    −b    −b   −2b   −2b
                     a     a     0     0   −2a   −2a     0     0     a     a
                    −b    −b    2b    2b     0     0   −2b   −2b     b     b
                     2c    2c  −3c   −3c    2c    2c   −3c   −3c    2c    2c
                     1    −1    −1     1     1    −1    −1     1     1    −1
                     2b  −2b    −b     b     0     0     b    −b   −2b    2b
                     a    −a     0     0   −2a    2a     0     0     a    −a
                    −b     b   −2b    2b     0     0    2b   −2b     b    −b
                     2c  −2c    3c   −3c    2c   −2c    3c   −3c    2c   −2c ],   (3.162)
[PS]_10^{−1}(a, b, c) = (1/2) [ B_1     B_1
                                B_1    −B_1
                                B_2    −B_2
                                B_2     B_2
                                B_3     B_3
                                B_3    −B_3
                                B_4    −B_4
                                B_4     B_4
                                B_5     B_5
                                B_5    −B_5 ],   (3.163)

where B_i denotes the i’th row of [PS]_5^{−1}(a, b, c) in Eq. (3.157).
Some useful properties of the integer slant HT matrix are given below.

Properties:

(a) The slant HT matrix S_{2N} is an orthogonal matrix only if N is a power of two.
(b) If S_N is sequential, then S_{2N} is also a sequential integer slant HT matrix [see Eq. (3.156)].

Proof: Let R_i and R_i^1 be the i’th rows of S_N and S_{2N}, respectively, and let u_{i,j} be the (i, j)’th element of S_N, i, j = 0, 1, . . . , N − 1. The top half of S_{2N}, R_i^1, i = 0, 1, . . . , N − 1, is obtained from (1, 1) ⊗ u_{i,j}, which does not alter the sequential number of the rows. Thus, the sequential number of R_i^1 is equal to the sequential number of R_i, i = 0, 1, . . . , N − 1. The bottom half of S_{2N}, R_i^1, i = N, N + 1, . . . , 2N − 1, is obtained from (1, −1) ⊗ u_{i,j} and (−1, 1) ⊗ u_{i,j}. This causes the sequential number of each row to increase by N. Thus, the sequential number of each R_i^1, i = N, N + 1, . . . , 2N − 1, is equal to the sequential number of its corresponding R_i, i = 0, 1, . . . , N − 1, plus N. This implies that the sequential number of R_i^1, i = 0, 1, . . . , 2N − 1, grows with its index, and S_{2N} is sequential, as can be seen from the examples given above.
(c) One can construct a slant-transform matrix of the same size in different ways. Indeed, the slant-transform matrix of order N = 16 can be obtained in two ways: by applying Lemma 3.7.1 twice with the initial matrix [PS]_4(a, b) [see Eq. (3.143)], or by applying Lemma 3.7.1 once with the initial matrix [PS]_8(a, b, c, d, e, f) [see Eq. (3.144)]. This shows that we can construct an integer slant transform of order 2^n.
(d) The integer slant matrices [PS]_4(a, b) and [PS]_4^{−1}(a, b) = Q_4(a, b) can be factored as

[PS]_4(a, b) = S_2 S_1, with

S_2 = [ 1    1    0    0
        0    0    b    a
        1   −1    0    0
        0    0   −a    b ],   S_1 = [ 1    0    0    1
                                      0    1    1    0
                                      0    1   −1    0
                                      1    0    0   −1 ],

Q_4(a, b) = (1/(4c)) Q_2 Q_1, with

Q_2 = [ 1    0    1    0
        0    1    0    1
        0    1    0   −1
        1    0   −1    0 ],   Q_1 = [ c    0    c    0
                                      c    0   −c    0
                                      0    a    0    b
                                      0    b    0   −a ],   (3.164)

where

c = (a² + b²)/2.   (3.165)
Let S_N be an integer slant matrix of order N. We introduce the following matrix:

S_{2N} = [H_2 ⊗ A_1, H_1 ⊗ A_2, . . . , H_2 ⊗ A_{N−1}, H_1 ⊗ A_N],   (3.166)

where A_i is the i’th column of S_N, and H_1 = ( + + ; − + ), H_2 = ( + + ; + − ).

We can see that the matrix S_{2N} is an integer slant matrix of order 2N. Indeed, we have

S_{2N} S_{2N}^T = H_2 H_2^T ⊗ A_1 A_1^T + H_1 H_1^T ⊗ A_2 A_2^T + · · · + H_2 H_2^T ⊗ A_{N−1} A_{N−1}^T + H_1 H_1^T ⊗ A_N A_N^T

                = 2I_2 ⊗ Σ_{i=1}^{N} A_i A_i^T = diag{a_1, a_2, . . . , a_{2N}}.   (3.167)

For N = 4, we have

S_8 S_8^T = 2 (I_4 ⊗ [4 ⊕ 2(a² + b²)]).   (3.168)
We can also check that the inverse matrix of S_4(a, b) has the following form:

Q_4(a, b) = (1/(4c)) [ c    a    c    b
                       c    b   −c   −a
                       c   −b   −c    a
                       c   −a    c   −b ],   c = (a² + b²)/2,   (3.169)

i.e., S_4(a, b) Q_4(a, b) = Q_4(a, b) S_4(a, b) = I_4. Moreover, if the parameters a and b are both even or both odd, then c is an integer and the matrix in Eq. (3.169) is, up to the scaling factor 1/(4c), an integer matrix.
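As a check on Eq. (3.169), exact rational arithmetic confirms that S_4(a, b) Q_4(a, b) = I_4. The sketch below is our own illustration (helper names `s4`, `q4`, `matmul` are assumed); it uses a = 3 and b = 1, both odd, so that 4c Q_4 has integer entries:

```python
from fractions import Fraction as F

def s4(a, b):
    """[PS]_4(a, b) of Eq. (3.143)."""
    return [[1, 1, 1, 1], [a, b, -b, -a], [1, -1, -1, 1], [b, -a, a, -b]]

def q4(a, b):
    """Q_4(a, b) of Eq. (3.169), with c = (a^2 + b^2)/2."""
    c = F(a * a + b * b, 2)
    core = [[c, a, c, b], [c, b, -c, -a], [c, -b, -c, a], [c, -a, c, -b]]
    return [[x / (4 * c) for x in row] for row in core]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = matmul(s4(3, 1), q4(3, 1))   # a = 3, b = 1 gives integer c = 5
print(all(P[i][j] == (1 if i == j else 0) for i in range(4) for j in range(4)))  # True
```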
One can verify that the following matrices are mutually inverse matrices of order 8:

S_8(a, b) = [H_2 ⊗ A_1, H_1 ⊗ A_2, H_2 ⊗ A_3, H_1 ⊗ A_4],
Q_8(a, b) = (1/(4c)) [H_2 ⊗ Q_1, H_1 ⊗ Q_2, H_2 ⊗ Q_3, H_1 ⊗ Q_4],   (3.170)

where A_i and Q_i are the i’th column and i’th row of the matrices S_4(a, b) and Q_4(a, b), respectively.
3.7.1 Fast algorithms

It is not difficult to show that the slant matrix in Eq. (3.137) can be represented as

S(2^n) = M_{2^n} (I_2 ⊗ S(2^{n−1})),   (3.171)

where

M_{2^n} = [ 1        0          O_0              0          0        O_0
            0        b_{2^n}    O_0              a_{2^n}    0        O_0
            O_0^T    O_0^T      I_{2^{n−1}−2}    O_0^T      O_0^T    O_{2^{n−1}−2}
            0        0          O_0              0          1        O_0
            0        a_{2^n}    O_0             −b_{2^n}    0        O_0
            O_0^T    O_0^T      O_{2^{n−1}−2}    O_0^T      O_0^T    I_{2^{n−1}−2} ],   (3.172)

where O_m denotes a zero matrix of order m and M_2 = I_2. One can show that a slant matrix of order 2^n can be factored as

S(2^n) = S_n S_{n−1} · · · S_1,   (3.173)

where

S_i = (I_{2^{n−i}} ⊗ M_{2^i}) (I_{2^{n−i}} ⊗ H_2 ⊗ I_{2^{i−1}}).   (3.174)

It is easy to prove that the fast algorithm based on the decomposition in Eq. (3.173) requires C^+(2^n) addition and C^×(2^n) multiplication operations, where

C^+(2^n) = (n + 1)2^n − 2,   C^×(2^n) = 2^{n+1} − 4.   (3.175)
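The counts of Eq. (3.175) can be tabulated directly. The small helper below (our own, with an assumed name) compares the fast algorithm against the roughly N² multiplications of a direct matrix-vector product:

```python
def slant_op_counts(n):
    """Addition and multiplication counts of Eq. (3.175) for a 2^n-point slant transform."""
    adds = (n + 1) * 2 ** n - 2
    mults = 2 ** (n + 1) - 4
    return adds, mults

# For N = 8 the fast algorithm needs 30 additions and 12 multiplications,
# versus N^2 = 64 multiplications for a direct matrix-vector product.
print(slant_op_counts(3))   # (30, 12)
```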
We see that the integer slant matrices in Eqs. (3.143) and (3.169) can be factored as

S_4(a, b) = S_2 S_1 = [ 1    1    0    0        [ 1    0    0    1
                        0    0    b    a          0    1    1    0
                        1   −1    0    0          0    1   −1    0
                        0    0   −a    b ]        1    0    0   −1 ],   (3.176)

Q_4(a, b) = (1/(4c)) Q_2 Q_1 = (1/(4c)) [ 1    0    1    0        [ c    0    c    0
                                          0    1    0    1          c    0   −c    0
                                          0    1    0   −1          0    a    0    b
                                          1    0   −1    0 ]        0    b    0   −a ].   (3.177)
Now, using the above representations of the matrices S_4(a, b) and Q_4(a, b) and the formula in Eq. (3.170), we find the following respective complexities:

• 2^{n+1} additions and 2^n multiplications for the forward transform;
• 2^{n+1} additions and 2^{n+1} multiplications for the inverse transform.
3.7.2 Examples of slant-transform matrices

In this section, we give some sequency-ordered slant-transform matrices obtained from parametric transforms.

S_3 = (1/√3) [ 1       1      1
               √6/2    0     −√6/2
               √2/2   −√2     √2/2 ],

S_4 = (1/2) diag(1, 1/√5, 1, 1/√5) ×
    [ 1   1   1   1
      3   1  −1  −3
      1  −1  −1   1
      1  −3   3  −1 ],   (3.178)
S_5 = (1/√5) diag(1, 1/√2, √(5/6), 1/√2, 1/√6) ×
    [ 1     1    1    1     1
      2     1    0   −1    −2
      1     0   −2    0     1
     −1     2    0   −2     1
      2    −3    2   −3     2 ],

S_6 = (1/√6) diag(1, √(3/8), √(1/2), 1, √(3/8), √(1/2)) ×
    [ 1    1    1    1    1    1
      2    2    0    0   −2   −2
      1    1   −2   −2    1    1
      1   −1   −1    1    1   −1
      2   −2    0    0   −2    2
      1   −1    2   −2    1   −1 ],   (3.179)
S_8 = (1/√8) diag(1, 1/√6, √(2/5), 1/√6, 1, 1/√6, √(2/5), 1/√6) ×
    [ 1    1    1    1    1    1    1    1
      4    2    2    0    0   −2   −2   −4
      2    1   −1   −2   −2   −1    1    2
      2    0   −4   −2    2    4    0   −2
      1   −1   −1    1    1   −1   −1    1
      2   −4    0    2   −2    0    4   −2
      1   −2    2   −1   −1    2   −2    1
      0   −2    2   −4    4   −2    2    0 ],   (3.180)
S_9 = (1/3) diag(1, √(3/20), √(1/2), √(3/4), 1/2, √(3/20), √(3/4), √(1/2), 1/2) ×
    [ 1    1    1    1    1    1    1    1    1
      4    3    2    1    0   −1   −2   −3   −4
      1    1    1   −2   −2   −2    1    1    1
      1    0   −1   −2    0    2    1    0   −1
      3    0   −3    0    0    0   −3    0    3
      2   −1   −4    3    0   −3    4    1   −2
      1   −2    1    0    0    0   −1    2   −1
      1   −2    1    1   −2    1    1   −2    1
      1   −2    1   −2    4   −2    1   −2    1 ],   (3.181)
S_16 = (1/4) diag(1, √(1/85), 1, √(1/5), 1/5, √(1/5), 1/5, √(1/85), 1, √(1/5), 1, √(1/5), 1/5, √(1/5), 1/5, √(1/5)) ×
    [  1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1
      13    9   15   11    5    1    7    3   −3   −7   −1   −5  −11  −15   −9  −13
       1    1    1    1   −1   −1   −1   −1   −1   −1   −1   −1    1    1    1    1
       1    1    1    1   −3   −3   −3   −3    3    3    3    3   −1   −1   −1   −1
       3    1   −1   −3   −9   −3    3    9    9    3   −3   −9   −3   −1    1    3
       3    1   −1   −3   −3   −1    1    3   −3   −1    1    3    3    1   −1   −3
       9    3   −3   −9    3    1   −1   −3   −3   −1    1    3   −9   −3    3    9
      11    5   −3  −13   11    5   −3  −13   11    5   −3  −13   11    5   −3  −13
       1    1   −1   −1    1    1   −1   −1    1    1   −1   −1    1    1   −1   −1
       3   −3   −3    3    1   −1   −1    1   −1    1    1   −1   −3    3    3   −3
       1   −1   −1    1   −1    1    1   −1   −1    1    1   −1    1   −1   −1    1
       1   −1   −1    1   −3    3    3   −3    3   −3   −3    3   −1    1    1   −1
       1   −3    3   −1   −3    9   −9    3    3   −9    9   −3   −1    3   −3    1
       1   −3    3   −1   −1    3   −3    1   −1    3   −3    1    1   −3    3   −1
       3   −9    9   −3    1   −3    3   −1   −1    3   −3    1   −3    9   −9    3
       1   −3    3   −1    1   −3    3   −1    1   −3    3   −1    1   −3    3   −1 ].   (3.182)
3.7.3 Iterative parametric slant Haar transform construction

The forward and inverse parametric slant Haar transforms of order 2^n (n ≥ 1) with parameters β_{2^2}, β_{2^3}, . . . , β_{2^n} are defined as29

X = S_{2^n}(β_{2^2}, β_{2^3}, . . . , β_{2^n}) x,
x = S_{2^n}^T(β_{2^2}, β_{2^3}, . . . , β_{2^n}) X,   (3.183)

where x is an input data vector of length 2^n and S_{2^n} is generated recursively as

S_{2^n} = S_{2^n}(β_{2^2}, β_{2^3}, . . . , β_{2^n}) = Q_{2^n} [ A_2 ⊗ S_{2,2^{n−1}}
                                                                I_2 ⊗ S_{2^{n−1}−2,2^{n−1}} ],   (3.184)

where S_{2,2^{n−1}} is the 2 × 2^{n−1} matrix comprising the first two rows of S_{2^{n−1}}, S_{2^{n−1}−2,2^{n−1}} is the (2^{n−1} − 2) × 2^{n−1} matrix comprising the third through 2^{n−1}'th rows of S_{2^{n−1}}, ⊗ denotes the Kronecker product, and

A_2 = (1/√2) [ 1   1
               1  −1 ].   (3.185)

S_4 is the 4-point parametric slant HT constructed in the previous section. Q_{2^n} is the recursion kernel matrix defined as

Q_{2^n} = [ 1    0          0          0   · · ·   0
            0    b_{2^n}    a_{2^n}    0   · · ·   0
            0    a_{2^n}   −b_{2^n}    0   · · ·   0
            0    0          0          1   · · ·   0
            .    .          .          .    .      .
            0    0          0          0   · · ·   1 ],   (3.186)
and a_{2^n}, b_{2^n} are defined as in Eq. (3.148):

a_{2^n} = √( 3 · 2^{2n−2} / (4 · 2^{2n−2} − β_{2^n}) ),   b_{2^n} = √( (2^{2n−2} − β_{2^n}) / (4 · 2^{2n−2} − β_{2^n}) ),   (3.187)

where

−2^{2n−2} ≤ β_{2^n} ≤ 2^{2n−2},   n ≥ 3.   (3.188)
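The recursion in Eqs. (3.184)–(3.187) can be sketched as follows (our own NumPy illustration; `slant_haar` is an assumed name, and the recursion is started from A_2, which reproduces a 4-point parametric slant matrix at its first step):

```python
import numpy as np

def slant_haar(n, betas):
    """Parametric slant Haar matrix S_{2^n} via Eqs. (3.184)-(3.187).
    betas = [beta_4, beta_8, ..., beta_{2^n}]."""
    A2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    S = A2.copy()                                    # S_2
    for k in range(2, n + 1):
        N = 2 ** k
        p = 2.0 ** (2 * k - 2)
        beta = betas[k - 2]
        a = np.sqrt(3.0 * p / (4.0 * p - beta))      # Eq. (3.187)
        b = np.sqrt((p - beta) / (4.0 * p - beta))
        top = np.kron(A2, S[:2, :])                  # A_2 (x) S_{2,N/2}
        bottom = (np.kron(np.eye(2), S[2:, :])       # I_2 (x) S_{N/2-2,N/2}
                  if S.shape[0] > 2 else np.empty((0, N)))
        S = np.vstack([top, bottom])
        Q = np.eye(N)                                # recursion kernel, Eq. (3.186)
        Q[1:3, 1:3] = [[b, a], [a, -b]]
        S = Q @ S
    return S

S8 = slant_haar(3, [1.0, 1.0])              # classical case
print(np.allclose(S8 @ S8.T, np.eye(8)))    # True
```

Since A_2 and Q_{2^n} are orthogonal, the orthonormality proof given below holds at every step of the recursion, for any admissible β values.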
The following remarks are relevant.

Properties: The parametric slant Haar transform fulfills the following requirements of the classical slant transform:

• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• It has a fast algorithm (see Section 2.2).
• Its basis vectors are orthonormal.

Proof of orthonormality: Let R_1 = A_2 ⊗ S_{2,2^{n−1}} and R_2 = I_2 ⊗ S_{2^{n−1}−2,2^{n−1}}. Then,

R_1 R_1^T = (A_2 ⊗ S_{2,2^{n−1}})(A_2 ⊗ S_{2,2^{n−1}})^T = A_2 A_2^T ⊗ S_{2,2^{n−1}} S_{2,2^{n−1}}^T = I_2 ⊗ I_2 = I_4,   (3.189)

R_1 R_2^T = (A_2 ⊗ S_{2,2^{n−1}})(I_2 ⊗ S_{2^{n−1}−2,2^{n−1}})^T = A_2 I_2^T ⊗ S_{2,2^{n−1}} S_{2^{n−1}−2,2^{n−1}}^T = A_2 ⊗ O_{2,2^{n−1}−2} = O_{4,2^n−4},   (3.190)

R_2 R_1^T = (I_2 ⊗ S_{2^{n−1}−2,2^{n−1}})(A_2 ⊗ S_{2,2^{n−1}})^T = I_2 A_2^T ⊗ S_{2^{n−1}−2,2^{n−1}} S_{2,2^{n−1}}^T = A_2^T ⊗ O_{2^{n−1}−2,2} = O_{2^n−4,4},   (3.191)

R_2 R_2^T = (I_2 ⊗ S_{2^{n−1}−2,2^{n−1}})(I_2 ⊗ S_{2^{n−1}−2,2^{n−1}})^T = I_2 I_2^T ⊗ S_{2^{n−1}−2,2^{n−1}} S_{2^{n−1}−2,2^{n−1}}^T = I_2 ⊗ I_{2^{n−1}−2} = I_{2^n−4},   (3.192)
but

S_{2^n} S_{2^n}^T = Q_{2^n} [ R_1
                              R_2 ] [ R_1^T   R_2^T ] Q_{2^n}^T = Q_{2^n} [ R_1 R_1^T    R_1 R_2^T
                                                                            R_2 R_1^T    R_2 R_2^T ] Q_{2^n}^T

                  = Q_{2^n} [ I_4           O_{4,2^n−4}
                              O_{2^n−4,4}   I_{2^n−4} ] Q_{2^n}^T,   (3.193)

where O_{n,m} is an n × m zero matrix. Hence, S_{2^n} is orthonormal, and S_{2^n} S_{2^n}^T = Q_{2^n} I_{2^n} Q_{2^n}^T = I_{2^n}. This completes the proof.

It is easily verified that for |β_{2^n}| > 2^{2n−2}, parametric slant-transform matrices lose their orthogonality. The slant-transform matrix S_{2^n} is a parametric matrix with parameters (β_4, β_8, . . . , β_{2^n}).
Figure 3.16 Parametric slant Haar transform basis vectors for (2n = 8): (a) classical case(β4 = 1, β8 = 1), (b) multiple-β case (β4 = 4, β8 = 16), (c) constant-β case (β4 = 1.7, β8 = 1.7),and (d) multiple-β case (β4 = 1.7, β8 = 7).
The parametric slant Haar transform falls into one of at least three different categories according to the β_{2^n} values:

• For β_4 = β_8 = · · · = β_{2^n} = β = 1, we obtain the classical slant Haar transform.20
• For β_4 = β_8 = · · · = β_{2^n} = β with |β| ≤ 4, we refer to this as the constant-β slant Haar transform.
• For β_4 ≠ β_8 ≠ · · · ≠ β_{2^n} with −2^{2n−2} ≤ β_{2^n} ≤ 2^{2n−2}, n = 2, 3, 4, . . ., we refer to this as the multiple-β slant Haar transform; some of the β_{2^n} values can be equal, but not all of them.

Example: The parametric slant Haar transforms of order 8 are given below for the classical case (β_4 = β_8 = 1), the multiple-β case (β_4 = 4, β_8 = 16), the constant-β case (β_4 = 1.7, β_8 = 1.7), and the multiple-β case (β_4 = 1.7, β_8 = 7). Their basis vectors are shown, respectively, in Fig. 3.16.
(a) The classical case (β_4 = β_8 = 1):

S_Classical = (1/√8) diag(1, 1/√21, 1/√5, 1/√105, √2, √2, √(2/5), √(2/5)) ×
    [ 1    1    1    1     1    1    1    1
      7    5    3    1    −1   −3   −5   −7
      3    1   −1   −3    −3   −1    1    3
      7   −1   −9  −17    17    9    1   −7
      1   −1   −1    1     0    0    0    0
      0    0    0    0     1   −1   −1    1
      1   −3    3   −1     0    0    0    0
      0    0    0    0     1   −3    3   −1 ].   (3.194)
(b) The multiple-β case (β_4 = 4, β_8 = 16). Note that this is a special case of the Haar transform:

S_Multiple = (1/√8) ×
    [ 1     1     1     1     1     1     1     1
      1     1     1     1    −1    −1    −1    −1
      1    −1     1    −1     1    −1     1    −1
      1    −1     1    −1    −1     1    −1     1
      √2    √2   −√2   −√2    0     0     0     0
      √2   −√2   −√2    √2    0     0     0     0
      0     0     0     0     √2    √2   −√2   −√2
      0     0     0     0     √2   −√2   −√2    √2 ].   (3.195)
(c) The constant-β case (β_4 = 1.7, β_8 = 1.7):

S_Constant = (1/√8) ×
    [ 1        1        1        1        1        1        1        1
      1.5087   1.1246   0.6310   0.2466  −0.2466  −0.6310  −1.1246  −1.5087
      0.6771  −0.0272  −0.9311  −1.6351   1.6351   0.9311   0.0272  −0.6771
      1.3172   0.5151  −0.5151  −1.3172  −1.3172  −0.5151   0.5151   1.3172
      √2      −√2      −√2       √2       0        0        0        0
      0.7283  −1.8628   1.8628  −0.7283   0        0        0        0
      0        0        0        0        √2      −√2      −√2       √2
      0        0        0        0        0.7283  −1.8628   1.8628  −0.7283 ].   (3.196)
(d) The multiple-β case (β_4 = 1.7, β_8 = 7):

S_Multiple = (1/√8) ×
    [ 1        1        1        1        1        1        1        1
      1.4410   1.1223   0.7130   0.3943  −0.3943  −0.7130  −1.1223  −1.4410
      0.8113   0.0752  −0.8700  −1.6060   1.6060   0.8700  −0.0752  −0.8113
      1.3171   0.5150  −0.5150  −1.3171  −1.3171  −0.5150   0.5150   1.3171
      √2      −√2      −√2       √2       0        0        0        0
      0.7283  −1.8627   1.8627  −0.7283   0        0        0        0
      0        0        0        0        √2      −√2      −√2       √2
      0        0        0        0        0.7283  −1.8627   1.8627  −0.7283 ].   (3.197)
References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes inMathematics, 1168, Springer-Verlag, Berlin (1985).
2. R. Stasinski and J. Konrad, “A new class of fast shape-adaptive orthogonaltransforms and their application to region-based image compression,” IEEETrans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
3. M. Barazande-Pour and J.W. Mark, “Adaptive MHDCT coding of images,”in Proc. IEEE Image Proces. Conf., ICIP-94 1, 90–94 (Nov. 1994).
4. G.R. Reddy and P. Satyanarayana, “Interpolation algorithm using Walsh–Hadamard and discrete Fourier/Hartley transforms,” in Circuits and Systems1990, Proc.33rdMidwest Symp. 1, 545–547 (Aug. 1990).
5. Ch.-Fat Chan, “Efficient implementation of a class of isotropic quadraticfilters by using Walsh–Hadamard transform,” in Proc. of IEEE Int. Symp.on Circuits and Systems, Hong Kong, 2601–2604 (June 9–12, 1997).
6. B. K. Harms, J. B. Park, and S. A. Dyer, “Optimal measurement techniquesutilizing Hadamard transforms,” IEEE Trans. Instrum. Meas. 43 (3), 397–402(1994).
7. C. Anshi, Li Di and Z. Renzhong, “A research on fast Hadamard transform(FHT) digital systems,” in Proc. of IEEE TENCON 93, Beijing, 541–546(1993).
8. H.G. Sarukhanyan, “Hadamard matrices: construction methods andapplications,” in Proc. of Workshop on Transforms and Filter Banks,Tampere, Finland, 95–130 (Feb. 21–27, 1998).
9. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital SignalProcessing, Springer-Verlag, New York (1975).
10. S.S. Agaian and H.G. Sarukhanyan, “Hadamard matrices representation by(−1, +1)-vectors,” in Proc. Int. Conf. Dedicated to Hadamard Problem’sCentenary, Australia, (1993).
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
148 Chapter 3
11. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, 1296, 575–581 (1997).
12. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis with Applications to Communications and Signal/Image Processing, Kluwer Academic Publishers, Boston (1996).
13. J. Seberry and M. Yamada, Hadamard Matrices, Sequences and Block Designs, Surveys in Contemporary Design Theory, John Wiley & Sons, Hoboken, NJ (1992).
14. S. Samadi, Y. Suzukake, and H. Iwakura, “On automatic derivation of fast Hadamard transform using generic programming,” in Proc. 1998 IEEE Asia-Pacific Conf. on Circuits and Systems, Thailand, 327–330 (1998).
15. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on multiply/add architectures,” IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
16. Z. Li, H. V. Sorensen, and C. S. Burrus, “FFT and convolution algorithms and DSP microprocessors,” in Proc. of IEEE Int. Conf. Acoustics, Speech, Signal Processing, 289–294 (1986).
17. R. K. Montoye, E. Hokenek, and S. L. Runyon, “Design of the IBM RISC System/6000 floating point execution unit,” IBM J. Res. Dev. 34, 71–77 (1990).
18. J.-L. Wu, “Block diagonal structure in discrete transforms,” Computers and Digital Techniques, Proc. IEEE 136 (4), 239–246 (1989).
19. H. Enomoto and K. Shibata, “Orthogonal transform coding system for television signals,” in Proc. of Symp. Appl. Walsh Func., 11–17 (1971).
20. W. K. Pratt, J. Kane, and L. Welch, “Slant transform image coding,” IEEE Trans. Commun. 22 (8), 1075–1093 (1974).
21. S. Agaian and V. Duvalyan, “On slant transforms,” Pattern Recogn. Image Anal. 1 (3), 317–326 (1991).
22. I. W. Selesnick, “The slantlet transform,” IEEE Trans. Signal Process. 47 (5), 1304–1313 (1999).
23. I. W. Selesnick, “The slantlet transform time-frequency and time-scale analysis,” in Proc. of IEEE-SP Int. Symp., 53–56 (1998).
24. J. L. Walsh, “A closed set of normal orthogonal functions,” Am. J. Math. 45, 5–24 (1923).
25. S. Agaian, H. Sarukhanyan, and K. Tourshan, “Integer slant transforms,” in Proc. of CSIT Conf., 303–306 (Sept. 17–20, 2001).
26. S. Agaian, K. Tourshan, and J. P. Noonan, “Parametric slant-Hadamard transforms with applications,” IEEE Signal Process. Lett. 9 (11), 375–377 (2002).
27. S. Agaian, K. Tourshan, and J. Noonan, “Generalized parametric slant Hadamard transforms,” Signal Process. 84, 1299–1307 (2004).
28. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard transforms,” Multiple Valued Logic Soft Comput. J. 10 (2), 173–187 (2004).
29. S. Agaian, K. Tourshan, and J. Noonan, “Performance of parametric slant-Haar transforms,” J. Electron. Imaging 12 (3), 539–551 (2003) [doi:10.1117/1.1580494].
30. W. K. Pratt, L. R. Welch, and W. H. Chen, “Slant transform for image coding,” IEEE Trans. Commun. COM-22 (8), 1075–1093 (1974).
31. P. C. Mali, B. B. Chaudhuri, and D. D. Majumder, “Some properties and fast algorithms of slant transform in image processing,” Signal Process. 9, 233–244 (1985).
32. L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ (1975).
33. A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ (1989).
34. B. J. Fino and V. R. Algazi, “Slant Haar transform,” Proc. IEEE 62, 653–654 (1974).
35. K. R. Rao, J. G. K. Kuo, and M. A. Narasimhan, “Slant-Haar transform,” Int. J. Comput. Math. B 7, 73–83 (1979).
36. J. F. Yang and C. P. Fang, “Centralized fast slant transform algorithms,” IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E80-A (4), 705–711 (1997).
37. Z. D. Wang, “New algorithm for the slant transform,” IEEE Trans. Pattern Anal. Mach. Intell. 4 (5), 551–555 (1982).
38. S. Agaian, H. Sarukhanyan, and Kh. Tourshan, “New classes of sequential slant Hadamard transform,” in Proc. of Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP’02, Toulouse, France (Sept. 7–8, 2002).
39. S. Minasyan, D. Guevorkian, S. Agaian, and H. Sarukhanyan, “On ‘slant-like’ fast orthogonal transforms of arbitrary order,” in Proc. of VIPromCom-2002, 4th EURASIP–IEEE Region 8 Int. Symp. on Video/Image Processing and Multimedia Communications, Zadar, Croatia, 309–314 (June 16–19, 2002).
40. Z. Wang, “Fast algorithms for the discrete W transform and for the discrete Fourier transform,” IEEE Trans. Acoust. Speech Signal Process. ASSP-32 (4), 803–816 (1984).
41. S. Venkataraman, V. R. Kanchan, K. R. Rao, and M. Mohanty, “Discrete transforms via the Walsh–Hadamard transform,” Signal Process. 14 (4), 371–382 (1988).
42. M. M. Anguh and R. R. Martin, “A 2-dimensional inplace truncation Walsh transform method,” J. Visual Comm. Image Represent. (JVCIR) 7 (2), 116–125 (1996).
43. K. Rao, K. Revuluri, M. Narasimhan, and N. Ahmed, “Complex Haar transform,” IEEE Trans. Acoust. Speech Signal Process. 24 (1), 102–104 (1976).
44. K. Rao, V. Devarajan, V. Vlasenko, and M. Narasimhan, “Cal-Sal Walsh–Hadamard transform,” IEEE Trans. Acoust. Speech Signal Process. 26 (6), 605–607 (1978).
45. P. Marti-Puig, “Family of fast Walsh Hadamard algorithms with identical sparse matrix factorization,” IEEE Signal Process. Lett. 13 (11), 672–675 (2006).
46. L. Nazarov and V. Smolyaninov, “Use of fast Walsh–Hadamard transformation for optimal symbol-by-symbol binary block codes decoding,” Electron. Lett. 34, 261–262 (1998).
47. M. Bossert, E. M. Gabidulin, and P. Lusina, “Space-time codes based on Hadamard matrices,” in Proc. IEEE Int. Symp. Information Theory, p. 283 (June 25–30, 2000).
48. Y. Beery and J. Snyders, “Optimal soft decision block decoders based on fast Hadamard transformation,” IEEE Trans. Inf. Theory IT-32, 355–364 (1986).
49. J. Astola and D. Akopian, “Architecture-oriented regular algorithms for discrete sine and cosine transforms,” IEEE Trans. Signal Process. 47, 11–19 (1999).
50. A. Amira, P. Bouridane, P. Milligan, and M. Roula, “Novel FPGA implementations of Walsh–Hadamard transforms for signal processing,” Proc. Inst. Elect. Eng., Vis., Image, Signal Process. 148, 377–383 (Dec. 2001).
51. S. Boussakta and A. G. J. Holt, “Fast algorithm for calculation of both Walsh–Hadamard and Fourier transforms,” Electron. Lett. 25, 1352–1354 (1989).
52. J.-L. Wu, “Block diagonal structure in discrete transforms,” Proc. of Inst. Elect. Eng., Comput. Digit. Technol. 136 (4), 239–246 (1989).
53. M. H. Lee and Y. Yasuda, “Simple systolic array algorithm for Hadamard transform,” Electron. Lett. 26 (18), 1478–1480 (1990).
54. M. H. Lee and M. Kaveh, “Fast Hadamard transform based on a simple matrix factorization,” IEEE Trans. Acoust. Speech Signal Process. ASSP-34 (6), 1666–1667 (1986).
55. A. Vardy and Y. Beery, “Bit-level soft decision decoding of Reed–Solomon codes,” IEEE Trans. Commun. 39, 440–445 (1991).
56. A. Aung, B. P. Ng, and S. Rahardja, “Sequency-ordered complex Hadamard transform: properties, computational complexity and applications,” IEEE Trans. Signal Process. 56 (8, Pt. 1), 3562–3571 (2008).
57. B. Guoan, A. Aung, and B. P. Ng, “Pipelined hardware structure for sequency-ordered complex Hadamard transform,” IEEE Signal Process. Lett. 15, 401–404 (2008).
58. I. Dumer, G. Kabatiansky, and C. Tavernier, “List decoding of biorthogonal codes and the Hadamard transform with linear complexity,” IEEE Trans. Inf. Theory 54 (10), 4488–4492 (2008).
59. D. Sundararajan and M. O. Ahmad, “Fast computation of the discrete Walsh and Hadamard transforms,” IEEE Trans. Image Process. 7 (6), 898–904 (1998).
60. J.-D. Lee and Y.-H. Chiou, “A fast encoding algorithm for vector quantization based on Hadamard transform,” in Proc. of Industrial Electronics, IECON 2008, 34th Annual Conf. of IEEE, 1817–1821 (Nov. 10–13, 2008).
61. P. Knagenhjelm and E. Agrell, “The Hadamard transform—a tool for index assignment,” IEEE Trans. Inf. Theory 42 (4), 1139–1151 (July 1996).
62. K. Zeger and A. Gersho, “Pseudo-Gray coding,” IEEE Trans. Commun. 38 (12), 2147–2158 (1990).
63. S.-C. Pei and W.-L. Hsue, “The multiple-parameter discrete fractional Fourier transform,” IEEE Signal Process. Lett. 13 (6), 329–332 (2006).
64. J. M. Vilardy, J. E. Calderon, C. O. Torres, and L. Mattos, “Digital images phase encryption using fractional Fourier transform,” Proc. IEEE Conf. Electron., Robot. Automotive Mech. 1, 15–18 (Sep. 2006).
65. J. Guo, Z. Liu, and S. Liu, “Watermarking based on discrete fractional random transform,” Opt. Commun. 272 (2), 344–348 (2007).
66. V. Kober, “Fast algorithms for the computation of sliding discrete Hartley transforms,” IEEE Trans. Signal Process. 55 (6), 2937–2944 (2007).
67. P. Dita, “Some results on the parameterization of complex Hadamard matrices,” J. Phys. A 37 (20), 5355–5374 (2004).
68. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamard matrices,” Open Syst. Inf. Dyn. 13 (2), 133–177 (2006).
69. F. Szollosi, “Parametrizing complex Hadamard matrices,” Eur. J. Comb. 29 (5), 1219–1234 (2007).
70. V. Senk, V. D. Delic, and V. S. Milosevic, “A new speech scrambling concept based on Hadamard matrices,” IEEE Signal Process. Lett. 4 (6), 161–163 (1997).
71. J. A. Davis and J. Jedwab, “Peak-to-mean power control in OFDM, Golay complementary sequences, and Reed–Muller codes,” IEEE Trans. Inf. Theory 45 (7), 2397–2417 (1999).
72. G. Guang and S. W. Golomb, “Hadamard transforms of three-term sequences,” IEEE Trans. Inf. Theory 45 (6), 2059–2060 (1999).
73. W. Philips, K. Denecker, P. de Neve, and S. van Assche, “Lossless quantization of Hadamard transform coefficients,” IEEE Trans. Image Process. 9 (11), 1995–1999 (2000).
74. M. Ramkumar and A. N. Akansu, “Capacity estimates for data hiding in compressed images,” IEEE Trans. Image Process. 10 (8), 1252–1263 (2001).
75. S. S. Agaian and O. Caglayan, “New fast Hartley transform with linear multiplicative complexity,” presented at IEEE Int. Conf. on Image Processing, Atlanta, GA (Oct. 8–11, 2006).
76. S. S. Agaian and O. Caglayan, “Fast encryption method based on new FFT representation for the multimedia data system security,” presented at IEEE Int. Conf. on Systems, Man, and Cybernetics, Taipei, Taiwan (Oct. 8–11, 2006).
77. S. S. Agaian and O. Caglayan, “New fast Fourier transform with linear multiplicative complexity,” presented at IEEE 39th Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA (Oct. 30–Nov. 2, 2005).
78. S. S. Agaian and O. Caglayan, “Super fast Fourier transform,” presented at IS&T/SPIE 18th Annual Symp. on Electronic Imaging Science and Technology, San Jose, CA (Jan. 15–19, 2006).
79. D. F. Elliott and K. R. Rao, Fast Transforms: Algorithms, Analyses, Applications, Academic Press, New York (1982).
80. R. J. Clarke, Transform Coding of Images, Academic Press, New York (1985).
81. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, New York (1975).
82. A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ (1989).
83. E. R. Dougherty, Random Processes for Image and Signal Processing, SPIE Press, Bellingham, WA, and IEEE Press, Piscataway, NJ (1999) [doi:10.1117/3.268105].
84. P. C. Mali, B. B. Chaudhuri, and D. D. Majumder, “Properties and some fast algorithms of the Haar transform in image processing and pattern recognition,” Pattern Recogn. Lett. 2 (5), 319–327 (1984).
85. H. Enomoto and K. Shibata, “Orthogonal transform system for television signals,” IEEE Trans. Electromagn. Comput. 13, 11–17 (1971).
86. P. C. Mali, B. B. Chaudhuri, and D. D. Majumder, “Some properties and fast algorithms of slant transform in image processing,” Signal Process. 9, 233–244 (1985).
87. P. Bahl, P. S. Gauthier, and R. A. Ulichney, “PCWG’s INDEO-C video compression algorithm,” http://www.Europe.digital.com/info/DTJK04/ (11 April 1996).
88. I. W. Selesnick, “The slantlet transform,” IEEE Trans. Signal Process. 47 (5), 1304–1313 (1999).
89. C. E. Lee and J. Vaisey, “Comparison of image transforms in the coding of the displaced frame difference for block-based motion compensation,” in Proc. of Canadian Conf. on Electrical and Computer Engineering 1, 147–150 (1993).
90. B. J. Fino and V. R. Algazi, “Slant-Haar transform,” Proc. IEEE Lett. 62, 653–654 (1974).
91. S. Agaian, K. Tourshan, and J. P. Noonan, “Parametric slant-Hadamard transforms with applications,” IEEE Signal Process. Lett. 9 (11), 375–377 (2002).
92. S. Agaian, K. Tourshan, and J. P. Noonan, “Parametric slant-Hadamard transforms,” presented at IS&T/SPIE 15th Annual Symp. Electronic Imaging Science and Technology, Image Processing: Algorithms and Systems II, Santa Clara, CA (Jan. 20–24, 2003).
93. S. Agaian, K. Tourshan, and J. Noonan, “Partially signal dependent slant transforms for multispectral classification,” J. Integr. Comput.-Aided Eng. 10 (1), 23–35 (2003).
94. M. D. Adams and F. Kossentini, “Reversible integer-to-integer wavelet transforms for image compression: performance evaluation and analysis,” IEEE Trans. Image Process. 9 (6), 1010–1024 (2000).
95. M. M. Anguh and R. R. Martin, “A truncation method for computing slant transforms with applications to image processing,” IEEE Trans. Commun. 43 (6), 2103–2110 (1995).
96. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, England (1991).
97. P. C. Mali and D. D. Majumder, “An analytical comparative study of a class of discrete linear basis transforms,” IEEE Trans. Syst. Man Cybernet. 24 (3), 531–535 (1994).
98. N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput. C-23, 90–93 (1974).
99. W. K. Pratt, “Generalized Wiener filtering computation techniques,” IEEE Trans. Comput. C-21, 636–641 (1972).
100. J. Pearl, “Walsh processing of random signals,” IEEE Trans. Electromagn. Comput. EMC-13 (4), 137–141 (1971).
Chapter 4
“Plug-In Template” Method: Williamson–Hadamard Matrices
We have seen that one of the basic methods used to build Hadamard matrices is based on construction of a class of “special-component” matrices that can be plugged into arrays (templates) of variables to generate Hadamard matrices. Several approaches for constructing special-component matrices and templates have been developed.1–36 In 1944, Williamson1,2 first constructed “suitable matrices” (Williamson matrices) to replace the variables in a formally orthogonal matrix. Generally, the arrays into which suitable matrices are plugged are orthogonal designs, which have formally orthogonal rows (and columns) but may have variations, such as Goethals–Seidel arrays, Wallis–Whiteman arrays, Spence arrays, generalized quaternion arrays, Agayan (Agaian) families, Kharaghani’s methods, and regular s-sets of regular matrices that give new matrices.3–35,37,38
This is an extremely prolific construction method.34 There are several interesting schemes for constructing the Williamson matrices and the Williamson arrays.1–83
In addition, it has been found that the Williamson–Hadamard sequences possess very good autocorrelation properties that make them amenable to synchronization requirements, and they can thus be used in communication systems.42 Moreover, Seberry, her students, and many other authors have made extensive use of computers for relevant searches.35,76,79,80 For instance, Djokovic found the first odd number, n = 31, for which symmetric circulant Williamson matrices exist.79,80
There are several interesting papers concerning the various types of Hadamard matrix construction.82–98 A survey of the applications of Williamson matrices can be found in Ref. 78.
In this chapter, two “plug-in template” methods of the construction of Hadamard matrices are presented. The first method is based on Williamson matrices and the Williamson array “template”; the second one is based on the Baumert–Hall array “template.” Finally, we will give constructions of new classes of Williamson and generalized Williamson matrices based on customary sequences. We start the chapter with a brief description of the construction of Hadamard matrices from Williamson matrices. Then we construct a class of Williamson matrices. Finally, we show that if Williamson–Hadamard matrices of orders 4m and 4n exist, then Williamson–Hadamard matrices of order (4m · 4n)/2 = 8mn exist.
4.1 Williamson–Hadamard Matrices
First, we briefly describe the Williamson approach to construction of the Hadamard matrices. It is the first and simplest plug-in template method for generating Hadamard matrices.
Theorem 4.1.1: (Williamson1,2) If four (+1,−1) matrices A_n, B_n, C_n, D_n of order n exist that satisfy both of the following conditions:

    PQ^T = QP^T,  P, Q ∈ {A_n, B_n, C_n, D_n},
    A_n A_n^T + B_n B_n^T + C_n C_n^T + D_n D_n^T = 4n I_n,                 (4.1)

then

    (  A_n   B_n   C_n   D_n )
    ( −B_n   A_n  −D_n   C_n )
    ( −C_n   D_n   A_n  −B_n )                                              (4.2)
    ( −D_n  −C_n   B_n   A_n )

is a Hadamard matrix of order 4n. This theorem can be proved by direct verification.
Definitions: The “template” [matrix (4.2)] is called a Williamson array. The four symmetric cyclic (+1,−1) matrices A, B, C, D with the condition of Eq. (4.1) are called Williamson matrices.1–3
The cyclic matrix Q of order m is defined as

    Q = a_0 U^0 + a_1 U^1 + · · · + a_{m−1} U^{m−1},                        (4.3)
where U is the (0, 1) matrix of order m with first row (0 1 0 · · · 0), second row obtained by a one-bit cyclic shift, third row obtained by a 2-bit cyclic shift, and so on. For m = 5, we have the following matrices:
    U   = ( 0 1 0 0 0 )        U^2 = ( 0 0 1 0 0 )
          ( 0 0 1 0 0 )              ( 0 0 0 1 0 )
          ( 0 0 0 1 0 )              ( 0 0 0 0 1 )
          ( 0 0 0 0 1 )              ( 1 0 0 0 0 )
          ( 1 0 0 0 0 )              ( 0 1 0 0 0 )

    U^3 = ( 0 0 0 1 0 )        U^4 = ( 0 0 0 0 1 )
          ( 0 0 0 0 1 )              ( 1 0 0 0 0 )
          ( 1 0 0 0 0 )              ( 0 1 0 0 0 )
          ( 0 1 0 0 0 )              ( 0 0 1 0 0 )
          ( 0 0 1 0 0 )              ( 0 0 0 1 0 )
                                                                            (4.4)
Note that the matrix U satisfies the following conditions:
    U^0 = I_m,  U^p U^q = U^{p+q},  U^m = I_m.                              (4.5)
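The identities of Eq. (4.5) can be checked numerically. The sketch below assumes NumPy is available; the helper name `shift_matrix` is ours, not the book's:

```python
import numpy as np

def shift_matrix(m):
    """(0,1) cyclic shift matrix U of order m, with first row (0 1 0 ... 0)."""
    U = np.zeros((m, m), dtype=int)
    for i in range(m):
        U[i, (i + 1) % m] = 1  # each row is the previous one shifted cyclically
    return U

m = 5
U = shift_matrix(m)
I = np.eye(m, dtype=int)
power = np.linalg.matrix_power

assert np.array_equal(power(U, 0), I)                           # U^0 = I_m
assert np.array_equal(power(U, 2) @ power(U, 3), power(U, 5))   # U^p U^q = U^{p+q}
assert np.array_equal(power(U, m), I)                           # U^m = I_m
```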
Therefore, the cyclic matrix of order n with first row (a_0 a_1 a_2 · · · a_{n−1}) has the form

    C(a_0, a_1, . . . , a_{n−1}) = ( a_0      a_1   · · ·  a_{n−1} )
                                   ( a_{n−1}  a_0   · · ·  a_{n−2} )
                                   (  ...     ...           ...    )        (4.6)
                                   ( a_1      a_2   · · ·  a_0     )
In other words, each row is equal to the previous row cyclically shifted one position to the right. Thus, a cyclic matrix of order n is specified (or generated) by its first row and denoted by C(a_0, a_1, . . . , a_{n−1}). For example, starting with the vector (a, b, c, d), we can form the 4 × 4 cyclic matrix

    ( a b c d )
    ( d a b c )
    ( c d a b )                                                             (4.7)
    ( b c d a )
It can be shown that the multiplication of two cyclic matrices is also cyclic. This can be proved by direct verification. For N = 4, we obtain the multiplication

    ( a0 a1 a2 a3 ) ( b0 b1 b2 b3 )   ( c0 c1 c2 c3 )
    ( a3 a0 a1 a2 ) ( b3 b0 b1 b2 ) = ( c3 c0 c1 c2 )
    ( a2 a3 a0 a1 ) ( b2 b3 b0 b1 )   ( c2 c3 c0 c1 )                       (4.8)
    ( a1 a2 a3 a0 ) ( b1 b2 b3 b0 )   ( c1 c2 c3 c0 )
If A, B, C, D are cyclic symmetric (+1,−1) matrices of order n, then the first relation of Eq. (4.1) is automatically satisfied, and the second condition becomes

    A^2 + B^2 + C^2 + D^2 = 4n I_n.                                         (4.9)
Examples of the symmetric cyclic Williamson matrices of orders 1, 3, 5, and 7 are as follows:

(1) Williamson matrices of order 1: A1 = B1 = C1 = D1 = (1).
(2) The first rows and Williamson matrices of order 3 are given as follows:

    A3 = (1, 1, 1),  B3 = C3 = D3 = (1, −1, −1);                            (4.10)
    A3 = ( + + + )        B3 = C3 = D3 = ( + − − )
         ( + + + )                       ( − + − )                          (4.11)
         ( + + + )                       ( − − + )
(3) The first rows and Williamson matrices of order 5 are given as follows:

    A5 = B5 = (1, −1, −1, −1, −1),  C5 = (1, 1, −1, −1, 1),
    D5 = (1, −1, 1, 1, −1);                                                 (4.12)
    A5 = B5 = ( + − − − − )   C5 = ( + + − − + )   D5 = ( + − + + − )
              ( − + − − − )        ( + + + − − )        ( − + − + + )
              ( − − + − − )        ( − + + + − )        ( + − + − + )       (4.13)
              ( − − − + − )        ( − − + + + )        ( + + − + − )
              ( − − − − + )        ( + − − + + )        ( − + + − + )
(4) The first rows and Williamson matrices of order 7 are given as follows:

    A7 = B7 = (1, 1, −1, 1, 1, −1, 1),  C7 = (1, −1, 1, 1, 1, 1, −1),
    D7 = (1, 1, −1, −1, −1, −1, 1);                                         (4.14)
    A7 = B7 = ( + + − + + − + )        C7 = ( + − + + + + − )
              ( + + + − + + − )             ( − + − + + + + )
              ( − + + + − + + )             ( + − + − + + + )
              ( + − + + + − + )             ( + + − + − + + )
              ( + + − + + + − )             ( + + + − + − + )
              ( − + + − + + + )             ( + + + + − + − )
              ( + − + + − + + )             ( − + + + + − + )

    D7 = ( + + − − − − + )
         ( + + + − − − − )
         ( − + + + − − − )
         ( − − + + + − − )                                                  (4.15)
         ( − − − + + + − )
         ( − − − − + + + )
         ( + − − − − + + )
The first rows of cyclic symmetric Williamson matrices34 of orders n = 3, 5, . . . , 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 are given in Appendix A.2.
By plugging the above-presented Williamson matrices of orders 3 and 5 into Eq. (4.2), we obtain Williamson–Hadamard matrices of orders 12 and 20, respectively:
    H12 = ( + + + + − − + − − + − − )
          ( + + + − + − − + − − + − )
          ( + + + − − + − − + − − + )
          ( − + + + + + − + + + − − )
          ( + − + + + + + − + − + − )
          ( + + − + + + + + − − − + )
          ( − + + + − − + + + − + + )                                       (4.16)
          ( + − + − + − + + + + − + )
          ( + + − − − + + + + + + − )
          ( − + + − + + + − − + + + )
          ( + − + + − + − + − + + + )
          ( + + − + + − − − + + + + )
    H20 = ( + − − − − + − − − − + + − − + + − + + − )
          ( − + − − − − + − − − + + + − − − + − + + )
          ( − − + − − − − + − − − + + + − + − + − + )
          ( − − − + − − − − + − − − + + + + + − + − )
          ( − − − − + − − − − + + − − + + − + + − + )
          ( − + + + + + − − − − − + − − + + + − − + )
          ( + − + + + − + − − − + − + − − + + + − − )
          ( + + − + + − − + − − − + − + − − + + + − )
          ( + + + − + − − − + − − − + − + − − + + + )
          ( + + + + − − − − − + + − − + − + − − + + )
          ( − − + + − + − + + − + − − − − − + + + + )                       (4.17)
          ( − − − + + − + − + + − + − − − + − + + + )
          ( + − − − + + − + − + − − + − − + + − + + )
          ( + + − − − + + − + − − − − + − + + + − + )
          ( − + + − − − + + − + − − − − + + + + + − )
          ( − + − − + − − + + − + − − − − + − − − − )
          ( + − + − − − − − + + − + − − − − + − − − )
          ( − + − + − + − − − + − − + − − − − + − − )
          ( − − + − + + + − − − − − − + − − − − + − )
          ( + − − + − − + + − − − − − − + − − − − + )
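The matrices of Eqs. (4.16) and (4.17) can be reproduced programmatically: build the four cyclic blocks from their first rows, plug them into the array of Eq. (4.2), and confirm the Hadamard property H H^T = 4n I. The sketch below assumes NumPy; `cyclic` and `williamson_array` are our helper names:

```python
import numpy as np

def cyclic(first_row):
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)], dtype=int)

def williamson_array(A, B, C, D):
    """Plug A, B, C, D into the Williamson array of Eq. (4.2)."""
    return np.block([[ A,  B,  C,  D],
                     [-B,  A, -D,  C],
                     [-C,  D,  A, -B],
                     [-D, -C,  B,  A]])

# H12 from the order-3 Williamson matrices of Eq. (4.11)
A3 = cyclic([1, 1, 1])
B3 = cyclic([1, -1, -1])
H12 = williamson_array(A3, B3, B3, B3)
assert np.array_equal(H12 @ H12.T, 12 * np.eye(12, dtype=int))

# H20 from the order-5 Williamson matrices of Eq. (4.13)
A5 = cyclic([1, -1, -1, -1, -1])
C5 = cyclic([1, 1, -1, -1, 1])
D5 = cyclic([1, -1, 1, 1, -1])
H20 = williamson_array(A5, A5, C5, D5)
assert np.array_equal(H20 @ H20.T, 20 * np.eye(20, dtype=int))
```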
In Table 4.1, we give some well-known classes of Williamson matrices. We denote the set of orders of Williamson matrices given in 1–8 by L. Note that there are no Williamson matrices for order 155 by a complete computer search, and no Williamson-type matrices are known for the orders 35, 155, 171, 203, 227, 291, 323, 371, 395, 467, 483, 563, 587, 603, 635, 771, 875, 915, 923, 963, 1131, 1307, 1331, 1355, 1467, 1523, 1595, 1643, 1691, 1715, 1803, 1923, and 1971.
Table 4.1 Classes of Williamson matrices.

No.  Orders of cyclic symmetric Williamson matrices
1    n, where n ∈ W = {3, 5, 7, . . . , 29, 31, 43} (Refs. 3, 7)
2    (p + 1)/2, where p ≡ 1 (mod 4) is a prime power (Ref. 8)

No.  Orders of Williamson matrices
1    n, where n ≤ 100, except 35, 39, 47, 53, 67, 73, 83, 89, and 94 (Ref. 9)
2    3^a, where a is a natural number (Ref. 10)
3    (p + 1)p^r/2, where p is a prime power and r is a natural number (Refs. 11, 12)
4    n(4n + 3), n(4n − 1), where n ∈ {1, 3, 5, . . . , 25} (Ref. 13)
5    (p + 1)(p + 2), where p ≡ 1 (mod 4) is a prime number and p + 3 is an order of a symmetric Hadamard matrix (Ref. 9)
6    2n(4n + 7), where 4n + 1 is a prime number and n ∈ {1, 3, 5, . . . , 25} (Ref. 9)
7    2·39, 2·103, 2·303, 2·333, 2·669, 2·695, 2·1609 (Ref. 11)
8    2n, where n is an order of Williamson-type matrices (Ref. 9)
Lemma 4.1.1: (Agaian, Sarukhanyan14) If A, B, C, D are Williamson matrices of order n, then the matrices

    X = (1/2) (  A + B   C + D )        Y = (1/2) (  A − B   C − D )
              (  C + D  −A − B ),                 ( −C + D   A − B ),       (4.18)

are (0, ±1) matrices of order 2n and satisfy the conditions

    X ∗ Y = 0, where ∗ is the Hadamard product,
    XY^T = YX^T,
    X ± Y is a (+1, −1) matrix,                                             (4.19)
    XX^T + YY^T = 2n I_{2n},

where the Hadamard product of two matrices A = (a_{i,j}) and B = (b_{i,j}) of the same dimension is defined as A ∗ B = (a_{i,j} b_{i,j}).
Example 4.1.1: X, Y matrices of order 2n for n = 3, 5.
For n = 3:

    X = ( + 0 0 + − − )        Y = ( 0 + + 0 0 0 )
        ( 0 + 0 − + − )            ( + 0 + 0 0 0 )
        ( 0 0 + − − + )            ( + + 0 0 0 0 )
        ( + − − − 0 0 )            ( 0 0 0 0 + + )                          (4.20)
        ( − + − 0 − 0 )            ( 0 0 0 + 0 + )
        ( − − + 0 0 − )            ( 0 0 0 + + 0 )
For n = 5:

    X = ( + − − − − + 0 0 0 0 )        Y = ( 0 0 0 0 0 0 + − − + )
        ( − + − − − 0 + 0 0 0 )            ( 0 0 0 0 0 + 0 + − − )
        ( − − + − − 0 0 + 0 0 )            ( 0 0 0 0 0 − + 0 + − )
        ( − − − + − 0 0 0 + 0 )            ( 0 0 0 0 0 − − + 0 + )
        ( − − − − + 0 0 0 0 + )            ( 0 0 0 0 0 + − − + 0 )
        ( + 0 0 0 0 − + + + + )            ( 0 − + + − 0 0 0 0 0 )          (4.21)
        ( 0 + 0 0 0 + − + + + )            ( − 0 − + + 0 0 0 0 0 )
        ( 0 0 + 0 0 + + − + + )            ( + − 0 − + 0 0 0 0 0 )
        ( 0 0 0 + 0 + + + − + )            ( + + − 0 − 0 0 0 0 0 )
        ( 0 0 0 0 + + + + + − )            ( − + + − 0 0 0 0 0 0 )
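The conditions of Eq. (4.19) can be verified numerically for the order-5 case shown above. The sketch below assumes NumPy; `cyclic` and `xy_pair` are our names for the cyclic-matrix and Eq. (4.18) constructions:

```python
import numpy as np

def cyclic(first_row):
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)], dtype=int)

def xy_pair(A, B, C, D):
    """The (0, +/-1) matrices X, Y of Eq. (4.18)."""
    X = np.block([[A + B, C + D], [C + D, -A - B]]) // 2
    Y = np.block([[A - B, C - D], [-C + D, A - B]]) // 2
    return X, Y

A = cyclic([1, -1, -1, -1, -1])        # A5 = B5 of Eq. (4.12)
C = cyclic([1, 1, -1, -1, 1])
D = cyclic([1, -1, 1, 1, -1])
X, Y = xy_pair(A, A, C, D)
n = 5

assert not (X * Y).any()                    # X * Y = 0 (Hadamard product)
assert np.array_equal(X @ Y.T, Y @ X.T)     # XY^T = YX^T
assert np.all(np.abs(X + Y) == 1)           # X + Y is a (+1,-1) matrix
assert np.all(np.abs(X - Y) == 1)           # X - Y is a (+1,-1) matrix
assert np.array_equal(X @ X.T + Y @ Y.T, 2 * n * np.eye(2 * n, dtype=int))
```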
Theorem 4.1.2: (Agaian–Sarukhanyan multiplicative theorem14,15) Let there be Williamson–Hadamard matrices of orders 4m and 4n. Then Williamson–Hadamard matrices exist of order 4(2m)^i n, i = 1, 2, . . . .
Proof: Let A, B, C, D and A_0, B_0, C_0, D_0 be Williamson matrices of orders m and n, respectively. Note that according to Lemma 4.1.1, the (0, ±1) matrices X, Y
satisfy the conditions of Eq. (4.19). Consider the following matrices:
    A_i = A_{i−1} ⊗ X + B_{i−1} ⊗ Y,
    B_i = B_{i−1} ⊗ X − A_{i−1} ⊗ Y,
    C_i = C_{i−1} ⊗ X + D_{i−1} ⊗ Y,                                        (4.22)
    D_i = D_{i−1} ⊗ X − C_{i−1} ⊗ Y,
where ⊗ is the Kronecker product. We want to prove that for any natural number i the matrices A_i, B_i, C_i, and D_i are Williamson matrices of order (2m)^i n. Let us consider two cases, namely, case (a) when i = 1, and case (b) when i is any integer.
Case (a): Let i = 1. From Eq. (4.22), we obtain

    A_1 A_1^T = A_0 A_0^T ⊗ XX^T + B_0 B_0^T ⊗ YY^T + A_0 B_0^T ⊗ XY^T + B_0 A_0^T ⊗ YX^T,
    B_1 B_1^T = B_0 B_0^T ⊗ XX^T + A_0 A_0^T ⊗ YY^T − B_0 A_0^T ⊗ XY^T − A_0 B_0^T ⊗ YX^T.   (4.23)
Taking into account the conditions of Eqs. (4.1) and (4.19) and summing the last expressions, we find that

    A_1 A_1^T + B_1 B_1^T = (A_0 A_0^T + B_0 B_0^T) ⊗ (XX^T + YY^T).        (4.24)
Similarly, we obtain
    C_1 C_1^T + D_1 D_1^T = (C_0 C_0^T + D_0 D_0^T) ⊗ (XX^T + YY^T).        (4.25)
Now, summing the last two equations and taking into account that A_0, B_0, C_0, D_0 are Williamson matrices of order n, and X and Y satisfy the conditions of Eq. (4.19), we have

    A_1 A_1^T + B_1 B_1^T + C_1 C_1^T + D_1 D_1^T = 8mn I_{2mn}.            (4.26)
Let us now prove the equality A_1 B_1^T = B_1 A_1^T. From Eq. (4.22), we have

    A_1 B_1^T = A_0 B_0^T ⊗ XX^T − A_0 A_0^T ⊗ XY^T + B_0 B_0^T ⊗ YX^T − B_0 A_0^T ⊗ YY^T,
    B_1 A_1^T = B_0 A_0^T ⊗ XX^T + B_0 B_0^T ⊗ XY^T − A_0 A_0^T ⊗ YX^T − A_0 B_0^T ⊗ YY^T.   (4.27)
Comparing both expressions and using A_0 B_0^T = B_0 A_0^T together with XY^T = YX^T, we conclude that A_1 B_1^T = B_1 A_1^T. Similarly, it can be shown that

    PQ^T = QP^T,                                                            (4.28)

where

    P, Q ∈ {A_1, B_1, C_1, D_1}.                                            (4.29)
Thus, the matrices A_1, B_1, C_1, D_1 are Williamson matrices of order 2mn.
Case (b): Let i be any integer; we assume that the theorem is correct for k = i > 1, i.e., A_k, B_k, C_k, D_k are Williamson matrices of order (2m)^k n. Let us
prove that A_{i+1}, B_{i+1}, C_{i+1}, and D_{i+1} also are Williamson matrices. Check only the second condition of Eq. (4.1). By computing

    A_{i+1} A_{i+1}^T = A_i A_i^T ⊗ XX^T + A_i B_i^T ⊗ XY^T + B_i A_i^T ⊗ YX^T + B_i B_i^T ⊗ YY^T,
    B_{i+1} B_{i+1}^T = B_i B_i^T ⊗ XX^T − B_i A_i^T ⊗ XY^T − A_i B_i^T ⊗ YX^T + A_i A_i^T ⊗ YY^T,
    C_{i+1} C_{i+1}^T = C_i C_i^T ⊗ XX^T + C_i D_i^T ⊗ XY^T + D_i C_i^T ⊗ YX^T + D_i D_i^T ⊗ YY^T,   (4.30)
    D_{i+1} D_{i+1}^T = D_i D_i^T ⊗ XX^T − D_i C_i^T ⊗ XY^T − C_i D_i^T ⊗ YX^T + C_i C_i^T ⊗ YY^T
and summing the obtained equations, we find that

    A_{i+1} A_{i+1}^T + B_{i+1} B_{i+1}^T + C_{i+1} C_{i+1}^T + D_{i+1} D_{i+1}^T
        = (A_i A_i^T + B_i B_i^T + C_i C_i^T + D_i D_i^T) ⊗ (XX^T + YY^T).  (4.31)
Because A_i, B_i, C_i, D_i are Williamson matrices of order (2m)^i n, and matrices X, Y satisfy the conditions of Eq. (4.19), we can conclude that

    A_{i+1} A_{i+1}^T + B_{i+1} B_{i+1}^T + C_{i+1} C_{i+1}^T + D_{i+1} D_{i+1}^T = 4(2m)^{i+1} n I_{(2m)^{i+1} n}.   (4.32)
From the Williamson Theorem (Theorem 4.1.1), we obtain the Williamson–Hadamard matrix of order 4(2m)^i n.
Theorem 4.1.2 is known as the Multiplicative Theorem because it is related to multiplication of orders of two Hadamard matrices. It shows that if Williamson–Hadamard matrices of orders 4m and 4n exist, then Williamson–Hadamard matrices of order (4m · 4n)/2 = 8mn exist. Remember that the first representative of a multiplicative theorem is as follows: if H_{4m} and H_{4n} are Hadamard matrices of orders 4m and 4n, then the Kronecker product H_{4m} ⊗ H_{4n} is a Hadamard matrix of order 16mn.
Proof: First of all, the Kronecker product is a (+1,−1) matrix. Second, it is orthogonal:

    (H_{4m} ⊗ H_{4n})(H_{4m} ⊗ H_{4n})^T = (H_{4m} ⊗ H_{4n})(H_{4m}^T ⊗ H_{4n}^T)
        = (H_{4m} H_{4m}^T) ⊗ (H_{4n} H_{4n}^T) = (4m I_{4m}) ⊗ (4n I_{4n}) = 16mn I_{16mn}.   (4.33)
Problems for Exploration

• Show that Williamson-type matrices of order n exist, where n is an integer.
• Show that if W_1 and W_2 are Williamson-type matrices of orders n and m, then Williamson-type matrices of order mn exist.
• Show that if two Williamson–Hadamard matrices of orders n and m exist, then Williamson–Hadamard matrices of order mn/4 exist.
Algorithm 4.1.1: The algorithm generates the Williamson–Hadamard matrix of order 4(2m)^i n from the Williamson–Hadamard matrices of orders 4m and 4n.
Input: Williamson matrices A, B, C, D and A_0, B_0, C_0, D_0 of orders m and n.

Step 1. Construct the matrices

    X = (1/2) (  A + B   C + D )        Y = (1/2) (  A − B   C − D )
              (  C + D  −A − B ),                 ( −C + D   A − B ).       (4.34)

Step 2. Construct matrices A_1, B_1, C_1, D_1 of order 2mn, according to Eq. (4.22).

Step 3. For i = 2, 3, . . ., construct Williamson matrices A_i, B_i, C_i, D_i of order (2m)^i n using the recursion [Eq. (4.22)]:

    A_i = A_{i−1} ⊗ X + B_{i−1} ⊗ Y,    B_i = B_{i−1} ⊗ X − A_{i−1} ⊗ Y,
    C_i = C_{i−1} ⊗ X + D_{i−1} ⊗ Y,    D_i = D_{i−1} ⊗ X − C_{i−1} ⊗ Y.    (4.35)

Step 4. Construct the Williamson–Hadamard matrix as

    [WH]_{4(2m)^i n} = (  A_i   B_i   C_i   D_i )
                       ( −B_i   A_i  −D_i   C_i )
                       ( −C_i   D_i   A_i  −B_i )                           (4.36)
                       ( −D_i  −C_i   B_i   A_i )

Output: The Williamson–Hadamard matrix [WH]_{4(2m)^i n} of order 4(2m)^i n.
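Algorithm 4.1.1 can be sketched end to end as follows (NumPy assumed; all function names are ours). With the order-3 Williamson matrices and A_0 = B_0 = C_0 = D_0 = (1), one pass of the recursion reproduces the order-24 matrix of Example 4.1.2:

```python
import numpy as np

def cyclic(first_row):
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)], dtype=int)

def xy_pair(A, B, C, D):
    """X, Y of Eq. (4.34) / (4.18)."""
    X = np.block([[A + B, C + D], [C + D, -A - B]]) // 2
    Y = np.block([[A - B, C - D], [-C + D, A - B]]) // 2
    return X, Y

def williamson_hadamard(A, B, C, D, A0, B0, C0, D0, steps=1):
    """A sketch of Algorithm 4.1.1: grow Williamson matrices by the
    recursion of Eq. (4.22), then plug them into the Williamson array."""
    X, Y = xy_pair(A, B, C, D)
    for _ in range(steps):
        A0, B0, C0, D0 = (np.kron(A0, X) + np.kron(B0, Y),
                          np.kron(B0, X) - np.kron(A0, Y),
                          np.kron(C0, X) + np.kron(D0, Y),
                          np.kron(D0, X) - np.kron(C0, Y))
    return np.block([[ A0,  B0,  C0,  D0],
                     [-B0,  A0, -D0,  C0],
                     [-C0,  D0,  A0, -B0],
                     [-D0, -C0,  B0,  A0]])

# Example 4.1.2: m = 3, n = 1 gives the order-24 Williamson-Hadamard matrix
one = np.array([[1]])
A = cyclic([1, 1, 1])
B = cyclic([1, -1, -1])
H24 = williamson_hadamard(A, B, B, B, one, one, one, one)
assert H24.shape == (24, 24)
assert np.array_equal(H24 @ H24.T, 24 * np.eye(24, dtype=int))
```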
Example 4.1.2: Construction of Williamson–Hadamard matrix of order 24.
Step 1. Input the matrices A_0 = (1), B_0 = (1), C_0 = (1), D_0 = (1), and

    A = ( + + + )        B = C = D = ( + − − )
        ( + + + )                    ( − + − )                              (4.37)
        ( + + + )                    ( − − + )
Step 2. Using Eq. (4.22), construct the following matrices:

    A_1 = A_3 = ( A   C )        A_2 = A_4 = ( B   D )
                ( D  −B ),                   ( C  −A ),                     (4.38)

    A_1 = A_3 = ( + + + + − − )        A_2 = A_4 = ( + − − + − − )
                ( + + + − + − )                    ( − + − − + − )
                ( + + + − − + )                    ( − − + − − + )
                ( + − − − + + )                    ( + − − − − − )          (4.39)
                ( − + − + − + )                    ( − + − − − − )
                ( − − + + + − )                    ( − − + − − − )
Step 3. Substitute matrices A_1, A_2, A_3, A_4 into the Williamson array:

    H24 = (  A_1   A_2   A_1   A_2 )
          ( −A_2   A_1  −A_2   A_1 )
          ( −A_1   A_2   A_1  −A_2 )                                        (4.40)
          ( −A_2  −A_1   A_2   A_1 )
Output: The Williamson–Hadamard matrix H24 of order 24.
Example 4.1.3: Construction of a Williamson–Hadamard matrix of order 40.
Step 1. Input the matrices A0 = (1), B0 = (1), C0 = (1), D0 = (1), and
$$A = B = \begin{pmatrix} + & - & - & - & - \\ - & + & - & - & - \\ - & - & + & - & - \\ - & - & - & + & - \\ - & - & - & - & + \end{pmatrix}, \quad C = \begin{pmatrix} + & + & - & - & + \\ + & + & + & - & - \\ - & + & + & + & - \\ - & - & + & + & + \\ + & - & - & + & + \end{pmatrix}, \quad D = \begin{pmatrix} + & - & + & + & - \\ - & + & - & + & + \\ + & - & + & - & + \\ + & + & - & + & - \\ - & + & + & - & + \end{pmatrix}. \tag{4.41}$$
Step 2. Construct the matrices
$$A_1 = A_3 = \begin{pmatrix} A & C \\ D & -B \end{pmatrix}, \qquad A_2 = A_4 = \begin{pmatrix} B & D \\ C & -A \end{pmatrix}, \tag{4.42}$$
i.e.,
$$A_1 = A_3 = \begin{pmatrix}
+ & - & - & - & - & + & + & - & - & + \\
- & + & - & - & - & + & + & + & - & - \\
- & - & + & - & - & - & + & + & + & - \\
- & - & - & + & - & - & - & + & + & + \\
- & - & - & - & + & + & - & - & + & + \\
+ & - & + & + & - & - & + & + & + & + \\
- & + & - & + & + & + & - & + & + & + \\
+ & - & + & - & + & + & + & - & + & + \\
+ & + & - & + & - & + & + & + & - & + \\
- & + & + & - & + & + & + & + & + & -
\end{pmatrix}, \quad
A_2 = A_4 = \begin{pmatrix}
+ & - & - & - & - & + & - & + & + & - \\
- & + & - & - & - & - & + & - & + & + \\
- & - & + & - & - & + & - & + & - & + \\
- & - & - & + & - & + & + & - & + & - \\
- & - & - & - & + & - & + & + & - & + \\
+ & + & - & - & + & - & + & + & + & + \\
+ & + & + & - & - & + & - & + & + & + \\
- & + & + & + & - & + & + & - & + & + \\
- & - & + & + & + & + & + & + & - & + \\
+ & - & - & + & + & + & + & + & + & -
\end{pmatrix}. \tag{4.43}$$
Step 3. Substitute matrices A1, A2, A3, A4 into the Williamson array.
Table 4.2 Values of n for which Williamson matrices of order 2n exist.
35, 37, 39, 43, 49, 51, 55, 63, 69, 77, 81, 85, 87, 93, 95, 99, 105, 111, 115, 117, 119, 121, 125, 129, 133, 135, 143, 145, 147, 155, 161, 165, 169, 171, 175, 185, 187, 189, 195, 203, 207, 209, 215, 217, 221, 225, 231, 243, 247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 333, 335, 341, 345, 351, 357, 361, 363, 377, 387, 391, 403, 405, 407, 425, 429, 437, 441, 455, 459, 465, 473, 475, 481, 483, 495, 513, 525, 527, 529, 551, 559, 561, 567, 575, 589, 609, 621, 625, 627, 637, 645, 651, 667, 675, 693, 713, 725, 729, 731, 751, 759, 775, 777, 783, 817, 819, 825, 837, 851, 891, 899, 903, 925, 957, 961, 989, 1023, 1073, 1075, 1081, 1089, 1147, 1161, 1221, 1247, 1333, 1365, 1419, 1547, 1591, 1729, 1849, 2013
Step 4. Output the Williamson–Hadamard matrix of order 40:
$$H_{40} = \begin{pmatrix} A_1 & A_2 & A_1 & A_2 \\ -A_2 & A_1 & -A_2 & A_1 \\ -A_1 & A_2 & A_1 & -A_2 \\ -A_2 & -A_1 & A_2 & A_1 \end{pmatrix}. \tag{4.44}$$
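Example 4.1.3 is easy to check numerically: A and B are 2I − J of order 5, and C and D are the circulants of Eq. (4.41). The sketch below (an illustration with a hypothetical `circulant` helper, not the book's code) assembles H40 and verifies that it is Hadamard.

```python
import numpy as np

def circulant(row):
    # Circulant matrix whose k'th row is the first row cyclically shifted k places.
    return np.array([np.roll(row, k) for k in range(len(row))])

I5, J5 = np.eye(5, dtype=int), np.ones((5, 5), dtype=int)
A = B = 2 * I5 - J5                    # Eq. (4.41)
C = circulant([1, 1, -1, -1, 1])       # first row of C in Eq. (4.41)
D = circulant([1, -1, 1, 1, -1])       # first row of D in Eq. (4.41)

# Step 2 [Eq. (4.42)]: the order-10 matrices A1 = A3 and A2 = A4.
A1 = np.block([[A, C], [D, -B]])
A2 = np.block([[B, D], [C, -A]])

# Steps 3-4 [Eq. (4.44)]: substitute into the Williamson array.
H40 = np.block([[ A1,  A2,  A1,  A2],
                [-A2,  A1, -A2,  A1],
                [-A1,  A2,  A1, -A2],
                [-A2, -A1,  A2,  A1]])
assert np.array_equal(H40 @ H40.T, 40 * np.eye(40, dtype=int))
```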
Corollary 4.1.1:14,15 Williamson matrices of orders $2^{i-1}n_1n_2\cdots n_i$ exist, where $n_i \in L$, i = 1, 2, .... In particular, Williamson matrices of order 2n exist, where the values of n are given in Table 4.2. From Corollary 4.1.1 and Williamson's theorem,2,3 the following emerges:
Corollary 4.1.2: If Williamson matrices of orders $n_1, n_2, \ldots, n_k$ exist, then a Williamson–Hadamard matrix of order $2^{k+1}n_1n_2\cdots n_k$ exists.
Note that, as follows from the list of orders of existing Williamson matrices, the existence of Williamson-type matrices of order n also implies the existence of Williamson matrices of order 2n. The value of Corollary 4.1.1 is as follows: although the existence of Williamson matrices of order n may be unknown, according to Corollary 4.1.1, Williamson-type matrices of order 2n nevertheless exist.
Lemma 4.1.2: If $p \equiv 1\ (\mathrm{mod}\ 4)$ is a power of a prime number, then symmetric (0, ±1) matrices of order p + 1 satisfying the conditions of Eq. (4.19) exist.
Proof: In Ref. 8, the existence of cyclic symmetric Williamson matrices of order (p + 1)/2, represented as $I + A_1$, $I - A_1$, $B_1$, $B_1$, was proved for $p \equiv 1\ (\mathrm{mod}\ 4)$ a prime power. In this case, we can represent the matrices in Eq. (4.18) as
$$X = \begin{pmatrix} I & B_1 \\ B_1 & -I \end{pmatrix}, \qquad Y = \begin{pmatrix} A_1 & 0 \\ 0 & A_1 \end{pmatrix}. \tag{4.45}$$
It is evident that these are symmetric matrices, and we can easily check that they satisfy all conditions of Eq. (4.19). Now, from Theorem 4.1.2 and Lemma 4.1.2, we obtain the following.
Corollary 4.1.3: If symmetric Williamson matrices of order n exist, then symmetric Williamson matrices of order n(p + 1) also exist, where $p \equiv 1\ (\mathrm{mod}\ 4)$ is a prime power. Below, several orders of existing symmetric Williamson-type matrices of order 2n are given, where $n \in W_2 = \{k_1 \cdot k_2\}$, $k_1 \in W$, and $k_2 \in \{5, 9, 13, 17, 25, 29, 37, 41, 49, 53, 61, 73, 81\}$.
Examples of symmetric Williamson-type matrices of orders 10 and 18 are given as follows:
$$A_{10} = B_{10} = \begin{pmatrix}
+ & + & - & - & + & + & - & - & - & - \\
+ & + & + & - & - & - & + & - & - & - \\
- & + & + & + & - & - & - & + & - & - \\
- & - & + & + & + & - & - & - & + & - \\
+ & - & - & + & + & - & - & - & - & + \\
+ & - & - & - & - & - & + & - & - & + \\
- & + & - & - & - & + & - & + & - & - \\
- & - & + & - & - & - & + & - & + & - \\
- & - & - & + & - & - & - & + & - & + \\
- & - & - & - & + & + & - & - & + & -
\end{pmatrix}, \quad
C_{10} = D_{10} = \begin{pmatrix}
+ & - & + & + & - & + & - & - & - & - \\
- & + & - & + & + & - & + & - & - & - \\
+ & - & + & - & + & - & - & + & - & - \\
+ & + & - & + & - & - & - & - & + & - \\
- & + & + & - & + & - & - & - & - & + \\
+ & - & - & - & - & - & - & + & + & - \\
- & + & - & - & - & - & - & - & + & + \\
- & - & + & - & - & + & - & - & - & + \\
- & - & - & + & - & + & + & - & - & - \\
- & - & - & - & + & - & + & + & - & -
\end{pmatrix}, \tag{4.46}$$
$$A_{18} = B_{18} = \begin{pmatrix}
+ & + & + & + & - & - & + & + & + & + & + & - & + & - & - & + & - & + \\
+ & + & + & + & + & - & - & + & + & + & + & + & - & + & - & - & + & - \\
+ & + & + & + & + & + & - & - & + & - & + & + & + & - & + & - & - & + \\
+ & + & + & + & + & + & + & - & - & + & - & + & + & + & - & + & - & - \\
- & + & + & + & + & + & + & + & - & - & + & - & + & + & + & - & + & - \\
- & - & + & + & + & + & + & + & + & - & - & + & - & + & + & + & - & + \\
+ & - & - & + & + & + & + & + & + & + & - & - & + & - & + & + & + & - \\
+ & + & - & - & + & + & + & + & + & - & + & - & - & + & - & + & + & + \\
+ & + & + & - & - & + & + & + & + & + & - & + & - & - & + & - & + & + \\
+ & + & - & + & - & - & + & - & + & - & + & + & + & - & - & + & + & + \\
+ & + & + & - & + & - & - & + & - & + & - & + & + & + & - & - & + & + \\
- & + & + & + & - & + & - & - & + & + & + & - & + & + & + & - & - & + \\
+ & - & + & + & + & - & + & - & - & + & + & + & - & + & + & + & - & - \\
- & + & - & + & + & + & - & + & - & - & + & + & + & - & + & + & + & - \\
- & - & + & - & + & + & + & - & + & - & - & + & + & + & - & + & + & + \\
+ & - & - & + & - & + & + & + & - & + & - & - & + & + & + & - & + & + \\
- & + & - & - & + & - & + & + & + & + & + & - & - & + & + & + & - & + \\
+ & - & + & - & - & + & - & + & + & + & + & + & - & - & + & + & + & -
\end{pmatrix}, \tag{4.47a}$$
$$C_{18} = D_{18} = \begin{pmatrix}
+ & - & - & - & + & + & - & - & - & + & + & - & + & - & - & + & - & + \\
- & + & - & - & - & + & + & - & - & + & + & + & - & + & - & - & + & - \\
- & - & + & - & - & - & + & + & - & - & + & + & + & - & + & - & - & + \\
- & - & - & + & - & - & - & + & + & + & - & + & + & + & - & + & - & - \\
+ & - & - & - & + & - & - & - & + & - & + & - & + & + & + & - & + & - \\
+ & + & - & - & - & + & - & - & - & - & - & + & - & + & + & + & - & + \\
- & + & + & - & - & - & + & - & - & + & - & - & + & - & + & + & + & - \\
- & - & + & + & - & - & - & + & - & - & + & - & - & + & - & + & + & + \\
- & - & - & + & + & - & - & - & + & + & - & + & - & - & + & - & + & + \\
+ & + & - & + & - & - & + & - & + & - & - & - & - & + & + & - & - & - \\
+ & + & + & - & + & - & - & + & - & - & - & - & - & - & + & + & - & - \\
- & + & + & + & - & + & - & - & + & - & - & - & - & - & - & + & + & - \\
+ & - & + & + & + & - & + & - & - & - & - & - & - & - & - & - & + & + \\
- & + & - & + & + & + & - & + & - & + & - & - & - & - & - & - & - & + \\
- & - & + & - & + & + & + & - & + & + & + & - & - & - & - & - & - & - \\
+ & - & - & + & - & + & + & + & - & - & + & + & - & - & - & - & - & - \\
- & + & - & - & + & - & + & + & + & - & - & + & + & - & - & - & - & - \\
+ & - & + & - & - & + & - & + & + & - & - & - & + & + & - & - & - & -
\end{pmatrix}. \tag{4.47b}$$
Theorem 4.1.3: If A, B, C, D are cyclic Williamson matrices of order n, then the matrix
$$\begin{pmatrix}
A & A & A & B & -B & C & -C & -D & B & C & -D & -D \\
A & -A & B & -A & B & -D & D & -C & -B & -D & -C & -C \\
A & -B & -A & A & -D & D & -B & B & -C & -D & C & -C \\
B & A & -A & -A & D & D & D & C & C & -B & -B & -C \\
B & -D & D & D & A & A & A & C & -C & B & -C & B \\
B & C & -D & D & A & -A & C & -A & -D & C & B & -B \\
D & -C & B & -B & A & -C & -A & A & B & C & D & -D \\
-C & -D & -C & -D & C & A & -A & -A & -D & B & -B & -B \\
D & -C & -B & -B & -B & C & C & -D & A & A & A & D \\
-D & -B & C & C & C & B & B & -D & A & -A & D & -A \\
C & -B & -C & C & D & -B & -D & -B & A & -D & -A & A \\
-C & -D & -D & -D & C & -C & -B & B & B & D & -A & -A
\end{pmatrix} \tag{4.48}$$
is a Williamson–Hadamard matrix of order 12n.
Corollary 4.1.4: A Williamson–Hadamard matrix of order 12n exists, where n takes a value from Tables 4.1 and 4.2.
4.2 Construction of 8-Williamson Matrices
8-Williamson matrices are defined similarly to Williamson matrices as follows.
Definition: Square (+1, −1) matrices $A_i$, i = 1, 2, ..., 8 of order n are called 8-Williamson matrices of order n if the following conditions are satisfied:
$$A_iA_j^T = A_jA_i^T, \quad i, j = 1, 2, \ldots, 8, \qquad \sum_{i=1}^{8} A_iA_i^T = 8nI_n. \tag{4.49}$$
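The conditions of Eq. (4.49) are quick to verify numerically. The octet used below is an illustrative assumption (not taken from the text): with A1 = A2 = J (all ones) and A3 = ... = A8 = 2I − J of order 3, we get 2·3J + 6(4I − J) = 24I = 8·3·I.

```python
import numpy as np

# Hypothetical 8-Williamson octet of order 3 (illustration only):
# two copies of the all-ones matrix J and six copies of K = 2I - J.
J = np.ones((3, 3), dtype=int)
K = 2 * np.eye(3, dtype=int) - J
A = [J, J, K, K, K, K, K, K]

# First condition of Eq. (4.49): A_i A_j^T = A_j A_i^T for all pairs.
assert all(np.array_equal(Ai @ Aj.T, Aj @ Ai.T) for Ai in A for Aj in A)
# Second condition: the squares sum to 8 n I_n with n = 3.
assert np.array_equal(sum(Ai @ Ai.T for Ai in A), 24 * np.eye(3, dtype=int))
```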
A Williamson array of order 8 is also known, making it possible to construct a Hadamard matrix of order 8n whenever 8-Williamson matrices of order n exist.
Theorem 4.2.1:3 If A, B, ..., G, H are 8-Williamson matrices of order n, then
$$\begin{pmatrix}
A & B & C & D & E & F & G & H \\
-B & A & D & -C & F & -E & -H & G \\
-C & -D & A & B & G & H & -E & -F \\
-D & C & -B & A & H & -G & F & -E \\
-E & -F & -G & -H & A & B & C & D \\
-F & E & -H & G & -B & A & -D & C \\
-G & H & E & -F & -C & D & A & -B \\
-H & -G & F & E & -D & -C & B & A
\end{pmatrix} \tag{4.50}$$
is a Williamson–Hadamard matrix of order 8n.
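A quick sanity check of the array in Eq. (4.50): with the trivial 8-Williamson matrices A = B = ... = H = (1) of order n = 1, the array itself must be a Hadamard matrix of order 8.

```python
import numpy as np

# Trivial order-1 8-Williamson matrices.
A = B = C = D = E = F = G = H = np.array([[1]])

# The array of Eq. (4.50).
W = np.block([
    [ A,  B,  C,  D,  E,  F,  G,  H],
    [-B,  A,  D, -C,  F, -E, -H,  G],
    [-C, -D,  A,  B,  G,  H, -E, -F],
    [-D,  C, -B,  A,  H, -G,  F, -E],
    [-E, -F, -G, -H,  A,  B,  C,  D],
    [-F,  E, -H,  G, -B,  A, -D,  C],
    [-G,  H,  E, -F, -C,  D,  A, -B],
    [-H, -G,  F,  E, -D, -C,  B,  A]])
assert np.array_equal(W @ W.T, 8 * np.eye(8, dtype=int))  # Hadamard of order 8
```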
Theorem 4.2.2: (Multiplicative theorem for 8-Williamson matrices14–16) If Williamson–Hadamard matrices of orders 4n and 4m exist, then a Williamson–Hadamard matrix of order 8mn also exists.
Proof: Let $A_1, A_2, A_3, A_4$ and $P_1, P_2, P_3, P_4$ be Williamson matrices of orders n and m, respectively. Consider the following matrices:
$$\begin{aligned}
X_1 &= P_1 \otimes \frac{A_1+A_2}{2} - P_2 \otimes \frac{A_1-A_2}{2}, &
X_2 &= P_1 \otimes \frac{A_1-A_2}{2} + P_2 \otimes \frac{A_1+A_2}{2},\\
X_3 &= P_3 \otimes \frac{A_1+A_2}{2} - P_4 \otimes \frac{A_1-A_2}{2}, &
X_4 &= P_3 \otimes \frac{A_1-A_2}{2} + P_4 \otimes \frac{A_1+A_2}{2},\\
X_5 &= P_1 \otimes \frac{A_3+A_4}{2} - P_2 \otimes \frac{A_3-A_4}{2}, &
X_6 &= P_1 \otimes \frac{A_3-A_4}{2} + P_2 \otimes \frac{A_3+A_4}{2},\\
X_7 &= P_3 \otimes \frac{A_3-A_4}{2} - P_4 \otimes \frac{A_3+A_4}{2}, &
X_8 &= P_3 \otimes \frac{A_3+A_4}{2} + P_4 \otimes \frac{A_3-A_4}{2}.
\end{aligned} \tag{4.51}$$
Below, we check that $X_i$, i = 1, 2, ..., 8 are 8-Williamson matrices of order mn, i.e., that the conditions of Eq. (4.49) are satisfied. Check the first condition:
$$\begin{aligned}
X_1X_2^T &= P_1P_1^T \otimes \frac{A_1A_1^T - A_2A_2^T}{4} + P_1P_2^T \otimes \frac{A_1A_1^T + 2A_1A_2^T + A_2A_2^T}{4}\\
&\quad - P_2P_2^T \otimes \frac{A_1A_1^T - A_2A_2^T}{4} - P_2P_1^T \otimes \frac{A_1A_1^T - 2A_1A_2^T + A_2A_2^T}{4},\\
X_2X_1^T &= P_1P_1^T \otimes \frac{A_1A_1^T - A_2A_2^T}{4} - P_1P_2^T \otimes \frac{A_1A_1^T - 2A_1A_2^T + A_2A_2^T}{4}\\
&\quad - P_2P_2^T \otimes \frac{A_1A_1^T - A_2A_2^T}{4} + P_2P_1^T \otimes \frac{A_1A_1^T + 2A_1A_2^T + A_2A_2^T}{4}.
\end{aligned} \tag{4.52}$$
Comparing the obtained expressions, we conclude that $X_1X_2^T = X_2X_1^T$. Similarly, it can be shown that
$$X_iX_j^T = X_jX_i^T, \quad i, j = 1, 2, \ldots, 8. \tag{4.53}$$
Now we check the second condition of Eq. (4.49). With this purpose, we calculate
$$\begin{aligned}
X_1X_1^T &= P_1P_1^T \otimes \frac{(A_1+A_2)(A_1+A_2)^T}{4} - P_1P_2^T \otimes \frac{A_1A_1^T - A_2A_2^T}{2} + P_2P_2^T \otimes \frac{(A_1-A_2)(A_1-A_2)^T}{4},\\
X_2X_2^T &= P_1P_1^T \otimes \frac{(A_1-A_2)(A_1-A_2)^T}{4} + P_1P_2^T \otimes \frac{A_1A_1^T - A_2A_2^T}{2} + P_2P_2^T \otimes \frac{(A_1+A_2)(A_1+A_2)^T}{4},\\
X_3X_3^T &= P_3P_3^T \otimes \frac{(A_1+A_2)(A_1+A_2)^T}{4} - P_3P_4^T \otimes \frac{A_1A_1^T - A_2A_2^T}{2} + P_4P_4^T \otimes \frac{(A_1-A_2)(A_1-A_2)^T}{4},\\
X_4X_4^T &= P_3P_3^T \otimes \frac{(A_1-A_2)(A_1-A_2)^T}{4} + P_3P_4^T \otimes \frac{A_1A_1^T - A_2A_2^T}{2} + P_4P_4^T \otimes \frac{(A_1+A_2)(A_1+A_2)^T}{4}.
\end{aligned} \tag{4.54}$$
Summing the above expressions, we find that
$$\sum_{i=1}^{4} X_iX_i^T = \sum_{i=1}^{4} P_iP_i^T \otimes \frac{(A_1+A_2)(A_1+A_2)^T + (A_1-A_2)(A_1-A_2)^T}{4}. \tag{4.55}$$
But $(A_1+A_2)(A_1+A_2)^T + (A_1-A_2)(A_1-A_2)^T = 2(A_1A_1^T + A_2A_2^T)$. Thus, from Eq. (4.55) we have
$$\sum_{i=1}^{4} X_iX_i^T = \frac{1}{2}\sum_{i=1}^{4} P_iP_i^T \otimes \left(A_1A_1^T + A_2A_2^T\right). \tag{4.56}$$
From Eq. (4.51), we similarly obtain
$$\sum_{i=5}^{8} X_iX_i^T = \frac{1}{2}\sum_{i=1}^{4} P_iP_i^T \otimes \left(A_3A_3^T + A_4A_4^T\right). \tag{4.57}$$
Summing both equalities [Eqs. (4.56) and (4.57)], we find that
$$\sum_{i=1}^{8} X_iX_i^T = \frac{1}{2}\sum_{i=1}^{4} P_iP_i^T \otimes \sum_{i=1}^{4} A_iA_i^T. \tag{4.58}$$
Because $A_i$ and $P_i$, i = 1, 2, 3, 4 are Williamson matrices of orders n and m, respectively,
$$\sum_{i=1}^{4} A_iA_i^T = 4nI_n, \qquad \sum_{i=1}^{4} P_iP_i^T = 4mI_m. \tag{4.59}$$
Now, substituting the last expressions into Eq. (4.58), we conclude that
$$\sum_{i=1}^{8} X_iX_i^T = 8mnI_{mn}. \tag{4.60}$$
Substituting matrices Xi into Eq. (4.50), we obtain a Williamson–Hadamard matrixof order 8mn.
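A minimal instance of the construction in Eq. (4.51) can be checked directly. The sketch below (an illustration, not the book's code) takes the order-3 Williamson quadruple A1 = J, A2 = A3 = A4 = 2I − J and the trivial order-1 quadruple P1 = ... = P4 = (1), and verifies both conditions of Eq. (4.49) for the resulting X1, ..., X8.

```python
import numpy as np

J = np.ones((3, 3), dtype=int)
A1, A2 = J, 2 * np.eye(3, dtype=int) - J
A3, A4 = A2.copy(), A2.copy()
P1 = P2 = P3 = P4 = np.array([[1]])

# Half-sum and half-difference blocks used throughout Eq. (4.51).
S12, D12 = (A1 + A2) // 2, (A1 - A2) // 2
S34, D34 = (A3 + A4) // 2, (A3 - A4) // 2

X1 = np.kron(P1, S12) - np.kron(P2, D12)
X2 = np.kron(P1, D12) + np.kron(P2, S12)
X3 = np.kron(P3, S12) - np.kron(P4, D12)
X4 = np.kron(P3, D12) + np.kron(P4, S12)
X5 = np.kron(P1, S34) - np.kron(P2, D34)
X6 = np.kron(P1, D34) + np.kron(P2, S34)
X7 = np.kron(P3, D34) - np.kron(P4, S34)
X8 = np.kron(P3, S34) + np.kron(P4, D34)
Xs = [X1, X2, X3, X4, X5, X6, X7, X8]

# First condition of Eq. (4.49): pairwise X_i X_j^T = X_j X_i^T.
assert all(np.array_equal(Xi @ Xj.T, Xj @ Xi.T) for Xi in Xs for Xj in Xs)
# Second condition: the squares sum to 8 m n I_mn = 24 I_3.
assert np.array_equal(sum(Xi @ Xi.T for Xi in Xs), 24 * np.eye(3, dtype=int))
```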
Algorithm 4.2.1: Construction of a Williamson–Hadamard matrix of order 24n.
Input: Take matrices
$$A_1 = \begin{pmatrix} + & + & + \\ + & + & + \\ + & + & + \end{pmatrix}, \qquad A_2 = A_3 = A_4 = \begin{pmatrix} + & - & - \\ - & + & - \\ - & - & + \end{pmatrix} \tag{4.61}$$
and Williamson matrices P1, P2, P3, P4 of order n.
Step 1. Substitute the matrices $A_i$ and $P_i$, i = 1, 2, 3, 4 into the formula in Eq. (4.51) to find the matrices
$$X_1 = \begin{pmatrix} P_1 & -P_2 & -P_2 \\ -P_2 & P_1 & -P_2 \\ -P_2 & -P_2 & P_1 \end{pmatrix}, \quad
X_2 = \begin{pmatrix} P_2 & P_1 & P_1 \\ P_1 & P_2 & P_1 \\ P_1 & P_1 & P_2 \end{pmatrix}, \quad
X_4 = \begin{pmatrix} P_2 & P_2 & P_2 \\ P_2 & P_2 & P_2 \\ P_2 & P_2 & P_2 \end{pmatrix},$$
$$X_5 = \begin{pmatrix} P_1 & -P_1 & -P_1 \\ -P_1 & P_1 & -P_1 \\ -P_1 & -P_1 & P_1 \end{pmatrix}, \quad
X_3 = X_6 = X_7 = X_8 = \begin{pmatrix} P_2 & -P_2 & -P_2 \\ -P_2 & P_2 & -P_2 \\ -P_2 & -P_2 & P_2 \end{pmatrix}. \tag{4.62}$$
Step 2. Substitute the matrices $X_i$ into the array in Eq. (4.50) to obtain a Williamson–Hadamard matrix of order 24n:
$$\begin{pmatrix}
X_1 & X_2 & X_3 & X_4 & X_5 & X_3 & X_3 & X_3 \\
-X_2 & X_1 & X_4 & -X_3 & X_3 & -X_5 & -X_3 & X_3 \\
-X_3 & -X_4 & X_1 & X_2 & X_3 & X_3 & -X_5 & -X_3 \\
-X_4 & X_3 & -X_2 & X_1 & X_3 & -X_3 & X_3 & -X_5 \\
-X_5 & -X_3 & -X_3 & -X_3 & X_1 & X_2 & X_3 & X_4 \\
-X_3 & X_5 & -X_3 & X_3 & -X_2 & X_1 & -X_4 & X_3 \\
-X_3 & X_3 & X_5 & -X_3 & -X_3 & X_4 & X_1 & -X_2 \\
-X_3 & -X_3 & X_3 & X_5 & -X_4 & -X_3 & X_2 & X_1
\end{pmatrix}. \tag{4.63}$$
Output: A Williamson–Hadamard matrix of order 24n.
In particular, 8-Williamson matrices of order 9 are represented as
$$X_1 = \begin{pmatrix}
+ & + & + & - & + & + & - & + & + \\
+ & + & + & + & - & + & + & - & + \\
+ & + & + & + & + & - & + & + & - \\
- & + & + & + & + & + & - & + & + \\
+ & - & + & + & + & + & + & - & + \\
+ & + & - & + & + & + & + & + & - \\
- & + & + & - & + & + & + & + & + \\
+ & - & + & + & - & + & + & + & + \\
+ & + & - & + & + & - & + & + & +
\end{pmatrix}, \quad
X_2 = \begin{pmatrix}
+ & - & - & + & + & + & + & + & + \\
- & + & - & + & + & + & + & + & + \\
- & - & + & + & + & + & + & + & + \\
+ & + & + & + & - & - & + & + & + \\
+ & + & + & - & + & - & + & + & + \\
+ & + & + & - & - & + & + & + & + \\
+ & + & + & + & + & + & + & - & - \\
+ & + & + & + & + & + & - & + & - \\
+ & + & + & + & + & + & - & - & +
\end{pmatrix},$$
$$X_4 = \begin{pmatrix}
+ & - & - & + & - & - & + & - & - \\
- & + & - & - & + & - & - & + & - \\
- & - & + & - & - & + & - & - & + \\
+ & - & - & + & - & - & + & - & - \\
- & + & - & - & + & - & - & + & - \\
- & - & + & - & - & + & - & - & + \\
+ & - & - & + & - & - & + & - & - \\
- & + & - & - & + & - & - & + & - \\
- & - & + & - & - & + & - & - & +
\end{pmatrix}, \quad
X_5 = \begin{pmatrix}
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & +
\end{pmatrix},$$
$$X_3 = X_6 = X_7 = X_8 = \begin{pmatrix}
+ & - & - & - & + & + & - & + & + \\
- & + & - & + & - & + & + & - & + \\
- & - & + & + & + & - & + & + & - \\
- & + & + & + & - & - & - & + & + \\
+ & - & + & - & + & - & + & - & + \\
+ & + & - & - & - & + & + & + & - \\
- & + & + & - & + & + & + & - & - \\
+ & - & + & + & - & + & - & + & - \\
+ & + & - & + & + & - & - & - & +
\end{pmatrix}. \tag{4.64}$$
From Theorem 4.2.2 and Corollary 4.1.3, we have the following.
Table 4.3 Orders of 8-Williamson matrices.
3, 5, ..., 39, 43, 45, 49, 51, 55, 57, 63, 65, 69, 75, 77, 81, 85, 87, 91, 93, 95, 99, 105, 111, 115, 117, 119, 121, 125, 129, 133, 135, 143, 145, 147, 153, 155, 161, 165, 169, 171, 175, 185, 187, 189, 195, 203, 207, 209, 215, 217, 221, 225, 231, 243, 247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 325, 333, 341, 345, 351, 361, 375, 377, 387, 391, 399, 403, 405, 407, 425, 435, 437, 441, 455, 459, 473, 475, 481, 483, 493, 495, 513, 525, 527, 529, 551, 555, 559, 567, 575, 589, 609, 621, 625, 629, 637, 645, 651, 667, 675, 703, 713, 725, 729, 731, 775, 777, 783, 817, 819, 837, 841, 851, 899, 903, 925, 961, 989, 999, 1001, 1073
Corollary 4.2.1: (a) Symmetric 8-Williamson matrices of order mn exist, where
$$m, n \in W \cup \left\{\frac{p+1}{2}\right\}, \quad p \equiv 1\ (\mathrm{mod}\ 4) \text{ is a prime power.} \tag{4.65}$$
(b) 8-Williamson matrices of order mn exist, where
$$m, n \in W \cup L \cup \left\{\frac{p+1}{2}\right\}, \quad p \equiv 1\ (\mathrm{mod}\ 4) \text{ is a prime power.}$$
See Table 4.3 for examples. The following theorem also holds.
Theorem 4.2.3: Let A, B, C, D and A0, B0, C0, D0, E0, F0, G0, H0 be Williamson and 8-Williamson matrices of orders n and m, respectively. Then the matrices
$$\begin{aligned}
A_i &= A_{i-1} \otimes X + B_{i-1} \otimes Y, & B_i &= B_{i-1} \otimes X - A_{i-1} \otimes Y,\\
C_i &= C_{i-1} \otimes X + D_{i-1} \otimes Y, & D_i &= D_{i-1} \otimes X - C_{i-1} \otimes Y,\\
E_i &= E_{i-1} \otimes X + F_{i-1} \otimes Y, & F_i &= F_{i-1} \otimes X - E_{i-1} \otimes Y,\\
G_i &= G_{i-1} \otimes X + H_{i-1} \otimes Y, & H_i &= H_{i-1} \otimes X - G_{i-1} \otimes Y
\end{aligned} \tag{4.66}$$
are 8-Williamson matrices of order $(2n)^i m$, i = 1, 2, ..., where X and Y have the form of Eq. (4.18).
From Corollary 4.2.1 and Theorem 4.2.3, we conclude that there are eight Williamson-type matrices of order 2mn, where
$$m \in W_8, \quad n \in W \cup L. \tag{4.67}$$
4.3 Williamson Matrices from Regular Sequences
In this section, we describe the construction of Williamson and generalized Williamson matrices based on regular sequences.
Definition 4.3.1:17 A sequence of (+1, −1) matrices $\{Q_i\}_{i=1}^{2s}$ of order m is called a semiregular s-sequence if the following conditions are satisfied:
$$\begin{aligned}
&Q_iQ_j^T = J, \quad i - j \neq 0, \pm s, \quad i, j = 1, 2, \ldots, 2s,\\
&Q_iQ_{i+s}^T = Q_{i+s}Q_i^T, \quad i = 1, 2, \ldots, s,\\
&\sum_{i=1}^{2s} Q_iQ_i^T = 2smI_m.
\end{aligned} \tag{4.68}$$
Definition 4.3.2:18,19 A sequence of (+1, −1) matrices $\{A_i\}_{i=1}^{s}$ of order m is called a regular s-sequence if
$$\begin{aligned}
&A_iA_j = J, \quad i \neq j, \quad i, j = 1, 2, \ldots, s,\\
&A_i^TA_j = A_jA_i^T, \quad i, j = 1, 2, \ldots, s,\\
&\sum_{i=1}^{s} \left(A_iA_i^T + A_i^TA_i\right) = 2smI_m.
\end{aligned} \tag{4.69}$$
Remark: From the conditions of Eq. (4.69), we can obtain matrices $A_i$, i = 1, 2, ..., s that also satisfy $A_iJ = JA_j^T = aJ$, i, j = 1, 2, ..., s, where a is an integer.
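The conditions of Eq. (4.69) translate directly into a small checker. The example fed to it below is an assumption for illustration: for s = 1 the first condition is vacuous, and any Hadamard matrix (here of order 2) forms a trivial regular 1-sequence.

```python
import numpy as np

def is_regular_sequence(mats):
    # Check the three conditions of Eq. (4.69) for a candidate s-sequence.
    s, m = len(mats), mats[0].shape[0]
    J = np.ones((m, m), dtype=int)
    pairs_ok = all(np.array_equal(mats[i] @ mats[j], J)
                   for i in range(s) for j in range(s) if i != j)
    sym_ok = all(np.array_equal(A.T @ B, B @ A.T) for A in mats for B in mats)
    total = sum(A @ A.T + A.T @ A for A in mats)
    return pairs_ok and sym_ok and np.array_equal(
        total, 2 * s * m * np.eye(m, dtype=int))

H2 = np.array([[1, 1], [1, -1]])   # order-2 Hadamard matrix
assert is_regular_sequence([H2])   # trivial regular 1-sequence
```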
Lemma 4.3.1: If a regular s-sequence exists, then a semiregular s-sequence also exists.
Proof: Let $\{A_i\}_{i=1}^{s}$ be a regular s-sequence of matrices of order m. It is not difficult to check that the sequence $\{Q_i\}_{i=1}^{2s}$ with $Q_i = A_i$ and $Q_{i+s} = A_i^T$, i = 1, 2, ..., s, is a semiregular s-sequence.
Remark: A regular 2-sequence $(A_1, A_2)$ exists of the form19
$$A_1 = \begin{pmatrix} B_1 & B_1U & B_1U^2 \\ B_1U & B_1U^2 & B_1 \\ B_1U^2 & B_1 & B_1U \end{pmatrix}, \qquad
A_2 = \begin{pmatrix} B_2 & B_2 & B_2 \\ B_2U & B_2U & B_2U \\ B_2U^2 & B_2U^2 & B_2U^2 \end{pmatrix}, \tag{4.70}$$
where
$$B_1 = \begin{pmatrix} + & + & - \\ - & + & + \\ + & - & + \end{pmatrix}, \qquad B_2 = \begin{pmatrix} + & + & - \\ + & + & - \\ + & + & - \end{pmatrix}; \tag{4.71}$$
i.e.,
$$A_1 = \begin{pmatrix}
+ & + & - & + & + & - & + & + & - \\
- & + & + & - & + & + & - & + & + \\
+ & - & + & + & - & + & + & - & + \\
- & + & + & + & - & + & + & + & - \\
+ & - & + & + & + & - & - & + & + \\
+ & + & - & - & + & + & + & - & + \\
+ & - & + & + & + & - & - & + & + \\
+ & + & - & - & + & + & + & - & + \\
- & + & + & + & - & + & + & + & -
\end{pmatrix}, \quad
A_2 = \begin{pmatrix}
+ & + & - & + & + & - & + & + & - \\
+ & + & - & + & + & - & + & + & - \\
+ & + & - & + & + & - & + & + & - \\
- & + & + & - & + & + & - & + & + \\
- & + & + & - & + & + & - & + & + \\
- & + & + & - & + & + & - & + & + \\
+ & - & + & + & - & + & + & - & + \\
+ & - & + & + & - & + & + & - & + \\
+ & - & + & + & - & + & + & - & +
\end{pmatrix}. \tag{4.72}$$
Some known results are now provided.
Theorem 4.3.1: Let $p \equiv 1\ (\mathrm{mod}\ 4)$ and $q \equiv 3\ (\mathrm{mod}\ 4)$ be prime powers. Then,
(a) a semiregular (p + 1)-sequence of matrices of order $p^2$ exists,18 and
(b) a regular (q + 1)/2-sequence of matrices of order $q^2$ exists.19
In particular, from this theorem we obtain the existence of a semiregular (n + 1)-sequence of matrices of order $n^2$ and a regular (m + 1)/2-sequence of matrices of order $m^2$, where
$$\begin{aligned}
n &\in R_1 = \{5, 9, 13, 17, 25, 29, 37, 41, 49, 53, 61, 73, 81, 89, 97\},\\
m &\in R_2 = \{3, 7, 11, 19, 23, 27, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 119, 127, 131, 139, 151, 163, 167, 179, 191\}.
\end{aligned} \tag{4.73}$$
Theorem 4.3.2:17,19 If a regular s-sequence of matrices of order m and a regular sm-sequence of matrices of order n exist, then a regular s-sequence of matrices of order mn exists.
Proof: Let $\{A_1 = (a^1_{i,j})_{i,j=1}^{m}, A_2 = (a^2_{i,j})_{i,j=1}^{m}, \ldots, A_s = (a^s_{i,j})_{i,j=1}^{m}\}$ be a regular s-sequence of matrices of order m, and let $\{B_1, B_2, \ldots, B_t\}$ (t = sm) be a regular t-sequence of matrices of order n. Denoting
$$C_k = \left(c^k_{i,j}\right) = \left(a^k_{i,j}B_{(k-1)m+i+j-1}\right)_{i,j=1}^{m}, \quad k = 1, 2, \ldots, s, \tag{4.74}$$
or
$$C_k = \begin{pmatrix}
a^k_{11}B_{(k-1)m+1} & a^k_{12}B_{(k-1)m+2} & \cdots & a^k_{1m}B_{km} \\
a^k_{21}B_{(k-1)m+2} & a^k_{22}B_{(k-1)m+3} & \cdots & a^k_{2m}B_{(k-1)m+1} \\
\vdots & \vdots & \ddots & \vdots \\
a^k_{m1}B_{km} & a^k_{m2}B_{(k-1)m+1} & \cdots & a^k_{mm}B_{km-1}
\end{pmatrix}, \tag{4.75}$$
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
176 Chapter 4
we can show that $\{C_k\}_{k=1}^{s}$ is a regular s-sequence of matrices of order mn. From Lemma 4.3.1 and Theorem 4.3.2, we also obtain the following.
Corollary 4.3.1: (a) If a semiregular s-sequence of matrices of order m and a semiregular (regular) sm-sequence of matrices of order n exist, then a semiregular s-sequence of matrices of order mn exists.
(b) If $q \equiv 3\ (\mathrm{mod}\ 4)$ and $(q + 1)q^2 - 1 \equiv 3\ (\mathrm{mod}\ 4)$ are prime powers, then a regular (q + 1)/2-sequence of matrices of order $[(q + 1)q^2 - 1]^2q^2$ exists.
Proof: Indeed, according to Theorem 4.3.1, a regular (q + 1)/2-sequence of matrices of order $q^2$ and a regular $[(q + 1)/2]q^2$-sequence of matrices of order $[(q + 1)q^2 - 1]^2$ exist. Now, according to Theorem 4.3.2, a regular (q + 1)/2-sequence of matrices of order $[(q + 1)q^2 - 1]^2q^2$ exists. In particular, there are regular 12-, 20-, and 28-sequences of matrices of orders $11^2 \cdot 1451^2$, $19^2 \cdot 7219^2$, and $27^2 \cdot 20{,}411^2$, respectively. Note that if $q \equiv 3\ (\mathrm{mod}\ 4)$ is a prime power, then $(q + 1)q^2 - 1 \equiv 3\ (\mathrm{mod}\ 4)$.
Theorem 4.3.3: If regular 2-sequences of matrices of orders m and n exist, then a regular 2-sequence of matrices of order mn also exists.
Proof: Let $\{A_1, A_2\}$ and $\{B_1, B_2\}$ be regular 2-sequences of matrices of orders m and n, respectively. We can show that the matrices
$$\begin{aligned}
P_1 &= A_1 \otimes \frac{B_1+B_2}{2} + A_2 \otimes \frac{B_1-B_2}{2},\\
P_2 &= A_2 \otimes \frac{B_1+B_2}{2} + A_1 \otimes \frac{B_1-B_2}{2}
\end{aligned} \tag{4.76}$$
form a regular 2-sequence of matrices of order mn, i.e., they satisfy the conditions of Eq. (4.69).
Corollary 4.3.2: A regular 2-sequence of matrices of order $9^t$, t = 1, 2, ... exists.
Indeed, according to Theorem 4.3.1, a regular 2-sequence of matrices of order 9 exists, and Corollary 4.3.2 then follows from the previous theorem. Now we construct Williamson matrices using regular sequences.
Theorem 4.3.4:17,19 If Williamson matrices of order n and a regular (semiregular) 2n-sequence of matrices of order m exist, then Williamson matrices of order mn exist.
Next, we prove a theorem similar to Theorem 4.3.4 for the construction of 8-Williamson matrices.
Theorem 4.3.5: Let 8-Williamson matrices of order n and a regular 4n-sequence of matrices of order m exist. Then 8-Williamson matrices of order mn also exist.
Proof: Let $A_i = (a^i_{t,j})_{t,j=1}^{n}$, i = 1, 2, ..., 8 be 8-Williamson matrices of order n, and let $(Q_i)_{i=1}^{4n}$ be a regular sequence of matrices of order m. We introduce the following (+1, −1) matrices of order mn:
$$\begin{aligned}
X_1 &= \left(a^1_{i+1,j+1}Q_{i+j}\right)_{i,j=0}^{n-1}, &
X_2 &= \left(a^2_{i+1,j+1}Q_{n+i+j}\right)_{i,j=0}^{n-1},\\
X_3 &= \left(a^3_{i+1,j+1}Q^T_{i+j}\right)_{i,j=0}^{n-1}, &
X_4 &= \left(a^4_{i+1,j+1}Q^T_{n+i+j}\right)_{i,j=0}^{n-1},\\
X_5 &= \left(a^5_{i+1,j+1}Q_{2n+i+j}\right)_{i,j=0}^{n-1}, &
X_6 &= \left(a^6_{i+1,j+1}Q_{3n+i+j}\right)_{i,j=0}^{n-1},\\
X_7 &= \left(a^7_{i+1,j+1}Q^T_{2n+i+j}\right)_{i,j=0}^{n-1}, &
X_8 &= \left(a^8_{i+1,j+1}Q^T_{3n+i+j}\right)_{i,j=0}^{n-1},
\end{aligned} \tag{4.77}$$
where the subscript r is calculated by the formula r (mod n). We prove that the matrices $X_i$ are 8-Williamson matrices of order mn, i.e., that the conditions of Eq. (4.49) are satisfied. Calculate the (i, j)th element of the matrix $X_1X_2^T$:
of Eq. (4.50) are satisfied. Calculate the i’th and j’th element of a matrix X1XT2 :
X1XT2 (i, j) =
n−1∑k=0
a1i+1,k+1a2
j+1,k+1Qi+kQTn+ j+k = Jm
n−1∑k=0
a1i+1,k+1a2
j+1,k+1. (4.78)
We can see that $X_1X_2^T = X_2X_1^T$. We can also show that $X_iX_j^T = X_jX_i^T$ for all i, j = 1, 2, ..., 8. Now we prove the second condition of Eq. (4.49). With this purpose, we calculate the (i, j)th element P(i, j) of the matrix $\sum_{i=1}^{8} X_iX_i^T$:
$$\begin{aligned}
P(i, j) = \sum_{r=1}^{n}\Big(
&a^1_{i,r}a^1_{j,r}Q_{i+r-1}Q^T_{j+r-1} + a^2_{i,r}a^2_{j,r}Q_{n+i+r-1}Q^T_{n+j+r-1}\\
+\,&a^3_{i,r}a^3_{j,r}Q_{i+r-1}Q^T_{j+r-1} + a^4_{i,r}a^4_{j,r}Q_{n+i+r-1}Q^T_{n+j+r-1}\\
+\,&a^5_{i,r}a^5_{j,r}Q_{2n+i+r-1}Q^T_{2n+j+r-1} + a^6_{i,r}a^6_{j,r}Q_{3n+i+r-1}Q^T_{3n+j+r-1}\\
+\,&a^7_{i,r}a^7_{j,r}Q_{2n+i+r-1}Q^T_{2n+j+r-1} + a^8_{i,r}a^8_{j,r}Q_{3n+i+r-1}Q^T_{3n+j+r-1}\Big).
\end{aligned} \tag{4.79}$$
From the conditions of Eqs. (4.49) and (4.69) and the above relation, we obtain
$$\begin{aligned}
P(i, j) &= J_m\sum_{t=1}^{8}\sum_{r=1}^{n} a^t_{i,r}a^t_{j,r} = 0, \quad i \neq j,\\
P(i, i) &= \sum_{j=1}^{4n}\left(Q_jQ_j^T + Q_j^TQ_j\right) = 8mnI_m.
\end{aligned} \tag{4.80}$$
This means that the matrices $X_i$, i = 1, 2, ..., 8 are 8-Williamson matrices of order mn. From Theorems 4.3.1 and 4.3.5, we can also conclude that there are 8-Williamson matrices of orders $3 \cdot 23^2$, $9 \cdot 71^2$, $13 \cdot 103^2$, $15 \cdot 119^2$, $19 \cdot 151^2$, and $21 \cdot 167^2$.
Now we will construct generalized Williamson matrices.15,16 The Williamson method was modified in Refs. 3–6. Thus, instead of using the array in Eq. (4.2),
the method used the so-called Goethals–Seidel array,
$$\begin{pmatrix}
A_n & B_nR & C_nR & D_nR \\
-B_nR & A_n & -D_n^TR & C_n^TR \\
-C_nR & D_n^TR & A_n & -B_n^TR \\
-D_nR & -C_n^TR & B_n^TR & A_n
\end{pmatrix}, \quad \text{where } R = \begin{pmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{pmatrix}. \tag{4.81}$$
Cyclic matrices $A_n$, $B_n$, $C_n$, $D_n$ satisfying the second condition of Eq. (4.1) are called matrices of the Goethals–Seidel type.3
Theorem 4.3.6: (Goethals–Seidel3,6) If $A_n$, $B_n$, $C_n$, and $D_n$ are Goethals–Seidel-type matrices, then the Goethals–Seidel array gives a Hadamard matrix of order 4n.
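Theorem 4.3.6 can be illustrated numerically. The sketch below is an assumption-laden example, not the book's code: it uses the circulant choice A = J, B = C = D = 2I − J of order 3 (which satisfies the sum condition AA^T + BB^T + CC^T + DD^T = 12I) in a Goethals–Seidel array with R applied to B, C, and D in the first block row, and checks that the result is a Hadamard matrix of order 12.

```python
import numpy as np

n = 3
J, I = np.ones((n, n), dtype=int), np.eye(n, dtype=int)
A, B, C, D = J, 2 * I - J, 2 * I - J, 2 * I - J   # circulant, sum condition holds
R = np.fliplr(I)                                  # back-diagonal permutation

# Goethals-Seidel array (R on B, C, D in the first block row).
H = np.block([
    [ A,       B @ R,    C @ R,    D @ R  ],
    [-B @ R,   A,       -D.T @ R,  C.T @ R],
    [-C @ R,   D.T @ R,  A,       -B.T @ R],
    [-D @ R,  -C.T @ R,  B.T @ R,  A      ]])
assert np.array_equal(H @ H.T, 4 * n * np.eye(4 * n, dtype=int))  # order-12 Hadamard
```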
Definition: Square (+1, −1) matrices A, B, C, D of order n are called generalized Williamson matrices if
$$\begin{aligned}
&PQ = QP, \quad PRQ^T = QRP^T, \quad P, Q \in \{A, B, C, D\},\\
&AA^T + BB^T + CC^T + DD^T = 4nI_n.
\end{aligned} \tag{4.82}$$
Note that from the existence of Williamson matrices of order m and T -matrices oforder k, one can construct generalized Williamson matrices of order km.14
Definition:3,14,33 Cyclic (0, −1, +1) matrices $X_1, X_2, X_3, X_4$ of order k are called T-matrices if the conditions
$$\begin{aligned}
&X_i * X_j = 0, \quad i \neq j, \quad i, j = 1, 2, 3, 4,\\
&X_iX_j = X_jX_i, \quad i, j = 1, 2, 3, 4,\\
&X_1 + X_2 + X_3 + X_4 \text{ is a } (-1, +1) \text{ matrix},\\
&X_1X_1^T + X_2X_2^T + X_3X_3^T + X_4X_4^T = kI_k
\end{aligned} \tag{4.83}$$
are satisfied, where * denotes the Hadamard (pointwise) product.
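The four conditions of Eq. (4.83) are easy to test on a concrete candidate. The order-3 quadruple below is an assumption for illustration: the identity, the two nontrivial cyclic shifts, and the zero matrix.

```python
import numpy as np

P = np.roll(np.eye(3, dtype=int), 1, axis=1)  # cyclic shift matrix
X = [np.eye(3, dtype=int), P, P @ P, np.zeros((3, 3), dtype=int)]
k = 3

# Disjoint supports: X_i * X_j = 0 for i != j (Hadamard product).
assert all(np.all(X[i] * X[j] == 0) for i in range(4) for j in range(4) if i != j)
# Pairwise commutativity.
assert all(np.array_equal(X[i] @ X[j], X[j] @ X[i]) for i in range(4) for j in range(4))
# The sum X1 + X2 + X3 + X4 is a (-1, +1) matrix (here the all-ones matrix J).
assert np.all(np.abs(sum(X)) == 1)
# Sum of squares equals k I_k.
assert np.array_equal(sum(Xi @ Xi.T for Xi in X), k * np.eye(k, dtype=int))
```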
It can be proved that if $X_1, X_2, X_3, X_4$ are T-matrices of order k, then substitution of the matrices
$$\begin{aligned}
X &= A \otimes X_1 + B \otimes X_2 + C \otimes X_3 + D \otimes X_4,\\
Y &= -B \otimes X_1 + A \otimes X_2 - D \otimes X_3 + C \otimes X_4,\\
Z &= -C \otimes X_1 + D \otimes X_2 + A \otimes X_3 - B \otimes X_4,\\
W &= -D \otimes X_1 - C \otimes X_2 + B \otimes X_3 + A \otimes X_4
\end{aligned} \tag{4.84}$$
into the following array (a Goethals–Seidel array):
$$\begin{pmatrix}
X & YR & ZR & WR \\
-YR & X & -W^TR & Z^TR \\
-ZR & W^TR & X & -Y^TR \\
-WR & -Z^TR & Y^TR & X
\end{pmatrix} \tag{4.85}$$
gives a Baumert–Hall array (for more detail, see forthcoming chapters). There are infinite classes of T-matrices of orders $2^a10^b26^c + 1$, where a, b, c are nonnegative integers.
Theorem 4.3.7: If generalized Williamson matrices of order n and a regular 2n-sequence of matrices of order m exist, then generalized Williamson matrices of order mn exist.
Proof: Let $A = (a_{i,j})$, $B = (b_{i,j})$, $C = (c_{i,j})$, $D = (d_{i,j})$ be generalized Williamson matrices of order n, and let $\{Q_i\}_{i=0}^{2n-1}$ be a regular sequence of matrices of order m. The matrices A, B, C, D are represented by their rows as
$$A^T = \left(A_1^T, A_2^T, \ldots, A_n^T\right), \quad B^T = \left(B_1^T, B_2^T, \ldots, B_n^T\right), \quad C^T = \left(C_1^T, C_2^T, \ldots, C_n^T\right), \quad D^T = \left(D_1^T, D_2^T, \ldots, D_n^T\right). \tag{4.86}$$
Now, we can rewrite the conditions of Eq. (4.82) as
$$\begin{aligned}
&\sum_{k=1}^{n} p_{i,k}q_{k,j} = \sum_{k=1}^{n} q_{i,k}p_{k,j}, \quad p_{i,j}, q_{i,j} \in \left\{a_{i,j}, b_{i,j}, c_{i,j}, d_{i,j}\right\},\\
&P_i^1Q_j = Q_i^1P_j, \quad P_i, Q_i \in \{A_i, B_i, C_i, D_i\},\\
&A_iA_j + B_iB_j + C_iC_j + D_iD_j = \begin{cases} 0, & \text{if } i \neq j,\\ 4n, & \text{if } i = j, \end{cases} \quad i, j = 1, 2, \ldots, n,
\end{aligned} \tag{4.87}$$
where
$$P^1 = (p_n, p_{n-1}, \ldots, p_1), \quad \text{if } P = (p_1, p_2, \ldots, p_n). \tag{4.88}$$
We introduce the following matrices:
$$\begin{aligned}
X &= \left(a_{i,j}Q_{(n-i+j)\,\mathrm{mod}\,n}\right)_{i,j=0}^{n-1}, &
Y &= \left(b_{i,j}Q_{n+(n-i+j)\,\mathrm{mod}\,n}\right)_{i,j=0}^{n-1},\\
Z &= \left(c_{i,j}Q^T_{(n-i+j)\,\mathrm{mod}\,n}\right)_{i,j=0}^{n-1}, &
W &= \left(d_{i,j}Q^T_{n+(n-i+j)\,\mathrm{mod}\,n}\right)_{i,j=0}^{n-1}.
\end{aligned} \tag{4.89}$$
First, we prove that X, Y, Z, W are generalized Williamson matrices, i.e., that the conditions of Eq. (4.87) are satisfied. In what follows, we omit (mod n) in the indices. It is not difficult to see that the i'th block row of X and the j'th block column of Y have the form
$$\begin{aligned}
&a_{i,0}Q_{n-i},\ a_{i,1}Q_{n-i+1},\ \ldots,\ a_{i,n-1}Q_{2n-i-1},\\
&b_{0,j}Q_{n+(n+j)},\ b_{1,j}Q_{n+(n+j-1)},\ \ldots,\ b_{n-1,j}Q_{n+(j+1)}.
\end{aligned} \tag{4.90}$$
Hence, the (i, j)th block element of the matrix XY is represented as
$$XY(i, j) = \sum_{k=0}^{n-1} a_{i,k}b_{k,j}Q_{n-i+k}Q_{n+(n+j-k)}. \tag{4.91}$$
On the other hand, we also find that
$$YX(i, j) = \sum_{k=0}^{n-1} b_{i,k}a_{k,j}Q_{n+(n-i+k)}Q_{n+j-k}. \tag{4.92}$$
According to Eq. (4.69), we can rewrite the two last equations as
$$XY(i, j) = J_m\sum_{k=0}^{n-1} a_{i,k}b_{k,j}, \qquad YX(i, j) = J_m\sum_{k=0}^{n-1} b_{i,k}a_{k,j}, \quad i, j = 1, 2, \ldots, n-1. \tag{4.93}$$
According to the first condition of Eq. (4.87), we find that XY = YX. Other conditions of the form PQ = QP can be proved in a similar manner.
Now we prove the condition $XRY^T = YRX^T$. Represent the i'th block row of the matrix XR and the j'th block column of the matrix $Y^T$ as
$$\begin{aligned}
&a_{i,n}Q_{n-i-1},\ a_{i,n-1}Q_{n-i-2},\ \ldots,\ a_{i,2}Q_{-i+1},\ a_{i,1}Q_{-i},\\
&b_{j,1}Q^T_{n+(n-j)},\ b_{j,2}Q^T_{n+(n-j+1)},\ \ldots,\ b_{j,n-1}Q^T_{n+(2n-j-2)},\ b_{j,n}Q^T_{n+(2n-j-1)}.
\end{aligned} \tag{4.94}$$
Hence, the (i, j)th block elements of the matrices $XRY^T$ and $YRX^T$ have the following form:
$$\begin{aligned}
XRY^T(i, j) &= \sum_{k=0}^{n-1} a_{i,n-k-1}b_{j,k}Q_{n-i-1-k}Q^T_{n+(n-j+k)},\\
YRX^T(i, j) &= \sum_{k=0}^{n-1} b_{i,n-k-1}a_{j,k}Q_{n+(2n-i-1-k)}Q^T_{n-j+k}.
\end{aligned} \tag{4.95}$$
According to the second condition of Eq. (4.69), we obtain
$$XRY^T(i, j) = J_m\sum_{k=0}^{n-1} a_{i,n-k-1}b_{j,k}, \qquad YRX^T(i, j) = J_m\sum_{k=0}^{n-1} b_{i,n-k-1}a_{j,k}. \tag{4.96}$$
Thus, the second condition of Eq. (4.87) is satisfied, which means that we have
$$XRY^T(i, j) = J_mA_i^1B_j = J_mB_i^1A_j = YRX^T(i, j). \tag{4.97}$$
Other conditions such as PRQT = QRPT may be similarly proved.
Table 4.4 Williamson matrices of various types.

Type of matrices | Conditions | Order of matrices
--- | --- | ---
Williamson | n ≡ 3 (mod 4) is a prime power, (n + 1)/4 is the order of Williamson matrices | n²(n + 1)/4
Williamson | n is the order of Williamson matrices, k is a natural number | 2·9^k·n
8-Williamson | m ≡ 7 (mod 8) is a prime power, (m + 1)/8 is the order of 8-Williamson matrices | m²(m + 1)/8
8-Williamson | n, m ≡ 3 (mod 4) are prime powers; k, (n + 1)/4, (m + 1)/4 are orders of 8-Williamson matrices | kn²(n + 1)/4, [n²m²(n + 1)(m + 1)]/16
8-Williamson | n ≡ 3 (mod 4) is a prime power, (m + 1)/4 is the order of Williamson matrices, k is a natural number | [9^k·m²·n(m + 1)]/2
Generalized Williamson | n ≡ 3 (mod 4) is a prime power, (n + 1)/4 is the order of generalized Williamson matrices | n²(n + 1)/4
Now we prove the third condition of Eq. (4.87). The i'th block rows of the matrices X, Y, Z, W have the following forms, respectively:
$$\begin{aligned}
&a_{i,1}Q_{n-i},\ a_{i,2}Q_{n-i+1},\ \ldots,\ a_{i,n}Q_{2n-i-1};\\
&b_{i,1}Q_{n+(n-i)},\ b_{i,2}Q_{n+(n-i+1)},\ \ldots,\ b_{i,n}Q_{n+(2n-i-1)};\\
&c_{i,1}Q^T_{n-i},\ c_{i,2}Q^T_{n-i+1},\ \ldots,\ c_{i,n}Q^T_{2n-i-1};\\
&d_{i,1}Q^T_{n+(n-i)},\ d_{i,2}Q^T_{n+(n-i+1)},\ \ldots,\ d_{i,n}Q^T_{n+(2n-i-1)}.
\end{aligned} \tag{4.98}$$
Calculating the i’th, j’th block element of a matrix XXT + YYT + ZZT +WWT , wefind that
P(i, j) =n−1∑k=0
(ai,ka j,kQ(n−i+k)Q
T(n− j+k) + bi,kb j,kQn+(n−i+k)Q
Tn+(n− j+k)
+ ci,kc j,kQT(n−i+k)Q(n− j+k) + di,kd j,kQT
n+(n−i+k)Qn+(n− j+k)
). (4.99)
From the conditions of Eqs. (4.69) and (4.87), we conclude that
$$\begin{aligned}
P(i, j) &= J_m\sum_{k=0}^{n-1}\left(a_{i,k}a_{j,k} + b_{i,k}b_{j,k} + c_{i,k}c_{j,k} + d_{i,k}d_{j,k}\right), \quad i \neq j,\\
P(i, i) &= \sum_{k=0}^{n-1}\left(Q_kQ_k^T + Q_k^TQ_k\right) = 4mnI_m.
\end{aligned} \tag{4.100}$$
From Theorems 4.3.1 and 4.3.4–4.3.7, the existence of the various types of Williamson matrices listed in Table 4.4 follows. In particular, we conclude the existence of
(1) generalized Williamson matrices of orders $[n^2(n + 1)]/4$, where
$$n \in G = \{19, 27, 43, 59, 67, 83, 107, 131, 163, 179, 211, 227, 251, 283, 307, 331, 347, 379, 419\}; \tag{4.101}$$
(2) 8-Williamson matrices of orders $[m^2(m + 1)]/8$, where
$$m \in W18 = \{23, 71, 103, 119, 151, 167, 263, 311, 359, 423, 439\}. \tag{4.102}$$
References
1. J. Williamson, "Hadamard's determinant theorem and the sum of four squares," Duke Math. J. 11, 65–81 (1944).
2. J. Williamson, "Note on Hadamard's determinant theorem," Bull. Am. Math. Soc. 53, 608–613 (1947).
3. W. D. Wallis, A. P. Street, and J. S. Wallis, Combinatorics: Room Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics 292, Springer, Berlin/Heidelberg (1972), 273–445.
4. J. S. Wallis, "Some matrices of Williamson type," Utilitas Math. 4, 147–154 (1973).
5. J. M. Goethals and J. J. Seidel, "Orthogonal matrices with zero diagonal," Can. J. Math. 19, 1001–1010 (1967).
6. J. M. Goethals and J. J. Seidel, "A skew Hadamard matrix of order 36," J. Austral. Math. Soc. 11, 343–344 (1970).
7. M. Hall Jr., Combinatorial Theory, Blaisdell Publishing Co., Waltham, MA (1970).
8. R. J. Turyn, "An infinite class of Williamson matrices," J. Comb. Theory, Ser. A 12, 319–322 (1972).
9. J. S. Wallis, "On Hadamard matrices," J. Comb. Theory, Ser. A 18, 149–164 (1975).
10. A. G. Mukhopadhyay, "Some infinite classes of Hadamard matrices," J. Comb. Theory, Ser. A 25, 128–141 (1978).
11. J. S. Wallis, "Williamson matrices of even order," in Combinatorial Mathematics, Proc. 2nd Austral. Conf., Lecture Notes in Mathematics 403, 132–142, Springer, Berlin/Heidelberg (1974).
12. E. Spence, "An infinite family of Williamson matrices," J. Austral. Math. Soc., Ser. A 24, 252–256 (1977).
13. J. S. Wallis, "Construction of Williamson type matrices," Lin. Multilin. Algebra 3, 197–207 (1975).
14. S. S. Agaian and H. G. Sarukhanian, "Recurrent formulae of the construction of Williamson type matrices," Math. Notes 30(4), 603–617 (1981).
15. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics 1168, Springer, Berlin/Heidelberg (1985).
16. H. G. Sarukhanyan, "Hadamard Matrices and Block Sequences," Doctoral thesis, Institute for Informatics and Automation Problems NAS RA, Yerevan, Armenia (1998).
“Plug-In Template” Method: Williamson–Hadamard Matrices 183
17. X. M. Zhang, “Semi-regular sets of matrices and applications,” Australas. J. Combin. 7, 65–80 (1993).
18. J. Seberry and A. L. Whiteman, “New Hadamard matrices and conference matrices obtained via Mathon’s construction,” Graphs Combin. 4, 355–377 (1988).
19. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block designs,” in Surveys in Contemporary Design Theory, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons, Hoboken, NJ (1992).
20. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, Berlin (1975).
21. S. Agaian, H. Sarukhanyan, K. Egiazarian, and J. Astola, “Williamson–Hadamard transforms: design and fast algorithms,” in Proc. 18th Int. Scientific Conf. on Information, Communication and Energy Systems and Technologies (ICEST-2003), Oct. 16–18, Sofia, Bulgaria, 199–208 (2003).
22. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Decomposition of binary matrices and fast Hadamard transforms,” Circuits Syst. Signal Process. 24 (4), 385–400 (2005).
23. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices, decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122 (2003).
24. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard transforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
25. S. Agaian and H. Sarukhanyan, “Williamson type M-structures,” presented at 2nd Int. Workshop on Transforms and Filter Banks, Berlin (Mar. 1999).
26. H. G. Sarukhanyan, “Multiplicative methods of Hadamard matrices construction and fast Hadamard transform,” Pattern Recogn. Image Anal. 9 (1), 89–91 (1999).
27. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, “On fast Hadamard transforms of Williamson type,” in Proc. X European Signal Processing Conf. (EUSIPCO-2000), Sept. 4–8, Tampere, Finland, 1077–1080 (2000).
28. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, “Construction of Williamson type matrices and Baumert–Hall, Welch and Plotkin arrays,” in Int. Workshop on Spectral Techniques and Logic Design for Future Digital Systems (SPELOG-2000), Tampere, Finland, TICSP 10, 189–205 (Jun. 2–3, 2000).
29. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, “Decomposition of Hadamard matrices,” in Int. Workshop on Spectral Techniques and Logic Design for Future Digital Systems (SPELOG-2000), Tampere, Finland, TICSP 10, 207–221 (Jun. 2–3, 2000).
30. H. Sarukhanyan and A. Anoyan, “Fast Hadamard transforms of Williamson type,” Math. Prob. Comput. Sci. 21, 7–16 (2000).
31. H. Sarukhanyan, “Decomposition of the Hadamard matrices and fast Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science 1296, 575–581 (1997).
32. H. Sarukhanyan, “Product of Hadamard matrices,” in Proc. Conf. on Computer Science and Information Technologies (CSIT-97), Sept. 25–30, Yerevan (1997).
33. H. Sarukhanyan, “Hadamard matrices: construction methods and applications,” in Proc. Workshop on Transforms and Filter Banks, Feb. 21–27, Tampere, Finland, 95–129 (1998).
34. http://www.uow.edu.au/~jennie/lifework.html.
35. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002).
36. S. Agaian and H. Sarukhanyan, “Parametric M-structures,” Preprint IIAP NAS RA, No. 94-007 (1994).
37. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogonal designs and construction algorithms,” http://www.uow.edu.au/~jennie (2003).
38. R. Craigen, J. Seberry, and X.-M. Zhang, “Product of four Hadamard matrices,” J. Combin. Theory, Ser. A 59, 318–320 (1992).
39. K. Yamamoto and M. Yamada, “Williamson Hadamard matrices and Gauss sums,” J. Math. Soc. Jpn. 37 (4), 703–717 (1985).
40. R. J. Turyn, “A special class of Williamson matrices and difference sets,” J. Combin. Theory, Ser. A 36, 111–115 (1984).
41. M.-Y. Xia and G. Liu, “An infinite class of supplementary difference sets and Williamson matrices,” J. Comb. Theory, Ser. A 58 (2), 310–317 (1991).
42. J. Seberry, B. J. Wysocki, and T. A. Wysocki, “Williamson–Hadamard spreading sequences for DS-CDMA applications,” http://www.uow.edu.au/~jennie/WEB/Will_CDMA.pdf (2003).
43. J. Horton, Ch. Koukouvinos, and J. Seberry, “A search for Hadamard matrices constructed from Williamson matrices,” http://www.uow.edu.au/~jennie (2003).
44. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian, and J. Astola, “Fast Hadamard transforms,” in Proc. Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing (SMMSP’2001), T. Saramäki, K. Egiazarian, J. Astola, Eds., Pula, Croatia, 33–40 (2001).
45. J. Cooper and J. Wallis, “A construction for Hadamard arrays,” Bull. Austral. Math. Soc. 7, 269–278 (1972).
46. W. H. Holzmann and H. Kharaghani, “On the amicability of orthogonal designs,” J. Combin. Des. 17, 240–252 (2009).
47. M. H. Dawson and S. E. Tavares, “An expanded set of S-box design criteria based on information theory and its relation to differential-like attacks,” in Advances in Cryptology—EUROCRYPT’91, Lecture Notes in Computer Science 547, 352–367, Springer-Verlag, Berlin (1991).
48. G. M’gan Edmonson, J. Seberry, and M. Anderson, “On the existence of Turyn sequences of length less than 43,” Math. Comput. 62, 351–362 (1994).
49. S. Eliahou, M. Kervaire, and B. Saffari, “A new restriction on the lengths of Golay complementary sequences,” J. Combin. Theory, Ser. A 55, 49–59 (1990).
50. S. Eliahou, M. Kervaire, and B. Saffari, “On Golay polynomial pairs,” Adv. Appl. Math. 12, 235–292 (1991).
51. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and Hadamard matrices,” in Congressus Numerantium, Proc. 9th Manitoba Conf. on Numerical Mathematics 27, 23–29 (1979).
52. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and applications,” IEEE Trans. Inf. Theory 27 (6), 772–779 (1981).
53. H. F. Harmuth, Transmission of Information by Orthogonal Functions, Springer-Verlag, Berlin (1972).
54. C. Koukouvinos, C. Kounias, and K. Sotirakoglou, “On Golay sequences,” Disc. Math. 92, 177–185 (1991).
55. C. Koukouvinos, M. Mitrouli, and J. Seberry, “On the Smith normal form of D-optimal designs,” J. Lin. Multilin. Algebra 247, 277–295 (1996).
56. Ch. Koukouvinos and J. Seberry, “Construction of new Hadamard matrices with maximal excess and infinitely many new SBIBD(4k^2, 2k^2 + k, k^2 + k),” in Graphs, Matrices and Designs: A Festschrift for Norman J. Pullman, R. Rees, Ed., Lecture Notes in Pure and Applied Mathematics, Marcel Dekker, New York (1992).
57. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320 (1933).
58. D. Sarvate and J. Seberry, “A note on small defining sets for some SBIBD(4t − 1, 2t − 1, t − 1),” Bull. Inst. Comb. Appl. 10, 26–32 (1994).
59. J. Seberry, “Some remarks on generalized Hadamard matrices and theorems of Rajkundlia on SBIBDs,” in Combinatorial Mathematics VI, Lecture Notes in Mathematics 748, 154–164, Springer-Verlag, Berlin (1979).
60. J. Seberry, X.-M. Zhang, and Y. Zheng, “Cryptographic Boolean functions via group Hadamard matrices,” Australas. J. Combin. 10, 131–145 (1994).
61. S. E. Tavares, M. Sivabalan, and L. E. Peppard, “On the designs of SP networks from an information theoretic point of view,” in Advances in Cryptology—CRYPTO’92, Lecture Notes in Computer Science 740, 260–279, Springer-Verlag, Berlin (1992).
62. R. J. Turyn, Complex Hadamard Matrices, Structures and Their Applications, Gordon and Breach, New York (1970).
63. J. Seberry and J. Wallis, “On the existence of Hadamard matrices,” J. Combin. Theory, Ser. A 21, 188–195 (1976).
64. J. Wallis, “Some (1, −1) matrices,” J. Combin. Theory, Ser. B 10, 1–11 (1971).
65. J. Seberry, K. Finlayson, S. S. Adams, T. A. Wysocki, T. Xia, and B. J. Wysocki, “The theory of quaternion orthogonal designs,” IEEE Trans. Signal Process. 56 (1), 256–265 (2008).
66. M. Xia, T. Xia, and J. Seberry, “A new method for constructing Williamson matrices,” Des. Codes Cryptogr. 35 (2), 191–209 (2005).
67. M. Xia, T. Xia, J. Seberry, and J. Wu, “An infinite family of Goethals–Seidel arrays,” Discr. Appl. Math. 145 (3), 498–504 (2005).
68. T. Xia, J. Seberry, and J. Wu, “Boolean functions with good properties,” Security Man. 294–299 (2004).
69. Ch. Koukouvinos and J. Seberry, “Orthogonal designs of Kharaghani type: II,” Ars Comb. 72 (2004).
70. S. Georgiou, Ch. Koukouvinos, and J. Seberry, “Generalized orthogonal designs,” Ars Comb. 71 (2004).
71. J. Seberry, B. J. Wysocki, and T. A. Wysocki, “Williamson–Hadamard spreading sequences for DS-CDMA applications,” Wireless Commun. Mobile Comput. 3 (5), 597–607 (2003).
72. S. Georgiou, Ch. Koukouvinos, and J. Seberry, “On full orthogonal designs in order 56,” Ars Comb. 65 (2002).
73. Ch. Qu, J. Seberry, and J. Pieprzyk, “On the symmetric property of homogeneous Boolean functions,” in Proc. ACISP, Lecture Notes in Computer Science 1587, 26–35, Springer, Berlin/Heidelberg (1999).
74. A. Jiwa, J. Seberry, and Y. Zheng, “Beacon based authentication,” in Proc. ESORICS, Lecture Notes in Computer Science 875, 123–141, Springer, Berlin/Heidelberg (1994).
75. J. Seberry, X.-M. Zhang, and Y. Zheng, “Nonlinearly balanced Boolean functions and their propagation characteristics (extended abstract),” in Proc. CRYPTO 1993, Lecture Notes in Computer Science 773, 49–60, Springer, Berlin/Heidelberg (1993).
76. W. H. Holzmann, H. Kharaghani, and B. Tayfeh-Rezaie, “Williamson matrices up to order 59,” Des. Codes Cryptogr. 46, 343–352 (2008).
77. L. D. Baumert and M. Hall Jr., “Hadamard matrices of the Williamson type,” Math. Comput. 19, 442–447 (1965).
78. K. Sawade, “Hadamard matrices of order 100 and 108,” Bull. Nagoya Inst. Technol. 29, 147–153 (1977).
79. D. Z. Djokovic, “Note on Williamson matrices of orders 25,” J. Combin. Math. Combin. Comput. 18, 171–175 (1995).
80. D. Z. Djokovic, “Williamson matrices of orders 4·29 and 4·31,” J. Combin. Theory, Ser. A 59, 442–447 (1992).
81. A. L. Whiteman, “An infinite family of Hadamard matrices of Williamson type,” J. Combin. Theory, Ser. A 14, 334–340 (1973).
82. S. Agaian and H. Sarukhanyan, “On construction of Hadamard matrices,” Dokladi NAS RA LXV (4) (1977) (in Russian).
83. K. Egiazarian, J. Astola, and S. Agaian, “Binary polynomial transforms and logical correlation,” in Nonlinear Filters for Image Processing, E. Dougherty and J. Astola, Eds., 299–354, SPIE Press, Bellingham, WA, and IEEE Press, New York (1999).
84. S. Agaian and H. Sarukhanyan, “A note on the construction of Hadamard matrix,” presented at 4th Int. Cong. Cybernetics Systems, Amsterdam (1978).
85. H. Sarukhanyan, “Generalized Williamson’s type matrices,” Scientific Notes ESU (No. 2) (1978) (in Russian).
86. H. Sarukhanyan, “Parametric matrices of Williamson type and Goethals–Seidel arrays,” presented at 5th All-Union Conference on Problems of Theoretical Cybernetics, Novosibirsk (1980).
87. H. Sarukhanyan, “On decomposition of Williamson type matrices,” Math. Prob. Comput. Sci. 10, 91–101 (1982) (in Russian).
88. H. Sarukhanyan, “Product of Hadamard matrices,” in Proc. Conf. Computer Science Inform. Technol. (CSIT-97), NAS RA, Sept. 25–29, Yerevan, 153–154 (1997).
89. H. Sarukhanyan, “Fast Hadamard transform,” Math. Prob. Comput. Sci. 18, 14–18 (1997).
90. H. Sarukhanyan and A. Badeyan, “Fast Walsh–Hadamard transform of pre-assigned spectral coefficients,” in Proc. Conf. Comput. Sci. Inform. Technol. (CSIT-97), NAS RA, Yerevan, Sept. 25–29, 150–152 (1997).
91. H. Sarukhanyan, “Decomposition of Hadamard matrices by (−1, +1)-vectors and fast Hadamard transform algorithm,” Dokladi NAS RA 97 (2) (1997) (in Russian).
92. H. G. Sarukhanyan, “Multiplicative methods of Hadamard matrices construction and fast Hadamard transform,” Pattern Recogn. Image Anal. 9 (1), 89–91 (1999).
93. H. Sarukhanyan and A. Petrosian, “Construction and application of hybrid wavelet and other parametric orthogonal transforms,” J. Math. Imaging Vis. 23 (1), 25–46 (2005).
94. S. S. Agaian, “2D and 3D block Hadamard matrices constructions,” Math. Prob. Comput. Sci. 12, 5–50 (1984) (in Russian).
95. S. S. Agaian and K. O. Egiazarian, “Generalized Hadamard matrices,” Math. Prob. Comput. Sci. 12, 51–88 (1984) (in Russian).
96. S. M. Athurian, “On one modification of the Paley–Wallis–Whiteman method of Hadamard matrices construction,” Math. Prob. Comput. Sci. 12, 89–94 (1984) (in Russian).
97. A. K. Matevosian, “On construction of orthogonal arrays, Hadamard matrices and their possible applications,” Math. Prob. Comput. Sci. 12, 95–104 (1984) (in Russian).
98. H. G. Sarukhanyan, “On construction of generalized sequences with zero autocorrelation functions and Hadamard matrices,” Math. Prob. Comput. Sci. 12, 105–129 (1984) (in Russian).
Chapter 5
Fast Williamson–Hadamard Transforms
Hadamard matrices have recently received attention due to their numerous known and promising applications.1–27 FHT algorithms have been developed for the orders N = 2^n, 12·2^n, and 4^n. The difficulties in constructing an N ≡ 0 (mod 4)-point HT are related to the problem of the existence of Hadamard matrices (the so-called Hadamard problem).
In this chapter, we utilize Williamson's construction of parametric Hadamard matrices to develop efficient computational algorithms for a special type of HT, the Williamson–Hadamard transform. Several algorithms for its fast computation are presented, including an efficient algorithm for the 4t-point (t an "arbitrary" integer) Williamson–Hadamard transform. Comparative estimates revealing the efficiency of the proposed algorithms with respect to known ones are given, and the results of numerical examples are presented.
Section 5.1 describes the Hadamard matrix construction from Williamson matrices. Sections 5.2 and 5.3 present the block representation of parametric Williamson–Hadamard matrices and the fast Williamson–Hadamard block transform algorithm. In Section 5.4, the Williamson–Hadamard transform algorithm on an add/shift architecture is developed. Sections 5.5 and 5.6 present fast Williamson–Hadamard transform algorithms based on multiplicative theorems. In Section 5.7, the complexities of the developed algorithms and comparative estimates are presented, revealing the efficiency of the proposed algorithms with respect to known ones.
5.1 Construction of Hadamard Matrices Using Williamson Matrices
In this section, we describe a fast algorithm for the generation of Williamson–Hadamard matrices and transforms. We have seen that if four (+1, −1) matrices A, B, C, D of order n exist with

\[
PQ^T = QP^T, \quad P, Q \in \{A, B, C, D\}, \qquad
AA^T + BB^T + CC^T + DD^T = 4nI_n, \tag{5.1}
\]
then

\[
W_{4n} = \begin{pmatrix} A & B & C & D\\ -B & A & -D & C\\ -C & D & A & -B\\ -D & -C & B & A \end{pmatrix} \tag{5.2}
\]

is a Hadamard matrix of order 4n. Note that any cyclic symmetric matrix A of order n can be represented as

\[
A = \sum_{i=0}^{n-1} a_iU^i, \tag{5.3}
\]

where U is the cyclic matrix of order n with first row (0, 1, 0, \ldots, 0), and U^{n+i} = U^i, a_i = a_{n−i} for i = 1, 2, \ldots, n − 1.
Thus, the four cyclic symmetric Williamson matrices A ⇔ (a_0, a_1, \ldots, a_{n−1}), B ⇔ (b_0, b_1, \ldots, b_{n−1}), C ⇔ (c_0, c_1, \ldots, c_{n−1}), and D ⇔ (d_0, d_1, \ldots, d_{n−1}) can be represented as

\[
\begin{aligned}
A(a_0, a_1, \ldots, a_{n-1}) &= \sum_{i=0}^{n-1} a_iU^i, &
B(b_0, b_1, \ldots, b_{n-1}) &= \sum_{i=0}^{n-1} b_iU^i,\\
C(c_0, c_1, \ldots, c_{n-1}) &= \sum_{i=0}^{n-1} c_iU^i, &
D(d_0, d_1, \ldots, d_{n-1}) &= \sum_{i=0}^{n-1} d_iU^i,
\end{aligned}
\tag{5.4}
\]

where the coefficients a_i, b_i, c_i, and d_i, 0 ≤ i ≤ n − 1, satisfy the following conditions:

\[
a_i = a_{n-i}, \quad b_i = b_{n-i}, \quad c_i = c_{n-i}, \quad d_i = d_{n-i}. \tag{5.5}
\]
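The representation in Eqs. (5.3)–(5.5) is easy to check numerically. The sketch below (pure Python; the helper names `cyclic_shift_matrix` and `circ_sym` are ours, not the book's) builds A = Σ a_i U^i from a first row satisfying a_i = a_{n−i} and confirms that the result is a symmetric circulant:

```python
def cyclic_shift_matrix(n):
    """U: the cyclic matrix of order n with first row (0, 1, 0, ..., 0)."""
    return [[1 if (j - i) % n == 1 else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def circ_sym(row):
    """A = sum_i row[i] * U^i, as in Eq. (5.3)."""
    n = len(row)
    U = cyclic_shift_matrix(n)
    Ui = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # U^0 = I
    A = [[0] * n for _ in range(n)]
    for a in row:
        for r in range(n):
            for c in range(n):
                A[r][c] += a * Ui[r][c]
        Ui = mat_mul(Ui, U)  # advance to the next power of U
    return A

A = circ_sym([1, -1, -1])  # a_1 = a_2, so Eq. (5.5) holds and A is symmetric
assert A == [[1, -1, -1], [-1, 1, -1], [-1, -1, 1]]
assert all(A[i][j] == A[j][i] for i in range(3) for j in range(3))
```

Each row of the result is a cyclic shift of the first row, which is exactly the circulant structure the sum of powers of U produces.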
Additionally, if a_0 = b_0 = c_0 = d_0 = 1, then

\[
\begin{aligned}
A(a_0, a_1, \ldots, a_{n-1}) &= A^+ - A^-, & B(b_0, b_1, \ldots, b_{n-1}) &= B^+ - B^-,\\
C(c_0, c_1, \ldots, c_{n-1}) &= C^+ - C^-, & D(d_0, d_1, \ldots, d_{n-1}) &= D^+ - D^-,
\end{aligned}
\tag{5.6}
\]

where Q^+ denotes the (0, 1) matrix obtained from the (+1, −1) matrix Q by replacing −1 with zero, and Q^- denotes the (0, 1) matrix obtained from Q by replacing −1 with +1 and +1 with zero.
Thus, the equation

\[
A^2 + B^2 + C^2 + D^2 = 4nI_n \tag{5.7}
\]

can be expressed as

\[
\big(2A^+ - J\big)^2 + \big(2B^+ - J\big)^2 + \big(2C^+ - J\big)^2 + \big(2D^+ - J\big)^2 = 4nI_n, \tag{5.8}
\]

where J = A^+ + A^- = B^+ + B^- = C^+ + C^- = D^+ + D^-, i.e., J is the matrix of ones. Below, we state an algorithm to construct Williamson–Hadamard matrices.
Algorithm 5.1.1: Hadamard matrix construction via cyclic symmetric parametric Williamson matrices.

Input: (a_0, a_1, \ldots, a_{n−1}), (b_0, b_1, \ldots, b_{n−1}), (c_0, c_1, \ldots, c_{n−1}), and (d_0, d_1, \ldots, d_{n−1}).

Step 1. Construct the matrices A, B, C, D by

\[
A = \sum_{i=0}^{n-1} a_iU^i, \quad B = \sum_{i=0}^{n-1} b_iU^i, \quad C = \sum_{i=0}^{n-1} c_iU^i, \quad D = \sum_{i=0}^{n-1} d_iU^i. \tag{5.9}
\]

Step 2. Substitute the matrices A, B, C, D into the array

\[
W_{4n} = \begin{pmatrix} A & B & C & D\\ -B & A & -D & C\\ -C & D & A & -B\\ -D & -C & B & A \end{pmatrix}. \tag{5.10}
\]

Output: Parametric Williamson–Hadamard matrix W_{4n}:

\[
W_{4n}(a_0, \ldots, a_{n-1}, b_0, \ldots, b_{n-1}, c_0, \ldots, c_{n-1}, d_0, \ldots, d_{n-1}) =
\begin{pmatrix}
A(a_0, \ldots, a_{n-1}) & B(b_0, \ldots, b_{n-1}) & C(c_0, \ldots, c_{n-1}) & D(d_0, \ldots, d_{n-1})\\
-B(b_0, \ldots, b_{n-1}) & A(a_0, \ldots, a_{n-1}) & -D(d_0, \ldots, d_{n-1}) & C(c_0, \ldots, c_{n-1})\\
-C(c_0, \ldots, c_{n-1}) & D(d_0, \ldots, d_{n-1}) & A(a_0, \ldots, a_{n-1}) & -B(b_0, \ldots, b_{n-1})\\
-D(d_0, \ldots, d_{n-1}) & -C(c_0, \ldots, c_{n-1}) & B(b_0, \ldots, b_{n-1}) & A(a_0, \ldots, a_{n-1})
\end{pmatrix}. \tag{5.11}
\]
It has been shown that for any a_i, b_i, c_i, d_i, 0 ≤ i ≤ n − 1, with |a_i| = |b_i| = |c_i| = |d_i| = 1 and a_i = a_{n−i}, b_i = b_{n−i}, c_i = c_{n−i}, d_i = d_{n−i}, the matrix W_{4n}(a_0, \ldots, a_{n−1}, \ldots, d_0, \ldots, d_{n−1}) is a Williamson–Hadamard matrix of order 4n.
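As a numerical check of Algorithm 5.1.1, one can plug in the order-3 quadruple that underlies Eq. (5.20) further below, i.e., first rows a = (1, 1, 1) and b = c = d = (1, −1, −1), and verify that the result is a Hadamard matrix (a minimal pure-Python sketch; the function names are ours, not the book's):

```python
def circulant(row):
    """Cyclic matrix whose i'th row is the first row shifted right i times."""
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def williamson_hadamard(a, b, c, d):
    """Algorithm 5.1.1: substitute circulant A, B, C, D into the array (5.10)."""
    A, B, C, D = circulant(a), circulant(b), circulant(c), circulant(d)
    neg = lambda M: [[-x for x in r] for r in M]
    block_rows = [[A, B, C, D],
                  [neg(B), A, neg(D), C],
                  [neg(C), D, A, neg(B)],
                  [neg(D), neg(C), B, A]]
    n = len(a)
    # Concatenate the four blocks of each block row, row by row
    return [sum((blk[i] for blk in brow), []) for brow in block_rows for i in range(n)]

W = williamson_hadamard([1, 1, 1], [1, -1, -1], [1, -1, -1], [1, -1, -1])
N = len(W)  # 12
# Hadamard check: W W^T = 12 I
for i in range(N):
    for j in range(N):
        dot = sum(W[i][k] * W[j][k] for k in range(N))
        assert dot == (N if i == j else 0)
```

The check exercises both conditions of Eq. (5.1) at once, since W W^T = 4nI holds exactly when the Williamson conditions do.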
The following is an example. Let (a_0, a_1, a_1), (b_0, b_1, b_1), (c_0, c_1, c_1), and (d_0, d_1, d_1) be the first rows of parametric Williamson-type cyclic symmetric matrices of order 3. Using Algorithm 5.1.1, we can construct the following parametric
matrix of order 12:

\[
\begin{pmatrix}
a_0 & a_1 & a_1 & b_0 & b_1 & b_1 & c_0 & c_1 & c_1 & d_0 & d_1 & d_1\\
a_1 & a_0 & a_1 & b_1 & b_0 & b_1 & c_1 & c_0 & c_1 & d_1 & d_0 & d_1\\
a_1 & a_1 & a_0 & b_1 & b_1 & b_0 & c_1 & c_1 & c_0 & d_1 & d_1 & d_0\\
-b_0 & -b_1 & -b_1 & a_0 & a_1 & a_1 & -d_0 & -d_1 & -d_1 & c_0 & c_1 & c_1\\
-b_1 & -b_0 & -b_1 & a_1 & a_0 & a_1 & -d_1 & -d_0 & -d_1 & c_1 & c_0 & c_1\\
-b_1 & -b_1 & -b_0 & a_1 & a_1 & a_0 & -d_1 & -d_1 & -d_0 & c_1 & c_1 & c_0\\
-c_0 & -c_1 & -c_1 & d_0 & d_1 & d_1 & a_0 & a_1 & a_1 & -b_0 & -b_1 & -b_1\\
-c_1 & -c_0 & -c_1 & d_1 & d_0 & d_1 & a_1 & a_0 & a_1 & -b_1 & -b_0 & -b_1\\
-c_1 & -c_1 & -c_0 & d_1 & d_1 & d_0 & a_1 & a_1 & a_0 & -b_1 & -b_1 & -b_0\\
-d_0 & -d_1 & -d_1 & -c_0 & -c_1 & -c_1 & b_0 & b_1 & b_1 & a_0 & a_1 & a_1\\
-d_1 & -d_0 & -d_1 & -c_1 & -c_0 & -c_1 & b_1 & b_0 & b_1 & a_1 & a_0 & a_1\\
-d_1 & -d_1 & -d_0 & -c_1 & -c_1 & -c_0 & b_1 & b_1 & b_0 & a_1 & a_1 & a_0
\end{pmatrix}. \tag{5.12}
\]
5.2 Parametric Williamson Matrices and Block Representation of Williamson–Hadamard Matrices
In this section, we present an approach to block Hadamard matrix construction equivalent to the Williamson–Hadamard matrices. This approach is useful for designing fast transform algorithms and generating new Hadamard matrices (for more details, see Chapter 2). We now use the concepts of Algorithm 5.1.1 to build an equivalent block cyclic matrix.
The first block, P_0, is formed as follows: (1) from the first row of the matrix in Eq. (5.12), taking the first, fourth, seventh, and tenth elements (a_0, b_0, c_0, d_0), we form the first row of block P_0; (2) from the fourth row of that matrix, taking the first, fourth, seventh, and tenth elements (−b_0, a_0, −d_0, c_0), we construct the second row of block P_0, and so on. Hence, we obtain
\[
P_0 = \begin{pmatrix} a_0 & b_0 & c_0 & d_0\\ -b_0 & a_0 & -d_0 & c_0\\ -c_0 & d_0 & a_0 & -b_0\\ -d_0 & -c_0 & b_0 & a_0 \end{pmatrix}. \tag{5.13}
\]
We form the second (and third) block P_1 as follows: (1) from the second, fifth, eighth, and eleventh elements of the first row, we form the first row (a_1, b_1, c_1, d_1) of block P_1; (2) from the second, fifth, eighth, and eleventh elements of the fourth row, we form the second row (−b_1, a_1, −d_1, c_1) of block P_1, and so on.
Hence, we obtain

\[
P_1 = \begin{pmatrix} a_1 & b_1 & c_1 & d_1\\ -b_1 & a_1 & -d_1 & c_1\\ -c_1 & d_1 & a_1 & -b_1\\ -d_1 & -c_1 & b_1 & a_1 \end{pmatrix}. \tag{5.14}
\]
From Eqs. (5.12)–(5.14), we obtain

\[
[BW]_{12} =
\begin{pmatrix}
a_0 & b_0 & c_0 & d_0 & a_1 & b_1 & c_1 & d_1 & a_1 & b_1 & c_1 & d_1\\
-b_0 & a_0 & -d_0 & c_0 & -b_1 & a_1 & -d_1 & c_1 & -b_1 & a_1 & -d_1 & c_1\\
-c_0 & d_0 & a_0 & -b_0 & -c_1 & d_1 & a_1 & -b_1 & -c_1 & d_1 & a_1 & -b_1\\
-d_0 & -c_0 & b_0 & a_0 & -d_1 & -c_1 & b_1 & a_1 & -d_1 & -c_1 & b_1 & a_1\\
a_1 & b_1 & c_1 & d_1 & a_0 & b_0 & c_0 & d_0 & a_1 & b_1 & c_1 & d_1\\
-b_1 & a_1 & -d_1 & c_1 & -b_0 & a_0 & -d_0 & c_0 & -b_1 & a_1 & -d_1 & c_1\\
-c_1 & d_1 & a_1 & -b_1 & -c_0 & d_0 & a_0 & -b_0 & -c_1 & d_1 & a_1 & -b_1\\
-d_1 & -c_1 & b_1 & a_1 & -d_0 & -c_0 & b_0 & a_0 & -d_1 & -c_1 & b_1 & a_1\\
a_1 & b_1 & c_1 & d_1 & a_1 & b_1 & c_1 & d_1 & a_0 & b_0 & c_0 & d_0\\
-b_1 & a_1 & -d_1 & c_1 & -b_1 & a_1 & -d_1 & c_1 & -b_0 & a_0 & -d_0 & c_0\\
-c_1 & d_1 & a_1 & -b_1 & -c_1 & d_1 & a_1 & -b_1 & -c_0 & d_0 & a_0 & -b_0\\
-d_1 & -c_1 & b_1 & a_1 & -d_1 & -c_1 & b_1 & a_1 & -d_0 & -c_0 & b_0 & a_0
\end{pmatrix} \tag{5.15}
\]
or

\[
[BW]_{12} = \begin{pmatrix} P_0 & P_1 & P_1\\ P_1 & P_0 & P_1\\ P_1 & P_1 & P_0 \end{pmatrix}, \tag{5.16}
\]
which is a block-cyclic, block-symmetric Hadamard matrix. Using the properties of the Kronecker product, we can rewrite Eq. (5.15) as

\[
[BW]_{12} = \begin{pmatrix} P_0 & P_1 & P_1\\ P_1 & P_0 & P_1\\ P_1 & P_1 & P_0 \end{pmatrix} = I_3 \otimes P_0 + U \otimes P_1 + U^2 \otimes P_1. \tag{5.17}
\]
In general, any Williamson–Hadamard matrix of order 4n can be presented as

\[
[BW]_{4n} = \sum_{i=0}^{n-1} U^i \otimes Q_i, \tag{5.18}
\]

where

\[
Q_i(a_i, b_i, c_i, d_i) = \begin{pmatrix} a_i & b_i & c_i & d_i\\ -b_i & a_i & -d_i & c_i\\ -c_i & d_i & a_i & -b_i\\ -d_i & -c_i & b_i & a_i \end{pmatrix}, \tag{5.19}
\]

with Q_i = Q_{n−i}, a_i, b_i, c_i, d_i = ±1, and ⊗ denoting the Kronecker product.13
The Hadamard matrices of the form in Eq. (5.18) are called block-cyclic, block-symmetric Hadamard matrices.13
The Williamson–Hadamard matrix W12 (see Section 5.1) can be represented asa block-cyclic, block-symmetric matrix,
\[
[BW]_{12} = \begin{pmatrix}
+ & + & + & + & + & - & - & - & + & - & - & -\\
- & + & - & + & + & + & + & - & + & + & + & -\\
- & + & + & - & + & - & + & + & + & - & + & +\\
- & - & + & + & + & + & - & + & + & + & - & +\\
+ & - & - & - & + & + & + & + & + & - & - & -\\
+ & + & + & - & - & + & - & + & + & + & + & -\\
+ & - & + & + & - & + & + & - & + & - & + & +\\
+ & + & - & + & - & - & + & + & + & + & - & +\\
+ & - & - & - & + & - & - & - & + & + & + & +\\
+ & + & + & - & + & + & + & - & - & + & - & +\\
+ & - & + & + & + & - & + & + & - & + & + & -\\
+ & + & - & + & + & + & - & + & - & - & + & +
\end{pmatrix}, \tag{5.20}
\]

or

\[
[BW]_{12} = \begin{pmatrix}
Q_0(+1,+1,+1,+1) & Q_4(+1,-1,-1,-1) & Q_4(+1,-1,-1,-1)\\
Q_4(+1,-1,-1,-1) & Q_0(+1,+1,+1,+1) & Q_4(+1,-1,-1,-1)\\
Q_4(+1,-1,-1,-1) & Q_4(+1,-1,-1,-1) & Q_0(+1,+1,+1,+1)
\end{pmatrix}. \tag{5.21}
\]
From Eq. (5.18), we can see that all of the blocks are Hadamard matrices of the Williamson type of order 4. In Ref. 14, it was proved that cyclic symmetric Williamson–Hadamard block matrices can be constructed using only five different blocks, for instance,

\[
Q_0 = \begin{pmatrix} + & + & + & +\\ - & + & - & +\\ - & + & + & -\\ - & - & + & + \end{pmatrix}, \quad
Q_1 = \begin{pmatrix} + & + & + & -\\ - & + & + & +\\ - & - & + & -\\ + & - & + & + \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} + & + & - & +\\ - & + & - & -\\ + & + & + & -\\ - & + & + & + \end{pmatrix},
\]
\[
Q_3 = \begin{pmatrix} + & - & + & +\\ + & + & - & +\\ - & + & + & +\\ - & - & - & + \end{pmatrix}, \quad
Q_4 = \begin{pmatrix} + & - & - & -\\ + & + & + & -\\ + & - & + & +\\ + & + & - & + \end{pmatrix}. \tag{5.22}
\]
For example, the Williamson–Hadamard block matrix [BW]_{12} above was constructed using only the matrices Q_0 and Q_4.
Note that once the first block is fixed, at most four further blocks are needed to design any Williamson–Hadamard block matrix, and these four blocks are defined uniquely up to a sign. Thus, if the first row of the first block contains an even number of +1s, then the first rows of the other four blocks contain an odd number of +1s. This means that if n is odd, Q_i = Q_{n−i}, a_i, b_i, c_i, d_i = ±1, and a_0 + b_0 + c_0 + d_0 = 4, then a_i + b_i + c_i + d_i = ±2. Similarly, if the first row of the first block contains an odd number of +1s, then the first rows of the other four blocks contain an even number of +1s; i.e., if n is odd, Q_i = Q_{n−i}, a_i, b_i, c_i, d_i = ±1, and a_0 + b_0 + c_0 + d_0 = 2, then a_i + b_i + c_i + d_i = 0 or 4.
The set of blocks with a fixed first block with an odd number of +1s is as follows:

\[
Q_0^1 = \begin{pmatrix} + & + & + & -\\ - & + & + & +\\ - & - & + & -\\ + & - & + & + \end{pmatrix}, \quad
Q_1^1 = \begin{pmatrix} + & - & - & +\\ + & + & - & -\\ + & + & + & +\\ - & + & - & + \end{pmatrix}, \quad
Q_2^1 = \begin{pmatrix} - & + & - & +\\ - & - & - & -\\ + & + & - & -\\ - & + & + & - \end{pmatrix},
\]
\[
Q_3^1 = \begin{pmatrix} - & - & + & +\\ + & - & - & +\\ - & + & - & +\\ - & - & - & - \end{pmatrix}, \quad
Q_4^1 = \begin{pmatrix} + & + & + & +\\ - & + & - & +\\ - & + & + & -\\ - & - & + & + \end{pmatrix}. \tag{5.23}
\]
The first block rows of Williamson–Hadamard block matrices are given in Appendix A.3.13,14
5.3 Fast Block Williamson–Hadamard Transform
In this section, we describe two algorithms for the calculation of the 4n-point forward block Williamson–Hadamard transform

\[
F = [BW]_{4n}\, f. \tag{5.24}
\]

Let us split the column vector f into n four-dimensional vectors as

\[
f = \sum_{i=0}^{n-1} P_i \otimes X_i, \tag{5.25}
\]

where P_i is the column vector of dimension n whose i'th element is equal to 1 and whose remaining elements are equal to 0, and

\[
X_i = (f_{4i}, f_{4i+1}, f_{4i+2}, f_{4i+3})^T, \quad i = 0, 1, \ldots, n-1. \tag{5.26}
\]
Now, using Eq. (5.18), we have

\[
[BW]_{4n}\, f = \left(\sum_{i=0}^{n-1} U^i \otimes Q_i\right)\left(\sum_{j=0}^{n-1} P_j \otimes X_j\right) = \sum_{i,j=0}^{n-1} U^iP_j \otimes Q_iX_j. \tag{5.27}
\]
We can verify that U^iP_j = P_{(n−i+j) mod n}. Hence, Eq. (5.27) can be presented as

\[
[BW]_{4n}\, f = \sum_{i,j=0}^{n-1} U^iP_j \otimes Q_iX_j = \sum_{j=0}^{n-1} B_j, \tag{5.28}
\]

where B_j = \sum_{i=0}^{n-1} U^iP_j \otimes Q_iX_j.
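The index identity U^iP_j = P_{(n−i+j) mod n} behind Eq. (5.28) can be checked numerically; the choice n = 5 below is arbitrary:

```python
n = 5
# U: cyclic matrix of order n with first row (0, 1, 0, ..., 0);
# acting on a vector v, (Uv)_k = v_{(k+1) mod n}
U = [[1 if (j - i) % n == 1 else 0 for j in range(n)] for i in range(n)]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def P(j):
    """Unit column vector of dimension n with a 1 in position j, as in Eq. (5.25)."""
    return [1 if k == j else 0 for k in range(n)]

for i in range(n):
    for j in range(n):
        v = P(j)
        for _ in range(i):          # apply U a total of i times
            v = mat_vec(U, v)
        assert v == P((n - i + j) % n)
```

Each application of U moves the single 1 one position toward index 0 (cyclically), so i applications land it at (j − i) mod n = (n − i + j) mod n.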
From Eq. (5.28), we see that in order to perform the fast Williamson–Hadamard transform, we need to calculate the spectral coefficients of the block transforms Y_i = Q_iX. Here, Q_i, i = 0, 1, 2, 3, 4, have the form of Eq. (5.22), and

\[
X = (x_0, x_1, x_2, x_3)^T, \quad Y = (y_0, y_1, y_2, y_3)^T \tag{5.29}
\]

are the input and output column vectors, respectively.
Algorithm 5.3.1: Joint computation of five four-point Williamson–Hadamard transforms.

Input: Signal column vector X = (x_0, x_1, x_2, x_3)^T.

Step 1. Compute a = x_0 + x_1, b = x_2 + x_3, c = x_0 − x_1, d = x_2 − x_3.

Step 2. Compute the transforms Y_i = Q_iX, i = 0, 1, 2, 3, 4 [where Q_i has the form of Eq. (5.22)] as

\[
\begin{aligned}
y_0^0 &= a + b, & y_0^1 &= -c - d, & y_0^2 &= -c + d, & y_0^3 &= -a + b,\\
y_1^0 &= a + d, & y_1^1 &= -c + b, & y_1^2 &= -a + d, & y_1^3 &= c + b,\\
y_2^0 &= -y_1^2, & y_2^1 &= -y_1^3, & y_2^2 &= y_1^0, & y_2^3 &= y_1^1,\\
y_3^0 &= y_1^3, & y_3^1 &= -y_1^2, & y_3^2 &= y_1^1, & y_3^3 &= -y_1^0,\\
y_4^0 &= -y_1^1, & y_4^1 &= y_1^0, & y_4^2 &= y_1^3, & y_4^3 &= -y_1^2,
\end{aligned}
\tag{5.30}
\]

where y_i^j denotes the j'th component of Y_i.

Output: The transform (spectral) coefficients

\[
\begin{aligned}
Y_0 &= (y_0^0, y_0^1, y_0^2, y_0^3), & Y_1 &= (y_1^0, y_1^1, y_1^2, y_1^3), & Y_2 &= (y_2^0, y_2^1, y_2^2, y_2^3),\\
Y_3 &= (y_3^0, y_3^1, y_3^2, y_3^3), & Y_4 &= (y_4^0, y_4^1, y_4^2, y_4^3).
\end{aligned}
\tag{5.31}
\]
The flow graph for the joint computation of Q_iX, i = 0, 1, 2, 3, 4, is given in Fig. 5.1.
From Eqs. (5.30) and (5.31), we can see that the joint computation of the four-point transforms Q_iX, i = 0, 1, 2, 3, 4, requires only 12 addition/subtraction operations, whereas their separate calculation requires 40. Indeed, from Fig. 5.1 we can check that the transform Q_0X requires eight addition/subtraction operations, the transform Q_1X requires four more, and the transforms Q_2X, Q_3X, and Q_4X are obtained from the components of Q_1X by sign changes and permutations alone. Next, we present a detailed description of the 36-point block Williamson–Hadamard fast transform algorithm.
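The 12-operation joint computation can be sketched in a few lines and checked against direct multiplication by the blocks of Eq. (5.22) (pure Python; `joint_wh4` is our name for the routine, not the book's):

```python
def joint_wh4(x):
    """Algorithm 5.3.1: the five 4-point transforms Q_i x, i = 0..4,
    computed jointly with 12 additions/subtractions, per Eq. (5.30)."""
    x0, x1, x2, x3 = x
    a, b = x0 + x1, x2 + x3                          # 2 ops
    c, d = x0 - x1, x2 - x3                          # 2 ops
    Y0 = [a + b, -c - d, -c + d, -a + b]             # 4 ops
    Y1 = [a + d, -c + b, -a + d, c + b]              # 4 ops
    # Y2..Y4 reuse the components of Y1 with sign changes/permutations only
    Y2 = [-Y1[2], -Y1[3], Y1[0], Y1[1]]
    Y3 = [Y1[3], -Y1[2], Y1[1], -Y1[0]]
    Y4 = [-Y1[1], Y1[0], Y1[3], -Y1[2]]
    return [Y0, Y1, Y2, Y3, Y4]

# The blocks Q_0..Q_4 of Eq. (5.22), written out as +/-1 matrices
Qs = [[[1, 1, 1, 1], [-1, 1, -1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1]],
      [[1, 1, 1, -1], [-1, 1, 1, 1], [-1, -1, 1, -1], [1, -1, 1, 1]],
      [[1, 1, -1, 1], [-1, 1, -1, -1], [1, 1, 1, -1], [-1, 1, 1, 1]],
      [[1, -1, 1, 1], [1, 1, -1, 1], [-1, 1, 1, 1], [-1, -1, -1, 1]],
      [[1, -1, -1, -1], [1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1]]]

x = [3, -1, 4, 2]
for Yi, Qi in zip(joint_wh4(x), Qs):
    assert Yi == [sum(q * v for q, v in zip(row, x)) for row in Qi]
```

Counting the marked lines confirms the 12-operation total: four operations for Step 1 and four each for Y_0 and Y_1, with Y_2 through Y_4 free of arithmetic.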
Example 5.3.1: The 36-point Williamson–Hadamard transform can be calculatedusing 396 operations.
Figure 5.1 Flow graph for the joint Q_iX transforms, i = 0, 1, 2, 3, 4.
Input: Column vector F_{36} = (f_i)_{i=0}^{35} and the blocks Q_0, Q_1, and Q_2:

\[
Q_0 = \begin{pmatrix} + & + & + & +\\ - & + & - & +\\ - & + & + & -\\ - & - & + & + \end{pmatrix}, \quad
Q_1 = \begin{pmatrix} + & + & + & -\\ - & + & + & +\\ - & - & + & -\\ + & - & + & + \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} + & + & - & +\\ - & + & - & -\\ + & + & + & -\\ - & + & + & + \end{pmatrix}. \tag{5.32}
\]
Step 1. Split the vector F_{36} into nine parts as F_{36} = (X_0, X_1, \ldots, X_8)^T, where

\[
X_i^T = (f_{4i}, f_{4i+1}, f_{4i+2}, f_{4i+3}), \quad i = 0, 1, \ldots, 8. \tag{5.33}
\]

Step 2. Compute the vectors Y_i, i = 0, 1, \ldots, 8, as shown in Fig. 5.2. Note that the sub-blocks A(Q_0, Q_1, Q_2) in Fig. 5.2 can be computed using Algorithm 5.3.1 (see Fig. 5.1).

Step 3. Evaluate the vector Y = Y_0 + Y_1 + \cdots + Y_8.

Output: The 36-point Williamson–Hadamard transform coefficients, i.e., the vector Y.
From Eqs. (5.30) and (5.31), it follows that the joint computation of the transforms Q_0X_i, Q_1X_i, and Q_2X_i requires only 12 addition/subtraction operations. From Eq. (5.22), we can see that only these transforms are present in each vector Y_i. Hence, for all nine of these vectors, it is necessary to perform 108 operations; together with the 8 · 36 = 288 additions of Step 3, the 36-point HT requires only 396 addition/subtraction operations, whereas its direct computation requires 1260 addition/subtraction operations.
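Example 5.3.1 relies on the n = 9 block sequence from the appendix, which is not reproduced here; the same accumulation scheme of Eq. (5.28) can be sketched on the n = 3 matrix of Eq. (5.21), whose block row is (Q_0, Q_4, Q_4) (a minimal sketch; the helper names are ours):

```python
def q_block(a, b, c, d):
    """Williamson-type block of order 4, Eq. (5.19)."""
    return [[a, b, c, d], [-b, a, -d, c], [-c, d, a, -b], [-d, -c, b, a]]

n = 3
blocks = [q_block(1, 1, 1, 1), q_block(1, -1, -1, -1), q_block(1, -1, -1, -1)]

f = [1, 2, -1, 0, 3, -2, 1, 1, 0, -1, 2, 4]
X = [f[4 * i:4 * i + 4] for i in range(n)]      # Eq. (5.26): 4-point subvectors

# Accumulate the block products as in Eq. (5.28):
# output block r collects Q_{(c-r) mod n} X_c over the input blocks c
Y = [[0] * 4 for _ in range(n)]
for r in range(n):
    for c in range(n):
        Qi = blocks[(c - r) % n]
        for k in range(4):
            Y[r][k] += sum(Qi[k][m] * X[c][m] for m in range(4))
F = [v for row in Y for v in row]

# Consistency checks: the same result as the full 12x12 matrix of Eq. (5.16),
# and the energy scaling ||F||^2 = 12 ||f||^2 of a Hadamard matrix of order 12
BW = [[blocks[(c - r) % n][i][j] for c in range(n) for j in range(4)]
      for r in range(n) for i in range(4)]
assert F == [sum(BW[i][j] * f[j] for j in range(12)) for i in range(12)]
assert sum(v * v for v in F) == 12 * sum(v * v for v in f)
```

In a real implementation, the three block products per input subvector would come from one call to the joint Algorithm 5.3.1 rather than separate multiplications, which is where the operation savings arise.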
Note that we have developed this fast Hadamard transform algorithm without relying on the existence of any particular Williamson–Hadamard matrices. The algorithm can be made more efficient if a construction of these matrices is used.
The first block rows of the block-cyclic, block-symmetric (BCBS) Hadamard matrices of the Williamson type of order 4n, n = 3, 5, \ldots, 25,13,15 with marked cyclic congruent circuits (CCCs), are given in the Appendix.
In addition, we describe the add/shift architecture for the Williamson–Hadamard transform. Denoting z1 = x1 + x2 + x3 and z2 = z1 − x0, and using Eq. (5.22), we can calculate Yi = QiX as follows:
y_0^0 = z1 + x0,       y_0^1 = z2 − 2x2,   y_0^2 = z2 − 2x3,     y_0^3 = z2 − 2x1;
y_1^0 = y_0^0 − 2x3,   y_1^1 = z2,         y_1^2 = y_0^2 − 2x1,  y_1^3 = y_0^0 − 2x1;
y_2^0 = −y_1^2,        y_2^1 = −y_1^3,     y_2^2 = y_1^0,        y_2^3 = y_1^1;
y_3^0 = y_1^3,         y_3^1 = −y_1^2,     y_3^2 = y_1^1,        y_3^3 = −y_1^0;
y_4^0 = −y_1^1,        y_4^1 = y_1^0,      y_4^2 = y_1^3,        y_4^3 = −y_1^2.      (5.34)
It is easy to check that the joint four-point transform computation requires fewer operations than separate computations. The separate computation of the transforms Q0X and Q1X requires 14 addition/subtraction operations and six one-bit shifts; their joint computation requires only 10 addition/subtraction operations and three one-bit shifts. Using this fact, the complexity of the fast Williamson–Hadamard transform will be discussed next.
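The add/shift scheme of Eq. (5.34) can be verified against direct multiplication by the blocks of Eq. (5.32). A minimal numerical sketch, assuming NumPy; the input vector is arbitrary example data:

```python
import numpy as np

# Blocks Q0, Q1, Q2 of Eq. (5.32), written with +1/-1 entries.
Q0 = np.array([[ 1,  1,  1,  1],
               [-1,  1, -1,  1],
               [-1,  1,  1, -1],
               [-1, -1,  1,  1]])
Q1 = np.array([[ 1,  1,  1, -1],
               [-1,  1,  1,  1],
               [-1, -1,  1, -1],
               [ 1, -1,  1,  1]])
Q2 = np.array([[ 1,  1, -1,  1],
               [-1,  1, -1, -1],
               [ 1,  1,  1, -1],
               [-1,  1,  1,  1]])

def joint_transform(x):
    """Add/shift scheme of Eq. (5.34): computes Y0 = Q0 x, Y1 = Q1 x, Y2 = Q2 x
    jointly; each doubling (2*x) is a one-bit shift in hardware."""
    x0, x1, x2, x3 = x
    z1 = x1 + x2 + x3
    z2 = z1 - x0
    y0 = [z1 + x0, z2 - 2 * x2, z2 - 2 * x3, z2 - 2 * x1]
    y1 = [y0[0] - 2 * x3, z2, y0[2] - 2 * x1, y0[0] - 2 * x1]
    y2 = [-y1[2], -y1[3], y1[0], y1[1]]
    return np.array(y0), np.array(y1), np.array(y2)

x = np.array([3, -1, 4, 2])
Y0, Y1, Y2 = joint_transform(x)
```

The three outputs coincide with the matrix–vector products Q0x, Q1x, and Q2x, while sharing the intermediate sums z1 and z2.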
Figure 5.2 Flow graphs for the computation of the 36-dimensional vector components Yi, i = 0, 1, . . . , 8.
5.4 Multiplicative-Theorem-Based Williamson–Hadamard Matrices
In this section, we describe Williamson–Hadamard matrix constructions based on the following multiplicative theorems:
Theorem 5.4.1: (Agaian–Sarukhanyan Multiplicative Theorem16) Let there be Williamson–Hadamard matrices of orders 4m and 4n. Then, Williamson–Hadamard matrices of order 4(2m)^i n, i = 1, 2, . . . , exist.
Theorem 5.4.2: Let there be Williamson matrices of order n and a Hadamard matrix of order 4m. Then, a Hadamard matrix of order 8mn exists.
Algorithm 5.4.1: Generation of Williamson–Hadamard matrices of order 4(2m)^i n from Williamson–Hadamard matrices of orders 4m and 4n.
Input: Williamson matrices A, B, C, D and A0, B0, C0, D0 of orders m and n, respectively.
Step 1. Construct matrices X and Y as follows:

X = (1/2) (  A + B    C + D
             C + D   −A − B ),

Y = (1/2) (  A − B    C − D
            −C + D    A − B ).      (5.35)
Step 2. For i = 1, 2, . . . , k, recursively construct the following matrices:

A_i = A_{i−1} ⊗ X + B_{i−1} ⊗ Y,   B_i = B_{i−1} ⊗ X − A_{i−1} ⊗ Y,
C_i = C_{i−1} ⊗ X + D_{i−1} ⊗ Y,   D_i = D_{i−1} ⊗ X − C_{i−1} ⊗ Y.      (5.36)
Step 3. For i = 1, 2, . . . , k, construct the Williamson–Hadamard matrix as follows:

[WH]_i = (  A_i   B_i   C_i   D_i
           −B_i   A_i  −D_i   C_i
           −C_i   D_i   A_i  −B_i
           −D_i  −C_i   B_i   A_i ).      (5.37)
Output: Williamson–Hadamard matrices [WH]_i of order 4(2m)^i n, i = 1, 2, . . . , k.
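A sketch of Algorithm 5.4.1 (assuming NumPy) for the smallest case m = 3, n = 1, using the data of Example 5.4.1 below: one recursion step of Eq. (5.36) produces Williamson matrices of order 6, and Eq. (5.37) then yields a Williamson–Hadamard matrix of order 24.

```python
import numpy as np

def circulant(row):
    """Cyclic matrix whose i-th row is the first row shifted right i places."""
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

# Williamson matrices of order m = 3 (the data of Example 5.4.1).
A = circulant([1, 1, 1])
B = C = D = circulant([1, -1, -1])

# Eq. (5.35): X and Y of order 2m = 6 (all entries of the sums are even).
X = np.block([[A + B, C + D], [C + D, -A - B]]) // 2
Y = np.block([[A - B, C - D], [-C + D, A - B]]) // 2

# Seeds of order n = 1 and one step of the recursion of Eq. (5.36).
A0 = B0 = C0 = D0 = np.array([[1]])
A1 = np.kron(A0, X) + np.kron(B0, Y)
B1 = np.kron(B0, X) - np.kron(A0, Y)
C1 = np.kron(C0, X) + np.kron(D0, Y)
D1 = np.kron(D0, X) - np.kron(C0, Y)

# Eq. (5.37): Williamson-Hadamard matrix of order 4(2m)n = 24.
WH = np.block([[ A1,  B1,  C1,  D1],
               [-B1,  A1, -D1,  C1],
               [-C1,  D1,  A1, -B1],
               [-D1, -C1,  B1,  A1]])
```

The assembled matrix has ±1 entries and satisfies [WH][WH]^T = 24 I24, i.e., it is Hadamard.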
Example 5.4.1: Construction of Williamson matrices. Using Williamson matrices of orders 3 and 5 from Algorithm 5.1.1 and Eq. (5.35), we obtain the following:

For n = 3,
X = ( + 0 0 + − −
      0 + 0 − + −
      0 0 + − − +
      + − − − 0 0
      − + − 0 − 0
      − − + 0 0 − ),

Y = ( 0 + + 0 0 0
      + 0 + 0 0 0
      + + 0 0 0 0
      0 0 0 0 + +
      0 0 0 + 0 +
      0 0 0 + + 0 ).      (5.38)
For n = 5,
X = ( + − − − − + 0 0 0 0
      − + − − − 0 + 0 0 0
      − − + − − 0 0 + 0 0
      − − − + − 0 0 0 + 0
      − − − − + 0 0 0 0 +
      + 0 0 0 0 − + + + +
      0 + 0 0 0 + − + + +
      0 0 + 0 0 + + − + +
      0 0 0 + 0 + + + − +
      0 0 0 0 + + + + + − ),

Y = ( 0 0 0 0 0 0 + − − +
      0 0 0 0 0 + 0 + − −
      0 0 0 0 0 − + 0 + −
      0 0 0 0 0 − − + 0 +
      0 0 0 0 0 + − − + 0
      0 − + + − 0 0 0 0 0
      − 0 − + + 0 0 0 0 0
      + − 0 − + 0 0 0 0 0
      + + − 0 − 0 0 0 0 0
      − + + − 0 0 0 0 0 0 ).      (5.39)
Let A0 = B0 = C0 = D0 = (1), A = (1, 1, 1), and B = C = D = (1, −1, −1). Then, from Eq. (5.36), we obtain Williamson matrices of order 6, i.e.,
A1 = A3 = ( A   C
            D  −B ),   A2 = A4 = ( B   D
                                   C  −A ),      (5.40)
or
A1 = A3 = ( + + + + − −
            + + + − + −
            + + + − − +
            + − − − + +
            − + − + − +
            − − + + + − ),

A2 = A4 = ( + − − + − −
            − + − − + −
            − − + − − +
            + − − − − −
            − + − − − −
            − − + − − − ).      (5.41)
Let A0 = B0 = C0 = D0 = (1), and let A = B = (1, −1, −1, −1, −1), C = (1, 1, −1, −1, 1), and D = (1, −1, 1, 1, −1) be cyclic symmetric matrices of orders 1 and 5, respectively. Then, from Eq. (5.36), we obtain Williamson matrices of order 10, i.e.,
A1 = A3 = ( + − − − − + + − − +
            − + − − − + + + − −
            − − + − − − + + + −
            − − − + − − − + + +
            − − − − + + − − + +
            + − + + − − + + + +
            − + − + + + − + + +
            + − + − + + + − + +
            + + − + − + + + − +
            − + + − + + + + + − ),

A2 = A4 = ( + − − − − + − + + −
            − + − − − − + − + +
            − − + − − + − + − +
            − − − + − + + − + −
            − − − − + − + + − +
            + + − − + − + + + +
            + + + − − + − + + +
            − + + + − + + − + +
            − − + + + + + + − +
            + − − + + + + + + − ).      (5.42)
Now the Williamson–Hadamard matrix of order 40 can be synthesized as

[WH]40 = (  A1   A2   A1   A2
           −A2   A1  −A2   A1
           −A1   A2   A1  −A2
           −A2  −A1   A2   A1 ).      (5.43)
5.5 Multiplicative-Theorem-Based Fast Williamson–Hadamard Transforms
In this section, we present a fast transform algorithm based on Theorems 5.4.1 and 5.4.2. First, we present an algorithm for generation of a Hadamard matrix based on Theorem 5.4.2.
Algorithm 5.5.1: Generation of a Hadamard matrix via Theorem 5.4.2.
Input: Williamson matrices A, B, C, and D of order n, and a Hadamard matrix H1 of order 4m.
Step 1. Construct the matrices X and Y according to Eq. (5.35).
Step 2. Construct a Hadamard matrix as

P = X ⊗ H1 + Y ⊗ S_{4m}H1,      (5.44)

where S_{4m} is a monomial matrix satisfying the conditions

S_{4m}^T = −S_{4m},   S_{4m}S_{4m}^T = I_{4m}.      (5.45)
Output: Hadamard matrix P of order 8mn.

An example of a monomial matrix of order eight is given below:
S8 = ( 0 + 0 0 0 0 0 0
       − 0 0 0 0 0 0 0
       0 0 0 + 0 0 0 0
       0 0 − 0 0 0 0 0
       0 0 0 0 0 + 0 0
       0 0 0 0 − 0 0 0
       0 0 0 0 0 0 0 +
       0 0 0 0 0 0 − 0 ).      (5.46)
Algorithm 5.5.2: Fast transform with the matrix in Eq. (5.44).
Input: Column vector F^T = (f1, f2, . . . , f_{8mn}) and the Hadamard matrix P from Eq. (5.44).
Step 1. Represent P as

P = (X ⊗ I_{4m} + Y ⊗ S_{4m})(I_{2n} ⊗ H1).      (5.47)

Step 2. Split the vector F as F = [F1, F2, . . . , F_{2n}], where

F_j = (f_{4m(j−1)+1}, f_{4m(j−1)+2}, . . . , f_{4m(j−1)+4m}).      (5.48)

Step 3. Compute the transforms Q_j = H1F_j, j = 1, 2, . . . , 2n.
Step 4. Split the vector Q = (Q1, Q2, . . . , Q_{2n}) into 4m 2n-dimensional vectors as

Q = (P1, P2, . . . , P_{4m}),      (5.49)

where P_j = (q_{2n(j−1)+1}, q_{2n(j−1)+2}, . . . , q_{2n(j−1)+2n}) and q_i are the components of Q.
Step 5. Compute the transforms XP_j and YP_j.
Output: Transform coefficients.
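The steps above can be sketched as follows, assuming NumPy and the same illustrative X, Y, H1, and S4 choices as in Algorithm 5.5.1 (n = 3, m = 1, so that P has order 8mn = 24):

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

A = circulant([1, 1, 1]); B = C = D = circulant([1, -1, -1])
X = np.block([[A + B, C + D], [C + D, -A - B]]) // 2   # Eq. (5.35), order 2n = 6
Y = np.block([[A - B, C - D], [-C + D, A - B]]) // 2
H1 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
S4 = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
P = np.kron(X, H1) + np.kron(Y, S4 @ H1)               # Eq. (5.44), order 24

def fast_transform(F):
    """PF computed via the factorization of Eq. (5.47)."""
    # Steps 2-3: 4m-point transforms H1 F_j of the 4m-subvectors of F.
    Q = np.concatenate([H1 @ F[4 * j:4 * j + 4] for j in range(6)])
    # Steps 4-5: multiply by the sparse factor X (x) I_4m + Y (x) S_4m.
    return (np.kron(X, np.eye(4, dtype=int)) + np.kron(Y, S4)) @ Q

F = np.arange(24)
```

The staged computation agrees with the direct product P F; in practice the sparse factor would be applied without forming the Kronecker products explicitly.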
Let us present an example of the computation of the transforms XF and YF [F = (f1, f2, . . . , f6)], where A, B, C, D are Williamson matrices of order 3 and X, Y are from Algorithm 5.1.1. First, we compute
XF = ( f1 + f4 − (f5 + f6)
       f2 + f5 − (f4 + f6)
       f3 + f6 − (f4 + f5)
       f1 − f4 − (f2 + f3)
       f2 − f5 − (f1 + f3)
       f3 − f6 − (f1 + f2) ),

YF = ( f2 + f3
       f1 + f3
       f1 + f2
       f5 + f6
       f4 + f6
       f4 + f5 ).      (5.50)
Figure 5.3 Flow graph for the joint computation of XF and YF transforms.
From Eq. (5.50), it follows that the joint computation of XF and YF requires only 18 additions/subtractions (see Fig. 5.3). Then, from Eq. (5.47), we conclude that the complexity of the PF transform algorithm is

C(24m) = 48m(2m + 1).      (5.51)
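The 18-operation joint computation of Eq. (5.50) can be checked against the matrices of Eq. (5.38). A minimal sketch, assuming NumPy; the input vector is arbitrary example data:

```python
import numpy as np

# X and Y of order 6, written out from Eq. (5.38).
X = np.array([[ 1,  0,  0,  1, -1, -1],
              [ 0,  1,  0, -1,  1, -1],
              [ 0,  0,  1, -1, -1,  1],
              [ 1, -1, -1, -1,  0,  0],
              [-1,  1, -1,  0, -1,  0],
              [-1, -1,  1,  0,  0, -1]])
Y = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])

def joint_XF_YF(f):
    """Joint computation of XF and YF per Eq. (5.50): the six pairwise sums
    are shared, giving 6 + 12 = 18 additions/subtractions in total."""
    f1, f2, f3, f4, f5, f6 = f
    s23, s13, s12 = f2 + f3, f1 + f3, f1 + f2   # 3 additions
    s56, s46, s45 = f5 + f6, f4 + f6, f4 + f5   # 3 additions; YF is done
    XF = np.array([f1 + f4 - s56, f2 + f5 - s46, f3 + f6 - s45,
                   f1 - f4 - s23, f2 - f5 - s13, f3 - f6 - s12])  # 12 operations
    YF = np.array([s23, s13, s12, s56, s46, s45])
    return XF, YF

f = np.array([2, 7, 1, -3, 5, 4])
XF, YF = joint_XF_YF(f)
```

Both outputs coincide with the matrix–vector products X f and Y f.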
Note that if X, Y are matrices of order k defined by Eq. (5.35), Hm is a Hadamard matrix of order m, and S_m is a monomial matrix of order m, then, for any integer n,

H_{mk^n} = X ⊗ H_{mk^{n−1}} + Y ⊗ S_{mk^{n−1}}H_{mk^{n−1}}      (5.52)

is a Hadamard matrix of order mk^n.
Remark 5.5.1: For A = B = C = D = (1), from Eq. (5.35) we have

X = ( 1  1        Y = ( 0 0
      1 −1 ),           0 0 ),      (5.53)

and if H2 = X, then the matrix in Eq. (5.52) is the Walsh–Hadamard matrix of order 2^{n+1}.
Algorithm 5.5.3: Construction of Hadamard matrices of order m(2n)^k.

Input: Williamson matrices A, B, C, D of order n, a Hadamard matrix Hm of order m, and a monomial matrix Sm of order m.
Step 1. Construct matrices X and Y according to Eq. (5.35); set i = 1.
Step 2. Construct the matrix H_{2mn} = X ⊗ Hm + Y ⊗ S_mH_m.
Step 3. If i < k, then set i ← i + 1, replace Hm and Sm by the matrices H_{m(2n)^{i−1}} and S_{m(2n)^{i−1}} of order m(2n)^{i−1}, and go to step 2.
Output: Hadamard matrix H_{m(2n)^k} of order m(2n)^k.
Let us represent the matrix H_{mk^n} as a product of sparse matrices:

H_{mk^n} = (X ⊗ I_{mk^{n−1}} + Y ⊗ S_{mk^{n−1}})(I_k ⊗ H_{mk^{n−1}}) = A1(I_k ⊗ H_{mk^{n−1}}).      (5.54)

Continuing this factorization process for all matrices H_{mk^{n−i}}, i = 1, 2, . . . , n, we obtain

H_{mk^n} = A1A2 · · · An(I_{k^n} ⊗ Hm),      (5.55)

where A_i = I_{k^{i−1}} ⊗ (X ⊗ I_{mk^{n−i}} + Y ⊗ S_{mk^{n−i}}), i = 1, 2, . . . , n.
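The recursion of Eq. (5.52) and the factored form of Eq. (5.55) can be cross-checked numerically. The sketch below (assuming NumPy) takes k = 6 (the order-3 Williamson construction), m = 2, and n = 2, with monomial matrices built from 2 × 2 skew blocks; these particular choices are illustrative:

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

# X, Y of order k = 6 built from the order-3 Williamson matrices [Eq. (5.35)].
A = circulant([1, 1, 1]); B = C = D = circulant([1, -1, -1])
X = np.block([[A + B, C + D], [C + D, -A - B]]) // 2
Y = np.block([[A - B, C - D], [-C + D, A - B]]) // 2

H2 = np.array([[1, 1], [1, -1]])
S2 = np.array([[0, 1], [-1, 0]])               # monomial, S^T = -S, S S^T = I
I = lambda size: np.eye(size, dtype=int)
S = lambda size: np.kron(I(size // 2), S2)     # monomial skew matrix of even order

# Recursion of Eq. (5.52) with m = 2, k = 6: H_12, then H_72.
H12 = np.kron(X, H2) + np.kron(Y, S(2) @ H2)
H72 = np.kron(X, H12) + np.kron(Y, S(12) @ H12)

# Factorization of Eqs. (5.54)-(5.55): H_72 = A_1 A_2 (I_36 (x) H_2).
A1 = np.kron(X, I(12)) + np.kron(Y, S(12))
A2 = np.kron(I(6), np.kron(X, I(2)) + np.kron(Y, S(2)))
H72_factored = A1 @ A2 @ np.kron(I(36), H2)
```

The factored product reproduces the recursively built matrix exactly, and the result is Hadamard of order 72.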
Example 5.5.1: Let Hm be a Hadamard matrix of order m, let X and Y have the form as in Algorithm 5.1.1, and let F = (f_i)_{i=1}^{6m} be an input vector. Then we have a Hadamard matrix of order 6m of the form H_{6m} = X ⊗ Hm + Y ⊗ S_mH_m. As in Eq. (5.55), we have H_{6m} = A1(I6 ⊗ Hm), where A1 = X ⊗ Im + Y ⊗ S_m, and
X ⊗ Im = (  Im   Om   Om   Im  −Im  −Im
            Om   Im   Om  −Im   Im  −Im
            Om   Om   Im  −Im  −Im   Im
            Im  −Im  −Im  −Im   Om   Om
           −Im   Im  −Im   Om  −Im   Om
           −Im  −Im   Im   Om   Om  −Im ),

Y ⊗ Sm = (  Om   Sm   Sm   Om   Om   Om
            Sm   Om   Sm   Om   Om   Om
            Sm   Sm   Om   Om   Om   Om
            Om   Om   Om   Om   Sm   Sm
            Om   Om   Om   Sm   Om   Sm
            Om   Om   Om   Sm   Sm   Om ).      (5.56)
The input column vector is represented as F = (F1, F2, . . . , F6), where Fi is an m-dimensional vector. Now we estimate the complexity of the transform

H_{6m}F = A1(I6 ⊗ Hm)F = A1 diag{HmF1, HmF2, . . . , HmF6}.      (5.57)

Denote T = (I6 ⊗ Hm)F. Computing A1T, where T = (T1, T2, . . . , T6), from Eq. (5.56) we obtain
(X ⊗ Im)T = ( T1 + T4 − (T5 + T6)
              T2 + T5 − (T4 + T6)
              T3 + T6 − (T4 + T5)
              T1 − T4 − (T2 + T3)
              T2 − T5 − (T1 + T3)
              T3 − T6 − (T1 + T2) ),

(Y ⊗ Sm)T = ( Sm(T2 + T3)
              Sm(T1 + T3)
              Sm(T1 + T2)
              Sm(T5 + T6)
              Sm(T4 + T6)
              Sm(T4 + T5) ).      (5.58)
From Eqs. (5.57) and (5.58), it follows that the computational complexity of the transform H_{6m}F is C(H_{6m}) = 24m + 6C(Hm), where C(Hm) is the complexity of an m-point HT.
5.6 Complexity and Comparison
5.6.1 Complexity of the block-cyclic, block-symmetric Williamson–Hadamard transform
Because every block row of the block-cyclic, block-symmetric Hadamard matrix contains the block Q0, and the other blocks are from the set {Q1, Q2, Q3, Q4} (see Appendix A.3), it is not difficult to find that the complexity of the block Williamson–Hadamard transform of order 4n can be obtained from the following formula:
C(H4n) = 4n(n + 2). (5.59)
From the representation of a block Williamson–Hadamard matrix (see the Appendix), we can see that some block pairs are repeated.
Two block sequences of length k (k < n) in the first block row of the block Williamson–Hadamard matrix of order 4n,

{(−1)^{p1}Q_i, (−1)^{p2}Q_i, . . . , (−1)^{pk}Q_i} and {(−1)^{q1}Q_j, (−1)^{q2}Q_j, . . . , (−1)^{qk}Q_j},      (5.60)

where p_t, q_t ∈ {0, 1} and, for all t = 1, 2, . . . , k, either p_t = q_t or q_t = p̄_t (with 1̄ = 0 and 0̄ = 1), are called cyclic congruent circuits if

dist[(−1)^{p_t}Q_i, (−1)^{p_{t+1}}Q_i] = dist[(−1)^{q_t}Q_j, (−1)^{q_{t+1}}Q_j]      (5.61)

for all t = 1, 2, . . . , k − 1, where dist[A_i, A_j] = j − i for A = (A_i)_{i=1}^m. For example, in the first block row of the block-cyclic, block-symmetric Hadamard matrix of order 36, there are three cyclic congruent circuits of length 2:

Q0, Q1, −Q2, Q1, −Q1; −Q1, Q1, −Q2, Q1.      (5.62)
Table 5.1 Values of the parameters n, m, tm, Nm,j and the complexity of the Williamson-type HT of order 4n.

n     4n    m    tm   Nm,j      Cr(H4n)   Direct comp.
3     12    0    0    0         60        132
5     20    0    0    0         140       380
7     28    2    1    2         224       756
9     36    2    1    3         324       1260
11    44    2    1    2         528       1892
13    52    3    1    2         676       2652
15    60    2    3    3, 2, 2   780       3540
17    68    2    2    2, 3      1088      4558
19    76    2    3    2, 4, 3   1140      5700
21    84    2    3    2, 2, 5   1428      6972
23    92    2    3    4, 2, 2   1840      8372
25    100   2    3    2, 7, 2   1850      9900
With this observation, one can save several operations when summing the vectors Yi (see step 3 of the above example and its corresponding flow graphs). Let m be the maximal length of the cyclic congruent circuits of the first block row of the block-cyclic, block-symmetric Hadamard matrix of order 4n, let t_i be the number of distinct cyclic congruent circuits of length i, and let N_{i,j} be the number of cyclic congruent circuits of type j and length i. Then, the complexity of the HT of order 4n takes the form

Cr(H4n) = 4n [ n + 2 − Σ_{i=2}^{m} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1) ].      (5.63)
The values of the parameters n, m, tm, Nm,j and the complexity of the Williamson-type HT of order 4n are given in Table 5.1. The complexity of the add/shift implementation of the block Williamson–Hadamard transform can be calculated from the formulas

C± = 2n(2n + 3),   Csh = 3n,      (5.64)

where C± is the number of additions/subtractions and Csh is the number of shifts. Now, using the repetitions in the additions of the vectors Yi and the same notation as in the previous subsection [see Eq. (5.63)], the complexity of the Williamson–Hadamard transform can be presented as
C±r = 2n ( 2n + 3 − 2 Σ_{i=2}^{m} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1) ),   Csh = 3n.      (5.65)
Formulas for the complexities of the fast Williamson–Hadamard transforms without repetitions of blocks, and with repetitions and shifts, and their numerical
Table 5.2 Williamson–Hadamard transforms without repetitions of blocks, and with repetitions and shifts, and their numerical results.

n     4n    C      C±     Csh   Cr     C±r    Direct comp.
3     12    60     54     9     60     54     132
5     20    140    130    15    140    130    380
7     28    252    238    21    224    210    756
9     36    396    378    27    324    306    1260
11    44    572    550    33    528    506    1892
13    52    780    754    39    676    650    2652
15    60    1020   990    45    780    750    3540
17    68    1292   1258   51    1088   1054   4558
19    76    1596   1558   57    1140   1102   5700
21    84    1932   1890   63    1428   1386   6972
23    92    2300   2254   69    1840   1794   8372
25    100   2700   2650   75    1900   1850   9900
results, are given in Eq. (5.66) and in Table 5.2, respectively.
C = 4n(n + 2),

Cr = 4n [ n + 2 − Σ_{i=2}^{m} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1) ],

C± = 2n(2n + 3),   Csh = 3n,

C±r = 2n [ 2n + 3 − 2 Σ_{i=2}^{m} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1) ],   Csh = 3n.      (5.66)
5.6.2 Complexity of the HT from the multiplicative theorem
Recall that if X, Y are the matrices of order k from Eq. (5.35), and Hm is a Hadamard matrix of order m, then the recursively constructed Hadamard matrix

H_{mk^n} = X ⊗ H_{mk^{n−1}} + Y ⊗ S_{mk^{n−1}}H_{mk^{n−1}}      (5.67)

can be factorized as

H_{mk^n} = A1A2 · · · An(I_{k^n} ⊗ Hm),      (5.68)

where

A_i = I_{k^{i−1}} ⊗ (X ⊗ I_{mk^{n−i}} + Y ⊗ S_{mk^{n−i}}).      (5.69)

Let us now evaluate the complexity of the transform

H_{mk^n}F,   F^T = (f1, f2, . . . , f_{mk^n}).      (5.70)
Table 5.3 Complexity of m-point HTs.

Hm                                                 Complexity
Hm = X = H2 (see Remark 5.5.1)                     (n + 1)2^{n+1}
Walsh–Hadamard (W–H)                               (CX + CY + k)mnk^{n−1} + mk^n log2 m
BCBS W–H with block repetition                     (CX + CY + k)mnk^{n−1} + m[(m/4) + 2]k^n
BCBS W–H with block repetition and                 (CX + CY + k)mnk^{n−1} + mk^n[(m/4) + 2 − Σ_{i=2}^{r} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1)]
  congruent circuits
BCBS W–H with block repetition and shifts          (CX + CY + k)mnk^{n−1} + (m/2)[(m/2) + 3]k^n additions, (3m/4)k^n shifts
BCBS W–H with block repetition, congruent          (CX + CY + k)mnk^{n−1} + k^n(m/2)[(m/2) + 3 − 2 Σ_{i=2}^{r} Σ_{j=1}^{t_i} (N_{i,j} − 1)(i − 1)] additions, (3m/4)k^n shifts
  circuits, and shifts
First, we find the number of operations required for the transform

A_iP,   P = (p1, p2, . . . , p_{mk^n}).      (5.71)

We represent P^T = (P1, P2, . . . , P_{k^{i−1}}), where

P_j = (p_{(j−1)mk^{n−i+1}+t})_{t=1}^{mk^{n−i+1}},   j = 1, 2, . . . , k^{i−1}.      (5.72)

Then, from Eq. (5.69), we have

A_iP = diag{(X ⊗ I_{mk^{n−i}} + Y ⊗ S_{mk^{n−i}})P1, . . . , (X ⊗ I_{mk^{n−i}} + Y ⊗ S_{mk^{n−i}})P_{k^{i−1}}}.      (5.73)

We denote the complexities of the transforms XQ and YQ by CX and CY, respectively. We have

CX < k(k − 1),   CY < k(k − 1).      (5.74)

From Eq. (5.73), we obtain the complexity of the transform Π_{i=1}^{n} A_iP as (CX + CY + k)mnk^{n−1}. Hence, the total complexity of the transform H_{mk^n}F is

C(H_{mk^n}) < (CX + CY + k)mnk^{n−1} + k^nC(Hm),      (5.75)

where C(Hm) is the complexity of an m-point HT (see Table 5.3). For given matrices X and Y, we can compute the exact values of CX and CY; therefore, we can obtain the exact complexity of the transform. For example, for k = 6, from Eq. (5.50) we see that CX + CY = 18; hence, the 6^n m-point HT requires only 24 · 6^{n−1}mn + 6^nC(Hm) operations.
References
1. N. Ahmed and K. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, New York (1975).
2. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, "Binary matrices, decomposition and multiply-add architectures," Proc. SPIE 5014, 111–122 (2003) [doi:10.1117/12.473134].
3. S. Agaian, H. Sarukhanyan, and J. Astola, "Skew Williamson–Hadamard transforms," J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
4. S. Agaian, H. Sarukhanyan, and J. Astola, "Multiplicative theorem based fast Williamson–Hadamard transforms," Proc. SPIE 4667, 82–91 (2002) [doi:10.1117/12.467969].
5. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian, and J. Astola, "Fast Hadamard transforms," in Proc. of Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP'2001, Pula, Croatia, June 16–18, 33–40 (2001).
6. H. Sarukhanyan and A. Anoyan, "On fast Hadamard transform," Math. Prob. Comput. Sci. 21, 7–16 (2000).
7. S. Agaian, "Williamson family and Hadamard matrices," in Proc. of 5th All-Union Conf. on Problems of Cybernetics (1977) (in Russian).
8. S. Agaian and A. Matevosian, "Fast Hadamard transform," Math. Prob. Cybernet. Comput. Technol. 10, 73–90 (1982).
9. S. Agaian, "A unified construction method for fast orthogonal transforms," Prog. Cybernet. Syst. Res. 8, 301–307 (1982).
10. S. Agaian, "Advances and problems of the fast orthogonal transforms for signal-images processing applications (Part 1)," Pattern Recognition, Classification, Forecasting, Yearbook 3, Russian Academy of Sciences, 146–215, Nauka, Moscow (1990) (in Russian).
11. S. Agaian, "Advances and problems of the fast orthogonal transforms for signal-images processing applications (Part 2)," Pattern Recognition, Classification, Forecasting, Yearbook 4, Russian Academy of Sciences, 156–246, Nauka, Moscow (1991) (in Russian).
12. S. Agaian, "Optimal algorithms for fast orthogonal transforms and their realization," Cybernetics and Computer Technology, Yearbook 2, 231–319, Nauka, Moscow (1986).
13. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics 1168, Springer-Verlag, New York (1985).
14. S. S. Agaian, "Construction of spatial block Hadamard matrices," Math. Prob. Cybernet. Comput. Technol. 12, 5–50 (1984) (in Russian).
15. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, "On fast Hadamard transforms of Williamson type," in Proc. EUSIPCO-2000, Tampere, Finland, Sept. 4–8, 2, 1077–1080 (2000).
16. S. S. Agaian and H. G. Sarukhanian, "Recurrent formulae of the construction of Williamson-type matrices," Math. Notes 30 (4), 603–617 (1981).
17. G. R. Reddy and P. Satyanarayana, "Interpolation algorithm using Walsh–Hadamard and discrete Fourier/Hartley transforms," in Proc. of the 33rd IEEE Midwest Symposium on Circuits and Systems 1, 545–547 (1991).
18. C.-F. Chan, "Efficient implementation of a class of isotropic quadratic filters by using Walsh–Hadamard transform," in Proc. of IEEE Int. Symp. on Circuits and Systems, June 9–12, Hong Kong, 2601–2604 (1997).
19. A. Chen, D. Li, and R. Zhou, "A research on fast Hadamard transform (FHT) digital systems," IEEE TENCON '93, Beijing, 541–546 (1993).
20. H. G. Sarukhanyan, "Hadamard matrices: construction methods and applications," in Proc. of Workshop on Transforms and Filter Banks, Tampere, Finland, 95–130 (Feb. 21–27, 1998).
21. H. G. Sarukhanyan, "Decomposition of the Hadamard matrices and fast Hadamard transform," in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science 1296, 575–581 (1997).
22. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis with Applications and Signal/Image Processing, Kluwer Academic Publishers, Boston (1996).
23. J. Seberry and M. Yamada, "Hadamard matrices, sequences and block designs," in Contemporary Design Theory, 431–554, John Wiley & Sons, Hoboken, NJ (1992).
24. S. Samadi, Y. Suzukake, and H. Iwakura, "On automatic derivation of fast Hadamard transform using generic programming," in Proc. of 1998 IEEE Asia-Pacific Conf. on Circuits and Systems, Thailand, 327–330 (1998).
25. D. Coppersmith, E. Feig, and E. Linzer, "Hadamard transforms on multiply/add architectures," IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
26. http://www.cs.uow.edu.au/people/jennie/lifework.html.
27. S. Agaian, H. Sarukhanyan, K. Egiazarian, and J. Astola, "Williamson–Hadamard transforms: design and fast algorithms," in Proc. of 18th Int. Scientific Conf. on Information, Communication and Energy Systems and Technologies, ICEST-2003, Oct. 16–18, Sofia, Bulgaria, 199–208 (2003).
Chapter 6
Skew Williamson–Hadamard Transforms
Skew Hadamard matrices are of special interest due to their uses in, among other applications, constructing orthogonal designs. Fast computational algorithms for skew Williamson–Hadamard transforms are constructed in this chapter. Fast algorithms for two groups of transforms based on skew-symmetric Williamson–Hadamard matrices are designed using the block structures of these matrices.
6.1 Skew Hadamard Matrices
Many constructions of Hadamard matrices are known, but not all of them give skew Hadamard matrices.1–39 In Ref. 1, the authors provide a survey on the existence and equivalence of skew Hadamard matrices. In addition, they present some new skew Hadamard matrices of order 52 and improve the known lower bound on the number of the skew Hadamard matrices of this order. As of August 2006, skew Hadamard matrices were known to exist for all n ≤ 188 with n divisible by 4. A survey of known results about skew Hadamard matrices is given in Ref. 33. It is conjectured that skew Hadamard matrices exist for n = 1, 2, and all n divisible by 4.8,11,13,14
Definition 6.1.1: A matrix Am of order m is called symmetric if Am^T = Am, and it is called skew symmetric if Am^T = −Am. The following matrices are examples of skew-symmetric matrices of orders 2, 3, and 4:

(  0  1        (  0  1 −1        (  0  1  1  1
  −1  0 ),       −1  0  1          −1  0 −1  1
                  1 −1  0 ),       −1  1  0 −1
                                   −1 −1  1  0 ).      (6.1)
6.1.1 Properties of skew-symmetric matrices

• If A = (a_{i,j}) is a skew-symmetric matrix, then a_{i,i} = 0.
• If A = (a_{i,j}) is a skew-symmetric matrix, then a_{i,j} = −a_{j,i}.
• Sums and scalar multiples of skew-symmetric matrices are again skew symmetric; i.e., if A and B are skew-symmetric matrices of the same order and c is a scalar, then A + B and cA are skew-symmetric matrices.
• If A is a skew-symmetric matrix of order n, the determinant of A satisfies det(A) = det(A^T) = det(−A) = (−1)^n det(A); hence, for odd n, det(A) = 0.
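These properties are easy to illustrate numerically. A quick sketch, assuming NumPy, using the order-3 example of Eq. (6.1):

```python
import numpy as np

# The order-3 skew-symmetric matrix from Eq. (6.1): the diagonal is zero,
# A^T = -A, and, because the order is odd, det(A) = (-1)^3 det(A) = -det(A),
# which forces det(A) = 0.
A = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]])
```

Here det(A) vanishes (up to floating-point rounding), as the identity above requires.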
Definition 6.1.2: A Hadamard matrix H4n of order 4n of the form H4n = I4n + S4n is called skew-symmetric type, skew symmetric, or skew if S4n^T = −S4n.34,35
We can see that if H4n = I4n + S4n is a skew-symmetric Hadamard matrix of order 4n, then

S4n^2 = (1 − 4n)I4n.      (6.2)

Indeed,

H4nH4n^T = (I4n + S4n)(I4n − S4n) = I4n − S4n + S4n − S4n^2 = I4n − S4n^2 = 4nI4n,      (6.3)

from which we obtain S4n^2 = (1 − 4n)I4n.
A skew Hadamard matrix Hm of order m can always be written in the skew-normal form

Hm = (  1                e
       −e^T   C_{m−1} + I_{m−1} ),      (6.4)

where e is the row vector of ones of length m − 1, C_{m−1} is a skew-symmetric (0, −1, +1) matrix of order m − 1, and I_{m−1} is the identity matrix of order m − 1. Equivalently, a Hadamard matrix Hm is skew Hadamard if Hm + Hm^T = 2Im.

For example, the following matrices are skew-symmetric Hadamard matrices of orders 2 and 4:
H2 = ( + +    = ( 0 +    + ( + 0    = ( 0 +    + I2,
       − + )     − 0 )      0 + )     − 0 )

H4 = ( + + + +    = ( 0 + + +    + I4.
       − + − +       − 0 − +
       − + + −       − + 0 −
       − − + + )     − − + 0 )
      (6.5)
The simplest skew Hadamard matrix construction method is based on the following recursive formula. Suppose that Hn = Sn + In is a skew Hadamard matrix of order n. Then

H2n = ( Sn + In     Sn + In
        Sn − In    −Sn + In )      (6.6)

is a skew Hadamard matrix of order 2n. Examples of skew Hadamard matrices of orders 8 and 16 are given as follows:
H8 = ( + + + + + + + +
       − + − + − + − +
       − + + − − + + −
       − − + + − − + +
       − + + + + − − −
       − − − + + + + −
       − + − − + − + +
       − − + − + + − + ),      (6.7)

H16 = ( + + + + + + + + + + + + + + + +
        − + − + − + − + − + − + − + − +
        − + + − − + + − − + + − − + + −
        − − + + − − + + − − + + − − + +
        − + + + + − − − − + + + + − − −
        − − − + + + + − − − − + + + + −
        − + − − + − + + − + − − + − + +
        − − + − + + − + − − + − + + − +
        − + + + + + + + + − − − − − − −
        − − − + − + − + + + + − + − + −
        − + − − − + + − + − + + + − − +
        − − + − − − + + + + − + + + − −
        − + + + − − − − + − − − + + + +
        − − − + + − + − + + + − − + − +
        − + − − + − − + + − + + − + + −
        − − + − + + − − + + − + − − + + ).      (6.8)
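The doubling rule of Eq. (6.6) is easy to exercise numerically. A minimal sketch, assuming NumPy, starting from the order-2 skew Hadamard matrix of Eq. (6.5):

```python
import numpy as np

def double_skew(H):
    """Eq. (6.6): from a skew Hadamard matrix H_n = S_n + I_n build H_2n."""
    n = H.shape[0]
    E = np.eye(n, dtype=int)
    S = H - E
    return np.block([[S + E, S + E], [S - E, -S + E]])

H2 = np.array([[1, 1], [-1, 1]])
H8 = double_skew(double_skew(H2))
H16 = double_skew(H8)
```

Both H8 and H16 satisfy the Hadamard condition H H^T = n I and the skew condition H + H^T = 2I.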
6.2 Skew-Symmetric Williamson Matrices
Similarly, many constructions of Williamson–Hadamard matrices are known, but not all of them give skew Williamson–Hadamard matrices. For instance, in Ref. 36, Blatt and Szekeres use symmetric balanced incomplete block design (SBIBD) configurations with parameters [q²(q + 2), q(q + 1), q], where q is a prime power, to construct Williamson-type matrices. In Ref. 40, J. Seberry shows that if q is a prime power, then there are Williamson-type matrices of order

• (1/2)q²(q + 1), where q ≡ 1 (mod 4) is a prime power, and
• (1/2)q²(q + 1), where q ≡ 3 (mod 4) is a prime power and there are Williamson-type matrices of order (1/2)(q + 1).

This gives Williamson-type matrices for the new orders 363, 1183, 1805, 2601, 3174, and 5103. Other related results on Williamson matrices can be found in Refs. 41–49.
Let A, B, C, D be cyclic (+1, −1) matrices of order n satisfying the conditions

A = In + A1,   A1^T = −A1,
B^T = B,   C^T = C,   D^T = D,
AA^T + BB^T + CC^T + DD^T = 4nIn.      (6.9)
Then the matrix

(  A   B   C   D
  −B   A   D  −C
  −C  −D   A   B
  −D   C  −B   A )      (6.10)

will be a skew-symmetric Hadamard matrix of the Williamson type of order 4n. The following theorem holds:
Theorem 6.2.1:47,48 Let A = (a_{i,j}), B = (b_{i,j}), C = (c_{i,j}), and D = (d_{i,j}) be (+1, −1) matrices of order n. Furthermore, let A be a skew-type cyclic matrix, and let B, C, D be back-cyclic matrices whose first rows satisfy the following equations:

a_{0,0} = b_{0,0} = c_{0,0} = d_{0,0} = 1,
a_{0,j} = −a_{0,n−j},   b_{0,j} = b_{0,n−j},   c_{0,j} = c_{0,n−j},   d_{0,j} = d_{0,n−j},
j = 1, 2, . . . , n − 1.      (6.11)

If AA^T + BB^T + CC^T + DD^T = 4nIn, then Eq. (6.10) is a skew Hadamard matrix of order 4n.
Four matrices satisfying the conditions of Eq. (6.9) are called skew-symmetric Williamson-type matrices of order n, and the matrix of Eq. (6.10) is called a skew-symmetric Hadamard matrix of the Williamson type.47,48 Let us give an example of a skew-symmetric Hadamard matrix of the Williamson type of order 12. Skew-symmetric Williamson matrices of order 3 have the following forms:
A = ( + − +        B = C = ( + − −        D = ( + + +
      + + −                  − + −              + + +
      − + + ),               − − + ),           + + + ).      (6.12)
Thus, the skew-symmetric Hadamard matrix of the Williamson type of order 12 obtained from Eq. (6.10) is
( + − + + − − + − − + + +
  + + − − + − − + − + + +
  − + + − − + − − + + + +
  − + + + − + + + + − + +
  + − + + + − + + + + − +
  + + − − + + + + + + + −
  − + + − − − + − + + − −
  + − + − − − + + − − + −
  + + − − − − − + + − − +
  − − − + − − − + + + − +
  − − − − + − + − + + + −
  − − − − − + + + − − + + ).      (6.13)
The first rows of the Williamson-type skew-symmetric cyclic matrices A, B, C, D of order n, n = 3, 5, . . . , 31 (Refs. 47, 48, and 52), are given in Appendix A.4.
6.3 Block Representation of Skew-SymmetricWilliamson–Hadamard Matrices
In this section, we present a construction of block-cyclic Hadamard matrices, i.e., Hadamard matrices that can be defined by their first block rows. We demonstrate that if Williamson-type skew-symmetric cyclic matrices of order n exist, then block-cyclic, skew-symmetric Hadamard matrices of order 4n exist whose blocks are also skew-symmetric Hadamard matrices of order 4.
Let H4n be a skew-symmetric Hadamard matrix of the Williamson type of order 4n, and let A, B, C, D be the Williamson-type cyclic skew-symmetric matrices. We have seen that the Williamson-type cyclic skew-symmetric matrices can be represented as

A = Σ_{i=0}^{n−1} a_iU^i,   B = Σ_{i=0}^{n−1} b_iU^i,   C = Σ_{i=0}^{n−1} c_iU^i,   D = Σ_{i=0}^{n−1} d_iU^i,      (6.14)

where U is the cyclic matrix of order n with first row (0, 1, 0, . . . , 0), U^0 = U^n = I_n is the identity matrix of order n, U^{n+i} = U^i, and a_i = −a_{n−i}, b_i = b_{n−i}, c_i = c_{n−i}, d_i = d_{n−i} for i = 1, 2, . . . , n − 1. Now, the skew-symmetric Williamson-type Hadamard matrix H4n can be represented as
H4n = Σ_{i=0}^{n−1} U^i ⊗ P_i,      (6.15)
where

P_i = (  a_i   b_i   c_i   d_i
        −b_i   a_i   d_i  −c_i
        −c_i  −d_i   a_i   b_i
        −d_i   c_i  −b_i   a_i ),   i = 0, 1, . . . , n − 1,      (6.16)

and a_i, b_i, c_i, d_i = ±1. We will call Hadamard matrices of the form of Eq. (6.15) skew-symmetric, block-cyclic Williamson–Hadamard matrices.

An example of a skew-symmetric, block-cyclic Williamson–Hadamard matrix of order 12 is given as follows:
⎛ + + + + − − − + + − − + ⎞
⎜ − + + − + − + + + + + + ⎟
⎜ − − + + + − − − + − + − ⎟
⎜ − + − + − − + − − − + + ⎟
⎜ + − − + + + + + − − − + ⎟
⎜ + + + + − + + − + − + + ⎟
⎜ + − + − − − + + + − − − ⎟
⎜ − − + + − + − + − − + − ⎟
⎜ − − − + + − − + + + + + ⎟
⎜ + − + + + + + + − + + − ⎟
⎜ + − − − + − + − − − + + ⎟
⎝ − − + − − − + + − + − + ⎠ . (6.17)
Note that block-cyclic, skew-symmetric Hadamard matrices are synthesized from eight different blocks of order 4; the first block is used once, in the first position.
The following skew-symmetric Hadamard matrices of Williamson type of order 4 can be used to design block-cyclic, skew-symmetric Hadamard matrices of the Williamson type of order 4n:
P0 =
+ + + +
− + + −
− − + +
− + − +

P1 =
+ + + −
− + − −
− + + +
+ + − +

P2 =
+ + − +
− + + +
+ − + +
− − − +

P3 =
+ + − −
− + − +
+ + + +
+ − − +

P4 =
+ − + +
+ + + −
− − + −
− + + +

P5 =
+ − + −
+ + − −
− + + −
+ + + +

P6 =
+ − − +
+ + + +
+ − + −
− − + +

P7 =
+ − − −
+ + − +
+ + + −
+ − + +
(6.18)
In Appendix A.4, the first block rows of the block-cyclic, skew-symmetric Hadamard matrices of Williamson type of order 4n, n = 3, 5, . . . , 35, are given [48–53].
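The quaternion-like block structure of Eq. (6.16) can be checked mechanically. The short sketch below (not the authors' code; the helper name `P` is ours) builds the block for every sign choice a, b, c, d = ±1 with a = +1 and verifies that each of the eight resulting matrices, i.e., the blocks P0, . . . , P7 of Eq. (6.18), is a skew-symmetric Hadamard matrix of order 4:

```python
# Sketch: every sign pattern (1, b, c, d) in Eq. (6.16) yields one of
# the eight order-4 blocks of Eq. (6.18), each a skew-symmetric
# Hadamard matrix (B B^T = 4 I4 and B + B^T = 2 I4).
from itertools import product

def P(a, b, c, d):
    # The block of Eq. (6.16).
    return [[a, b, c, d], [-b, a, d, -c], [-c, -d, a, b], [-d, c, -b, a]]

blocks = [P(1, b, c, d) for b, c, d in product([1, -1], repeat=3)]
for B in blocks:
    # Hadamard condition: B B^T = 4 I4.
    gram = [[sum(B[i][k] * B[j][k] for k in range(4)) for j in range(4)]
            for i in range(4)]
    assert gram == [[4 if i == j else 0 for j in range(4)] for i in range(4)]
    # Skew symmetry: B + B^T = 2 I4.
    assert all(B[i][j] + B[j][i] == (2 if i == j else 0)
               for i in range(4) for j in range(4))
```

The enumeration order of `itertools.product` reproduces exactly the ordering P0, . . . , P7 used in Eq. (6.18).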
6.4 Fast Block-Cyclic, Skew-Symmetric Williamson–Hadamard Transform
In this section, we present a fast algorithm for calculation of a block-cyclic, skew-symmetric Williamson–Hadamard transform of order 4n. As follows from Section 6.3, all block-cyclic, skew-symmetric Williamson–Hadamard matrices contain several blocks from the set of matrices {P0, P1, . . . , P7}. Therefore, the realization of block-cyclic, skew-symmetric Williamson–Hadamard transforms can be accomplished by calculating several specialized 4-point HTs of the form Yi = PiX, i = 0, 1, . . . , 7. Let us calculate the spectral coefficients of the transforms from Eq. (6.18). Let
X = (x0, x1, x2, x3)^T and Yi = (y_i^0, y_i^1, y_i^2, y_i^3)^T be the input and output column vectors, respectively. We obtain
y_0^0 = (x0 + x1) + (x2 + x3),   y_0^1 = −(x0 − x1) + (x2 − x3),
y_0^2 = −(x0 + x1) + (x2 + x3),  y_0^3 = −(x0 − x1) − (x2 − x3);
y_1^0 = (x0 + x1) + (x2 − x3),   y_1^1 = −(x0 − x1) − (x2 + x3),
y_1^2 = −(x0 − x1) + (x2 + x3),  y_1^3 = (x0 + x1) − (x2 − x3);
(6.19a)

y_2^0 = (x0 + x1) − (x2 − x3),   y_2^1 = −(x0 − x1) + (x2 + x3),
y_2^2 = (x0 − x1) + (x2 + x3),   y_2^3 = −(x0 + x1) − (x2 − x3);
y_3^0 = (x0 + x1) − (x2 + x3),   y_3^1 = −(x0 − x1) − (x2 − x3),
y_3^2 = (x0 + x1) + (x2 + x3),   y_3^3 = (x0 − x1) − (x2 − x3);
(6.19b)

y_4^0 = (x0 − x1) + (x2 + x3),   y_4^1 = (x0 + x1) + (x2 − x3),
y_4^2 = −(x0 + x1) + (x2 − x3),  y_4^3 = −(x0 − x1) + (x2 + x3);
y_5^0 = (x0 − x1) + (x2 − x3),   y_5^1 = (x0 + x1) − (x2 + x3),
y_5^2 = −(x0 − x1) + (x2 − x3),  y_5^3 = (x0 + x1) + (x2 + x3);
(6.19c)

y_6^0 = (x0 − x1) − (x2 − x3),   y_6^1 = (x0 + x1) + (x2 + x3),
y_6^2 = (x0 − x1) + (x2 − x3),   y_6^3 = −(x0 + x1) + (x2 + x3);
y_7^0 = (x0 − x1) − (x2 + x3),   y_7^1 = (x0 + x1) − (x2 − x3),
y_7^2 = (x0 + x1) + (x2 − x3),   y_7^3 = (x0 − x1) + (x2 + x3).
(6.19d)
From the equations above, it follows that
y_2^0 = y_1^3,   y_2^1 = y_1^2,   y_2^2 = −y_1^1,  y_2^3 = −y_1^0;
y_3^0 = −y_0^2,  y_3^1 = y_0^3,   y_3^2 = y_0^0,   y_3^3 = −y_0^1;
y_4^0 = −y_1^1,  y_4^1 = y_1^0,   y_4^2 = −y_1^3,  y_4^3 = y_1^2;
y_5^0 = −y_0^3,  y_5^1 = −y_0^2,  y_5^2 = y_0^1,   y_5^3 = y_0^0;
y_6^0 = −y_0^1,  y_6^1 = y_0^0,   y_6^2 = −y_0^3,  y_6^3 = y_0^2;
y_7^0 = −y_1^2,  y_7^1 = y_1^3,   y_7^2 = y_1^0,   y_7^3 = −y_1^1.
(6.20)
Figure 6.1 Flow graphs of the joint PiX transforms, i = 0, 1, . . . , 7.
Now, from Eqs. (6.19a)–(6.20), we can see that the joint computation of the 4-point transforms PiX, i = 0, 1, . . . , 7, requires only 12 addition/subtraction operations. In Fig. 6.1, the joint PiX transforms, i = 0, 1, . . . , 7, are shown.
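The 12-operation joint computation can be sketched as follows (a minimal illustration, not the authors' code; the helper names `joint_transforms` and `P` are ours). P0X and P1X are computed directly from shared butterflies, and the remaining six outputs are obtained as the signed permutations of Eq. (6.20):

```python
# Sketch of the joint 4-point transforms P_i X of Eqs. (6.19a)-(6.20):
# twelve additions/subtractions produce all eight output vectors.
def joint_transforms(x0, x1, x2, x3):
    t0, t1 = x0 + x1, x0 - x1              # 2 additions
    t2, t3 = x2 + x3, x2 - x3              # 2 additions
    u0, u1 = t0 + t2, t0 - t2              # 2 additions
    u2, u3 = t1 + t3, t1 - t3              # 2 additions
    v0, v1 = t0 + t3, t0 - t3              # 2 additions
    v2, v3 = t1 + t2, t1 - t2              # 2 additions (12 in total)
    y0 = (u0, -u3, -u1, -u2)               # P0 X
    y1 = (v0, -v2, -v3, v1)                # P1 X
    return [y0, y1,
            (y1[3], y1[2], -y1[1], -y1[0]),   # P2 X, via Eq. (6.20)
            (-y0[2], y0[3], y0[0], -y0[1]),   # P3 X
            (-y1[1], y1[0], -y1[3], y1[2]),   # P4 X
            (-y0[3], -y0[2], y0[1], y0[0]),   # P5 X
            (-y0[1], y0[0], -y0[3], y0[2]),   # P6 X
            (-y1[2], y1[3], y1[0], -y1[1])]   # P7 X

# Cross-check against direct multiplication by the blocks of Eq. (6.18).
def P(a, b, c, d):
    return [[a, b, c, d], [-b, a, d, -c], [-c, -d, a, b], [-d, c, -b, a]]

signs = [(1, 1, 1, 1), (1, 1, 1, -1), (1, 1, -1, 1), (1, 1, -1, -1),
         (1, -1, 1, 1), (1, -1, 1, -1), (1, -1, -1, 1), (1, -1, -1, -1)]
x = (3, 1, 4, 1)  # arbitrary test input
for s, y in zip(signs, joint_transforms(*x)):
    assert tuple(sum(r[j] * x[j] for j in range(4)) for r in P(*s)) == y
```

Separately, each PiX would cost 6 additions after the four shared butterflies; the sharing in Eq. (6.20) removes all of that redundancy.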
Let us give an example. The block-cyclic, skew-symmetric Hadamard matrix of the Williamson type of order 36 has the following form:
H36 =
⎛  P0  −P3  −P1  −P2   P2  −P5   P5   P6   P4 ⎞
⎜  P4   P0  −P3  −P1  −P2   P2  −P5   P5   P6 ⎟
⎜  P6   P4   P0  −P3  −P1  −P2   P2  −P5   P5 ⎟
⎜  P5   P6   P4   P0  −P3  −P1  −P2   P2  −P5 ⎟
⎜ −P5   P5   P6   P4   P0  −P3  −P1  −P2   P2 ⎟
⎜  P2  −P5   P5   P6   P4   P0  −P3  −P1  −P2 ⎟
⎜ −P2   P2  −P5   P5   P6   P4   P0  −P3  −P1 ⎟
⎜ −P1  −P2   P2  −P5   P5   P6   P4   P0  −P3 ⎟
⎝ −P3  −P1  −P2   P2  −P5   P5   P6   P4   P0 ⎠ . (6.21)
The input vector F36 can be represented as

F36^T = [X0^T, X1^T, . . . , X8^T],  (6.22)

where

Xi^T = [f_{4i}, f_{4i+1}, f_{4i+2}, f_{4i+3}],  i = 0, 1, . . . , 8.  (6.23)
Now, the 36-point HT is represented as follows:
H36F36 = Y0 + Y1 + · · · + Y8, (6.24)
where the vectors Yi, i = 0, 1, . . . , 8, have the following forms:
Y0 = (P0X0, P4X0, P6X0, P5X0, −P5X0, P2X0, −P2X0, −P1X0, −P3X0)^T,
Y1 = (−P3X1, P0X1, P4X1, P6X1, P5X1, −P5X1, P2X1, −P2X1, −P1X1)^T,
Y2 = (−P1X2, −P3X2, P0X2, P4X2, P6X2, P5X2, −P5X2, P2X2, −P2X2)^T,
Y3 = (−P2X3, −P1X3, −P3X3, P0X3, P4X3, P6X3, P5X3, −P5X3, P2X3)^T,
Y4 = (P2X4, −P2X4, −P1X4, −P3X4, P0X4, P4X4, P6X4, P5X4, −P5X4)^T,
Y5 = (−P5X5, P2X5, −P2X5, −P1X5, −P3X5, P0X5, P4X5, P6X5, P5X5)^T,
Y6 = (P5X6, −P5X6, P2X6, −P2X6, −P1X6, −P3X6, P0X6, P4X6, P6X6)^T,
Y7 = (P6X7, P5X7, −P5X7, P2X7, −P2X7, −P1X7, −P3X7, P0X7, P4X7)^T,
Y8 = (P4X8, P6X8, P5X8, −P5X8, P2X8, −P2X8, −P1X8, −P3X8, P0X8)^T.
(6.25)
From Eqs. (6.19a)–(6.19d) and the above-given equalities for Yi, we can see that in order to compute all transforms PiXj, i = 0, 1, . . . , 6, j = 0, 1, . . . , 8, resulting in Yi, i = 0, 1, . . . , 8, 108 addition operations are necessary, as in the block-cyclic, block-symmetric case. Hence, the complexity of the block-cyclic, skew-symmetric HT can be calculated by the formula
Cs(H4n) = 4n(n + 2). (6.26)
The computation of the vectors Yi, i = 0, 1, . . . , 8, is given schematically in Fig. 6.2. Now, using repetitions of additions of the vectors Yi and the same notations as in the previous subsection, the complexity of the HT can be represented as
C_s^r(H4n) = 4n [ n + 2 − Σ_{i=2}^{m} Σ_{j=1}^{t_m} (N_{m,j} − 1)(i − 1) ].  (6.27)
In Appendix A.5, the first block rows of block-cyclic, skew-symmetric Hadamard matrices of the Williamson type of order 4n, n = 3, 5, . . . , 25 [47–50], with marked cyclic congruent circuits, are given.
6.5 Block-Cyclic, Skew-Symmetric Fast Williamson–Hadamard Transform in Add/Shift Architectures
Let us introduce the notations r1 = x1 + x2 + x3 and r2 = r1 − x0. From these notations and Eqs. (6.19a)–(6.19d), it follows that
y_0^0 = r1 + x0,     y_0^1 = r2 − 2x3,    y_0^2 = r2 − 2x1,   y_0^3 = r2 − 2x2;
y_1^0 = y_0^0 − 2x3, y_1^1 = −y_0^0 + 2x1, y_1^2 = r2,        y_1^3 = y_0^0 − 2x2;
y_2^0 = y_1^3,       y_2^1 = r2,          y_2^2 = −y_1^1,     y_2^3 = −y_1^0;
y_3^0 = −y_0^2,      y_3^1 = y_0^3,       y_3^2 = y_0^0,      y_3^3 = −y_0^1;
y_4^0 = −y_1^1,      y_4^1 = y_1^0,       y_4^2 = −y_2^0,     y_4^3 = r2;
y_5^0 = −y_0^3,      y_5^1 = −y_0^2,      y_5^2 = y_0^1,      y_5^3 = y_0^0;
y_6^0 = −y_0^1,      y_6^1 = y_0^0,       y_6^2 = −y_0^3,     y_6^3 = y_0^2;
y_7^0 = −r2,         y_7^1 = y_2^0,       y_7^2 = y_1^0,      y_7^3 = −y_1^1.
(6.28)
Analysis of the 4-point transforms given above shows that their joint computation requires fewer operations than their separate computation. For example, the transforms P0X and P1X separately require 14 addition/subtraction operations and three one-bit shifts; however, for their joint computation, only 10 addition/subtraction operations and three one-bit shifts are necessary.
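The 10-addition, 3-shift evaluation of Eq. (6.28) for the pair P0X, P1X can be sketched on integer inputs as follows (an illustrative sketch, not the authors' code; the helper name `p0_p1_add_shift` is ours):

```python
# Sketch of the add/shift evaluation of Eq. (6.28) for P0 X and P1 X:
# ten additions/subtractions plus three one-bit shifts.
def p0_p1_add_shift(x0, x1, x2, x3):
    s1, s2, s3 = x1 << 1, x2 << 1, x3 << 1         # three one-bit shifts
    r1 = x1 + x2 + x3                              # 2 additions
    r2 = r1 - x0                                   # 1
    y00 = r1 + x0                                  # 1
    y01, y02, y03 = r2 - s3, r2 - s1, r2 - s2      # 3
    y10, y11, y13 = y00 - s3, s1 - y00, y00 - s2   # 3 (10 in total)
    return (y00, y01, y02, y03), (y10, y11, r2, y13)

# Cross-check against the blocks P0, P1 of Eq. (6.18).
P0 = [[1, 1, 1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1], [-1, 1, -1, 1]]
P1 = [[1, 1, 1, -1], [-1, 1, -1, -1], [-1, 1, 1, 1], [1, 1, -1, 1]]
x = (3, 1, 4, 1)
y0, y1 = p0_p1_add_shift(*x)
assert y0 == tuple(sum(r[j] * x[j] for j in range(4)) for r in P0)
assert y1 == tuple(sum(r[j] * x[j] for j in range(4)) for r in P1)
```

The one-bit left shifts stand in for the multiplications by 2 in Eq. (6.28), which is what makes this formulation attractive in add/shift hardware.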
One can show that the complexity formulas in this case are similar to those for symmetric Williamson–Hadamard matrices, i.e.,
Figure 6.2 Flow graphs of the computation of Yi vectors.
C_s^±(H4n) = 2n(2n + 3),
C_s^{sh}(H4n) = 3n;
C_s^r(H4n)^± = 2n [ 2n + 3 − 2 Σ_{i=2}^{m} Σ_{j=1}^{t_m} (N_{m,j} − 1)(i − 1) ],
C_s^{sh}(H4n) = 3n.
(6.29)
Table 6.1 Complexities of block-cyclic, skew-symmetric fast Williamson–Hadamard transforms in add/shift architectures.

n     Cs(H4n)   Csh(H4n)   Crs(H4n)   Crs(H4n)±
3        60         9          60         54
5       140        15         140        130
7       252        21         252        238
9       396        27         360        342
11      572        33         484        462
13      780        39         676        650
15     1020        45         900        870
17     1292        51        1088       1054
19     1596        57        1216       1178
21     1932        63        1596       1554
23     2300        69        1840       1794
25     2700        75        2100       2050
Some numerical results on the complexities of block-cyclic, skew-symmetric fast Williamson–Hadamard transforms in add/shift architectures are given in Table 6.1.
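The first two complexity columns of Table 6.1 follow directly from the closed forms Cs(H4n) = 4n(n + 2) of Eq. (6.26) and Csh(H4n) = 3n of Eq. (6.29); the short sketch below reproduces them:

```python
# Reproduces the Cs(H4n) and Csh(H4n) columns of Table 6.1 from
# Cs = 4n(n + 2) [Eq. (6.26)] and Csh = 3n [Eq. (6.29)].
table = {3: (60, 9), 5: (140, 15), 7: (252, 21), 9: (396, 27),
         11: (572, 33), 13: (780, 39), 15: (1020, 45), 17: (1292, 51),
         19: (1596, 57), 21: (1932, 63), 23: (2300, 69), 25: (2700, 75)}
for n, (cs, csh) in table.items():
    assert cs == 4 * n * (n + 2)
    assert csh == 3 * n
```

The Crs columns depend on the cyclic-repetition counts N_{m,j} of Eq. (6.27) and so cannot be regenerated from n alone.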
References
1. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of Combinatorial Designs, C. J. Colbourn and J. H. Dinitz, Eds., pp. 370–377, CRC Press, Boca Raton (1996).
2. D. Z. Djokovic, “Skew Hadamard matrices of order 4 × 37 and 4 × 43,”J. Combin. Theory, Ser. A 61, 319–321 (1992).
3. D. Z. Djokovic, “Ten new orders for Hadamard matrices of skew type,” Univ. Beograd. Publ. Electrotehn. Fak., Ser. Math. 3, 47–59 (1992).
4. D. Z. Djokovic, “Construction of some new Hadamard matrices,” Bull.Austral. Math. Soc. 45, 327–332 (1992).
5. D. Z. Djokovic, “Good matrices of order 33, 35 and 127 exist,” J. Combin.Math. Combin. Comput. 14, 145–152 (1993).
6. D. Z. Djokovic, “Five new orders for Hadamard matrices of skew type,”Australas. J. Combin. 10, 259–264 (1994).
7. D. Z. Djokovic, “Six new orders for G-matrices and some new orthogonaldesigns,” J. Combin. Inform. System Sci. 20, 1–7 (1995).
8. R. J. Fletcher, C. Koukouvinos, and J. Seberry, “New skew-Hadamardmatrices of order 4 · 49 and new D-optimal designs of order 2 · 59,” DiscreteMath. 286, 251–253 (2004).
9. S. Georgiou and C. Koukouvinos, “On circulant G-matrices,” J. Combin.Math. Combin. Comput. 40, 205–225 (2002).
10. S. Georgiou and C. Koukouvinos, “Some results on orthogonal designs andHadamard matrices,” Int. J. Appl. Math. 17, 433–443 (2005).
11. S. Georgiou, C. Koukouvinos, and J. Seberry, “On circulant best matrices andtheir applications,” Linear Multilin. Algebra 48, 263–274 (2001).
12. S. Georgiou, C. Koukouvinos, and S. Stylianou, “On good matrices, skewHadamard matrices and optimal designs,” Comput. Statist. Data Anal. 41,171–184 (2002).
13. S. Georgiou, C. Koukouvinos, and S. Stylianou, “New skew Hadamardmatrices and their application in edge designs,” Utilitas Math. 66, 121–136(2004).
14. S. Georgiou, C. Koukouvinos, and S. Stylianou, “Construction of new skewHadamard matrices and their use in screening experiments,” Comput. Stat.Data Anal. 45, 423–429 (2004).
15. V. Geramita and J. Seberry, Orthogonal Designs: Quadratic Forms andHadamard Matrices, Marcel Dekker, New York (1979).
16. J. M. Goethals and J. J. Seidel, “A skew Hadamard matrix of order 36,”J. Austral. Math. Soc. 11, 343–344 (1970).
17. H. Kharaghani and B. Tayfeh-Rezaie, “A Hadamard matrix of order 428,”J. Combin. Des. 13, 435–440 (2005).
18. C. Koukouvinos and J. Seberry, “On G-matrices,” Bull. ICA 9, 40–44 (1993).
19. S. Kounias and T. Chadjipantelis, “Some D-optimal weighing designs forn ≡ 3 (mod 4),” J. Statist. Plann. Inference 8, 117–127 (1983).
20. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320(1933).
21. J. Seberry Wallis, “A skew-Hadamard matrix of order 92,” Bull. Austral. Math.Soc. 5, 203–204 (1971).
22. J. Seberry Wallis, “On skew Hadamard matrices,” Ars Combin. 6, 255–275(1978).
23. J. Seberry Wallis and A. L. Whiteman, “Some classes of Hadamard matriceswith constant diagonal,” Bull. Austral. Math. Soc. 7, 233–249 (1972).
24. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block designs,” in Contemporary Design Theory—A Collection of Surveys, J. H. Dinitz and D. R. Stinson, Eds., pp. 431–560, Wiley, Hoboken, NJ (1992).
25. E. Spence, “Skew-Hadamard matrices of order 2(q + 1),” Discrete Math. 18,79–85 (1977).
26. G. Szekeres, “A note on skew type orthogonal ±1 matrices,” in Combinatorics, Colloquia Mathematica Societatis János Bolyai, Vol. 52, A. Hajnal, L. Lovász, and V. T. Sós, Eds., pp. 489–498, North-Holland, Amsterdam (1988).
27. W. D. Wallis, A. P. Street, and J. Seberry Wallis, Combinatorics: Room Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics 292, Springer, New York (1972).
28. A. L. Whiteman, “An infinite family of skew-Hadamard matrices,” Pacific J.Math. 38, 817–822 (1971).
29. A. L. Whiteman, “Skew Hadamard matrices of Goethals–Seidel type,”Discrete Math. 2, 397–405 (1972).
30. H. Baartmans, C. Lin, and W. D. Wallis, “Symmetric and skew equivalenceof Hadamard matrices of order 28,” Ars. Combin. 41, 25–31 (1995).
31. http://rangevoting.org/SkewHad.html.
32. K. Balasubramanian, “Computational strategies for the generation of equiva-lence classes of Hadamard matrices,” J. Chem. Inf. Comput. Sci. 35, 581–589(1995).
33. K. B. Reid and E. Brown, “Doubly regular tournaments are equivalent to skewHadamard matrices,” J. Combinatorial Theory, Ser. A 12, 332–338 (1972).
34. P. Solé and S. Antipolis, “Skew Hadamard designs and their codes,” http://www.cirm.univ-mrs.fr/videos/2007/exposes/13/Sole.pdf (2007).
35. C. J. Colbourn and J. H. Dinitz, Handbook of Combinatorial Designs, 2nd ed.,CRC Press, Boca Raton (2006).
36. D. Blatt and G. Szekeres, “A skew Hadamard matrix of order 52,” Can. J.Math. 21, 1319–1322 (1969).
37. J. Seberry, “A new construction for Williamson-type matrices,” GraphsCombin. 2, 81–87 (1986).
38. J. Wallis, “Some results on configurations,” J. Aust. Math. Soc. 12, 378–384(1971).
39. A. C. Mukopadhyay, “Some infinite classes of Hadamard matrices,” J. Comb.Theory Ser. A 25, 128–141 (1978).
40. J. Seberry, “Some infinite classes of Hadamard matrices,” J. Aust. Math. Soc.,Ser. A 25, 128–141 (1980).
41. J. S. Wallis, “Some matrices of Williamson type,” Utilitas Math. 4, 147–154(1973).
42. J. S. Wallis, “Williamson matrices of even order,” in Combinatorial Matrices, Lecture Notes in Mathematics 403, Springer-Verlag, Berlin (1974).
43. J. S. Wallis, “Construction of Williamson type matrices,” Linear MultilinearAlgebra 3, 197–207 (1975).
44. M. Yamada, “On the Williamson type j matrices of order 4.29, 4.41, and 4.37,”J. Comb. Theory, Ser. A 27, 378–381 (1979).
45. M. Yamada, “On the Williamson matrices of Turyn’s type and type j,”Comment. Math. Univ. St. Pauli 31 (1), 71–73 (1982).
46. C. Koukouvinos and S. Stylianou, “On skew-Hadamard matrices,” DiscreteMath. 308 (13), 2723–2731 (2008).
47. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics 1168, Springer-Verlag, New York (1985).
48. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, “On fast Hadamard transforms of Williamson type,” in Proc. EUSIPCO-2000, Tampere, Finland, Sept. 4–8, Vol. 2, pp. 1077–1080 (2000).
49. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian, and J. Astola, “Fast Hadamard transforms,” in Proc. of Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMSP-2001, June 16–18, Pula, Croatia, TICSP Ser. 13, pp. 33–40 (2001).
50. S. Agaian, H. Sarukhanyan, K. Egiazarian and J. Astola, “Williamson–Hadamard transforms: design and fast algorithms,” in Proc. of 18 Int. ScientificConf. on Information, Communication and Energy Systems and Technologies,ICEST-2003, Oct. 16–18, Sofia, Bulgaria, 199–208 (2003).
51. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122(2003) [doi:10.1117/12.473134].
52. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamardtransforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
53. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem basedfast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)[doi:10.1117/12.467969].
Chapter 7
Decomposition of Hadamard Matrices
We have seen in Chapter 1 that Hadamard's original construction of Hadamard matrices states that the Kronecker product of Hadamard matrices of orders m and n is a Hadamard matrix of order mn. The multiplicative theorem was proposed in 1981 by Agaian and Sarukhanyan1 (see also Ref. 2). They demonstrated how to multiply Williamson–Hadamard matrices in order to obtain a Williamson–Hadamard matrix of order mn/2. This result has been extended by the following:
• Craigen et al.3 show how to multiply four Hadamard matrices of orders m, n, p, q in order to obtain a Hadamard matrix of order mnpq/16.
• Agaian2 and Sarukhanyan et al.4 show how to multiply several Hadamard matrices of orders ni, i = 1, 2, . . . , k + 1, to obtain a Hadamard matrix of order (n1n2 · · · nk+1)/2^k, k = 1, 2, . . . . They obtained a similar result for A(n, k)-type Hadamard matrices and for Baumert–Hall, Plotkin, and Goethals–Seidel arrays.5
• Seberry and Yamada investigated the multiplicative theorem of Hadamard matrices of the generalized quaternion type using the M-structure.6
• Phoong and Chang7 show that the Agaian and Sarukhanyan theorem can be generalized to the case of antipodal paraunitary (APU) matrices. A matrix function H(z) is said to be paraunitary (PU) if it is unitary for all values of the parameter z, i.e., H(z)H^T(1/z) = nIn, n ≥ 2. One attractive feature of these matrices is their energy-preservation property, which can reduce the noise- or error-amplification problem. For further details on PU matrices and their applications, we refer the reader to Refs. 8–10. A PU matrix is said to be an APU matrix if all of its coefficient matrices have ±1 entries. For the special case of constant (memoryless) matrices, APU matrices reduce to the well-known Hadamard matrices.
The analysis of the above-stated results leads to the solution of the following problem:

Problem 1:2,11 Let Xi and Ai, i = 1, 2, . . . , k be (0, ±1) and (+1, −1) matrices of dimensions p1 × p2 and q1 × q2, respectively, with p1q1 = p2q2 = n ≡ 0 (mod 4).
(a) What conditions must the matrices Xi and Ai satisfy for

H = Σ_{i=1}^{k} Xi ⊗ Ai  (7.1)

to be a Hadamard matrix of order n, and
(b) how are these matrices constructed?
In this chapter, we develop methods for constructing the matrices Xi and Ai, making it possible to construct new Hadamard matrices and orthogonal arrays. We also present a classification of Hadamard matrices based on their decomposability by orthogonal (+1, −1) vectors. We will present multiplicative theorems for the construction of a new class of Hadamard matrices and of Baumert–Hall, Plotkin, and Goethals–Seidel arrays. In particular, we will show that if there exist k Hadamard matrices of orders m1, m2, . . . , mk, then a Hadamard matrix of order (m1m2 · · · mk)/2^{k−1} exists. Applications of the multiplicative theorems may be found in Refs. 12–14.
7.1 Decomposition of Hadamard Matrices by (+1, −1) Vectors
In this section, a particular case of the problem given above is studied, i.e., the case when the Ai are (+1, −1) vectors.
Theorem 7.1.1: For the matrix H [see Eq. (7.1)] to be a Hadamard matrix of order n, it is necessary and sufficient that there exist (0, ±1) matrices Xi and (+1, −1) matrices Ai, i = 1, 2, . . . , k of dimensions p1 × p2 and q1 × q2, respectively, satisfying the following conditions:

1. p1q1 = p2q2 = n ≡ 0 (mod 4);
2. Xi ∗ Xj = 0, i ≠ j, i, j = 1, 2, . . . , k, where ∗ is the Hadamard product;
3. Σ_{i=1}^{k} Xi is a (+1, −1) matrix;
4. Σ_{i=1}^{k} XiXi^T ⊗ AiAi^T + Σ_{i,j=1, i≠j}^{k} XiXj^T ⊗ AiAj^T = nIn;
5. Σ_{i=1}^{k} Xi^T Xi ⊗ Ai^T Ai + Σ_{i,j=1, i≠j}^{k} Xi^T Xj ⊗ Ai^T Aj = nIn.
The first three conditions are evident. The last two conditions are jointly equivalent to the conditions

HH^T = H^T H = nIn.  (7.2)
Now, let us consider the case where the Ai are (+1, −1) vectors. Note that any Hadamard matrix Hn of order n can be represented as
(a) Hn = (++) ⊗ X + (+−) ⊗ Y,
(b) Hn = Σ_{i=1}^{8} vi ⊗ Ai,
(7.3)

where X, Y are (0, ±1) matrices of dimension n × (n/2), Ai are (0, ±1) matrices of dimension n × (n/4), and vi are the following four-dimensional (+1, −1) vectors:

v1 = (+ + ++), v2 = (+ + −−), v3 = (+ − −+), v4 = (+ − +−),
v5 = (+ − −−), v6 = (+ − ++), v7 = (+ + +−), v8 = (+ + −+).
(7.4)
Here, we give examples of decompositions of the following Hadamard matrices:
H4 =
+ + + +
+ − + −
+ + − −
+ − − +

H8 =
+ + + + + + + +
+ − + − + − + −
+ + − − + + − −
+ − − + + − − +
+ + + + − − − −
+ − + − − + − +
+ + − − − − + +
+ − − + − + + −
(7.5)
We use the following notations:
w1 = (++), w2 = (+−), v1 = (+ + ++), v2 = (+ − +−),
v3 = (+ + −−), v4 = (+ − −+).(7.6)
Example 7.1.1: The Hadamard matrices H4 and H8 can be decomposed as follows:
(1) Via two vectors:
H4 = w1 ⊗
+ +
0 0
+ −
0 0
+ w2 ⊗
0 0
+ +
0 0
+ −
,

H8 = w1 ⊗
+ + + +
0 0 0 0
+ − + −
0 0 0 0
+ + − −
0 0 0 0
+ − − +
0 0 0 0
+ w2 ⊗
0 0 0 0
+ + + +
0 0 0 0
+ − + −
0 0 0 0
+ + − −
0 0 0 0
+ − − +
. (7.7)
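The two-vector decomposition of Eq. (7.3a) can be sketched computationally (a minimal sketch, not the authors' code). Splitting the columns of H4 into pairs gives X = (even + odd)/2 and Y = (even − odd)/2; the matrix is recovered by interleaving the column pairs, which with NumPy's Kronecker convention places the matrix factor on the left:

```python
# Sketch of the decomposition Hn = (++) (x) X + (+-) (x) Y of Eq. (7.3a)
# for H4: column pairs of H4 are split into a (++)-part X and a
# (+-)-part Y, both (0, +/-1) matrices of size n x n/2.
import numpy as np

H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]])

X = (H4[:, 0::2] + H4[:, 1::2]) // 2   # (0, +/-1) matrix, n x n/2
Y = (H4[:, 0::2] - H4[:, 1::2]) // 2
w1, w2 = np.array([[1, 1]]), np.array([[1, -1]])
# Interleaved reconstruction (matrix factor on the left in np.kron):
assert np.array_equal(np.kron(X, w1) + np.kron(Y, w2), H4)
```

The X and Y produced this way coincide with the matrices shown in Eq. (7.7).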
(2) Via four vectors:
H4 = v1 ⊗ (+, 0, 0, 0)^T + v2 ⊗ (0, +, 0, 0)^T + v3 ⊗ (0, 0, +, 0)^T + v4 ⊗ (0, 0, 0, +)^T,

H8 = v1 ⊗
+ +
0 0
0 0
0 0
+ −
0 0
0 0
0 0
+ v2 ⊗
0 0
+ +
0 0
0 0
0 0
+ −
0 0
0 0
+ v3 ⊗
0 0
0 0
+ +
0 0
0 0
0 0
+ −
0 0
+ v4 ⊗
0 0
0 0
0 0
+ +
0 0
0 0
0 0
+ −
. (7.8)
Now, let us introduce the following matrices:

B1 = A1 + A2 + A7 + A8,  B2 = A3 + A4 + A5 + A6,
B3 = A1 − A2 − A5 + A6,  B4 = −A3 + A4 + A7 − A8.
(7.9)
Theorem 7.1.2:15 For the existence of a Hadamard matrix of order n, it is necessary and sufficient that there exist (0, ±1) matrices Bi, i = 1, 2, 3, 4 of dimension n × (n/4) satisfying the following conditions:
1. B1 ∗ B2 = 0, B3 ∗ B4 = 0;
2. B1 ± B2, B3 ± B4 are (+1, −1) matrices;
3. Σ_{i=1}^{4} BiBi^T = (n/2) In;
4. Bi^T Bj = 0, i ≠ j, i, j = 1, 2, 3, 4;
5. Bi^T Bi = (n/2) I_{n/4}, i = 1, 2, 3, 4.
(7.10)
Proof: Necessity: Let Hn be a Hadamard matrix of order n. According to Eq. (7.1), we have

Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + v8 ⊗ A8.  (7.11)
From this representation, it follows that
Ai ∗ Aj = 0, i ≠ j, i, j = 1, 2, . . . , 8,
A1 + A2 + · · · + A8 is a (+1, −1) matrix.
(7.12)
On the other hand, it is not difficult to show that the matrix Hn can also be presented as

Hn = [(++) ⊗ B1 + (+−) ⊗ B2, (++) ⊗ B3 + (+−) ⊗ B4].  (7.13)
Now, let us show that the matrices Bi satisfy the conditions of Eq. (7.10). The first three conditions of Eq. (7.10) follow from the representation of Eq. (7.13), from Eq. (7.12), and from HnHn^T = nIn. Because Hn is a Hadamard matrix of order n, from the representation of Eq. (7.13) we find the following system of matrix equations:
B1^T B1 + B1^T B2 + B2^T B1 + B2^T B2 = nI_{n/4},
B1^T B1 − B1^T B2 + B2^T B1 − B2^T B2 = 0,
B1^T B3 + B1^T B4 + B2^T B3 + B2^T B4 = 0,
B1^T B3 + B1^T B4 − B2^T B3 − B2^T B4 = 0;
(7.14a)

B1^T B1 + B1^T B2 − B2^T B1 − B2^T B2 = 0,
B1^T B1 − B1^T B2 − B2^T B1 + B2^T B2 = nI_{n/4},
B1^T B3 + B1^T B4 − B2^T B3 − B2^T B4 = 0,
B1^T B3 − B1^T B4 − B2^T B3 + B2^T B4 = 0;
(7.14b)

B3^T B1 + B3^T B2 + B4^T B1 + B4^T B2 = 0,
B3^T B1 − B3^T B2 + B4^T B1 − B4^T B2 = 0,
B3^T B3 + B3^T B4 + B4^T B3 + B4^T B4 = nI_{n/4},
B3^T B3 − B3^T B4 + B4^T B3 − B4^T B4 = 0;
(7.14c)

B3^T B1 + B3^T B2 − B4^T B1 − B4^T B2 = 0,
B3^T B1 − B3^T B2 − B4^T B1 + B4^T B2 = 0,
B3^T B3 + B3^T B4 − B4^T B3 − B4^T B4 = 0,
B3^T B3 − B3^T B4 − B4^T B3 + B4^T B4 = nI_{n/4};
(7.14d)
which are equivalent to
Bi^T Bi = (n/2) I_{n/4}, i = 1, 2, 3, 4,
Bi^T Bj = 0, i ≠ j, i, j = 1, 2, 3, 4.
(7.15)
Sufficiency: Let (0, ±1) matrices Bi, i = 1, 2, 3, 4 of dimension n × (n/4) satisfy the conditions of Eq. (7.10). We can directly verify that the matrix in Eq. (7.13) is a Hadamard matrix of order n.
Corollary 7.1.1: The (+1, −1) matrices

Q1 = (B1 + B2)^T, Q2 = (B1 − B2)^T,
Q3 = (B3 + B4)^T, Q4 = (B3 − B4)^T
(7.16)
of dimensions (n/4) × n satisfy the conditions

QiQj^T = 0, i ≠ j, i, j = 1, 2, 3, 4,
QiQi^T = nI_{n/4}, i = 1, 2, 3, 4.
(7.17)
Corollary 7.1.2:3 If there exist Hadamard matrices of orders m, n, p, q, then a Hadamard matrix of order mnpq/16 also exists.
Proof: According to Theorem 7.1.2, there exist (0, ±1) matrices Ai and Bi, i = 1, 2, 3, 4 of dimensions m × (m/4) and n × (n/4), respectively, satisfying the conditions in Eq. (7.10).
We introduce the following (+1, −1) matrices of order mn/4:

X = A1 ⊗ (B1 + B2)^T + A2 ⊗ (B1 − B2)^T,
Y = A3 ⊗ (B3 + B4)^T + A4 ⊗ (B3 − B4)^T.
(7.18)
It is easy to show that the matrices X, Y satisfy the conditions

XY^T = X^T Y = 0,
XX^T + YY^T = X^T X + Y^T Y = (mn/2) I_{mn/4}.
(7.19)
Again, we rewrite matrices X, Y in the following form:
X = [(++) ⊗ X1 + (+−) ⊗ X2, (++) ⊗ X3 + (+−) ⊗ X4],
Y = [(++) ⊗ Y1 + (+−) ⊗ Y2, (++) ⊗ Y3 + (+−) ⊗ Y4],(7.20)
where Xi, Yi, i = 1, 2, 3, 4 are (0, ±1) matrices of dimensions (mn/4) × (mn/16), satisfying the conditions

X1 ∗ X2 = X3 ∗ X4 = Y1 ∗ Y2 = Y3 ∗ Y4 = 0,
X1 ± X2, X3 ± X4, Y1 ± Y2, Y3 ± Y4 are (+1, −1) matrices,
Σ_{i=1}^{4} XiYi^T = Σ_{i=1}^{4} Xi^T Yi = 0,
Σ_{i=1}^{4} (XiXi^T + YiYi^T) = Σ_{i=1}^{4} (Xi^T Xi + Yi^T Yi) = (mn/4) I_{mn/4}.
(7.21)
(+1, −1) matrices P and Q of order pq/4 satisfying the conditions of Eq. (7.19) can be constructed from Hadamard matrices of orders p and q in a similar manner.
Now, consider the following (0, ±1) matrices:

Z = (P + Q)/2,  W = (P − Q)/2,
Ci = Xi ⊗ Z + Yi ⊗ W,  i = 1, 2, 3, 4.
(7.22)
It is not difficult to show that the matrices Z and W satisfy the conditions

Z ∗ W = 0,
ZW^T = WZ^T,
ZZ^T = Z^T Z = WW^T = W^T W = (pq/8) I_{pq/4},
(7.23)

and that the matrices Ci of dimension (mnpq/16) × (mnpq/64) satisfy the conditions of Eq. (7.10).
Hence, according to Theorem 7.1.2, the matrix

[(++) ⊗ C1 + (+−) ⊗ C2, (++) ⊗ C3 + (+−) ⊗ C4]  (7.24)

is a Hadamard matrix of order mnpq/16.
Corollary 7.1.3: If Hadamard matrices of orders ni, i = 1, 2, . . . , k + 1, exist, then there also exist Hadamard matrices of orders (n1n2 · · · nk+1)/2^k, k = 1, 2, . . ..
Proof: By Theorem 7.1.2, there exist (0, ±1) matrices Ai and Bi, i = 1, 2, 3, 4 of dimensions n1 × (n1/4) and n2 × (n2/4), respectively, satisfying the conditions of Eq. (7.10).
Consider the following representations of matrices:
Q1 = (B1 + B2)T = (++) ⊗ X1 + (+−) ⊗ X2,
Q2 = (B1 − B2)T = (++) ⊗ Y1 + (+−) ⊗ Y2,
Q3 = (B3 + B4)T = (++) ⊗ Z1 + (+−) ⊗ Z2,
Q4 = (B3 − B4)T = (++) ⊗W1 + (+−) ⊗W2,
(7.25)
where Xi, Yi, Zi, Wi, i = 1, 2 are (0, ±1) matrices of dimension (n2/4) × (n2/2). From the condition of Eq. (7.17) and the representation of Eq. (7.25), we find that
X1 ∗ X2 = Y1 ∗ Y2 = Z1 ∗ Z2 = W1 ∗ W2 = 0,
X1 ± X2, Y1 ± Y2, Z1 ± Z2, W1 ± W2 are (+1, −1) matrices,
X1Y1^T + X2Y2^T = 0,  X1Z1^T + X2Z2^T = 0,  X1W1^T + X2W2^T = 0,
Y1Z1^T + Y2Z2^T = 0,  Y1W1^T + Y2W2^T = 0,  Z1W1^T + Z2W2^T = 0,
X1X1^T + X2X2^T = Y1Y1^T + Y2Y2^T = Z1Z1^T + Z2Z2^T = W1W1^T + W2W2^T = (n2/2) I_{n2/4}.
(7.26)
Now, we define the following matrices:

C1 = ⎛ X1 ⊗ A1 + Y1 ⊗ A2 ⎞ ,  C2 = ⎛ X2 ⊗ A1 + Y2 ⊗ A2 ⎞ ,
     ⎝ Z1 ⊗ A1 + W1 ⊗ A2 ⎠        ⎝ Z2 ⊗ A1 + W2 ⊗ A2 ⎠

C3 = ⎛ X1 ⊗ A3 + Y1 ⊗ A4 ⎞ ,  C4 = ⎛ X2 ⊗ A3 + Y2 ⊗ A4 ⎞ .
     ⎝ Z1 ⊗ A3 + W1 ⊗ A4 ⎠        ⎝ Z2 ⊗ A3 + W2 ⊗ A4 ⎠
(7.27)
It is easy to show that the (0, ±1) matrices Ci, i = 1, 2, 3, 4 of dimensions (n1n2/2) × (n1n2/8) satisfy the conditions of Eq. (7.10). Hence, according to Theorem 7.1.2, a Hadamard matrix of order n1n2/2 exists. Subsequently, Corollary 7.1.2 implies Corollary 7.1.3.
Corollary 7.1.4: If there exist Hadamard matrices of orders ni, i = 1, 2, . . ., then there also exists a Hadamard matrix of order (n1n2 · · · nk+3)/2^{k+3}, k = 1, 2, . . ..
Proof: The case k = 1 was proved in Corollary 7.1.2. According to Corollary 7.1.3, from Hadamard matrices of orders n1n2n3n4/16, n5, n6, . . . , nk, we can construct a Hadamard matrix of order (n1n2 · · · nk)/2^k, k = 4, 5, . . ..
Theorem 7.1.3: For any natural numbers k and t, there is a Hadamard matrix of order [n1n2 · · · n_{t(k+2)+1}]/2^{t(k+3)}, where ni ≥ 4 are orders of known Hadamard matrices.

Proof: The case t = 1 and k = 1, 2, . . . was proved in Corollary 7.1.4. Let t > 1 and assume that the assertion is correct for t = t0 > 1, i.e., there is a Hadamard matrix of order [n1n2 · · · n_{t0(k+2)+1}]/2^{t0(k+3)}.
We prove the theorem for t = t0 + 1. We have k + 3 Hadamard matrices of the following orders:

m1 = (n1n2 · · · n_{t0(k+2)+1}) / 2^{t0(k+3)},  n_{t0(k+2)+2}, . . . , n_{t0(k+2)+k+3}.  (7.28)

According to Corollary 7.1.4, we can construct a Hadamard matrix of order

(n1n2 · · · n_{(t0+1)(k+2)+1}) / 2^{(t0+1)(k+3)}.  (7.29)
Now, prove the following lemma.
Lemma 7.1.1: There are no Hadamard matrices of order n represented as

Hn = Σ_{i=1}^{k} wi ⊗ Ai,  k ≠ 4, 8,  wi ∈ {v1, v2, . . . , v8}.  (7.30)
Proof: We prove the lemma for k = 3, 5. For the other values of k, the proof is similar. For k = 3, suppose that a Hadamard matrix Hn of order n of the type in Eq. (7.30) exists, i.e.,
Hn = (+ + ++) ⊗ A1 + (+ + −−) ⊗ A2 + (+ − +−) ⊗ A3. (7.31)
This matrix can be written as
Hn = [(++) ⊗ (A1 + A2) + (+−) ⊗ A3, (++) ⊗ (A1 − A2) + (+−) ⊗ (−A3)]. (7.32)
According to Theorem 7.1.2, the (0, ±1) matrices
B1 = A1 + A2, B2 = A3, B3 = A1 − A2, B4 = −A3 (7.33)
of dimension $n \times (n/4)$ must satisfy all the conditions in Eq. (7.10). In particular, the following conditions should be satisfied:
$$B_2^TB_4 = 0, \qquad B_2^TB_2 = \frac{n}{2} I_{n/4}. \tag{7.34}$$
That is, on the one hand $A_3^TA_3 = 0$, and on the other hand $A_3^TA_3 = (n/2)I_{n/4}$, which is impossible. Now we consider the case k = 5.
Let
$$H_n = (++++) \otimes A_1 + (++--) \otimes A_2 + (+--+) \otimes A_3 + (+-+-) \otimes A_4 + (+--+) \otimes A_5. \tag{7.35}$$
The matrix Hn can be easily written as
Hn = [(++) ⊗ (A1 + A2) + (+−) ⊗ (A3 + A4 + A5),
(++) ⊗ (A1 − A2 − A5) + (+−) ⊗ (−A3 + A4)]. (7.36)
According to Theorem 7.1.2, the (0, ±1) matrices
B1 = A1 + A2, B2 = A3 + A4 + A5, B3 = A1 − A2 − A5, B4 = −A3 + A4
(7.37)
must satisfy the conditions of Eq. (7.10). We can see that the conditions
$$B_1^TB_1 = B_3^TB_3 = \frac{n}{2} I_{n/4} \tag{7.38}$$
mean that any column of the matrices $B_1$ and $B_3$ contains precisely n/2 nonzero elements. From this, we find that $A_5 = 0$, which contradicts the condition of Lemma 7.1.1.
7.2 Decomposition of Hadamard Matrices and Their Classification
In this section, we consider the possibility of decomposing Hadamard matrices using four orthogonal vectors of length 4. Let $v_i$, $i = 1, 2, \ldots, k$ be mutually orthogonal k-dimensional (+1, −1) vectors. We investigate Hadamard matrices of order n that have the following representation:
Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk. (7.39)
We call a Hadamard matrix having the representation in Eq. (7.39) an A(n, k)-type Hadamard matrix, or simply an A(n, k) matrix.
Theorem 7.2.1:16,17 A matrix of order n is an A(n, k)-type Hadamard matrix if and only if there exist nonzero $(0, \pm1)$ matrices $B_i$, $i = 1, 2, \ldots, k$ of dimensions $n \times n/k$ satisfying the following conditions:
$$\begin{aligned}
&B_i * B_j = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, k,\\
&\sum_{i=1}^{k} B_i \ \text{is a } (+1,-1) \text{ matrix},\\
&\sum_{i=1}^{k} B_iB_i^T = \frac{n}{k}\, I_n,\\
&B_i^TB_j = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, k,\\
&B_i^TB_i = \frac{n}{k}\, I_{n/k}, \quad i = 1, 2, \ldots, k.
\end{aligned} \tag{7.40}$$
Proof: Necessity: To avoid excessive formulas, we prove the theorem for the case k = 4; the general case is then a direct extension. Let $H_n$ be a Hadamard matrix of type A(n, 4), i.e., $H_n$ has the form of Eq. (7.39), where
$$v_iv_j^T = 0, \ i \neq j, \qquad v_iv_i^T = 4, \quad i, j = 1, 2, 3, 4. \tag{7.41}$$
We shall prove that the $(0, \pm1)$ matrices $B_i$, $i = 1, 2, 3, 4$ of dimensions $n \times n/4$ satisfy the conditions of Eq. (7.40). The first two conditions are obvious. The third condition follows from the relationship
$$H_nH_n^T = 4\sum_{i=1}^{4} B_iB_i^T = nI_n. \tag{7.42}$$
Consider the last two conditions of Eq. (7.40). Note that the Hadamard matrix Hn
has the form
Hn = (+ + ++) ⊗ B1 + (+ + −−) ⊗ B2 + (+ − −+) ⊗ B3 + (+ − +−) ⊗ B4. (7.43)
We can also rewrite Hn as
Hn = [(++) ⊗C1 + (+−) ⊗C2, (++) ⊗C3 + (+−) ⊗C4], (7.44)
where, by Theorem 7.1.2,
C1 = B1 + B2, C2 = B3 + B4, C3 = B1 − B2, C4 = B3 − B4 (7.45)
satisfy the conditions of Eq. (7.10).
Hence, applying the conditions of Eq. (7.10) to the matrices of Eq. (7.45), we can see that the matrices $B_i$ satisfy the following equations:
$$\begin{aligned}
&B_1^TB_1 + B_1^TB_2 + B_2^TB_1 + B_2^TB_2 = \frac{n}{2} I_{n/4},\\
&B_1^TB_1 - B_1^TB_2 + B_2^TB_1 - B_2^TB_2 = 0,\\
&B_1^TB_3 + B_1^TB_4 + B_2^TB_3 + B_2^TB_4 = 0,\\
&B_1^TB_3 + B_1^TB_4 - B_2^TB_3 - B_2^TB_4 = 0;\\[6pt]
&B_1^TB_1 + B_1^TB_2 - B_2^TB_1 - B_2^TB_2 = 0,\\
&B_1^TB_1 - B_1^TB_2 - B_2^TB_1 + B_2^TB_2 = \frac{n}{2} I_{n/4},\\
&B_1^TB_3 + B_1^TB_4 - B_2^TB_3 - B_2^TB_4 = 0,\\
&B_1^TB_3 - B_1^TB_4 - B_2^TB_3 + B_2^TB_4 = 0;\\[6pt]
&B_3^TB_1 + B_3^TB_2 + B_4^TB_1 + B_4^TB_2 = 0,\\
&B_3^TB_1 - B_3^TB_2 + B_4^TB_1 - B_4^TB_2 = 0,\\
&B_3^TB_3 + B_3^TB_4 + B_4^TB_3 + B_4^TB_4 = \frac{n}{2} I_{n/4},\\
&B_3^TB_3 - B_3^TB_4 + B_4^TB_3 - B_4^TB_4 = 0;\\[6pt]
&B_3^TB_1 + B_3^TB_2 - B_4^TB_1 - B_4^TB_2 = 0,\\
&B_3^TB_1 - B_3^TB_2 - B_4^TB_1 + B_4^TB_2 = 0,\\
&B_3^TB_3 + B_3^TB_4 - B_4^TB_3 - B_4^TB_4 = 0,\\
&B_3^TB_3 - B_3^TB_4 - B_4^TB_3 + B_4^TB_4 = \frac{n}{2} I_{n/4}.
\end{aligned} \tag{7.46}$$
Solving these systems, we find that
$$B_i^TB_j = 0, \ i \neq j, \qquad B_i^TB_i = \frac{n}{4} I_{n/4}, \quad i, j = 1, 2, 3, 4. \tag{7.47}$$
Sufficiency: Let $(0, \pm1)$ matrices $B_i$, $i = 1, 2, 3, 4$ satisfy the conditions of Eq. (7.40). We shall show that the matrix in Eq. (7.43) is a Hadamard matrix. Indeed, calculating $H_nH_n^T$ and $H_n^TH_n$, we find that
$$H_nH_n^T = 4\sum_{i=1}^{4} B_iB_i^T = H_n^TH_n = \sum_{i=1}^{4} v_i^Tv_i \otimes \frac{n}{4} I_{n/4} = nI_n. \tag{7.48}$$
From Theorems 7.1.2 and 7.2.1, the following directly follows:
Corollary 7.2.1: Any Hadamard matrix of order n is an A(n, 2) matrix.
From the condition of mutual orthogonality of the k-dimensional (+1, −1) vectors $v_i$, $i = 1, 2, \ldots, k$ and from the condition of Eq. (7.48), it follows that k = 2 or $k \equiv 0 \pmod 4$.
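Corollary 7.2.1 is easy to verify numerically. The sketch below (Python with NumPy; the helper names and the use of NumPy are ours, not the book's) splits a Hadamard matrix into the two (0, ±1) matrices B1, B2 of its A(n, 2) representation and checks the conditions of Eq. (7.40). The book's v ⊗ B is realized here as `np.kron(B, v)`, which matches the worked examples of Chapter 8.

```python
import numpy as np

def decompose2(H):
    """A(n, 2) decomposition: H = v1 (x) B1 + v2 (x) B2,
    with v1 = (+, +) and v2 = (+, -)."""
    L, R = H[:, 0::2], H[:, 1::2]      # even- and odd-indexed columns
    return (L + R) // 2, (L - R) // 2  # (0, +-1) matrices of size n x n/2

def check_A_n_2(H):
    """Check the five conditions of Eq. (7.40) for k = 2."""
    n = H.shape[0]
    B1, B2 = decompose2(H)
    assert np.all(B1 * B2 == 0)                      # disjoint supports
    assert np.all(np.abs(B1 + B2) == 1)              # sum is a (+1, -1) matrix
    assert np.array_equal(B1 @ B1.T + B2 @ B2.T, (n // 2) * np.eye(n))
    assert np.array_equal(B1.T @ B2, np.zeros((n // 2, n // 2)))
    assert np.array_equal(B1.T @ B1, (n // 2) * np.eye(n // 2))
    # reconstruction: v1 (x) B1 + v2 (x) B2 returns H
    assert np.array_equal(np.kron(B1, [1, 1]) + np.kron(B2, [1, -1]), H)

H2 = np.array([[1, 1], [1, -1]])
check_A_n_2(np.kron(H2, H2))                # order 4
check_A_n_2(np.kron(np.kron(H2, H2), H2))   # order 8
```

The same check passes for any Hadamard matrix, since distinct columns of a Hadamard matrix are orthogonal, which is exactly what makes B1 and B2 satisfy Eq. (7.40).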
Theorem 7.2.2 reveals a relationship between the order of an A(n, k) matrix and the dimension of the vectors $v_i$.

Theorem 7.2.2:15,17 Let $H_n$ be an A(n, k) matrix. Then $n \equiv 0 \pmod{2k}$.
Proof: According to Theorem 7.2.1, the matrix Hn can be written as
Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk, (7.49)
where the $(0, \pm1)$ matrices $B_i$ of dimension $n \times n/k$ satisfy the conditions of Eq. (7.40). Note that the fifth condition of Eq. (7.40) means that the $B_i^T$ are orthogonal $(0, \pm1)$ matrices and any row of $B_i^T$ contains n/k nonzero elements. Hence, for
the matrix $B_i^T$, a matrix $\bar B_i^T$ corresponds to it, having the following schematic form:
$$\bar B_1^T = \begin{pmatrix} /\!/\!/ & \cdots & \cdots \\ \vdots & \vdots & \vdots \end{pmatrix}, \quad \bar B_2^T = \begin{pmatrix} \cdots & /\!/\!/ & \cdots \\ \vdots & \vdots & \vdots \end{pmatrix}, \quad \ldots, \quad \bar B_k^T = \begin{pmatrix} \cdots & \cdots & /\!/\!/ \\ \vdots & \vdots & \vdots \end{pmatrix}, \tag{7.50}$$
where the shaded portions (///) of the rows contain ±1 entries, and the other parts of these rows are filled with zeros.
From the condition $B_i^TB_i = (n/k)I_{n/k}$, it follows that the shaded pieces of the i'th rows of the matrices $B_i^T$ contain an even number of ±1s, and from the condition
$$B_i^TB_j = 0, \quad i \neq j, \tag{7.51}$$
it follows that the other parts of the i'th row also contain an even number of ±1s. It follows that n/k is an even number, i.e., n/k = 2l; hence, $n \equiv 0 \pmod{2k}$.
Naturally, the following problem arises: for any n, $n \equiv 0 \pmod{2k}$, construct an A(n, k)-type Hadamard matrix. Next, we present some properties of A(n, k)-type Hadamard matrices.
Property 7.2.1: (a) If A(n, k)- and A(m, r)-type Hadamard matrices exist, then an A(mn, kr)-type Hadamard matrix also exists.
(b) If a Hadamard matrix of order n exists, then there also exists an $A(2^{i-1}n, 2^i)$-type Hadamard matrix, $i = 1, 2, \ldots$.
(c) If Hadamard matrices of orders $n_i$, $i = 1, 2, \ldots$ exist, then a Hadamard matrix of type $A\{[n_1n_2 \cdots n_{t(k+2)+2}]/2^{t(k+3)}, 4\}$ exists, where $k, t = 1, 2, \ldots$.
Theorem 7.2.3:15,17 Let there be a Hadamard matrix of order m and an A(n, k) matrix. Then, for any even number r such that $m, n \equiv 0 \pmod r$, there are $(0, \pm1)$ matrices $B_{i,j}$, $i = 1, 2, \ldots, r/2$, $j = 1, 2, \ldots, k$, of dimension $(mn/r) \times (mn/rk)$ satisfying the following conditions:
$$\begin{aligned}
&B_{p,i} * B_{p,j} = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, k, \ p = 1, 2, \ldots, \frac{r}{2},\\
&\sum_{i=1}^{k} B_{p,i} \ \text{is a } (+1,-1) \text{ matrix}, \quad p = 1, 2, \ldots, \frac{r}{2},\\
&\sum_{p=1}^{k} B_{i,p}B_{j,p}^T = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, \frac{r}{2},\\
&\sum_{p=1}^{r/2} \sum_{i=1}^{k} B_{p,i}B_{p,i}^T = \frac{mn}{2k}\, I_{mn/r}.
\end{aligned} \tag{7.52}$$
Proof: Represent the A(n, k) matrix $H_n$ as $H_n^T = (P_1^T, P_2^T, \ldots, P_r^T)$, where the (+1, −1) matrices $P_i$ of dimension $n/r \times n$ have the form
$$P_i = \sum_{t=1}^{k} v_t \otimes A_{i,t}, \quad i = 1, 2, \ldots, r, \tag{7.53}$$
and $v_t$ are mutually orthogonal k-dimensional (+1, −1) vectors. We can show that the $(0, \pm1)$ matrices $A_{i,j}$ of dimension $n/r \times n/k$ satisfy the following conditions:
$$\begin{aligned}
&A_{t,i} * A_{t,j} = 0, \quad i \neq j, \ t = 1, \ldots, r, \ i, j = 1, \ldots, k,\\
&\sum_{i=1}^{k} A_{t,i} \ \text{is a } (+1,-1) \text{ matrix}, \quad t = 1, \ldots, r,\\
&\sum_{t=1}^{k} A_{i,t}A_{j,t}^T = 0, \quad i \neq j, \ i, j = 1, \ldots, r,\\
&\sum_{t=1}^{k} A_{i,t}A_{i,t}^T = \frac{n}{r}\, I_{n/r}, \quad i = 1, \ldots, r.
\end{aligned} \tag{7.54}$$
Now, we represent the Hadamard matrix $H_m$ of order m as $H_m = (Q_1, Q_2, \ldots, Q_k)$, where it is obvious that the (+1, −1) matrices $Q_i$ have dimension $m \times m/k$ and satisfy the condition
$$\sum_{i=1}^{k} Q_iQ_i^T = mI_m. \tag{7.55}$$
Let us introduce the following matrices:
$$U_{2i-1} = \frac{Q_{2i-1} + Q_{2i}}{2}, \qquad U_{2i} = \frac{Q_{2i-1} - Q_{2i}}{2}, \quad i = 1, 2, \ldots, \frac{k}{2}. \tag{7.56}$$
We can show that these matrices satisfy the conditions
$$\begin{aligned}
&U_{2i-1} * U_{2i} = 0, \quad i = 1, 2, \ldots, \frac{k}{2},\\
&U_{2i-1} \pm U_{2i} \ \text{is a } (+1,-1) \text{ matrix},\\
&\sum_{i=1}^{k} U_iU_i^T = \frac{m}{2}\, I_m.
\end{aligned} \tag{7.57}$$
Now, we consider the $(0, \pm1)$ matrices of dimension $(mn/r) \times (mn/rk)$:
$$B_{t,i} = U_{2t-1} \otimes A_{2t-1,i} + U_{2t} \otimes A_{2t,i}, \quad t = 1, 2, \ldots, \frac{r}{2}, \ i = 1, 2, \ldots, k. \tag{7.58}$$
By using the conditions of Eqs. (7.54) and (7.57), we can verify that these matrices satisfy the conditions of Eq. (7.52). From Theorem 7.2.3, some useful corollaries follow.
Corollary 7.2.2:1,2,18 The existence of Hadamard matrices of orders m and n implies the existence of a Hadamard matrix of order mn/2.

Indeed, according to Theorem 7.2.3, for k = r = 2, there are $(0, \pm1)$ matrices $B_{1,1}$ and $B_{1,2}$ satisfying the conditions of Eq. (7.52). Now it is not difficult to show that $(++) \otimes B_{1,1} + (+-) \otimes B_{1,2}$ is a Hadamard matrix of order mn/2.
Corollary 7.2.3:19 If Hadamard matrices of orders m and n exist, then there are $(0, \pm1)$ matrices X, Y of order mn/4 satisfying the conditions
$$XY^T = 0, \qquad XX^T + YY^T = \frac{mn}{2}\, I_{mn/4}. \tag{7.59}$$
According to Theorem 7.2.3, for k = 2 and r = 4, we have two pairs of $(0, \pm1)$ matrices $B_{1,1}, B_{1,2}$ and $B_{2,1}, B_{2,2}$ of dimension $mn/4 \times mn/8$ satisfying the conditions of Eq. (7.52). We can show that the matrices
$$X = (++) \otimes B_{1,1} + (+-) \otimes B_{1,2}, \qquad Y = (++) \otimes B_{2,1} + (+-) \otimes B_{2,2} \tag{7.60}$$
satisfy the conditions of Eq. (7.59).
Corollary 7.2.4: If an A(n, 4) matrix and a Hadamard matrix of order m exist, then there are $(0, \pm1)$ matrices X, Y of order mn/4 of the form
$$X = \sum_{i=1}^{4} v_i \otimes B_{1,i}, \qquad Y = \sum_{i=1}^{4} v_i \otimes B_{2,i} \tag{7.61}$$
satisfying the conditions of Eq. (7.59). Here, $v_i$ are mutually orthogonal four-dimensional (+1, −1) vectors. The proof of this corollary follows from Theorem 7.2.3 for r = k = 4. As mentioned, the number k of mutually orthogonal (+1, −1) vectors satisfies k = 2 or $k \equiv 0 \pmod 4$.
Below, we consider vectors of dimension $k = 2^t$. Denote the set of all Hadamard matrices by C and the set of A(n, k)-type Hadamard matrices by $C_k$. From Theorem 7.2.1, it follows that $C = C_2$, and from Corollary 7.2.1 it directly follows that
$$C = C_2 \supset C_4 \supset C_8 \supset \cdots \supset C_{2^k}. \tag{7.62}$$
Now, from Theorem 7.2.2, the following can be derived:
Corollary 7.2.5: If $H_n \in C_{2^k}$, then $n \equiv 0 \pmod{2^{k+1}}$.

Theorem 7.2.4: Let $H_n \in C_k$, $H_m \in C_r$, and $k \le r$. Then there is an $H_{mn/k} \in C_r$.
Proof: According to Theorem 7.2.1, there exist $(0, \pm1)$ matrices $B_i$, $i = 1, 2, \ldots, k$ of dimensions $n \times n/k$ satisfying the conditions of Eq. (7.40). The matrix $H_m$ can be written as
$$H_m = \begin{pmatrix} \sum_{i=1}^{r} v_i \otimes A_{1,i} \\[4pt] \sum_{i=1}^{r} v_i \otimes A_{2,i} \\ \vdots \\ \sum_{i=1}^{r} v_i \otimes A_{k,i} \end{pmatrix}, \tag{7.63}$$
where the $(0, \pm1)$ matrices $A_{i,j}$ of dimensions $m/k \times m/r$ satisfy the conditions
$$\begin{aligned}
&A_{i,t} * A_{i,p} = 0, \quad i = 1, \ldots, k, \ t \neq p, \ t, p = 1, \ldots, r,\\
&\sum_{t=1}^{r} A_{i,t} \ \text{is a } (+1,-1) \text{ matrix}, \quad i = 1, \ldots, k,\\
&\sum_{t=1}^{r} A_{i,t}A_{p,t}^T = 0, \quad i \neq p, \ i, p = 1, \ldots, k,\\
&\sum_{t=1}^{r} A_{i,t}A_{i,t}^T = \frac{m}{r}\, I_{m/k}, \quad i = 1, \ldots, k.
\end{aligned} \tag{7.64}$$
Now, we introduce the $(0, \pm1)$ matrices $D_i$ of dimension $mn/k \times mn/(kr)$:
$$D_i = \sum_{t=1}^{k} B_t \otimes A_{t,i}, \quad i = 1, 2, \ldots, r. \tag{7.65}$$
One can show that the matrices $D_i$ satisfy the conditions of Eq. (7.40). According to Theorem 7.2.1, this means that there is a Hadamard matrix of type A(mn/k, r), i.e., $H_{mn/k} \in C_r$, thus proving the theorem.
7.3 Multiplicative Theorems of Orthogonal Arrays and Hadamard Matrix Construction
Now, we move on to orthogonal arrays and Hadamard matrix construction using properties of Hadamard matrix decomposability.
Theorem 7.3.1: If there is an A(n, k) matrix and a Hadamard matrix of order m [$m \equiv 0 \pmod k$], then a Hadamard matrix of order mn/k exists.
Proof: According to Theorem 7.2.1, there are matrices $B_i$, $i = 1, 2, \ldots, k$ of dimensions $n \times n/k$ satisfying the conditions of Eq. (7.40). Represent the Hadamard matrix $H_m$ as $H_m^T = (H_1^T, H_2^T, \ldots, H_k^T)$, where the $H_i$ are (+1, −1) matrices of dimension $m/k \times m$ satisfying the conditions
$$H_iH_j^T = 0, \ i \neq j, \qquad H_iH_i^T = mI_{m/k}, \quad i, j = 1, 2, \ldots, k. \tag{7.66}$$
Now, it is not difficult to show that the matrix $\sum_{i=1}^{k} H_i \otimes B_i$ is a Hadamard matrix of order mn/k.
Theorem 7.3.2: Let there be an A(n, k) matrix and Hadamard matrices of orders m, p, q. Then, an A[(mnpq/16), k] matrix also exists.
Proof: We present the Hadamard matrix $H_1$ of type A(n, k) in the following form:
$$H_1 = \begin{pmatrix} \sum_{i=1}^{k} v_i \otimes A_{1,i} \\[4pt] \sum_{i=1}^{k} v_i \otimes A_{2,i} \\[4pt] \sum_{i=1}^{k} v_i \otimes A_{3,i} \\[4pt] \sum_{i=1}^{k} v_i \otimes A_{4,i} \end{pmatrix}, \tag{7.67}$$
where $v_i$ are mutually orthogonal k-dimensional (+1, −1) vectors, and $A_{i,j}$ are $(0, \pm1)$ matrices of dimension $n/4 \times n/k$ satisfying the conditions
$$\begin{aligned}
&A_{t,i} * A_{t,j} = 0, \quad t = 1, 2, 3, 4, \ i \neq j, \ i, j = 1, \ldots, k,\\
&\sum_{i=1}^{k} A_{t,i} \ \text{is a } (+1,-1) \text{ matrix}, \quad t = 1, 2, 3, 4,\\
&\sum_{i=1}^{k} A_{t,i}A_{r,i}^T = 0, \quad t \neq r, \ t, r = 1, 2, 3, 4,\\
&\sum_{i=1}^{k} A_{t,i}A_{t,i}^T = \frac{n}{k}\, I_{n/4}, \quad t = 1, 2, 3, 4.
\end{aligned} \tag{7.68}$$
Now, we represent the Hadamard matrix $H_2$ of order m as $H_2 = [P_1, P_2, P_3, P_4]$, and introduce the following $(0, \pm1)$ matrices of dimension $m \times m/4$:
$$U_1 = \frac{P_1 + P_2}{2}, \quad U_2 = \frac{P_1 - P_2}{2}, \quad U_3 = \frac{P_3 + P_4}{2}, \quad U_4 = \frac{P_3 - P_4}{2}. \tag{7.69}$$
We can show that these matrices satisfy the conditions
$$\begin{aligned}
&U_1 * U_2 = U_3 * U_4 = 0,\\
&U_1 \pm U_2, \ U_3 \pm U_4 \ \text{are } (+1,-1) \text{ matrices},\\
&\sum_{i=1}^{4} U_iU_i^T = \frac{m}{2}\, I_m.
\end{aligned} \tag{7.70}$$
Furthermore, we introduce (+1, −1) matrices of order mn/k:
$$X = \sum_{i=1}^{k} v_i \otimes \left(U_1 \otimes A_{1,i} + U_2 \otimes A_{2,i}\right), \qquad Y = \sum_{i=1}^{k} v_i \otimes \left(U_3 \otimes A_{3,i} + U_4 \otimes A_{4,i}\right). \tag{7.71}$$
One can show that these matrices satisfy the conditions of Eq. (7.59). According to Corollary 7.2.3, from the existence of Hadamard matrices of orders p and q, the existence of (+1, −1) matrices $X_1$, $Y_1$ of order pq/4 follows, satisfying the conditions of Eq. (7.59). Now we can show that the $(0, \pm1)$ matrices
$$Z = \frac{X_1 + Y_1}{2}, \qquad W = \frac{X_1 - Y_1}{2} \tag{7.72}$$
satisfy the conditions
$$\begin{aligned}
&Z * W = 0,\\
&Z \pm W \ \text{is a } (+1,-1) \text{ matrix},\\
&ZW^T = WZ^T,\\
&ZZ^T = WW^T = \frac{pq}{8}\, I_{pq/4}.
\end{aligned} \tag{7.73}$$
Finally, we introduce the $(0, \pm1)$ matrices $B_i$, $i = 1, 2, \ldots, k$ of dimensions $(mnpq/16) \times (mnpq/16k)$:
$$B_i = \left(U_1 \otimes A_{1,i} + U_2 \otimes A_{2,i}\right) \otimes Z + \left(U_3 \otimes A_{3,i} + U_4 \otimes A_{4,i}\right) \otimes W. \tag{7.74}$$
We can show that the matrices $B_i$ satisfy the conditions of Theorem 7.2.1. Hence, there is a Hadamard matrix of type A[(mnpq/16), k].

From Corollary 7.2.2 and Theorems 7.3.1 and 7.3.2, the following ensues:
Corollary 7.3.1: (a) If there is an $A(n_1, k)$ matrix and Hadamard matrices of orders $n_i$, $i = 2, 3, 4, \ldots$, then a Hadamard matrix also exists of type
$$A\left(\frac{n_1n_2 \cdots n_{3t+1}}{2^{4t}},\ k\right), \quad t = 1, 2, \ldots. \tag{7.75}$$
(b) If there are Hadamard matrices of orders $n_i$, $i = 1, 2, \ldots$, then there are also
$$A\left(\frac{n_1n_2 \cdots n_{3t+1}}{2^{4t-1}},\ 4\right) \quad \text{and} \quad A\left(\frac{n_1n_2 \cdots n_{3t+1}}{2^{4t-2}},\ 8\right) \tag{7.76}$$
matrices, $t = 1, 2, \ldots$.
(c) If Hadamard matrices of orders $n_i$, $i = 1, 2, \ldots$ exist, then there is also a Hadamard matrix of order $(n_1n_2 \cdots n_{3i+2})/2^{4i+1}$.
Theorem 7.3.3: If there is an A(n, k) matrix and an orthogonal design $OD(m;\, m/k, m/k, \ldots, m/k)$, then an orthogonal design $OD(mn/k;\, mn/k^2, mn/k^2, \ldots, mn/k^2)$ exists.

The proof of this theorem is similar to the proof of Theorem 7.3.1.
Corollary 7.3.2: (a) If there is an A(n, 4) matrix and a Baumert–Hall (Goethals–Seidel) array of order m, then a Baumert–Hall (Goethals–Seidel) array of order mn/4 exists.
(b) If an A(n, 8) matrix and a Plotkin array of order m exist, then a Plotkin array of order mn/8 exists.
Theorem 7.3.4: (Baumert–Hall20) If there are Williamson matrices of order n and a Baumert–Hall array of order 4t, then there is a Hadamard matrix of order 4nt.
From Theorems 7.1.3 and 7.3.4, and Corollaries 7.2.3, 7.3.1, and 7.3.2 we find:
Corollary 7.3.3: Let $w_i$ be orders of known Williamson-type matrices and $t_i$ be orders of known T matrices. Then, there are
(a) Baumert–Hall arrays of orders
$$2^{n(k+1)+4}\, w_1w_2 \cdots w_{n(k+2)+2}\, t_1t_2 \cdots t_{n(k+2)+3}, \qquad 2^{2k+1}\, w_1w_2 \cdots w_{3k+1}\, t_1t_2 \cdots t_{3k+2}; \tag{7.77}$$
(b) Plotkin arrays of orders
$$2^{2(k+2)}\, w_1w_2 \cdots w_{3k+1}\, t_1t_2 \cdots t_{3k+1}, \qquad 3 \cdot 2^{2(k+2)}\, w_1w_2 \cdots w_{3k+1}\, t_1t_2 \cdots t_{3k+1}; \tag{7.78}$$
(c) Hadamard matrices of orders
$$\begin{gathered}
2^{n(k+1)+4}\, w_1w_2 \cdots w_{n(k+2)+3}\, t_1t_2 \cdots t_{n(k+2)+3}, \qquad 2^{2k+1}\, w_1w_2 \cdots w_{3k+2}\, t_1t_2 \cdots t_{3k+2};\\
2^{2(k+2)}\, w_1w_2 \cdots w_{3k+3}\, t_1t_2 \cdots t_{3k+1}, \qquad 3 \cdot 2^{2(k+2)}\, w_1w_2 \cdots w_{3k+3}\, t_1t_2 \cdots t_{3k+1}.
\end{gathered} \tag{7.79}$$
References
1. S. S. Agaian and H. G. Sarukhanyan, “Recurrent formulae for construction ofWilliamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
2. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes inMathematics 1168, Springer-Verlag, Berlin, (1985).
3. R. Craigen, J. Seberry, and X. Zhang, “Product of four Hadamard matrices,”J. Combin. Theory, Ser. A 59, 318–320 (1992).
4. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122(2003) [doi:10.1117/12.473134].
5. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Construction ofWilliamson type matrices and Baumert–Hall, Welch and Plotkin arrays,” inProc. First Int. Workshop on Spectral Techniques and Logic Design for FutureDigital Systems, Tampere, Finland SPECLOG’2000, TICSP Ser. 10, 189–205(2000).
6. J. Seberry and M. Yamada, “On the multiplicative theorem of Hadamard matrices of generalized quaternion type using M-structure,” http://www.uow.edu.au/∼jennie/WEB/WEB69-93/max/183_1993.pdf.
7. S. M. Phoong and K. Y. Chang, “Antipodal paraunitary matrices andtheir application to OFDM systems,” IEEE Trans. Signal Process. 53 (4),1374–1386 (2005).
8. W. A. Rutledge, “Quaternions and Hadamard matrices,” Proc. Am. Math. Soc.3 (4), 625–630 (1952).
9. M.J.T. Smith and T.P. Barnwell III, “A procedure for designing exactreconstruction filter banks for tree-structured subband coders,” in Proc. ofIEEE Int. Conf. Acoust. Speech, Signal Process, San Diego, 27.11–27.14 (Mar.1984).
10. P. P. Vaidyanathan, “Theory and design of M-channel maximally decimatedquadrature mirror filters with arbitrary M, having perfect reconstructionproperty,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-35, 476–492(Apr. 1987).
11. S.S. Agaian, “Spatial and high dimensional Hadamard matrices,” in Mathe-matical Problems of Computer Science (in Russian), NAS RA, Yerevan,Armenia, 12, 5–50 (1984).
12. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Decomposition ofHadamard matrices,” in Proc. of First Int. Workshop on Spectral Techniquesand Logic Design for Future Digital Systems, 2–3 June 2000 Tampere, FinlandSPECLOG’2000, TICSP Ser. 10, pp. 207–221 (2000).
13. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem basedfast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)[doi:10.1117/12.467969].
14. http://www.uow.edu.au/∼jennie.
15. H.G. Sarukhanyan, “Hadamard Matrices and Block Sequences,” Doctoralthesis, Institute for Informatics and Automation Problems of NAS RA,Yerevan, Armenia (1998).
16. H. G. Sarukhanyan, “Decomposition of Hadamard matrices by orthogonal(−1, +1) vectors and algorithm of fast Hadamard transform,” Rep. Acad. Sci.Armenia 97 (2), 3–6 (1997) (in Russian).
17. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fastHadamard transform”, Computer Analysis of Images and Patterns, LectureNotes in Computer Science 1296, pp. 575–581 (1997).
18. H. G. Sarukhanyan, S. S. Agaian, J. Astola, and K. Egiazarian, “Decomposi-tion of binary matrices and fast Hadamard transforms,” Circuits, Systems, andSignal Processing 24 (4), 385–400 (1993).
19. J. Seberry and M. Yamada, “Hadamard matrices, sequences and blockdesigns”, in Surveys in Contemporary Design Theory, Wiley-InterscienceSeries in Discrete Mathematics, 431–560 John Wiley & Sons, Hoboken, NJ(1992).
20. J. S. Wallis, “On Hadamard matrices,” J. Combin. Theory, Ser. A 18, 149–164(1975).
Chapter 8
Fast Hadamard Transforms for Arbitrary Orders
Hadamard matrices have received much attention in recent years, owing to their numerous known and promising applications. The FHT algorithm was developed for $N = 2^n$, $12 \cdot 2^n$, $4^n$. In this chapter, a general and efficient algorithm to compute 4t-point (t is an “arbitrary” integer) HTs is developed. The proposed scheme requires no zero padding of the input data to make its size equal to $2^n$. The difficulty of the construction of the $N \equiv 0 \pmod 4$-point HT is related to the Hadamard problem, namely, we do not know whether, for every integer n, there is an orthogonal $4n \times 4n$ matrix of plus and minus ones. The number of real operations is reduced from $O(N^2)$ to $O(N \log_2 N)$. Comparative estimates revealing the efficiency of the proposed algorithms with respect to known ones are given. In particular, it is demonstrated that, in typical applications, the proposed algorithm is significantly more efficient than the conventional WHT. Note that the general algorithm is more efficient for Hadamard matrices of orders ≥ 96 than the classical WHT, whose order is a power of 2. The algorithm has a simple and symmetric structure. Note that there are many approaches and algorithms concerning HTs.1–49
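The $O(N \log_2 N)$ count for the power-of-two case comes from the butterfly structure of the Sylvester recursion. As a point of reference for the comparisons below, here is a minimal radix-2 fast WHT sketch (plain Python; this is our illustration, not the book's general 4t-point algorithm):

```python
def fwht(x):
    """Fast Walsh-Hadamard transform for len(x) = 2^n (Hadamard ordering).
    Uses N*log2(N) additions/subtractions instead of the N^2 operations
    of a direct matrix-vector product."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of a pair
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# 4-point check against the Sylvester matrix of order 4
print(fwht([1, 2, 3, 4]))  # -> [10, -2, -4, 0]
```

The output agrees with the direct product $H_4 x$ for the Sylvester matrix of Eq. (8.2).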
In this chapter, we present new algorithms for fast computation of HTs of any existing order. Additionally, using the structures of those matrices, we reduce the number of operations. The chapter is organized as follows. Section 8.1 presents three algorithms of Hadamard matrix construction. Sections 8.2 and 8.3 present the decomposition of an arbitrary Hadamard matrix by the {(1, 1), (1, −1)} and by the {(1, 1, 1, 1), (1, 1, −1, −1), (1, −1, −1, 1), (1, −1, 1, −1)} vector systems. Section 8.4 describes $N \equiv 0 \pmod 4$-point FHT algorithms based on these decompositions. Section 8.5 describes a multiply/add instruction-based FHT algorithm that primarily uses shift operations. Section 8.6 presents the complexity of the developed algorithms, as well as comparative estimates revealing the efficiency of the proposed algorithms with respect to known ones.
8.1 Hadamard Matrix Construction Algorithms
In this section, we describe the Hadamard matrix construction algorithms. The first algorithm is based on Sylvester (Walsh–Hadamard) matrix construction. The second and third algorithms are based on the multiplicative theorem.1–4
Algorithm 8.1.1: Sylvester matrix construction.

Input: $H_1 = (1)$.
Step 1. For k = 1, 2, …, n construct
$$H_{2^k} = \begin{pmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{pmatrix}. \tag{8.1}$$
Output: The Hadamard matrix of order $2^n$.
Below, the Sylvester-type matrices of orders 2 and 4, respectively, are given:
$$\begin{pmatrix} + & + \\ + & - \end{pmatrix}, \qquad \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix}. \tag{8.2}$$
Algorithm 8.1.2: Hadamard matrix construction algorithm via two Hadamard matrices.

Input: Hadamard matrices H1 and H2 of orders m and n.
Step 1. Split the matrix H1 as
$$H_1 = \begin{pmatrix} P \\ Q \end{pmatrix}. \tag{8.3}$$
Step 2. Decompose H2 as
$$H_2 = (++) \otimes A_1 + (+-) \otimes A_2. \tag{8.4}$$
Step 3. Construct the matrix via
$$H_{mn/2} = P \otimes A_1 + Q \otimes A_2. \tag{8.5}$$
Note that $H_{mn/2}$ is a Hadamard matrix of order mn/2.
Step 4. For a given number k, k = 2, 3, …, using steps 1–3, construct a Hadamard matrix of order $n(m/2)^k$.
Output: A Hadamard matrix of order $n(m/2)^k$.
Algorithm 8.1.3: Hadamard matrix construction algorithm via four Hadamard matrices.

Input: Hadamard matrices H1, H2, H3, and H4 of orders m, n, p, and q, respectively.
Step 1. Split each of H1 and H2 into two parts,
H1 = [D1, D2], H2 = [D3, D4]. (8.6)
Step 2. Decompose H1 and H2 as
$$\begin{aligned} H_1 &= \left[(++) \otimes A_1 + (+-) \otimes A_2,\ (++) \otimes A_3 + (+-) \otimes A_4\right],\\ H_2 &= \left[(++) \otimes B_1 + (+-) \otimes B_2,\ (++) \otimes B_3 + (+-) \otimes B_4\right]. \end{aligned} \tag{8.7}$$
Step 3. Construct (+1, −1) matrices via
$$X = B_1 \otimes (A_1 + A_2)^T + B_2 \otimes (A_1 - A_2)^T, \qquad Y = B_3 \otimes (A_3 + A_4)^T + B_4 \otimes (A_3 - A_4)^T. \tag{8.8}$$
Step 4. Split matrices H3 and H4 into two parts,
H3 = [F1, F2], H4 = [F3, F4]. (8.9)
Step 5. Decompose H3 and H4 as
$$\begin{aligned} H_3 &= \left[(++) \otimes P_1 + (+-) \otimes P_2,\ (++) \otimes P_3 + (+-) \otimes P_4\right],\\ H_4 &= \left[(++) \otimes Q_1 + (+-) \otimes Q_2,\ (++) \otimes Q_3 + (+-) \otimes Q_4\right]. \end{aligned} \tag{8.10}$$
Step 6. Construct (+1, −1) matrices
$$Z = Q_1 \otimes (P_1 + P_2)^T + Q_2 \otimes (P_1 - P_2)^T, \qquad W = Q_3 \otimes (P_3 + P_4)^T + Q_4 \otimes (P_3 - P_4)^T. \tag{8.11}$$
Step 7. Design the following matrices:
$$P = \frac{Z + W}{2}, \qquad Q = \frac{Z - W}{2}. \tag{8.12}$$
Step 8. Construct the Hadamard matrix as
Hmnpq/16 = X ⊗ P + Y ⊗ Q. (8.13)
Output: The Hadamard matrix $H_{mnpq/16}$ of order mnpq/16.
8.2 Hadamard Matrix Vector Representation
In this section, we consider a representation of the Hadamard matrix $H_n$ of order n by (+1, −1) vectors as follows. Let $v_i$, $i = 1, 2, \ldots, k$ be k-dimensional mutually orthogonal (+1, −1) vectors. The Hadamard matrix of order n of the form
$$H_n = v_1 \otimes A_1 + v_2 \otimes A_2 + \cdots + v_k \otimes A_k \tag{8.14}$$
is called the Hadamard matrix of type A(n, k), or the A(n, k) matrix,1,2,4–6 where $v_i$ are orthogonal (+1, −1) vectors of length k, and $A_i$ are $(0, \pm1)$ matrices of dimension $n \times n/k$.
Theorem 8.2.1: A matrix $H_n$ of order n is an A(n, k)-type Hadamard matrix if and only if there are nonzero $(0, \pm1)$ matrices $A_i$, $i = 1, 2, \ldots, k$ of size $n \times n/k$ satisfying the following conditions:
$$\begin{aligned}
&A_i * A_j = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, k,\\
&\sum_{i=1}^{k} A_i \ \text{is a } (+1,-1) \text{ matrix},\\
&\sum_{i=1}^{k} A_iA_i^T = \frac{n}{k}\, I_n,\\
&A_i^TA_j = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, k,\\
&A_i^TA_i = \frac{n}{k}\, I_{n/k}, \quad i = 1, 2, \ldots, k.
\end{aligned} \tag{8.15}$$
Proof: Necessity: In order to avoid excessive formulas, we prove the theorem for the case k = 4; the general case is then a straightforward extension of the proof. Let $H_n$ be a Hadamard matrix of type A(n, k), i.e., $H_n$ has the form of Eq. (8.14), where
$$v_iv_i^T = 4, \qquad v_iv_j^T = 0, \ i \neq j, \quad i, j = 1, 2, 3, 4. \tag{8.16}$$
We shall prove that the $(0, \pm1)$ matrices $A_i$, $i = 1, 2, \ldots, k$ of size $n \times n/k$ satisfy the conditions of Eq. (8.15). The first two conditions are obvious. The third condition follows from the relationship
$$H_nH_n^T = 4\sum_{i=1}^{4} A_iA_i^T = nI_n. \tag{8.17}$$
Consider the last two conditions of Eq. (8.15). Note that the Hadamard matrix Hn
has the form
$$H_n = (++++) \otimes A_1 + (++--) \otimes A_2 + (+--+) \otimes A_3 + (+-+-) \otimes A_4. \tag{8.18}$$
We can also write Hn as
Hn = [(++) ⊗C1 + (+−) ⊗C2, (++) ⊗C3 + (+−) ⊗C4] , (8.19)
where, by Theorem 7.1.1 (see Chapter 7),
C1 = A1 + A2, C2 = A3 + A4, C3 = A1 − A2, C4 = A3 − A4 (8.20)
satisfy the conditions of Eq. (7.12). Hence, taking into account the conditions of Eq. (7.10), the matrices $A_i$ satisfy the following equations:
$$\begin{aligned}
&A_1^TA_1 + A_1^TA_2 + A_2^TA_1 + A_2^TA_2 = \frac{n}{2} I_{n/4},\\
&A_1^TA_3 + A_1^TA_4 + A_2^TA_3 + A_2^TA_4 = 0,\\
&A_1^TA_1 - A_1^TA_2 + A_2^TA_1 - A_2^TA_2 = 0,\\
&A_1^TA_3 - A_1^TA_4 + A_2^TA_3 - A_2^TA_4 = 0;
\end{aligned} \tag{8.21a}$$
$$\begin{aligned}
&A_3^TA_1 + A_3^TA_2 + A_4^TA_1 + A_4^TA_2 = 0,\\
&A_3^TA_3 + A_3^TA_4 + A_4^TA_3 + A_4^TA_4 = \frac{n}{2} I_{n/4},\\
&A_3^TA_1 - A_3^TA_2 + A_4^TA_1 - A_4^TA_2 = 0,\\
&A_3^TA_3 - A_3^TA_4 + A_4^TA_3 - A_4^TA_4 = 0;
\end{aligned} \tag{8.21b}$$
$$\begin{aligned}
&A_1^TA_1 + A_1^TA_2 - A_2^TA_1 - A_2^TA_2 = 0,\\
&A_1^TA_3 + A_1^TA_4 - A_2^TA_3 - A_2^TA_4 = 0,\\
&A_1^TA_1 - A_1^TA_2 - A_2^TA_1 + A_2^TA_2 = \frac{n}{2} I_{n/4},\\
&A_1^TA_3 - A_1^TA_4 - A_2^TA_3 + A_2^TA_4 = 0;
\end{aligned} \tag{8.21c}$$
$$\begin{aligned}
&A_3^TA_1 + A_3^TA_2 - A_4^TA_1 - A_4^TA_2 = 0,\\
&A_3^TA_3 + A_3^TA_4 - A_4^TA_3 - A_4^TA_4 = 0,\\
&A_3^TA_1 - A_3^TA_2 - A_4^TA_1 + A_4^TA_2 = 0,\\
&A_3^TA_3 - A_3^TA_4 - A_4^TA_3 + A_4^TA_4 = \frac{n}{2} I_{n/4}.
\end{aligned} \tag{8.21d}$$
Solving these systems, we find that
$$A_i^TA_j = 0, \ i \neq j, \qquad A_i^TA_i = \frac{n}{4} I_{n/4}, \quad i, j = 1, 2, 3, 4. \tag{8.22}$$
Sufficiency: Let $(0, \pm1)$ matrices $A_i$, $i = 1, 2, 3, 4$ satisfy the conditions of Eq. (8.15). We shall show that the matrix in Eq. (8.18) is a Hadamard matrix. Indeed, calculating $H_nH_n^T$ and $H_n^TH_n$, we find that
$$H_nH_n^T = 4\sum_{i=1}^{4} A_iA_i^T = H_n^TH_n = \sum_{i=1}^{4} v_i^Tv_i \otimes \frac{n}{4} I_{n/4} = nI_n. \tag{8.23}$$
Now, we formulate a Hadamard matrix construction theorem that makes it possible to decompose such a matrix by orthogonal vectors of size $2^n$, $n = 1, 2, \ldots, k$.
Theorem 8.2.2: The Kronecker product of k Hadamard matrices $H_1 \otimes H_2 \otimes \cdots \otimes H_k$ may be decomposed by $2^k$ orthogonal (+1, −1) vectors of size $2^k$.
Proof: Let $H_i$, $i = 1, 2, \ldots, k$ be Hadamard matrices of orders $n_i$. According to Eq. (8.4), the matrices $H_1$ and $H_2$ can be represented as
$$H_1 = (++) \otimes A_1^1 + (+-) \otimes A_2^1, \qquad H_2 = (++) \otimes A_1^2 + (+-) \otimes A_2^2. \tag{8.24}$$
We can see that
$$\begin{aligned}
H_1 \otimes H_2 &= \left[(++) \otimes A_1^1 + (+-) \otimes A_2^1\right] \otimes \left[(++) \otimes A_1^2 + (+-) \otimes A_2^2\right]\\
&= (++++) \otimes \left[A_1^1 \otimes A_1^2\right] + (++--) \otimes \left[A_1^1 \otimes A_2^2\right]\\
&\quad + (+--+) \otimes \left[A_2^1 \otimes A_2^2\right] + (+-+-) \otimes \left[A_2^1 \otimes A_1^2\right]\\
&= (++++) \otimes D_1 + (++--) \otimes D_2 + (+--+) \otimes D_3 + (+-+-) \otimes D_4,
\end{aligned} \tag{8.25}$$
where $D_1 = A_1^1 \otimes A_1^2$, $D_2 = A_1^1 \otimes A_2^2$, $D_3 = A_2^1 \otimes A_2^2$, $D_4 = A_2^1 \otimes A_1^2$.
This means that $H_1 \otimes H_2$ is an $A(n_1n_2, 4)$-type Hadamard matrix. Continuing the above construction for 3, 4, …, k matrices completes the proof of Theorem 8.2.2.
Below, we give an algorithm based on this theorem. Note that any Hadamard matrix $H_n$ of order n can be presented as
Hn = (++) ⊗ X + (+−) ⊗ Y, (8.26)
where X, Y are $(0, \pm1)$ matrices of dimension $n \times n/2$. Examples of the decomposition of Hadamard matrices are given below.
Example 8.2.1: (1) The following Hadamard matrix of order 4 can be decomposed:
(a) via two vectors (++), (+−),
$$H_4 = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} = (++) \otimes \begin{pmatrix} + & + \\ 0 & 0 \\ + & - \\ 0 & 0 \end{pmatrix} + (+-) \otimes \begin{pmatrix} 0 & 0 \\ + & + \\ 0 & 0 \\ + & - \end{pmatrix}, \tag{8.27}$$
(b) via four vectors (+ + ++), (+ − +−), (+ + −−), (+ − −+),
$$H_4 = (++++) \otimes \begin{pmatrix} + \\ 0 \\ 0 \\ 0 \end{pmatrix} + (+-+-) \otimes \begin{pmatrix} 0 \\ + \\ 0 \\ 0 \end{pmatrix} + (++--) \otimes \begin{pmatrix} 0 \\ 0 \\ + \\ 0 \end{pmatrix} + (+--+) \otimes \begin{pmatrix} 0 \\ 0 \\ 0 \\ + \end{pmatrix}. \tag{8.28}$$
(2) The following Hadamard matrix of order 8 can be decomposed:
$$H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix} \tag{8.29}$$
(a) via two vectors (++), (+−),
$$H_8 = (++) \otimes \begin{pmatrix}
+ & + & + & + \\ 0 & 0 & 0 & 0 \\ + & - & + & - \\ 0 & 0 & 0 & 0 \\ + & + & - & - \\ 0 & 0 & 0 & 0 \\ + & - & - & + \\ 0 & 0 & 0 & 0
\end{pmatrix} + (+-) \otimes \begin{pmatrix}
0 & 0 & 0 & 0 \\ + & + & + & + \\ 0 & 0 & 0 & 0 \\ + & - & + & - \\ 0 & 0 & 0 & 0 \\ + & + & - & - \\ 0 & 0 & 0 & 0 \\ + & - & - & +
\end{pmatrix}, \tag{8.30}$$
(b) via four vectors (+ + ++), (+ − +−), (+ + −−), (+ − −+),
$$\begin{aligned}
H_8 &= (++++) \otimes \begin{pmatrix} + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} + (+-+-) \otimes \begin{pmatrix} 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \\ 0 & 0 \end{pmatrix}\\
&\quad + (++--) \otimes \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \end{pmatrix} + (+--+) \otimes \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \end{pmatrix}.
\end{aligned} \tag{8.31}$$
Algorithm 8.2.1: Construct a Hadamard matrix decomposed by four orthogonal vectors.
Input: The Hadamard matrices H1 and H2 of orders m and n, respectively.
Step 1. Represent matrices H1 and H2 as follows:
H1 = (++) ⊗ X + (+−) ⊗ Y,
H2 = (++) ⊗ Z + (+−) ⊗ W. (8.32)
Step 2. Construct the following matrices:
P1 = X ⊗ Z, P2 = X ⊗ W, P3 = Y ⊗ Z, P4 = Y ⊗ W. (8.33)
Step 3. Design the matrix of order mn as follows:
Hmn = (+ + ++) ⊗ P1 + (+ + −−) ⊗ P2 + (+ − −+) ⊗ P3 + (+ − +−) ⊗ P4. (8.34)
Output: The Hadamard matrix Hmn of order mn.
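As a quick numerical check of Algorithm 8.2.1, the steps can be sketched in NumPy (the helper names are illustrative, not from the book; note that the book's v ⊗ A corresponds to np.kron(A, v) in NumPy's convention):

```python
import numpy as np

def split_two_vectors(H):
    # Step 1: H = (++) ⊗ X + (+-) ⊗ Y, i.e., column pair (2j, 2j+1)
    # of H equals (X[:, j] + Y[:, j], X[:, j] - Y[:, j]).
    X = (H[:, 0::2] + H[:, 1::2]) // 2
    Y = (H[:, 0::2] - H[:, 1::2]) // 2
    return X, Y

def hadamard_mn(H1, H2):
    """Algorithm 8.2.1: Hadamard matrix of order m*n from H1 (order m), H2 (order n)."""
    X, Y = split_two_vectors(H1)
    Z, W = split_two_vectors(H2)
    P = [np.kron(X, Z), np.kron(X, W), np.kron(Y, Z), np.kron(Y, W)]  # Step 2, Eq. (8.33)
    v = [np.array([1, 1, 1, 1]), np.array([1, 1, -1, -1]),
         np.array([1, -1, -1, 1]), np.array([1, -1, 1, -1])]          # Step 3, Eq. (8.34)
    # the book's v_i ⊗ P_i is np.kron(P_i, v_i) in NumPy's convention
    return sum(np.kron(Pi, vi) for Pi, vi in zip(P, v))

H2m = np.array([[1, 1], [1, -1]])
H4 = hadamard_mn(H2m, H2m)
assert (H4 @ H4.T == 4 * np.eye(4, dtype=int)).all()  # order-4 Hadamard matrix
```

Feeding the order-4 result back in as H1 yields a Hadamard matrix of order 8, and so on.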
8.3 FHT of Order n ≡ 0 (mod 4)
In this section, fast algorithms for more general HTs will be derived. As was mentioned above, a classical fast WHT operates only with 2^k-dimensional vectors. Below, we give an FHT algorithm for the cases when the order of the Hadamard matrix is not a power of 2 (see Refs. 2, 5, and 6). The forward HT is defined as F = HX. Below, we derive the FHT algorithm based on Theorem 8.2.1.
Algorithm 8.3.1: General FHT algorithm.
Input: An A(n, k)-type Hadamard matrix Hn, a signal vector X = ( f1, f2, . . . , fn)T, and column vectors Pi of dimension n/k, whose i'th element is equal to 1 and whose remaining elements are equal to 0.
Step 1. Decompose Hn as
Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + vk ⊗ Ak. (8.35)
Step 2. Split the input vector X into n/k parts as follows:
X =n/k∑i=1
Xi ⊗ Pi, (8.36)
where Xi is a column vector of the form
XTi =
(fk(i−1)+1, fk(i−1)+2, . . . , fk(i−1)+k
), i = 1, 2, . . . ,
nk. (8.37)
Step 3. Perform the fast WHTs:

$$C_i = H_k X_i = \left(c_1^i, c_2^i, \ldots, c_k^i\right)^T, \quad i = 1, 2, \ldots, \frac{n}{k}, \tag{8.38}$$

where Hk is the Walsh–Hadamard matrix of order k.
Step 4. Compute

$$B_i = \sum_{j=1}^{k} c_j^i A_j^i, \quad i = 1, 2, \ldots, \frac{n}{k}, \tag{8.39}$$

where $A_j^i$ is the i'th column of the matrix Aj.
Step 5. Compute the spectral elements of the transform as

F = B1 + B2 + · · · + Bn/k. (8.40)

Output: The n ≡ 0 (mod 4)-point HT coefficients.
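Under the stated decomposition, Algorithm 8.3.1 can be sketched as follows (a minimal NumPy sketch with illustrative names; the orthogonal vectors v1, ..., vk are taken as the rows of the order-k WHT matrix):

```python
import numpy as np

def fht(f, vs, As):
    """Algorithm 8.3.1 for H_n = v_1 ⊗ A_1 + ... + v_k ⊗ A_k.

    vs -- the k orthogonal (+1/-1) vectors, taken here as the rows of H_k;
    As -- the matching n x (n/k) matrices A_1, ..., A_k.
    """
    k, n = len(vs), len(f)
    Hk = np.stack(vs)                    # rows are v_1, ..., v_k
    F = np.zeros(n, dtype=int)
    for i in range(n // k):              # Step 2: chunks X_i of length k
        C = Hk @ f[k*i:k*(i+1)]          # Step 3: k-point fast WHT
        for j in range(k):
            F += C[j] * As[j][:, i]      # Step 4: B_i, accumulated into F (Step 5)
    return F

# Eq. (8.27): H4 = (++) ⊗ A1 + (+-) ⊗ A2
A1 = np.array([[1, 1], [0, 0], [1, -1], [0, 0]])
A2 = np.array([[0, 0], [1, 1], [0, 0], [1, -1]])
vs = [np.array([1, 1]), np.array([1, -1])]
H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
f = np.array([1, 2, 3, 4])
assert (fht(f, vs, [A1, A2]) == H4 @ f).all()
```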
Now we give an example of the HT derived above.
Example 8.3.1: The 12-point FHT algorithm. Consider the block-cyclic Hadamard matrix H12 of order 12 with the first block row (Q0, Q1, Q1), i.e.,

H12 = Q0 ⊗ I3 + Q1 ⊗ U + Q1 ⊗ U^2, (8.41)
where

$$Q_0 = \begin{pmatrix} +&+&+&+ \\ -&+&-&+ \\ -&+&+&- \\ -&-&+&+ \end{pmatrix}, \qquad
Q_1 = \begin{pmatrix} +&-&-&- \\ +&+&+&- \\ +&-&+&+ \\ +&+&-&+ \end{pmatrix}. \tag{8.42}$$
Algorithm 8.3.2:
Input: An A(12, 2)-type Hadamard matrix H12, a signal vector f = ( f1, f2, . . . , f12)T, and column vectors Pi of dimension 12/2 = 6, whose i'th element is equal to 1 and whose remaining elements are equal to 0.
Step 1. Decompose H12 as H12 = (++) ⊗ A1 + (+−) ⊗ A2, where
$$A_1 = \begin{pmatrix}
+&+&0&-&0&-\\
0&0&+&0&+&0\\
0&0&0&+&0&+\\
-&+&+&0&+&0\\
0&-&+&+&0&-\\
+&0&0&0&+&0\\
0&+&0&0&0&+\\
+&0&-&+&+&0\\
0&-&0&-&+&+\\
+&0&+&0&0&0\\
0&+&0&+&0&0\\
+&0&+&0&-&+
\end{pmatrix}, \qquad
A_2 = \begin{pmatrix}
0&0&+&0&+&0\\
-&-&0&+&0&+\\
-&+&+&0&+&0\\
0&0&0&-&0&-\\
+&0&0&0&+&0\\
0&+&-&-&0&+\\
+&0&-&+&+&0\\
0&-&0&0&0&-\\
+&0&+&0&0&0\\
0&+&0&+&-&-\\
+&0&+&0&-&+\\
0&-&0&-&0&0
\end{pmatrix}. \tag{8.43}$$
Step 2. Split the input vector f into six parts as

f = X1 ⊗ P1 + X2 ⊗ P2 + · · · + X6 ⊗ P6, (8.44)
Table 8.1 Computations of 12-dimensional vectors Bj, j = 1, 2, 3, 4, 5, 6.
B1 | B2 | B3 | B4 | B5 | B6
f1 + f2 | f3 + f4 | f5 − f6 | − f7 − f8 | f9 − f10 | − f11 − f12
− f1 + f2 | − f3 + f4 | f5 + f6 | f7 − f8 | f9 + f10 | f11 − f12
− f1 + f2 | f3 − f4 | f5 − f6 | f7 + f8 | f9 − f10 | f11 + f12
− f1 − f2 | f3 + f4 | f5 + f6 | − f7 + f8 | f9 + f10 | − f11 + f12
f1 − f2 | − f3 − f4 | f5 + f6 | f7 + f8 | f9 − f10 | − f11 − f12
f1 + f2 | f3 − f4 | − f5 + f6 | − f7 + f8 | f9 + f10 | f11 − f12
f1 − f2 | f3 + f4 | − f5 + f6 | f7 − f8 | f9 − f10 | f11 + f12
f1 + f2 | − f3 + f4 | − f5 − f6 | f7 + f8 | f9 + f10 | − f11 + f12
f1 − f2 | − f3 − f4 | f5 − f6 | − f7 − f8 | f9 + f10 | f11 + f12
f1 + f2 | f3 − f4 | f5 + f6 | f7 − f8 | − f9 + f10 | − f11 + f12
f1 − f2 | f3 + f4 | f5 − f6 | f7 + f8 | − f9 + f10 | f11 − f12
f1 + f2 | f3 + f4 | f5 + f6 | − f7 + f8 | − f9 − f10 | f11 + f12
where

X1 = ( f1, f2)T, X2 = ( f3, f4)T, X3 = ( f5, f6)T,
X4 = ( f7, f8)T, X5 = ( f9, f10)T, X6 = ( f11, f12)T. (8.45)
Step 3. Perform the fast WHTs

$$\begin{pmatrix} + & + \\ + & - \end{pmatrix} X_i \quad \text{for } i = 1, 2, \ldots, 6. \tag{8.46}$$
Step 4. Compute
Bj = (++)Xj ⊗ A1Pj + (+−)Xj ⊗ A2Pj, j = 1, 2, . . . , 6, (8.47)
where the values of Bj are shown in Table 8.1.
Step 5. Compute the spectral elements of the transform as F = B1 + B2 + · · · + B6.
Output: The 12-point HT coefficients.
Flow graphs for the computation of the 12-dimensional vectors Bj, j = 1, 2, . . . , 6, are given in Fig. 8.1.
Note that A1 and A2 are block-cyclic matrices of dimension 12 × 6 with the first block rows represented as (R0, R1, R1) and (T0, T1, T1), where
$$R_0 = \begin{pmatrix} +&+ \\ 0&0 \\ 0&0 \\ -&+ \end{pmatrix},\quad
R_1 = \begin{pmatrix} 0&- \\ +&0 \\ 0&+ \\ +&0 \end{pmatrix},\quad
T_0 = \begin{pmatrix} 0&0 \\ -&- \\ -&+ \\ 0&0 \end{pmatrix},\quad
T_1 = \begin{pmatrix} +&0 \\ 0&+ \\ +&0 \\ 0&- \end{pmatrix}. \tag{8.48}$$
Thus,

$$A_1 = \begin{pmatrix} R_0 & R_1 & R_1 \\ R_1 & R_0 & R_1 \\ R_1 & R_1 & R_0 \end{pmatrix}, \qquad
A_2 = \begin{pmatrix} T_0 & T_1 & T_1 \\ T_1 & T_0 & T_1 \\ T_1 & T_1 & T_0 \end{pmatrix}. \tag{8.49}$$
Figure 8.1 Flow graphs for the computation of the 12-dimensional vectors Bj, j = 1, 2, . . . , 6.
Note that above we ignored the interior structure of a Hadamard matrix. Now we examine it in more detail. We see that (1) the Hadamard matrix H12 is a block-cyclic, block-symmetric matrix; (2) the matrices A1 and A2 in Eq. (8.43) are also block-cyclic, block-symmetric matrices; and (3) the 12-point HT requires only 60 addition operations.

Let us prove the last statement. Indeed, to compute all elements of the vectors B1 and B2, it is necessary to perform four addition operations, i.e., two 2-point HTs. Then, it is not difficult to see that the computation of the sum B1 + B2 requires only 12 additions because there are four repeated pairs, namely B1(4 + i) + B2(4 + i) = B1(8 + i) + B2(8 + i) for i = 1, 2, 3, 4. A similar situation occurs when computing B3 + B4 and B5 + B6. Hence, the complete 12-point HT requires only 60 addition operations.
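The 12-point example can be verified numerically. The following sketch (assuming NumPy; U is the 3 × 3 cyclic-shift matrix) builds H12 from Eq. (8.41), recovers A1 and A2 of Eq. (8.43) from the column pairs of H12, and checks the fast transform against the direct product:

```python
import numpy as np

Q0 = np.array([[1, 1, 1, 1], [-1, 1, -1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1]])
Q1 = np.array([[1, -1, -1, -1], [1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1]])
U = np.roll(np.eye(3, dtype=int), 1, axis=1)          # cyclic shift
# block-cyclic H12 with first block row (Q0, Q1, Q1), Eq. (8.41)
H12 = np.kron(np.eye(3, dtype=int), Q0) + np.kron(U, Q1) + np.kron(U @ U, Q1)
assert (H12 @ H12.T == 12 * np.eye(12, dtype=int)).all()   # Hadamard check

# Step 1: H12 = (++) ⊗ A1 + (+-) ⊗ A2 (column pairs of H12)
A1 = (H12[:, 0::2] + H12[:, 1::2]) // 2
A2 = (H12[:, 0::2] - H12[:, 1::2]) // 2

def fht12(f):
    F = np.zeros(12, dtype=int)
    for j in range(6):                                 # Steps 2-5
        a = f[2*j] + f[2*j + 1]                        # 2-point WHT of X_{j+1}
        b = f[2*j] - f[2*j + 1]
        F += a * A1[:, j] + b * A2[:, j]               # B_{j+1}, accumulated
    return F

f = np.arange(1, 13)
assert (fht12(f) == H12 @ f).all()
```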
Now, we continue Example 8.3.1 for an inverse transform. Note that the 12-point inverse HT can be computed as

$$X = \frac{1}{12} H_{12}^T Y. \tag{8.50}$$
Algorithm 8.3.3: The 12-point inverse HT.
Input: An A(12, 2)-type Hadamard matrix H12^T, a signal vector Y = (y1, y2, . . . , y12)T (or spectral coefficients), and column vectors Pi of dimension 12/2 = 6, whose i'th element is equal to 1 and whose remaining elements are equal to 0.
Step 1. Decompose H12^T as H12^T = (++) ⊗ B1 + (+−) ⊗ B2, where
$$B_1 = \begin{pmatrix}
0&-&+&+&+&+\\
+&0&0&0&0&0\\
0&+&0&0&0&0\\
+&0&-&+&-&+\\
+&+&0&-&+&+\\
0&0&+&0&0&0\\
0&0&0&+&0&0\\
-&+&+&0&-&+\\
+&+&+&+&0&-\\
0&0&0&0&+&0\\
0&0&0&0&0&+\\
-&+&-&+&+&0
\end{pmatrix}, \qquad
B_2 = \begin{pmatrix}
+&0&0&0&0&0\\
0&+&-&-&-&-\\
+&0&-&+&-&+\\
0&-&0&0&0&0\\
0&0&+&0&0&0\\
-&-&0&+&-&-\\
-&+&+&0&-&+\\
0&0&0&+&0&0\\
0&0&0&0&+&0\\
-&-&-&-&0&+\\
-&+&-&+&+&0\\
0&0&0&0&0&+
\end{pmatrix}. \tag{8.51}$$
Step 2. Split the input vector Y into six parts as Y = Y1 ⊗ P1 + Y2 ⊗ P2 + · · · + Y6 ⊗ P6, where

Y1 = (y1, y2)T, Y2 = (y3, y4)T, Y3 = (y5, y6)T,
Y4 = (y7, y8)T, Y5 = (y9, y10)T, Y6 = (y11, y12)T. (8.52)
Step 3. Perform the WHTs

$$\begin{pmatrix} + & + \\ + & - \end{pmatrix} Y_i \quad \text{for } i = 1, 2, \ldots, 6. \tag{8.53}$$
Step 4. Compute Dj = (++)Yj ⊗ B1Pj + (+−)Yj ⊗ B2Pj, j = 1, 2, . . . , 6, where the values of Dj are shown in Table 8.2.
Step 5. Compute F = D1 + D2 + · · · + D6; the reconstructed signal is X = F/12 [see Eq. (8.50)].
Output: The 12-point inverse HT coefficients (i.e., the input signal x).
Note that

$$B_1 = \begin{pmatrix} R_1 & R_0 & R_0 \\ R_0 & R_1 & R_0 \\ R_0 & R_0 & R_1 \end{pmatrix}, \qquad
B_2 = \begin{pmatrix} T_1 & T_0 & T_0 \\ T_0 & T_1 & T_0 \\ T_0 & T_0 & T_1 \end{pmatrix}, \tag{8.54}$$
where R0, R1 and T0, T1 have the form of Eq. (8.48). Note also that the flow graphs for computing the vectors Di have a structure similar to that shown in Fig. 8.1.
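A round trip through the forward and inverse transforms can be sketched the same way (H12 built as in Example 8.3.1; the matrices B1 and B2 of Eq. (8.51) are the column split of H12^T, and the factor 1/12 comes from Eq. (8.50)):

```python
import numpy as np

Q0 = np.array([[1, 1, 1, 1], [-1, 1, -1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1]])
Q1 = np.array([[1, -1, -1, -1], [1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1]])
U = np.roll(np.eye(3, dtype=int), 1, axis=1)
H12 = np.kron(np.eye(3, dtype=int), Q0) + np.kron(U, Q1) + np.kron(U @ U, Q1)

# Eq. (8.51): H12^T = (++) ⊗ B1 + (+-) ⊗ B2 (column pairs of H12^T)
B1 = (H12.T[:, 0::2] + H12.T[:, 1::2]) // 2
B2 = (H12.T[:, 0::2] - H12.T[:, 1::2]) // 2

def ifht12(y):
    F = np.zeros(12, dtype=int)
    for j in range(6):
        a, b = y[2*j] + y[2*j + 1], y[2*j] - y[2*j + 1]  # 2-point WHT of Y_{j+1}
        F += a * B1[:, j] + b * B2[:, j]                  # D_{j+1}
    return F // 12                                        # X = (1/12) H12^T Y

x = np.arange(1, 13)
assert (ifht12(H12 @ x) == x).all()   # forward then inverse recovers the signal
```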
Table 8.2 Computations of 12-dimensional vectors Dj, j = 1, 2, 3, 4, 5, 6.
D1 | D2 | D3 | D4 | D5 | D6
y1 − y2 | −y3 − y4 | y5 + y6 | y7 + y8 | y9 + y10 | y11 + y12
y1 + y2 | y3 − y4 | −y5 + y6 | −y7 + y8 | −y9 + y10 | −y11 + y12
y1 − y2 | y3 + y4 | −y5 + y6 | y7 − y8 | −y9 + y10 | y11 − y12
y1 + y2 | −y3 + y4 | −y5 − y6 | y7 + y8 | −y9 − y10 | y11 + y12
y1 + y2 | y3 + y4 | y5 − y6 | −y7 − y8 | y9 + y10 | y11 + y12
y1 − y2 | −y3 + y4 | −y5 − y6 | y7 − y8 | −y9 + y10 | −y11 + y12
y1 − y2 | y3 − y4 | y5 − y6 | y7 + y8 | −y9 + y10 | y11 − y12
−y1 − y2 | y3 + y4 | y5 + y6 | −y7 − y8 | −y9 − y10 | y11 + y12
y1 + y2 | y3 + y4 | y5 + y6 | y7 + y8 | y9 − y10 | −y11 − y12
y1 − y2 | −y3 + y4 | −y5 + y6 | −y7 + y8 | y9 + y10 | y11 − y12
y1 − y2 | y3 − y4 | −y5 + y6 | y7 − y8 | y9 − y10 | y11 + y12
−y1 − y2 | y3 + y4 | −y5 − y6 | y7 + y8 | y9 + y10 | −y11 + y12
Example 8.3.2: The 20-point FHT algorithm. Consider the block-cyclic Hadamard matrix H20 of order 20 with the first block row (Q0, Q1, Q2, Q2, Q1), where

$$Q_0 = \begin{pmatrix} +&+&+&+ \\ -&+&-&+ \\ -&+&+&- \\ -&-&+&+ \end{pmatrix},\quad
Q_1 = \begin{pmatrix} -&-&+&- \\ +&-&+&+ \\ -&-&-&+ \\ +&-&-&- \end{pmatrix},\quad
Q_2 = \begin{pmatrix} -&-&-&+ \\ +&-&-&- \\ +&+&-&+ \\ -&+&-&- \end{pmatrix}. \tag{8.55}$$
Input: An A(20, 2)-type Hadamard matrix H20, a signal vector f = ( f1, f2, . . . , f20)T, and column vectors Pi of length 10, whose i'th element is equal to 1 and whose remaining elements are equal to 0.
Step 1. Decompose H20 as H20 = (++) ⊗ A1 + (+−) ⊗ A2, where A1 and A2 are block-cyclic matrices with first block rows (R0, R1, R2, R2, R1) and (T0, T1, T2, T2, T1), respectively, and

$$R_0 = \begin{pmatrix} +&+ \\ 0&0 \\ 0&0 \\ -&+ \end{pmatrix},\quad
R_1 = \begin{pmatrix} -&0 \\ 0&+ \\ -&0 \\ 0&- \end{pmatrix},\quad
R_2 = \begin{pmatrix} -&0 \\ 0&- \\ +&0 \\ 0&- \end{pmatrix},$$
$$T_0 = \begin{pmatrix} 0&0 \\ -&- \\ -&+ \\ 0&0 \end{pmatrix},\quad
T_1 = \begin{pmatrix} 0&+ \\ +&0 \\ 0&- \\ +&0 \end{pmatrix},\quad
T_2 = \begin{pmatrix} 0&- \\ +&0 \\ 0&- \\ -&0 \end{pmatrix}. \tag{8.56}$$
Step 2. Split the input vector f into 10 parts as f = X1 ⊗ P1 + X2 ⊗ P2 + · · · + X10 ⊗ P10, where

$$X_i = \begin{pmatrix} f_{2i-1} \\ f_{2i} \end{pmatrix}, \quad i = 1, 2, \ldots, 10. \tag{8.57}$$

Step 3. Perform the fast WHTs

$$\begin{pmatrix} + & + \\ + & - \end{pmatrix} X_i.$$
Table 8.3 Computations of 20-dimensional vectors Bj, j = 1, 2, 3, 4, 5.
B1 | B2 | B3 | B4 | B5
f1 + f2 | f3 + f4 | − f5 − f6 | f7 − f8 | − f9 − f10
− f1 + f2 | − f3 + f4 | f5 − f6 | f7 + f8 | f9 − f10
− f1 + f2 | f3 − f4 | − f5 − f6 | − f7 + f8 | f9 + f10
− f1 − f2 | f3 + f4 | f5 − f6 | − f7 − f8 | − f9 + f10
− f1 − f2 | f3 − f4 | f5 + f6 | f7 + f8 | − f9 − f10
f1 − f2 | f3 + f4 | − f5 + f6 | − f7 + f8 | f9 − f10
− f1 − f2 | − f3 + f4 | − f5 + f6 | f7 − f8 | − f9 − f10
f1 − f2 | − f3 − f4 | − f5 − f6 | f7 + f8 | f9 − f10
− f1 − f2 | − f3 + f4 | − f5 − f6 | f7 − f8 | f9 + f10
f1 − f2 | − f3 − f4 | f5 − f6 | f7 − f8 | − f9 + f10
f1 + f2 | − f3 + f4 | − f5 − f6 | − f7 + f8 | − f9 + f10
− f1 + f2 | − f3 − f4 | f5 − f6 | − f7 − f8 | − f9 − f10
− f1 − f2 | − f3 + f4 | − f5 − f6 | − f7 + f8 | − f9 − f10
f1 − f2 | − f3 − f4 | f5 − f6 | − f7 − f8 | f9 − f10
f1 + f2 | − f3 + f4 | f5 + f6 | − f7 + f8 | − f9 − f10
− f1 + f2 | − f3 − f4 | − f5 + f6 | − f7 − f8 | f9 − f10
− f1 − f2 | f3 − f4 | − f5 − f6 | − f7 + f8 | − f9 − f10
f1 − f2 | f3 + f4 | f5 − f6 | − f7 − f8 | f9 − f10
− f1 − f2 | − f3 + f4 | f5 + f6 | − f7 + f8 | f9 + f10
f1 − f2 | − f3 − f4 | − f5 + f6 | − f7 − f8 | − f9 + f10
Table 8.4 Computations of 20-dimensional vectors Bj, j = 6, 7, 8, 9, 10.
B6 | B7 | B8 | B9 | B10
− f11 + f12 | − f13 − f14 | − f15 + f16 | − f17 − f18 | f19 − f20
− f11 − f12 | f13 − f14 | − f15 − f16 | f17 − f18 | f19 + f20
− f11 + f12 | f13 + f14 | − f15 + f16 | − f17 − f18 | − f19 + f20
− f11 − f12 | − f13 + f14 | − f15 − f16 | f17 − f18 | − f19 − f20
f11 − f12 | − f13 − f14 | − f15 + f16 | − f17 − f18 | f19 + f20
f11 + f12 | f13 − f14 | − f15 − f16 | f17 − f18 | − f19 − f20
− f11 + f12 | f13 + f14 | − f15 + f16 | f17 + f18 | − f19 + f20
− f11 − f12 | − f13 + f14 | − f15 − f16 | − f17 − f18 | − f19 − f20
f11 + f12 | − f13 − f14 | f15 − f16 | − f17 − f18 | − f19 + f20
− f11 + f12 | f13 − f14 | f15 + f16 | f17 − f18 | − f19 − f20
f11 − f12 | − f13 − f14 | − f15 + f16 | f17 + f18 | − f19 + f20
f11 + f12 | f13 − f14 | − f15 − f16 | − f17 + f18 | − f19 − f20
f11 − f12 | f13 + f14 | f15 + f16 | − f17 − f18 | f19 − f20
f11 + f12 | − f13 + f14 | − f15 + f16 | f17 − f18 | f19 + f20
− f11 + f12 | − f13 + f14 | f15 − f16 | − f17 − f18 | − f19 + f20
− f11 − f12 | − f13 − f14 | f15 + f16 | f17 − f18 | − f19 − f20
− f11 + f12 | − f13 − f14 | f15 − f16 | f17 + f18 | f19 + f20
− f11 − f12 | f13 − f14 | f15 + f16 | − f17 + f18 | − f19 + f20
− f11 + f12 | − f13 − f14 | − f15 + f16 | − f17 + f18 | f19 − f20
− f11 − f12 | f13 − f14 | − f15 − f16 | − f17 − f18 | f19 + f20
Step 4. Compute Bj = (++)Xj ⊗ A1Pj + (+−)Xj ⊗ A2Pj, where the values of Bj are given in Tables 8.3 and 8.4.
Step 5. Compute the spectral elements of the transform as F = B1 + B2 + · · · + B10.
Output: The 20-point HT coefficients.
Figure 8.2 Flow graphs for the computation of the 20-dimensional vectors C1 and C2.
Note that above we ignored the interior structure of the Hadamard matrix. Now we examine it in more detail. We can see that to compute all of the elements of a vector Bi, it is necessary to perform two addition operations (one 2-point HT). Then, it is not difficult to see that the computation of each sum B2i−1 + B2i requires only eight additions. Hence, the complete 20-point HT requires only 140 addition operations.

We introduce the notation Ci = B2i−1 + B2i, i = 1, 2, 3, 4, 5; the spectral elements of the vector F can then be calculated as F = C1 + C2 + · · · + C5. The flow graphs for the computation of the Ci are given in Figs. 8.2–8.4.
8.4 FHT via Four-Vector Representation
In this section, we present the four-vector-based N-point FHT algorithm and demonstrate it for N = 24, continuing the example above.
Algorithm 8.4.1:
Input: The signal vector f = ( f1, f2, . . . , f24)T .
Figure 8.3 Flow graphs for the computation of the 20-dimensional vectors C3 and C4.
Step 1. Construct the following matrices:

$$D_1 = \begin{pmatrix} A_1 \\ O_{12} \end{pmatrix},\quad
D_2 = \begin{pmatrix} O_{12} \\ A_1 \end{pmatrix},\quad
D_3 = \begin{pmatrix} A_2 \\ O_{12} \end{pmatrix},\quad
D_4 = \begin{pmatrix} O_{12} \\ A_2 \end{pmatrix}, \tag{8.58}$$

where O12 is the zero matrix of dimension 12 × 6, and the matrices A1 and A2 have the form of Eq. (8.43).
Step 2. Decompose the H24 matrix as

H24 = v1 ⊗ D1 + v2 ⊗ D2 + v3 ⊗ D3 + v4 ⊗ D4. (8.59)
Step 3. Form the vectors

X1 = ( f1, f2, f3, f4)T, X2 = ( f5, f6, f7, f8)T, X3 = ( f9, f10, f11, f12)T,
X4 = ( f13, f14, f15, f16)T, X5 = ( f17, f18, f19, f20)T, X6 = ( f21, f22, f23, f24)T. (8.60)
Figure 8.4 Flow graph for the computation of the 20-dimensional vector C5.
Step 4. Perform the 4-point fast WHTs on the vectors Xi,

$$C_i = H_4 X_i = \begin{pmatrix} +&+&+&+ \\ +&-&+&- \\ +&+&-&- \\ +&-&-&+ \end{pmatrix} X_i, \quad i = 1, 2, \ldots, 6. \tag{8.61}$$

Step 5. Calculate

R1( j) = v1Xj ⊗ D1Pj, R2( j) = v2Xj ⊗ D2Pj,
R3( j) = v3Xj ⊗ D3Pj, R4( j) = v4Xj ⊗ D4Pj. (8.62)
Table 8.5 Computations of the 24-dimensional vectors Bj, j = 1, 2, 3.

B1 | B2 | B3
f1 + f2 + f3 + f4 | f5 + f6 + f7 + f8 | f9 + f10 − f11 − f12
− f1 − f2 + f3 + f4 | − f5 − f6 + f7 + f8 | f9 + f10 + f11 + f12
− f1 − f2 + f3 + f4 | f5 + f6 − f7 − f8 | f9 + f10 − f11 − f12
− f1 − f2 − f3 − f4 | f5 + f6 + f7 + f8 | f9 + f10 + f11 + f12
f1 + f2 − f3 − f4 | − f5 − f6 − f7 − f8 | f9 + f10 + f11 + f12
f1 + f2 + f3 + f4 | f5 + f6 − f7 − f8 | − f9 − f10 + f11 + f12
f1 + f2 − f3 − f4 | f5 + f6 + f7 + f8 | − f9 − f10 + f11 + f12
f1 + f2 + f3 + f4 | − f5 − f6 + f7 + f8 | f9 + f10 + f11 + f12
f1 + f2 − f3 − f4 | − f5 − f6 − f7 − f8 | f9 + f10 − f11 − f12
f1 + f2 + f3 + f4 | f5 + f6 − f7 − f8 | f9 + f10 + f11 + f12
f1 + f2 − f3 − f4 | f5 + f6 + f7 + f8 | f9 + f10 − f11 − f12
f1 + f2 + f3 + f4 | − f5 − f6 + f7 + f8 | f9 + f10 + f11 + f12
f1 − f2 + f3 − f4 | f5 − f6 + f7 − f8 | f9 − f10 − f11 + f12
− f1 + f2 + f3 − f4 | − f5 + f6 + f7 − f8 | f9 − f10 + f11 − f12
− f1 + f2 + f3 − f4 | f5 − f6 − f7 + f8 | f9 − f10 − f11 + f12
− f1 + f2 − f3 + f4 | f5 − f6 + f7 − f8 | f9 − f10 + f11 − f12
f1 − f2 − f3 + f4 | − f5 + f6 − f7 + f8 | f9 − f10 + f11 − f12
f1 − f2 + f3 − f4 | f5 − f6 − f7 + f8 | − f9 + f10 + f11 − f12
f1 − f2 − f3 + f4 | f5 − f6 + f7 − f8 | − f9 + f10 + f11 − f12
f1 − f2 + f3 − f4 | − f5 + f6 + f7 − f8 | f9 − f10 + f11 − f12
f1 − f2 − f3 + f4 | − f5 + f6 − f7 + f8 | f9 − f10 − f11 + f12
f1 − f2 + f3 − f4 | f5 − f6 − f7 + f8 | f9 − f10 + f11 − f12
f1 − f2 − f3 + f4 | f5 − f6 + f7 − f8 | f9 − f10 − f11 + f12
f1 − f2 + f3 − f4 | − f5 + f6 + f7 − f8 | f9 − f10 + f11 − f12
Step 6. Compute Bj = R1( j) + R2( j) + R3( j) + R4( j), j = 1, 2, . . . , 6, where the values of Bj are given in Tables 8.5 and 8.6.
Step 7. Compute the spectral elements of the transform as

F = B1 + B2 + · · · + B6. (8.63)

Output: The 24-point HT coefficients.
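The four-vector scheme can also be checked numerically. The vectors v1, . . . , v4 are left implicit in the text; the particular pairing used below is our assumption (it is one consistent choice that reproduces the signs of Tables 8.5 and 8.6):

```python
import numpy as np

Q0 = np.array([[1, 1, 1, 1], [-1, 1, -1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1]])
Q1 = np.array([[1, -1, -1, -1], [1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1]])
U = np.roll(np.eye(3, dtype=int), 1, axis=1)
H12 = np.kron(np.eye(3, dtype=int), Q0) + np.kron(U, Q1) + np.kron(U @ U, Q1)
A1 = (H12[:, 0::2] + H12[:, 1::2]) // 2
A2 = (H12[:, 0::2] - H12[:, 1::2]) // 2

O12 = np.zeros((12, 6), dtype=int)                       # Step 1, Eq. (8.58)
D = [np.vstack([A1, O12]), np.vstack([O12, A1]),
     np.vstack([A2, O12]), np.vstack([O12, A2])]
v = [np.array([1, 1, 1, 1]), np.array([1, -1, -1, 1]),   # assumed pairing
     np.array([1, 1, -1, -1]), np.array([1, -1, 1, -1])]

H24 = sum(np.kron(Dk, vk) for Dk, vk in zip(D, v))       # Step 2, Eq. (8.59)
assert (H24 @ H24.T == 24 * np.eye(24, dtype=int)).all()

f = np.arange(1, 25)
F = np.zeros(24, dtype=int)
for j in range(6):                                        # Steps 3-7
    C = [vk @ f[4*j:4*j + 4] for vk in v]                 # 4-point WHT of X_{j+1}
    for k in range(4):
        F += C[k] * D[k][:, j]                            # R_k(j), Eq. (8.62)
assert (F == H24 @ f).all()
```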
8.5 FHT of Order N ≡ 0 (mod 4) on Shift/Add Architectures
In this section, we describe multiply/add-instruction-based fast 2^n-point and N ≡ 0 (mod 4)-point HT algorithms. This algorithm is similar to the general FHT algorithm (see Algorithm 8.3.1).

The difference is only in step 3, which we now perform via the multiply/add architecture. We start with an example. Let X = (x0, x1, x2, x3)T and Y = (y0, y1, y2, y3)T be the input and output vectors, respectively. Consider the 4-point HT
$$Y = H_4 X = \begin{pmatrix} +&+&+&+ \\ +&-&+&- \\ +&+&-&- \\ +&-&-&+ \end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} x_0 + x_1 + x_2 + x_3 \\ x_0 - x_1 + x_2 - x_3 \\ x_0 + x_1 - x_2 - x_3 \\ x_0 - x_1 - x_2 + x_3 \end{pmatrix}. \tag{8.64}$$
Table 8.6 Computations of the 24-dimensional vectors Bj, j = 4, 5, 6.

B4 | B5 | B6
− f13 − f14 − f15 − f16 | f17 + f18 − f19 − f20 | − f21 − f22 − f23 − f24
f13 + f14 − f15 − f16 | f17 + f18 + f19 + f20 | f21 + f22 − f23 − f24
f13 + f14 + f15 + f16 | f17 + f18 − f19 − f20 | f21 + f22 + f23 + f24
− f13 − f14 + f15 + f16 | f17 + f18 + f19 + f20 | − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 | f17 + f18 − f19 − f20 | − f21 − f22 − f23 − f24
− f13 − f14 + f15 + f16 | f17 + f18 + f19 + f20 | f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 | f17 + f18 − f19 − f20 | f21 + f22 + f23 + f24
f13 + f14 + f15 + f16 | f17 + f18 + f19 + f20 | − f21 − f22 + f23 + f24
− f13 − f14 − f15 − f16 | f17 + f18 + f19 + f20 | f21 + f22 + f23 + f24
f13 + f14 − f15 − f16 | − f17 − f18 + f19 + f20 | − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 | − f17 − f18 + f19 + f20 | f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 | − f17 − f18 − f19 − f20 | f21 + f22 + f23 + f24
− f13 + f14 − f15 + f16 | f17 − f18 − f19 + f20 | − f21 + f22 − f23 + f24
f13 − f14 − f15 + f16 | f17 − f18 + f19 − f20 | f21 − f22 − f23 + f24
f13 − f14 + f15 − f16 | f17 − f18 − f19 + f20 | f21 − f22 + f23 − f24
− f13 + f14 + f15 − f16 | f17 − f18 + f19 − f20 | − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 | f17 − f18 − f19 + f20 | − f21 + f22 − f23 + f24
− f13 + f14 + f15 − f16 | f17 − f18 + f19 − f20 | f21 − f22 − f23 + f24
f13 − f14 − f15 + f16 | f17 − f18 − f19 + f20 | f21 − f22 + f23 − f24
f13 − f14 + f15 − f16 | f17 − f18 + f19 − f20 | − f21 + f22 + f23 − f24
− f13 + f14 + f15 − f16 | f17 − f18 + f19 − f20 | f21 − f22 + f23 − f24
f13 − f14 − f15 + f16 | − f17 + f18 + f19 − f20 | − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 | − f17 + f18 + f19 − f20 | f21 − f22 − f23 + f24
− f13 + f14 + f15 − f16 | − f17 + f18 − f19 + f20 | f21 − f22 + f23 − f24
Denoting z0 = x1 + x2 + x3 and z1 = x0 − z0, we can rewrite Eq. (8.64) as follows:

$$\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix}
= \begin{pmatrix} x_0 + x_1 + x_2 + x_3 \\ x_0 - x_1 + x_2 - x_3 \\ x_0 + x_1 - x_2 - x_3 \\ x_0 - x_1 - x_2 + x_3 \end{pmatrix}
= \begin{pmatrix} z_0 + x_0 \\ z_1 + 2x_2 \\ z_1 + 2x_1 \\ z_1 + 2x_3 \end{pmatrix}. \tag{8.65}$$
Thus, the 4-point WHT can be computed by seven addition and three one-bit-shift operations (two additions to calculate z0, one for z1, four for y0, y1, y2, y3, and three one-bit shifts for the doublings).
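In code, the identity of Eq. (8.65) looks like this (a minimal sketch; `<< 1` is the one-bit left shift that doubles an operand):

```python
def wht4_shift_add(x0, x1, x2, x3):
    z0 = x1 + x2 + x3           # 2 additions
    z1 = x0 - z0                # 1 subtraction
    y0 = z0 + x0                # 4 more additions ...
    y1 = z1 + (x2 << 1)         # ... three of them paired with a one-bit shift
    y2 = z1 + (x1 << 1)
    y3 = z1 + (x3 << 1)
    return y0, y1, y2, y3       # 7 add/subtract ops and 3 shifts in total

assert wht4_shift_add(1, 2, 3, 4) == (10, -2, -4, 0)
```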
We next demonstrate the full advantages of the shift/add architecture on the 16-point FHT algorithm.
Algorithm 8.5.1: 16-point FHT.
Input: The signal vector X = (x0, x1, . . . , x15)T .Step 1. Split input vector X as
X0 =
⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝x0
x1
x2
x3
⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ , X1 =
⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝x4
x5
x6
x7
⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ , X2 =
⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝x8
x9
x10
x11
⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ , X3 =
⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝x12
x13
x14
x15
⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . (8.66)
Step 2. Perform 4-point FHT with shift operations
Pi = H4Xi, i = 0, 1, 2, 3. (8.67)
Step 3. Define the vectors
r0 = P1 + P2 + P3, r1 = P0 − r0. (8.68)
Step 4. Compute the vectors
Y0 = (y0, y1, y2, y3)T = r0 + P0, Y1 = (y4, y5, y6, y7)T = r1 + 2P2,
Y2 = (y8, y9, y10, y11)T = r1 + 2P1, Y3 = (y12, y13, y14, y15)T = r1 + 2P3.
(8.69)
Output: The 16-point FHT coefficients, i.e., (Y0, Y1, Y2, Y3)T.
We conclude that a 1D WHT of order 16 requires only 56 addition/subtraction operations and 24 one-bit shifts. In Fig. 8.5, the flow graph of a 1D WHT with shifts for N = 16 is given.
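Algorithm 8.5.1 can be sketched as follows (hypothetical helper names; the direct product H4 ⊗ H4, computed here with NumPy, serves only as the reference for the check):

```python
import numpy as np

def wht4(x):                     # 4-point WHT via shifts, Eq. (8.65)
    z0 = x[1] + x[2] + x[3]
    z1 = x[0] - z0
    return [z0 + x[0], z1 + (x[2] << 1), z1 + (x[1] << 1), z1 + (x[3] << 1)]

def fht16(x):
    P = [wht4(x[4*i:4*i + 4]) for i in range(4)]                 # Step 2
    r0 = [p + q + s for p, q, s in zip(P[1], P[2], P[3])]        # Step 3, Eq. (8.68)
    r1 = [p - s for p, s in zip(P[0], r0)]
    Y0 = [s + p for s, p in zip(r0, P[0])]                       # Step 4, Eq. (8.69)
    Y1 = [s + (p << 1) for s, p in zip(r1, P[2])]
    Y2 = [s + (p << 1) for s, p in zip(r1, P[1])]
    Y3 = [s + (p << 1) for s, p in zip(r1, P[3])]
    return Y0 + Y1 + Y2 + Y3

H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
H16 = np.kron(H4, H4)
x = list(range(1, 17))
assert (np.array(fht16(x)) == H16 @ np.arange(1, 17)).all()
```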
8.6 Complexities of Developed Algorithms
8.6.1 Complexity of the general algorithm

We now calculate the complexity of the n ≡ 0 (mod 4)-point forward HT algorithm of Section 8.3. The forward HT of the vector f is given by
$$H_n f = \sum_{i=1}^{k} \sum_{j=1}^{n/k} v_i X_j \otimes A_i P_j. \tag{8.70}$$
Now, let us consider the j'th term of the sum of Eq. (8.70),

Bj = v1Xj ⊗ A1Pj + v2Xj ⊗ A2Pj + · · · + vkXj ⊗ AkPj. (8.71)

From the definition of the matrix Pj, it follows that AiPj is the j'th column of the matrix Ai, which has n/k nonzero elements according to the condition of Eq. (8.15). The product viXj ⊗ AiPj means that the i'th element of the WHT of the vector Xj is placed in the n/k nonzero positions of the j'th column of the matrix Ai. Because of the condition of Eq. (8.15), only k log2 k additions are needed to compute all of the elements of the n-dimensional vector of Eq. (8.71). Hence, for a realization of the HT given in Eq. (8.70), it is necessary to perform
$$D_1 = n \log_2 k + n\left(\frac{n}{k} - 1\right) \tag{8.72}$$
addition operations. Note that the complexity of an inverse transform is the same as that of the forward transform [see Eq. (8.72)].
In general, let Hn be an A(n, 2^k)-type matrix. We can see that in order to obtain all of the elements of a vector Bi (see Algorithm 8.3.1), we need only 2^k log2 2^k = k2^k
Figure 8.5 Flow graph of the fast WHT with shifts.
operations, and in order to obtain each sum B2i−1 + B2i, i = 1, 2, . . . , n/2^(k+1), we need only 2^(k+2) operations. Hence, the complexity of the Hn f transform can be calculated as

$$C(n, 2^k) = \frac{n}{2^k}\, k2^k + \frac{n}{2^{k+1}}\, 2^{k+2} + n\left(\frac{n}{2^{k+1}} - 1\right) = n\left(k + \frac{n}{2^{k+1}} + 1\right). \tag{8.73}$$
Now, if n = m2^(k+1), where m is odd and k ≥ 1, then we have C(m2^(k+1), 2^k) = m(k + m + 1)2^(k+1).
Denote by D = N log2 N the number of operations for a fast Walsh–Hadamard transform (here N is a power of 2, N = 2^p). In Table 8.7, several values of n = m2^(k+1), m, N, p, k, and the corresponding numbers of additions for the fast Walsh–Hadamard algorithm and the algorithm developed above are given.

From this table, we see that for n = 3·2^(k+1) and n = 5·2^(k+1), the new algorithm is more efficient than the classical version. We can also see that instead of using the 72-point transform, it is better to use the 80-point HT.
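The tabulated counts follow directly from Eqs. (8.72) and (8.73); a small script (illustrative helper names, with K denoting the number of decomposition vectors, K = 2^k) reproduces, e.g., the n = 12 and n = 20 rows:

```python
from math import log2

def D(p):                # classical fast WHT, N = 2^p points
    return p * 2**p

def D1(n, K):            # Eq. (8.72), K = number of decomposition vectors
    return n * int(log2(K)) + n * (n // K - 1)

def C(n, K):             # Eq. (8.73), with K = 2^k
    return n * (int(log2(K)) + n // (2 * K) + 1)

assert (D(4), D1(12, 2), C(12, 2)) == (64, 72, 60)     # n = 12 row of Table 8.7
assert (D(5), D1(20, 2), C(20, 2)) == (160, 200, 140)  # n = 20 row of Table 8.7
```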
Table 8.7 Complexity of the general algorithm.

n | m | k | p | N | D | D1 | C(n, 2^k) | Direct comp.
12 | 3 | 1 | 4 | 16 | 64 | 72 | 60 | 132
24 | 3 | 2 | 5 | 32 | 160 | 168 | 144 | 552
48 | 3 | 3 | 6 | 64 | 384 | 384 | 336 | 2256
96 | 3 | 4 | 7 | 128 | 896 | 864 | 768 | 9120
20 | 5 | 1 | 5 | 32 | 160 | 200 | 140 | 380
40 | 5 | 2 | 6 | 64 | 384 | 440 | 320 | 1560
80 | 5 | 3 | 7 | 128 | 896 | 960 | 720 | 6320
160 | 5 | 4 | 8 | 256 | 2048 | 2080 | 1600 | 25,440
320 | 5 | 5 | 9 | 512 | 4608 | 4480 | 3520 | 102,080
28 | 7 | 1 | 5 | 32 | 160 | 392 | 252 | 756
56 | 7 | 2 | 6 | 64 | 384 | 840 | 560 | 3080
112 | 7 | 3 | 7 | 128 | 896 | 1792 | 1232 | 12,432
224 | 7 | 4 | 8 | 256 | 2048 | 3808 | 2688 | 49,952
36 | 9 | 1 | 6 | 64 | 384 | 648 | 396 | 1260
72 | 9 | 2 | 7 | 128 | 896 | 1458 | 864 | 5112
144 | 9 | 3 | 8 | 256 | 2048 | 2880 | 1872 | 20,592
288 | 9 | 4 | 9 | 512 | 4608 | 6048 | 4032 | 82,656
8.6.2 Complexity of the general algorithm with shifts

We recall that the N = 2^k-point FHT algorithm with shift operations has complexity (Ref. 7)

$$C(N) = \begin{cases} 7k\,2^{k-3}, & k \text{ even},\\ (7k+1)2^{k-3}, & k \text{ odd}, \end{cases} \qquad
C_s(N) = \begin{cases} 3k\,2^{k-3}, & k \text{ even},\\ 3(k-1)2^{k-3}, & k \text{ odd}, \end{cases} \tag{8.74}$$

where C(N) denotes the number of addition/subtraction operations, and Cs(N) denotes the number of shifts. Now, we use the concept of multiply/add or addition/subtraction-shift architectures for the A(n, 2^k)-type HT. Denote the complexity of this transform by C(n, 2^k) for addition/subtraction operations and by Cs(n, 2^k) for shifts. Using Eqs. (8.73) and (8.74) for n = m2^(k+1) (m odd), we obtain

$$C(n, 2^k) = \begin{cases} m(7k + 8m + 8)2^{k-2}, & k \text{ even},\\ m(7k + 8m + 9)2^{k-2}, & k \text{ odd}, \end{cases} \qquad
C_s(n, 2^k) = \begin{cases} 3mk\,2^{k-2}, & k \text{ even},\\ 3m(k-1)2^{k-2}, & k \text{ odd}. \end{cases} \tag{8.75}$$
References
1. S. S. Agaian and H. G. Sarukhanyan, Hadamard matrices representation by (−1,+1)-vectors, in Proc. of Int. Conf. Dedicated to Hadamard Problem's Centenary, Australia (1993).
2. H. G. Sarukhanyan, "Decomposition of the Hadamard matrices and fast Hadamard transform," in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science 1296, 575–581, Springer, Berlin (1997).
3. J. Seberry and M. Yamada, "Hadamard matrices, sequences and block designs," in Surveys in Contemporary Design Theory, Wiley-Interscience Series in Discrete Mathematics, Wiley, Hoboken, NJ (1992).
4. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Decomposition of Hadamard matrices, in Proc. of 1st Int. Workshop on Spectral Techniques and Logic Design for Future Digital Systems, Tampere, Finland, Jun. 2–3, pp. 207–221 (2000).
5. H. G. Sarukhanyan, Hadamard matrices: construction methods and applications, in Proc. of Workshop on Transforms and Filter Banks, Feb. 21–27, Tampere, Finland, 95–129 (1998).
6. H. Sarukhanyan, “Decomposition of Hadamard matrices by orthogonal(−1,+1)-vectors and algorithm of fast Hadamard transform,” Rep. Acad. Sci.Armenia 97 (2), 3–6 (1997) (in Russian).
7. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on multiply/add architectures,” IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
8. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes inMathematics, 1168, Springer-Verlag, Berlin (1985).
9. R. Stasinski and J. Konrad, "A new class of fast shape-adaptive orthogonal transforms and their application to region-based image compression," IEEE Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
10. M. Barazande-Pour and J. W. Mark, Adaptive MHDCT, in Proc. of IEEE Int. Conf. on Image Processing (ICIP-94), Nov. 13–16, Austin, TX, pp. 90–94 (1994).
11. G. R. Reddy and P. Satyanarayana, Interpolation algorithm using Walsh–Hadamard and discrete Fourier/Hartley transforms, in Proc. of IEEE 33rd Midwest Symp. Circuits and Systems, Vol. 1, 545–547 (1991).
12. C.-F. Chan, Efficient implementation of a class of isotropic quadratic filters by using the Walsh–Hadamard transform, in Proc. of IEEE Int. Symp. on Circuits and Systems, Jun. 9–12, Hong Kong, 2601–2604 (1997).
13. B. K. Harms, J. B. Park, and S. A. Dyer, "Optimal measurement techniques utilizing Hadamard transforms," IEEE Trans. Instrum. Meas. 43 (3), 397–402 (1994).
14. A. Chen, D. Li and R. Zhou, A research on fast Hadamard transform (FHT) digital systems, in Proc. of IEEE TENCON 93, Beijing, 541–546 (1993).
15. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, Berlin (1975).
16. R. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis with Applications and Signal/Image Processing, Kluwer (1997).
17. J. J. Sylvester, "Thoughts on inverse orthogonal matrices, simultaneous sign successions and tessellated pavements in two or more colours, with applications to Newton's rule, ornamental tile-work, and the theory of numbers," Phil. Mag. 34, 461–475 (1867).
18. K. G. Beauchamp, Walsh Functions and Their Applications, Academic Press, London (1975).
19. S. Samadi, Y. Suzukake and H. Iwakura, On automatic derivation of fast Hadamard transform using generic programming, in Proc. of 1998 IEEE Asia-Pacific Conf. on Circuits and Systems, Thailand, 327–330 (1998).
20. http://www.cs.uow.edu.au/people/jennie/lifework.html.
21. Z. Li, H. V. Sorensen and C. S. Burrus, FFT and convolution algorithms on DSP microprocessors, in Proc. of IEEE Int. Conf. Acoust., Speech, Signal Processing, 289–294 (1986).
22. R. K. Montoye, E. Hokenek, and S. L. Runyon, "Design of the IBM RISC System/6000 floating point execution unit," IBM J. Res. Dev. 34, 71–77 (1990).
23. A. Amira, A. Bouridane and P. Milligan, An FPGA-based Walsh–Hadamard transform, in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS 2001, 2, 569–572 (2001).
24. W. Philips, K. Denecker, P. de Neve, and S. van Asche, “Lossless quantizationof Hadamard transform coefficients,” IEEE Trans. Image Process. 9 (11),1995–1999 (2000).
25. A. M. Grigoryan and S. S. Agaian, “Method of fast 1D paired transforms forcomputing the 2D discrete Hadamard transform,” IEEE Trans. Circuits Syst. II47 (10), 1098–1103 (2000).
26. I. Valova and Y. Kosugi, "Hadamard-based image decomposition and compression," IEEE Trans. Inf. Technol. Biomed. 4 (4), 306–319 (2000).
27. A. M. Grigoryan and S. S. Agaian, "Split manageable efficient algorithm for Fourier and Hadamard transforms," IEEE Trans. Signal Process. 48 (1), 172–183 (2000).
28. J. H. Jeng, T. K. Truong, and J. R. Sheu, "Fast fractal image compression using the Hadamard transform," IEE Proc. Vision Image Signal Process. 147 (6), 571–574 (2000).
29. H. Bogucka, Application of the new joint complex Hadamard-inverse Fourier transform in an OFDM/CDMA wireless communication system, in Proc. of IEEE 50th Vehicular Technology Conf., VTS 1999, 5, 2929–2933 (1999).
30. R. Hashemian and S. V. J. Citta, A new gate image encoder: algorithm, designand implementation, in Proc. of 42nd Midwest Symp. Circuits and Systems, 1,418–421 (2000).
31. M. Skoglund and P. Hedelin, "Hadamard-based soft decoding for vector quantization over noisy channels," IEEE Trans. Inf. Theory 45 (2), 515–532 (1999).
32. S.-Y. Choi and S.-I. Chae, "Hierarchical motion estimation in Hadamard transform domain," Electron. Lett. 35 (25), 2187–2188 (1999).
33. P. Y. Cochet and R. Serpollet, Digital transform for a selective channel estimation (application to multicarrier data transmission), in Proc. of IEEE Int. Conf. on Communications, ICC 98 Conf. Record, 1, 349–354 (1998).
34. S. Muramatsu, A. Yamada and H. Kiya, The two-dimensional lapped Hadamard transform, in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS '98, Vol. 5, 86–89 (1998).
35. L. E. Nazarov and V. M. Smolyaninov, "Use of fast Walsh–Hadamard transformation for optimal symbol-by-symbol binary block-code decoding," Electron. Lett. 34 (3), 261–262 (1998).
36. D. Sundararajan and M. O. Ahmad, "Fast computation of the discrete Walsh and Hadamard transforms," IEEE Trans. Image Process. 7 (6), 898–904 (1998).
37. Ch.-P. Fan and J.-F. Yang, "Fixed-pipeline two-dimensional Hadamard transform algorithms," IEEE Trans. Signal Process. 45 (6), 1669–1674 (1997).
38. C.-F. Chan, Efficient implementation of a class of isotropic quadratic filters by using the Walsh–Hadamard transform, in Proc. of IEEE Int. Symp. Circuits and Systems, ISCAS '97, 4, 2601–2604 (1997).
39. A. R. Varkonyi-Koczy, Multi-sine synthesis and analysis via Walsh–Hadamard transformation, in Proc. of IEEE Int. Symp. Circuits and Systems, ISCAS '96, Connecting the World, 2, 457–460 (1996).
40. M. Colef and B. J. Vision, “NTSC component separation via Hadamardtransform,” IEEE Image Signal Process. 141 (1), 27–32 (1994).
41. T. Beer, “Walsh transforms,” Am. J. Phys 49 (5), 466–472 (1981).
42. G.-Z. Xiao and J. L. Massey, “A spectral characterization of correlation-immune combining functions,” IEEE Trans. Inf. Theory 34 (3), 569–571(1988).
43. C. Yuen, “Testing random number generators by Walsh transform,” IEEETrans. Comput C-26 (4), 329–333 (1977).
44. H. Larsen, “An algorithm to compute the sequency ordered Walsh transform,”IEEE Trans. Acoust. Speech Signal Process. ASSP-24, 335–336 (1976).
45. S. Agaian, H. Sarukhanyan, K. Egiazarian and J. Astola, Williamson-Hadamard transforms: design and fast algorithms, in Proc. of 18 Int. ScientificConf. on Information, Communication and Energy Systems and Technologies,ICEST-2003, Sofia, Bulgaria, Oct. 16–18, pp. 199–208 (2003).
46. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices, decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122 (2003) [doi:10.1117/12.473134].
47. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard transforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
48. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002) [doi:10.1117/12.46969].
49. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian, and J. Astola, Fast Hadamard transforms, in Proc. of Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP-2001, June 16–18, Pula, Croatia, TICSP Ser. 13, 33–40 (2001).
Chapter 9Orthogonal Arrays
We have seen that one of the basic Hadamard matrix building methods is based on the construction of a class of “special-component” matrices that can be plugged into arrays (templates) to generate Hadamard matrices. In Chapter 4, we discussed how to construct these special-component matrices. In this chapter, we focus on the second component of the plug-in template method: the construction of arrays/templates. Generally, the arrays into which suitable matrices are plugged are orthogonal designs (ODs), which have formally orthogonal rows (and columns).
The theory of ODs dates back over a century.1–3 ODs have several variations, such as the Goethals–Seidel arrays and Wallis–Whiteman arrays. Numerous approaches for the construction of these arrays/templates have been developed.4–101 A survey of OD applications, particularly space–time block coding, can be found in Refs. 3–7, 23, 24, 34, and 87–91.
The space–time block codes are particularly attractive because they can provide full transmit diversity while requiring a very simple decoupled maximum-likelihood decoding method.80–91 The combination of space and time diversity has moved the capacity of wireless communication systems toward theoretical limits; this technique has been adopted in the 3G standard in the form of the Alamouti code and in the newly proposed standard for wireless LANs, IEEE 802.11n.87
In this chapter, two plug-in template methods for the construction of Hadamard matrices are presented. We focus only on the construction of Baumert–Hall, Plotkin, and Welch arrays, which are subsets of ODs.
9.1 ODs
The original definition of an OD was proposed by Geramita et al.6 Dr. Seberry (see Fig. 9.1), a co-author of that paper, is world renowned for her discoveries on Hadamard matrices, ODs, statistical designs, and quaternion ODs (QODs). She also did important work on cryptography. Her studies of the application of discrete mathematics and combinatorial computing via bent functions and S-box design have led to the design of secure crypto algorithms and strong hashing algorithms for secure and reliable information transfer in networks and telecommunications. Her studies of Hadamard matrices and ODs are also applied in CDMA technologies.11
Figure 9.1 Dr. Jennifer Seberry.

An OD of order n and type (s1, s2, ..., sk), denoted by OD(n; s1, s2, ..., sk), is an n × n matrix D with entries from the set {0, ±x1, ±x2, ..., ±xk}, where each xi occurs si times in each row and column, such that the distinct rows are pairwise orthogonal, i.e.,

D(x_1, x_2, \ldots, x_k) D^T(x_1, x_2, \ldots, x_k) = \sum_{i=1}^{k} s_i x_i^2 I_n,   (9.1)
where In is an identity matrix of order n, and superscript T denotes transposition. Examples of ODs of orders 2, 4, 4, 4, and 8 and types (1, 1), (1, 1, 1, 1), (1, 1, 2), (1, 1), and (1, 1, 1, 1, 1, 1, 1, 1) are given as follows:
OD(2; 1, 1) = \begin{pmatrix} a & b \\ b & -a \end{pmatrix}, \quad
OD(4; 1, 1, 1, 1) = \begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix},

OD(4; 1, 1, 2) = \begin{pmatrix} a & b & b & d \\ -b & a & d & -b \\ -b & -d & a & b \\ -d & b & -b & a \end{pmatrix}, \quad
OD(4; 1, 1) = \begin{pmatrix} a & 0 & -c & 0 \\ 0 & a & 0 & c \\ c & 0 & a & 0 \\ 0 & -c & 0 & a \end{pmatrix},

OD(8; 1, 1, 1, 1, 1, 1, 1, 1) = \begin{pmatrix}
a & b & c & d & e & f & g & h \\
-b & a & d & -c & f & -e & -h & g \\
-c & -d & a & b & g & h & -e & -f \\
-d & c & -b & a & h & -g & f & -e \\
-e & -f & -g & -h & a & b & c & d \\
-f & e & -h & g & -b & a & -d & c \\
-g & h & e & -f & -c & d & a & -b \\
-h & -g & f & e & -d & -c & b & a
\end{pmatrix}.   (9.2)
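The defining identity (9.1) can be spot-checked numerically for the OD(4; 1, 1, 1, 1) above. The following Python sketch (ours, purely for illustration; the helper names are not from the text) substitutes integer values for the commuting variables:

```python
# Illustrative check of Eq. (9.1) for the OD(4; 1, 1, 1, 1) of Eq. (9.2).
from itertools import product

def od4(a, b, c, d):
    return [[a, -b, -c, -d],
            [b,  a, -d,  c],
            [c,  d,  a, -b],
            [d, -c,  b,  a]]

def gram(M):  # M M^T
    n = len(M)
    return [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

# For every substitution, D D^T = (a^2 + b^2 + c^2 + d^2) I_4.
for a, b, c, d in product([1, -2, 3], repeat=4):
    s = a*a + b*b + c*c + d*d
    assert gram(od4(a, b, c, d)) == \
        [[s if i == j else 0 for j in range(4)] for i in range(4)]
```

Because the array is linear in each variable, checking it at enough integer points is equivalent to the symbolic identity.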
It is well known that the maximum number of variables that may appear in an OD is given by Radon’s function ρ(n), which is defined by ρ(n) = 8c + 2^d, where n = 2^a b, b is an odd number, and a = 4c + d, 0 ≤ d < 4 (see, for example, Ref. 5).
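Radon's function is simple to compute; the short Python sketch below (our own helper, not from the text) implements the definition directly:

```python
def radon(n):
    """Radon's (Hurwitz-Radon) function: rho(n) = 8c + 2**d,
    where n = 2**a * b with b odd and a = 4c + d, 0 <= d < 4."""
    a = 0
    while n % 2 == 0:   # extract the power of two in n
        n //= 2
        a += 1
    c, d = divmod(a, 4)
    return 8 * c + 2 ** d

# rho(n) = n exactly for n = 1, 2, 4, 8 (the real division-algebra orders).
assert [radon(n) for n in (1, 2, 4, 8, 16, 32)] == [1, 2, 4, 8, 9, 10]
```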
Now we present two simple OD construction methods.5 Let two cyclic matrices A1, A2 of order n exist, satisfying the condition

A_1 A_1^T + A_2 A_2^T = f I_n.   (9.3)

If f is the quadratic form f = s_1 x_1^2 + s_2 x_2^2, then an OD, OD(2n; s1, s2), exists.
Proof: It can be verified that the matrices

\begin{pmatrix} A_1 & A_2 \\ -A_2^T & A_1^T \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} A_1 & A_2 R \\ -A_2 R & A_1 \end{pmatrix}   (9.4)

are OD(2n; s1, s2), where R is the back-diagonal identity matrix of order n. Similarly, if Bi, i = 1, 2, 3, 4 are cyclic matrices of order n with entries (0, x1, x2, ..., xk) satisfying the condition
\sum_{i=1}^{4} B_i B_i^T = \sum_{i=1}^{k} s_i x_i^2 I_n,   (9.5)
then the Goethals–Seidel array

\begin{pmatrix}
B_1 & B_2 R & B_3 R & B_4 R \\
-B_2 R & B_1 & -B_4^T R & B_3^T R \\
-B_3 R & B_4^T R & B_1 & -B_2^T R \\
-B_4 R & -B_3^T R & B_2^T R & B_1
\end{pmatrix}   (9.6)
is an OD(4n; s1, s2, . . . , sk) (see Ref. 4, p. 107, for more details).
Below we present the well-known OD construction methods:5,76,92
• If there is an OD, OD(n; s1, s2, ..., sk), then an OD, OD(2n; s1, s1, es2, ..., esk), exists, where e = 1 or 2.
• If there is an OD, OD(n; s1, s2, ..., sk), on the commuting variables (0, ±x1, ±x2, ..., ±xk), then there is an OD, OD(n; s1, s2, ..., si + sj, ..., sk), and an OD, OD(n; s1, s2, ..., s_{j-1}, s_{j+1}, ..., sk), on the k − 1 commuting variables (0, ±x1, ±x2, ..., ±x_{j-1}, ±x_{j+1}, ..., ±xk).
• If n ≡ 0 (mod 4), then the existence of W(n, n − 1) implies the existence of a skew-symmetric W(n, n − 1). The existence of the skew-symmetric W(n, k) is equivalent to the existence of OD(n; 1, k).
• An OD, OD(n; 1, k), can only exist in order n ≡ 4 (mod 8) if k is the sum of three squares. An OD, OD(n; 1, n − 2), can only exist in order n ≡ 4 (mod 8) if n − 2 is the sum of two squares.
• If four cyclic matrices A, B, C, D of order n exist satisfying

A A^T + B B^T + C C^T + D D^T = f I_n,   (9.7)
then

\begin{pmatrix}
A & BR & CR & DR \\
-BR & A & D^T R & -C^T R \\
-CR & -D^T R & A & B^T R \\
-DR & C^T R & -B^T R & A
\end{pmatrix}   (9.8)

is a W(4n, f) when A, B, C, D are (0, −1, +1) matrices, and an OD, OD(4n; s1, s2, ..., sk), on x1, x2, ..., xk when A, B, C, D have entries from (0, ±x1, ±x2, ..., ±xk) and f = \sum_{i=1}^{k} s_i x_i^2. Here, R is a back-diagonal identity matrix of order n.
• If there are four sequences A, B, C, D of length n with entries from (0, ±x1, ±x2, ±x3, ±x4) with zero periodic or nonperiodic autocorrelation functions, then these sequences can be used as the first rows of cyclic matrices that can be used in the Goethals–Seidel array to form an OD(4n; s1, s2, s3, s4). Note that if there are sequences of length n with zero nonperiodic autocorrelation functions, then there are sequences of length n + m for all m ≥ 0.
• ODs of order 2t = (m − 1)n and type (1, m − 1, mn − m − n) exist.
• If two Golay sequences of length m and a set of two Golay sequences of length k exist, then a three-variable full OD, OD[4(m + 2k); 4m, 4k, 4k], exists.76
Recently, Koukouvinos and Simos have constructed equivalent Hadamard matrices based on several new and old full ODs, using circulant and symmetric block matrices. In addition, they have provided several new constructions for ODs derived from sequences with zero autocorrelation. The ODs used to construct the equivalent Hadamard matrices are produced from theoretical and algorithmic constructions.76
Problem for exploration: an OD, OD(4n; t, t, t, t), exists for every positive integer t.
Several generalizations of real square ODs have followed, including generalized real ODs, complex ODs (CODs), generalized CODs, generalized complex linear processing ODs, and QODs.
9.1.1 ODs in the complex domain
Geramita and Geramita first studied ODs in the complex domain.6 Complex ODs COD(n; s1, s2, ..., sk) of type (s1, s2, ..., sk) are n × n matrices C with entries in the set {0, ±x1, ±jx1, ±x2, ±jx2, ..., ±xk, ±jxk} (j = \sqrt{-1}) satisfying the conditions

C C^H = C^H C = \sum_{i=1}^{k} s_i x_i^2 I_n,   (9.9)
where H denotes the Hermitian transpose (the complex-conjugate transpose).
An orthogonal complex array S of size n is an n × n matrix with entries from (z1, z2, ..., zn), their conjugates, or their products by j (j = \sqrt{-1}) such that

S^H S = \sum_{i=1}^{k} |z_i|^2 I_n.   (9.10)
For example, Alamouti’s 2 × 2 matrix is defined by

\begin{pmatrix} z_1 & z_2 \\ z_2^* & -z_1^* \end{pmatrix}.   (9.11)
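The orthogonality condition (9.10) for the Alamouti matrix can be checked directly; the Python sketch below (ours, illustrative only) uses built-in complex arithmetic:

```python
# Check S^H S = (|z1|^2 + |z2|^2) I_2 for the Alamouti matrix (9.11).
def alamouti(z1, z2):
    return [[z1, z2],
            [z2.conjugate(), -z1.conjugate()]]

def hermitian_gram(S):  # S^H S
    n = len(S[0])
    return [[sum(S[r][i].conjugate() * S[r][j] for r in range(len(S)))
             for j in range(n)] for i in range(n)]

z1, z2 = 1 + 2j, 3 - 1j
G = hermitian_gram(alamouti(z1, z2))
s = (z1 * z1.conjugate() + z2 * z2.conjugate()).real  # |z1|^2 + |z2|^2 = 15
assert G[0][0] == s and G[1][1] == s
assert G[0][1] == 0 and G[1][0] == 0
```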
This definition can be generalized to include r × n rectangular designs. The r × n rectangular designs apply to space–time block coding for multiple-antenna wireless communications.
Finally, in Refs. 87, 88, and 93, the authors generalized the definition of complex ODs by introducing ODs over the quaternion domain. They made the first step in building a theory of these novel quaternion ODs. The noncommutative quaternions, invented by Hamilton in 1843, can be viewed as a generalization of complex numbers. The quaternion units Q = {±1, ±i, ±j, ±k} satisfy i^2 = j^2 = k^2 = ijk = −1. A quaternion variable a = a1 + a2 i + a3 j + a4 k, where a1, a2, a3, a4 are real variables, has a quaternion conjugate defined by a* = a1 − a2 i − a3 j − a4 k. More information about quaternions and their properties can be found in Ref. 94. Several construction methods for obtaining QODs over quaternion variables have been introduced.3,87,88,93,95,96
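The quaternion relations above are easy to encode; the minimal sketch below (our own helper, not from the text) represents q = (a1, a2, a3, a4) as a1 + a2 i + a3 j + a4 k and checks the defining identities:

```python
# Minimal quaternion arithmetic; verifies i^2 = j^2 = k^2 = ijk = -1.
def qmul(p, q):
    a1, a2, a3, a4 = p
    b1, b2, b3, b4 = q
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def qconj(q):  # quaternion conjugate a* = a1 - a2*i - a3*j - a4*k
    a1, a2, a3, a4 = q
    return (a1, -a2, -a3, -a4)

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
MINUS_ONE = (-1, 0, 0, 0)
assert qmul(I, I) == qmul(J, J) == qmul(K, K) == MINUS_ONE
assert qmul(qmul(I, J), K) == MINUS_ONE                  # ijk = -1
assert qmul(I, J) == K and qmul(J, I) == (0, 0, 0, -1)   # noncommutativity
```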
Next, we present a definition of the quaternion orthogonal array and a simpleexample:
Definition 9.1.1:93 A QOD for commuting real variables x1, x2, ..., xu of type (s1, s2, ..., su) is an r × n matrix A with entries from {0, ±q1x1, ±q2x2, ..., ±quxu}, qh ∈ Q, that satisfies

A^Q A = \sum_{h=1}^{u} s_h x_h^2 I_n.   (9.12)
This design is denoted by QOD(r, n; s1, s2, ..., su). When r = n, we have

A^Q A = A A^Q = \sum_{h=1}^{u} s_h x_h^2 I_n.   (9.13)
Similarly, we define a QOD for commuting complex variables z1, z2, ..., zu of type (s1, s2, ..., su) as an n × r matrix A with entries from a set {0, ±q1z1, ±q1*z1*, ..., ±quzu, ±qu*zu*}, q ∈ Q, that satisfies

A^Q A = \sum_{h=1}^{u} s_h |z_h|^2 I_n.   (9.14)
Finally, we define a QOD for quaternion variables a1, a2, ..., au of type (s1, s2, ..., su) as an n × r matrix A with entries from a set {0, ±a1, ±a1^Q, ±a2, ±a2^Q, ..., ±au, ±au^Q}, q ∈ Q, that satisfies

A^Q A = \sum_{h=1}^{u} s_h |a_h|^2 I_n.   (9.15)
We can generalize these definitions to allow the design entries to be real linear combinations of the permitted variables and their quaternion multipliers, in which case we say the design is by linear processing.
Examples:

• The matrix X = \begin{pmatrix} -x_1 & i x_2 \\ -j x_2 & k x_1 \end{pmatrix} is a QOD on real variables x1, x2.

• The matrix Z = \begin{pmatrix} i z_1 & i z_2 \\ -j z_2^* & j z_1^* \end{pmatrix} is a QOD on complex variables z1, z2.

• The matrix A = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} is the most obvious example of a QOD on a quaternion variable a. Note that QODs on quaternion variables are the most difficult to construct.
Theorem 9.1.1:93 Let A and B be CODs, COD(n, n; s1, s2, ..., sk) and COD(n, n; t1, t2, ..., tk), respectively, on commuting complex variables z1, z2, ..., zk. If A^H B is symmetric, then A + jB is a QOD, QOD(n, n; s1 + t1, s2 + t2, ..., sk + tk), on the complex variables z1, z2, ..., zk, where A^H is the Hermitian transpose.
9.2 Baumert–Hall Arrays
Baumert–Hall arrays admit generalizations of Williamson’s theorem. Unfortunately, it is in general very difficult to find a Baumert–Hall array of order n, even for small n. The Baumert–Hall array of order 12 given below is the first Baumert–Hall array constructed in Refs. 7 and 97. The class of Baumert–Hall arrays of order 4t was constructed using T matrices and the Goethals–Seidel array of order 4.
Definition 9.2.1:7,97 A square matrix H(a, b, c, d) of order 4t is called a Baumert–Hall array of order 4t if it satisfies the following conditions:

(1) Each element of H(a, b, c, d) has the form ±x, x ∈ {a, b, c, d}.
(2) In any row (column), each variable x ∈ {a, b, c, d} appears exactly t times (as ±x).
(3) The rows (columns) of H(a, b, c, d) are formally orthogonal.
Example 9.2.1: (a) Baumert–Hall array of order 4 (also a Williamson array):

\begin{pmatrix} x & y & z & w \\ -y & x & -w & z \\ -z & w & x & -y \\ -w & -z & y & x \end{pmatrix},   (9.16)
(b) Baumert–Hall array of order 12:

A(x, y, z, w) =
[  y  x  x  x −z  z  w  y −w  w  z −y
  −x  y  x −x  w −w  z −y −z  z −w −y
  −x −x  y  x  w −y −y  w  z  z  w −z
  −x  x −x  y −w −w −z  w −z −y −y −z
  −y −y −z −w  z  x  x  x −w −w  z −y
  −w −w −z  y −x  z  x −x  y  y −z −w
   w −w  w −y −x −x  z  x  y −z −y −z
  −w −z  w −z −x  x −x  z −y  y −y  w
  −y  y −z −w −z −z  w  y  w  x  x  x
   z −z −y −w −y −y −w −z −x  w  x −x
  −z −z  y  z −y −w  y −w −x −x  w  x
   z −w −w  z  y −y  y  z −x  x −x  w ].   (9.17)
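The orthogonality of the order-4 array (9.16) can be confirmed in a few lines; the Python sketch below (ours, illustrative) also shows that every ±1 substitution yields a Hadamard matrix of order 4:

```python
# Spot-check of the order-4 array (9.16): H H^T = (x^2+y^2+z^2+w^2) I_4.
def bh4(x, y, z, w):
    return [[ x,  y,  z,  w],
            [-y,  x, -w,  z],
            [-z,  w,  x, -y],
            [-w, -z,  y,  x]]

def gram(M):  # M M^T
    n = len(M)
    return [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

H = bh4(1, -1, 1, 1)                       # a +/-1 substitution
assert all(v in (1, -1) for row in H for v in row)
assert gram(H) == [[4 if i == j else 0 for j in range(4)] for i in range(4)]

# With general integers the diagonal is the sum of squares (4+9+16+25 = 54).
assert gram(bh4(2, 3, 4, 5)) == \
    [[54 if i == j else 0 for j in range(4)] for i in range(4)]
```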
Theorem 9.2.1: If a Baumert–Hall array of order t and Williamson matrices of order n exist, then a Hadamard matrix of order 4nt also exists.
Definition 9.2.2:10,92 Square (0, ±1) matrices X1, X2, X3, X4 of order k are called T matrices if the following conditions are satisfied:

X_i * X_j = 0, i ≠ j, i, j = 1, 2, 3, 4;
X_i X_j = X_j X_i, i, j = 1, 2, 3, 4;
X_i R X_j^T = X_j R X_i^T, i, j = 1, 2, 3, 4;
\sum_{i=1}^{4} X_i is a (+1, −1) matrix;
\sum_{i=1}^{4} X_i X_i^T = k I_k.   (9.18)
Note that in Refs. 10 and 92, only cyclic T matrices were constructed. In this case,the second and the third conditions of Eq. (9.18) are automatically satisfied.
The first rows of some examples of cyclic T matrices of orders 3, 5, 7, and 9 are given as follows:
n = 3: X1 = (1, 0, 0), X2 = (0, 1, 0), X3 = (0, 0, 1);
n = 5: X1 = (1, 1, 0, 0, 0), X2 = (0, 0, 1, −1, 0), X3 = (0, 0, 0, 0, 1);
n = 7: X1 = (1, 0, 1, 0, 0, 0, 0), X2 = (0, 0, 0, −1, −1, 1, 0),
X3 = (0, 0, 0, 0, 0, 0, 1), X4 = (0, 1, 0, 0, 0, 0, 0);
n = 9: X1 = (1, 0, 1, 0, 1, 0, −1, 0, 0), X2 = (0, 1, 0, 1, 0, −1, 0, 1, 0),
X3 = (0, 0, 0, 0, 0, 0, 0, 0, 1).
(9.19)
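The T-matrix conditions are easy to verify mechanically from these first rows. The sketch below (ours; for the cyclic case the commutation and R-symmetry conditions of (9.18) hold automatically, so only the remaining three are checked, with X4 taken as the zero matrix for n = 5):

```python
# Check the n = 5 first rows in (9.19) generate T matrices of order 5.
def circulant(row):
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

rows = [(1, 1, 0, 0, 0), (0, 0, 1, -1, 0), (0, 0, 0, 0, 1), (0, 0, 0, 0, 0)]
X = [circulant(r) for r in rows]
n = 5

# Disjoint supports: X_i * X_j = 0 (Hadamard product) for i != j.
for i in range(4):
    for j in range(i + 1, 4):
        assert all(X[i][r][c] * X[j][r][c] == 0
                   for r in range(n) for c in range(n))

# Their sum is a (+1, -1) matrix.
assert all(sum(Xi[r][c] for Xi in X) in (1, -1)
           for r in range(n) for c in range(n))

# sum_i X_i X_i^T = 5 I_5.
S = [[sum(X[i][r][k] * X[i][c][k] for i in range(4) for k in range(n))
      for c in range(n)] for r in range(n)]
assert S == [[5 if r == c else 0 for c in range(n)] for r in range(n)]
```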
Remark 9.2.1: There are T matrices of order n, n ∈ M = {1, 3, 5, ..., 59, 61, 2^a 10^b 26^c + 1}, where a, b, and c are non-negative integers.10,11,92 See also Theorem 9.3.2, by which Baumert–Hall arrays of orders 335 and 603 are also constructed.
Theorem 9.2.2: (Cooper–Seberry92,97,98) Let X1, X2, X3, X4 be T matrices of order k. Then, the matrix

\begin{pmatrix}
A & BR & CR & DR \\
-BR & A & -D^T R & C^T R \\
-CR & D^T R & A & -B^T R \\
-DR & -C^T R & B^T R & A
\end{pmatrix}   (9.20)
is a Baumert–Hall array of order 4k, where
A = aX1 + bX2 + cX3 + dX4,
B = −bX1 + aX2 − dX3 + cX4,
C = −cX1 + dX2 + aX3 − bX4,
D = −dX1 − cX2 + bX3 + aX4.
(9.21)
Analyzing the expression of Eq. (9.21), we see that if instead of the parameters a, b, c, d we substitute parametric commutative matrices of order t, then the matrix in Eq. (9.20) will be a Baumert–Hall array of order 4kt. Furthermore, we shall call such matrices parametric Williamson matrices. The methods of their construction are given in the next chapter.
It is also obvious that if there is a Baumert–Hall array of the form (A_{i,j})_{i,j=1}^{4} of order 4k, where A_{i,j} are parametric commutative matrices, it is possible to construct similar matrices [Eq. (9.21)] suitable to the array in Eq. (9.20). The representations of matrices in a block form and their properties and methods of construction are considered in the subsequent chapters. Below, we give two orthogonal arrays of orders 20 and 36 constructed by Welch and Ono–Sawade–Yamamoto (see p. 363).7
9.3 A Matrices
Definition 9.3.1:92,98 A square matrix H(x1, x2, ..., xl) of order m with elements ±xi is called an A matrix depending on l parameters if it satisfies the condition

H(x_1^1, x_2^1, \ldots, x_l^1) H^T(x_1^2, x_2^2, \ldots, x_l^2) + H(x_1^2, x_2^2, \ldots, x_l^2) H^T(x_1^1, x_2^1, \ldots, x_l^1) = \frac{2m}{l} \sum_{i=1}^{l} x_i^1 x_i^2 I_m.   (9.22)
Note that the concept of an A matrix of order m depending on l parameters coincides with the following:
• Baumert–Hall array if l = 4.
• Plotkin array if l = 8.
• Yang array if l = 2.
The A matrix of order 12 depending on three parameters is given as follows:
A(a, b, c) =
[  a  b  c  b −c  a  c  b −a  c  a −b
   c  a  b −c  a  b  b −a  c  a −b  c
   b  c  a  a  b −c −a  c  b −b  c  a
  −b  c −a  a  b  c −c  b −a  c −a  b
   c −a −b  c  a  b  b −a −c −a  b  c
  −a −b  c  b  c  a −a −c  b  b  c −a
  −c −b  a  c −b  a  a  b  c −b −a  c
  −b  a −c −b  a  c  c  a  b −a  c −b
   a −c −b  a  c −b  b  c  a  c −b −a
  −c −a  b −c  a −b  b  a −c  a  b  c
  −a  b −c  a −b −c  a −c  b  c  a  b
   b −c −a −b −c  a −c  b  a  b  c  a ].   (9.23)
Note that for a, b, c = ±1, the above matrix is a Hadamard matrix of order 12. For a = 1, b = 2, and c = 1, this matrix is the integer orthogonal matrix
A(1, 2, 1) =
[  1  2  1  2 −1  1  1  2 −1  1  1 −2
   1  1  2 −1  1  2  2 −1  1  1 −2  1
   2  1  1  1  2 −1 −1  1  2 −2  1  1
  −2  1 −1  1  2  1 −1  2 −1  1 −1  2
   1 −1 −2  1  1  2  2 −1 −1 −1  2  1
  −1 −2  1  2  1  1 −1 −1  2  2  1 −1
  −1 −2  1  1 −2  1  1  2  1 −2 −1  1
  −2  1 −1 −2  1  1  1  1  2 −1  1 −2
   1 −1 −2  1  1 −2  2  1  1  1 −2 −1
  −1 −1  2 −1  1 −2  2  1 −1  1  2  1
  −1  2 −1  1 −2 −1  1 −1  2  1  1  2
   2 −1 −1 −2 −1  1 −1  2  1  2  1  1 ].   (9.24)
We can see that if H(x1, x2, ..., xl) is an A matrix, then H(±1, ±1, ..., ±1) is a Hadamard matrix.
Theorem 9.3.1:14,15,98 For the existence of an A matrix of order m depending on l parameters, it is necessary and sufficient that there are (0, ±1) matrices Ki, i = 1, 2, ..., l satisfying the conditions

K_i * K_j = 0, i ≠ j, i, j = 1, 2, ..., l,
K_i K_j^T + K_j K_i^T = 0, i ≠ j, i, j = 1, 2, ..., l,
K_i K_i^T = \frac{m}{l} I_m, i = 1, 2, ..., l.   (9.25)
The set of matrices Ki, i = 1, 2, ..., l satisfying the conditions of Eq. (9.25) is called an l-basic frame.
Lemma 9.3.1:14,98 If an A matrix of order m depending on l parameters (l = 2, 4, 8) exists, then there are A matrices Hi(x1, x2, ..., xl), i = 1, 2, ..., l satisfying the conditions

H_i(x_1, x_2, \ldots, x_l) H_j^T(x_1, x_2, \ldots, x_l) + H_j(x_1, x_2, \ldots, x_l) H_i^T(x_1, x_2, \ldots, x_l) = 0, i ≠ j.   (9.26)
Proof: By Theorem 9.3.1, there is an l-basic frame Ki, i = 1, 2, ..., l. Now we provide A matrices that satisfy the conditions of Eq. (9.26).
Case 1. l = 2,
H1(x1, x2) = x1K1 + x2K2, H2(x1, x2) = H1(−x2, x1). (9.27)
Case 2. l = 4,
H1(x1, x2, x3, x4) = x1K1 + x2K2 + x3K3 + x4K4,
H2(x1, x2, x3, x4) = H1(−x2, x1,−x4, x3),
H3(x1, x2, x3, x4) = H1(−x3, x4, x1,−x2),
H4(x1, x2, x3, x4) = H1(−x4,−x3, x2, x1).
(9.28)
Case 3. l = 8,
H1(x1, x2, . . . , x8) = x1K1 + x2K2 + · · · + x8K8,
H2(x1, x2, . . . , x8) = H1(−x2, x1, x4,−x3, x6,−x5,−x8, x7),
H3(x1, x2, . . . , x8) = H1(−x3,−x4, x1, x2, x7, x8,−x5,−x6),
H4(x1, x2, . . . , x8) = H1(−x4, x3,−x2, x1, x8,−x7, x6,−x5),
H5(x1, x2, . . . , x8) = H1(−x5,−x6,−x7,−x8, x1, x2, x3, x4),
H6(x1, x2, . . . , x8) = H1(−x6, x5,−x8, x7,−x2, x1,−x4, x3),
H7(x1, x2, . . . , x8) = H1(−x7, x8, x5,−x6,−x3, x4, x1,−x2),
H8(x1, x2, . . . , x8) = H1(−x8,−x7, x6, x5,−x4,−x3, x2, x1).
(9.29)
The following lemma relates to the construction of a 4-basic frame using T matrices.
Lemma 9.3.2: Let X1, X2, X3, X4 be T matrices of order n. Then, the following matrices form a 4-basic frame of order 4n:

K_1 = \begin{pmatrix}
X_1 B_n & X_2 & X_3 & X_4 \\
-X_2 & X_1 B_n & -X_4^T & X_3^T \\
-X_3 & -X_4^T & -X_1 B_n & X_2^T \\
-X_4 & X_3^T & -X_2^T & -X_1 B_n
\end{pmatrix}, \quad
K_2 = \begin{pmatrix}
X_2 B_n & -X_1 & X_4 & -X_3 \\
X_1 & X_2 B_n & X_3^T & X_4^T \\
X_4 & X_3^T & -X_2 B_n & -X_1^T \\
X_3 & X_4^T & X_1^T & -X_2 B_n
\end{pmatrix},   (9.30a)

K_3 = \begin{pmatrix}
X_3 B_n & -X_4 & -X_1 & X_2 \\
X_4 & X_3 B_n & -X_2^T & -X_1^T \\
-X_1 & -X_2^T & -X_3 B_n & -X_4^T \\
X_2 & -X_1^T & X_4^T & -X_3 B_n
\end{pmatrix}, \quad
K_4 = \begin{pmatrix}
X_4 B_n & X_3 & -X_2 & -X_1 \\
-X_3 & X_4 B_n & X_1^T & -X_2^T \\
-X_2 & X_1^T & -X_4 B_n & X_3^T \\
-X_1 & -X_2^T & -X_3^T & -X_4 B_n
\end{pmatrix}.   (9.30b)
Example 9.3.1: The 4-basic frame of order 12. Using the following T matrices:

X_1 = [ + 0 0 / 0 + 0 / 0 0 + ],  X_2 = [ 0 + 0 / 0 0 + / + 0 0 ],  X_3 = [ 0 0 + / + 0 0 / 0 + 0 ],   (9.31)

we obtain
K1 =
[ 0 0 + 0 + 0 0 0 + 0 0 0
  0 + 0 0 0 + + 0 0 0 0 0
  + 0 0 + 0 0 0 + 0 0 0 0
  0 − 0 0 0 + 0 0 0 0 + 0
  0 0 − 0 + 0 0 0 0 0 0 +
  − 0 0 + 0 0 0 0 0 + 0 0
  0 0 − 0 0 0 0 0 − 0 0 +
  − 0 0 0 0 0 0 − 0 + 0 0
  0 − 0 0 0 0 − 0 0 0 + 0
  0 0 0 0 + 0 0 0 − 0 0 −
  0 0 0 0 0 + − 0 0 0 − 0
  0 0 0 + 0 0 0 − 0 − 0 0 ],   (9.32a)
K2 =
[ 0 + 0 − 0 0 0 0 0 0 0 −
  + 0 0 0 − 0 0 0 0 − 0 0
  0 0 + 0 0 − 0 0 0 0 − 0
  + 0 0 0 + 0 0 + 0 0 0 0
  0 + 0 + 0 0 0 0 + 0 0 0
  0 0 + 0 0 + + 0 0 0 0 0
  0 0 0 0 + 0 0 − 0 − 0 0
  0 0 0 0 0 + − 0 0 0 − 0
  0 0 0 + 0 0 0 0 − 0 0 −
  0 0 + 0 0 0 + 0 0 0 − 0
  + 0 0 0 0 0 0 + 0 − 0 0
  0 + 0 0 0 0 0 0 + 0 0 − ],   (9.32b)
K3 =
[ + 0 0 0 0 0 − 0 0 0 + 0
  0 0 + 0 0 0 0 − 0 0 0 +
  0 + 0 0 0 0 0 0 − + 0 0
  0 0 0 + 0 0 0 0 − − 0 0
  0 0 0 0 0 + − 0 0 0 − 0
  0 0 0 0 + 0 0 − 0 0 0 −
  − 0 0 0 0 − 0 0 − 0 0 0
  0 − 0 − 0 0 0 − 0 0 0 0
  0 0 − 0 − 0 − 0 0 0 0 0
  0 + 0 − 0 0 0 0 0 − 0 0
  0 0 + 0 − 0 0 0 0 0 0 −
  + 0 0 0 0 − 0 0 0 0 − 0 ],   (9.32c)
K4 =
[ 0 0 0 0 0 + 0 − 0 − 0 0
  0 0 0 + 0 0 0 0 − 0 − 0
  0 0 0 0 + 0 − 0 0 0 0 −
  0 0 − 0 0 0 + 0 0 0 0 −
  − 0 0 0 0 0 0 + 0 − 0 0
  0 − 0 0 0 0 0 0 + 0 − 0
  0 − 0 + 0 0 0 0 0 0 + 0
  0 0 − 0 + 0 0 0 0 0 0 +
  − 0 0 0 0 + 0 0 0 + 0 0
  − 0 0 0 0 − 0 − 0 0 0 0
  0 − 0 − 0 0 0 0 − 0 0 0
  0 0 − 0 − 0 − 0 0 0 0 0 ].   (9.32d)
Theorem 9.3.2: Let A0, B0, C0, D0 and X1, X2 be T matrices of orders m and n, respectively. Then, the following matrices are T matrices of order mn^i, i = 1, 2, ...:

A_i = X_1 ⊗ A_{i-1} − X_2 ⊗ B_{i-1}^T,
B_i = X_1 ⊗ B_{i-1} + X_2 ⊗ A_{i-1}^T,
C_i = X_1 ⊗ C_{i-1} − X_2 ⊗ D_{i-1}^T,
D_i = X_1 ⊗ D_{i-1} + X_2 ⊗ C_{i-1}^T.   (9.33)
Proof: We prove that the matrices in Eq. (9.33) satisfy the conditions of Eq. (9.18), by induction. Let i = 1. Compute

A_1 * B_1 = (X_1 * X_1) ⊗ (A_0 * B_0) + (X_1 * X_2) ⊗ (A_0 * A_0^T) − (X_2 * X_1) ⊗ (B_0^T * B_0) − (X_2 * X_2) ⊗ (B_0^T * A_0^T).   (9.34)

Since A_0 * B_0 = 0, X_1 * X_2 = 0, and B_0^T * A_0^T = 0, we conclude that A_1 * B_1 = 0. [Recall that * denotes the Hadamard (pointwise) product.] In a similar
manner, we can determine that

P * Q = 0, P ≠ Q, P, Q ∈ {A_1, B_1, C_1, D_1}.   (9.35)
We prove the second condition of Eq. (9.18):

A_1 B_1 = X_1^2 ⊗ A_0 B_0 + X_1 X_2 ⊗ A_0 A_0^T − X_2 X_1 ⊗ B_0^T B_0 − X_2^2 ⊗ B_0^T A_0^T,
B_1 A_1 = X_1^2 ⊗ B_0 A_0 − X_1 X_2 ⊗ B_0 B_0^T + X_2 X_1 ⊗ A_0^T A_0 − X_2^2 ⊗ A_0^T B_0^T.   (9.36)

The comparison of both relations gives A_1 B_1 = B_1 A_1. Now, compute A_1 R_{mn} B_1^T, taking into account that R_{mn} = R_n ⊗ R_m:
A_1 R_{mn} B_1^T = X_1 R_n X_1^T ⊗ A_0 R_m B_0^T + X_1 R_n X_2^T ⊗ A_0 R_m A_0 − X_2 R_n X_1^T ⊗ B_0^T R_m B_0^T − X_2 R_n X_2^T ⊗ B_0^T R_m A_0,
B_1 R_{mn} A_1^T = X_1 R_n X_1^T ⊗ B_0 R_m A_0^T − X_1 R_n X_2^T ⊗ B_0 R_m B_0 + X_2 R_n X_1^T ⊗ A_0^T R_m A_0^T − X_2 R_n X_2^T ⊗ A_0^T R_m B_0.   (9.37)

Hence, we have A_1 R_{mn} B_1^T = B_1 R_{mn} A_1^T. Similarly, we can determine that

P R_{mn} Q^T = Q R_{mn} P^T, P ≠ Q, P, Q ∈ {A_1, B_1, C_1, D_1}.   (9.38)
The fourth condition of Eq. (9.18) follows from the equation

A_1 + B_1 + C_1 + D_1 = X_1 ⊗ (A_0 + B_0 + C_0 + D_0) + X_2 ⊗ (A_0^T − B_0^T + C_0^T − D_0^T).   (9.39)
Now let us prove the fifth condition:

A_1 A_1^T = X_1 X_1^T ⊗ A_0 A_0^T − X_1 X_2^T ⊗ A_0 B_0 − X_2 X_1^T ⊗ B_0^T A_0^T + X_2 X_2^T ⊗ B_0^T B_0,
B_1 B_1^T = X_1 X_1^T ⊗ B_0 B_0^T + X_1 X_2^T ⊗ B_0 A_0 + X_2 X_1^T ⊗ A_0^T B_0^T + X_2 X_2^T ⊗ A_0^T A_0,
C_1 C_1^T = X_1 X_1^T ⊗ C_0 C_0^T − X_1 X_2^T ⊗ C_0 D_0 − X_2 X_1^T ⊗ D_0^T C_0^T + X_2 X_2^T ⊗ D_0^T D_0,
D_1 D_1^T = X_1 X_1^T ⊗ D_0 D_0^T + X_1 X_2^T ⊗ D_0 C_0 + X_2 X_1^T ⊗ C_0^T D_0^T + X_2 X_2^T ⊗ C_0^T C_0.   (9.40)

Summing these expressions, we find that

A_1 A_1^T + B_1 B_1^T + C_1 C_1^T + D_1 D_1^T = (X_1 X_1^T + X_2 X_2^T) ⊗ (A_0 A_0^T + B_0 B_0^T + C_0 C_0^T + D_0 D_0^T) = mn I_{mn}.   (9.41)
Now, assuming that the matrices A_i, B_i, C_i, D_i are T matrices for all i ≤ k, we prove the theorem for i = k + 1. We verify only the fifth condition of Eq. (9.18):

A_{k+1} A_{k+1}^T + B_{k+1} B_{k+1}^T + C_{k+1} C_{k+1}^T + D_{k+1} D_{k+1}^T = (X_1 X_1^T + X_2 X_2^T) ⊗ (A_k A_k^T + B_k B_k^T + C_k C_k^T + D_k D_k^T).   (9.42)
Because X_1, X_2 and A_k, B_k, C_k, D_k are T matrices of orders n and mn^k, respectively, from the above equation we obtain

A_{k+1} A_{k+1}^T + B_{k+1} B_{k+1}^T + C_{k+1} C_{k+1}^T + D_{k+1} D_{k+1}^T = mn^{k+1} I_{mn^{k+1}}.   (9.43)
The theorem is proved.
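Theorem 9.3.2 can be spot-checked numerically for i = 1. In the sketch below (ours, illustrative) the doubling pair is taken as X1 = I_2 and X2 = [[0, 1], [1, 0]] (an assumed example: X1 * X2 = 0, X1 + X2 is a (+1, −1) matrix, and X1 X1^T + X2 X2^T = 2 I_2), and A0, B0, C0, D0 are the order-3 T matrices used later in Eq. (9.46):

```python
# Spot-check the recursion (9.33) for i = 1: order 3 -> order 6 T matrices.
def circulant(row):
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def t(A):
    return [list(col) for col in zip(*A)]

def kron(X, A):  # Kronecker product X (x) A
    return [[x * a for x in rx for a in ra] for rx in X for ra in A]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

X1, X2 = [[1, 0], [0, 1]], [[0, 1], [1, 0]]
A0, B0 = circulant((1, 0, 0)), circulant((0, 1, 0))
C0, D0 = circulant((0, 0, 1)), [[0] * 3 for _ in range(3)]

A1 = msub(kron(X1, A0), kron(X2, t(B0)))   # Eq. (9.33)
B1 = madd(kron(X1, B0), kron(X2, t(A0)))
C1 = msub(kron(X1, C0), kron(X2, t(D0)))
D1 = madd(kron(X1, D0), kron(X2, t(C0)))

n, T = 6, [A1, B1, C1, D1]
# Their sum is a (+1, -1) matrix, and sum_i T_i T_i^T = 6 I_6.
assert all(sum(M[r][c] for M in T) in (1, -1)
           for r in range(n) for c in range(n))
S = [[sum(M[r][k] * M[c][k] for M in T for k in range(n)) for c in range(n)]
     for r in range(n)]
assert S == [[6 if r == c else 0 for c in range(n)] for r in range(n)]
```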
Lemma 9.3.3: If A_0, B_0, C_0, D_0 are T matrices of order m, then the following matrices:

A_i = I_6 ⊗ B_{i-1} + X_1 ⊗ C_{i-1} + X_2 ⊗ D_{i-1},
B_i = I_6 ⊗ A_{i-1} − X_1 ⊗ D_{i-1} + X_2^T ⊗ C_{i-1},
C_i = I_6 ⊗ D_{i-1} + X_1 ⊗ A_{i-1} − X_2^T ⊗ B_{i-1},
D_i = I_6 ⊗ C_{i-1} + X_1 ⊗ B_{i-1} + X_2 ⊗ A_{i-1}   (9.44)
are T matrices of order 6^i m, where i = 1, 2, ..., and

X_1 =
[ 0 0 0 + 0 0
  0 0 0 0 + 0
  0 0 0 0 0 +
  + 0 0 0 0 0
  0 + 0 0 0 0
  0 0 + 0 0 0 ],
X_2 =
[ 0 + − 0 − −
  − 0 + − 0 −
  − − 0 + − 0
  0 − − 0 + −
  − 0 − − 0 +
  + − 0 − − 0 ].   (9.45)
For example, let

A_0 = [ + 0 0 / 0 + 0 / 0 0 + ],  B_0 = [ 0 + 0 / 0 0 + / + 0 0 ],  C_0 = [ 0 0 + / + 0 0 / 0 + 0 ],  D_0 = [ 0 0 0 / 0 0 0 / 0 0 0 ].   (9.46)
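That these four order-3 matrices satisfy all five T-matrix conditions of Eq. (9.18) can be checked directly; the sketch below (ours, illustrative; R is the back-diagonal identity of order 3) does so exhaustively:

```python
# Check that A0, B0, C0, D0 of Eq. (9.46) are T matrices of order k = 3.
A0 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B0 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
C0 = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
D0 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
T = [A0, B0, C0, D0]
R = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def t(A):
    return [list(col) for col in zip(*A)]

for i in range(4):
    for j in range(4):
        if i != j:  # disjoint supports
            assert all(T[i][r][c] * T[j][r][c] == 0
                       for r in range(3) for c in range(3))
        assert mm(T[i], T[j]) == mm(T[j], T[i])                      # commute
        assert mm(mm(T[i], R), t(T[j])) == mm(mm(T[j], R), t(T[i]))  # R-symmetry

assert all(sum(M[r][c] for M in T) == 1 for r in range(3) for c in range(3))
S = [[sum(M[r][k] * M[c][k] for M in T for k in range(3)) for c in range(3)]
     for r in range(3)]
assert S == [[3, 0, 0], [0, 3, 0], [0, 0, 3]]
```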
From Lemma 9.3.3, we obtain T matrices of order 18 (here O_0 denotes the zero matrix of order 3):

A_1 =
[ B_0 O_0 O_0 C_0 O_0 O_0
  O_0 B_0 O_0 O_0 C_0 O_0
  O_0 O_0 B_0 O_0 O_0 C_0
  C_0 O_0 O_0 B_0 O_0 O_0
  O_0 C_0 O_0 O_0 B_0 O_0
  O_0 O_0 C_0 O_0 O_0 B_0 ],
B_1 =
[ A_0 −C_0 −C_0 O_0 −C_0 C_0
  C_0 A_0 −C_0 −C_0 O_0 −C_0
  −C_0 C_0 A_0 −C_0 −C_0 O_0
  O_0 −C_0 C_0 A_0 −C_0 −C_0
  −C_0 O_0 −C_0 C_0 A_0 −C_0
  −C_0 −C_0 O_0 −C_0 C_0 A_0 ],

C_1 =
[ D_0 B_0 B_0 O_0 B_0 −B_0
  −B_0 D_0 B_0 B_0 O_0 B_0
  B_0 −B_0 D_0 B_0 B_0 O_0
  O_0 B_0 −B_0 D_0 B_0 B_0
  B_0 O_0 B_0 −B_0 D_0 B_0
  B_0 B_0 O_0 B_0 −B_0 D_0 ],
D_1 =
[ C_0 A_0 −A_0 B_0 −A_0 −A_0
  −A_0 C_0 A_0 −A_0 B_0 −A_0
  −A_0 −A_0 C_0 A_0 −A_0 B_0
  B_0 −A_0 −A_0 C_0 A_0 −A_0
  −A_0 B_0 −A_0 −A_0 C_0 A_0
  A_0 −A_0 B_0 −A_0 −A_0 C_0 ].   (9.47)
From Theorem 9.3.2 and Lemma 9.3.3, there follows the existence of T matrices of order 2n, where n takes its values from the following set of numbers:
{63, 65, 75, 77, 81, 85, 87, 91, 93, 95, 99, 111, 115, 117, 119, 123,125, 129, 133, 135, 141, 148, 145, 147, 153, 155, 159, 161, 165, 169, 171,175, 177, 185, 189, 195, 203, 205, 209, 215, 221, 225, 231, 235, 243, 245,247, 255, 259, 265, 273, 275, 285, 287, 295, 297, 299, 301, 303, 305, 315,323, 325, 329, 343, 345, 351, 357, 361, 371, 375, 377, 385, 387, 399, 403,405, 413, 425, 427, 429, 435, 437, 441, 455, 459, 465, 475, 481, 483, 495,505, 507, 513, 525, 533, 551, 555, 559, 567, 575, 585, 589, 603, 609, 611,615, 621, 625, 627, 637, 645, 651, 663, 665, 675, 689, 693, 703, 705, 707,715, 725, 729, 735, 741, 765, 767, 771, 775, 777, 779, 783, 793, 805, 817,819, 825, 837, 845, 855, 861, 875, 885, 891, 893, 903, 915, 925, 931, 945,963, 969, 975, 987, 999, 1005, 1007, 1025, 1029, 1045, 1053, 1071, 1075,1083, 1107, 1113, 1121, 1125, 1127, 1155, 1159, 1161, 1175, 1197, 1203,1215, 1225, 1235, 1239, 1251, 1269, 1275, 1281, 1285, 1305, 1313, 1323,1325, 1365, 1375, 1377, 1407, 1425, 1431, 1463, 1475, 1485, 1515, 1525,1539, 1563, 1575, 1593, 1605, 1625, 1647, 1677, 1701, 1755, 1799, 1827,1919, 1923, 1935, 2005, 2025, 2085, 2093, 2121, 2187, 2205, 2243, 2403,2415, 2451, 2499, 2525, 2565, 2613, 2625, 2709, 2717, 2727, 2807, 2835,2919, 3003, 3015, 3059}.
9.4 Goethals–Seidel Arrays
Goethals and Seidel17 provide the most useful tool for constructing ODs. The Goethals–Seidel array is of the form

\begin{pmatrix}
A & BR & CR & DR \\
-BR & A & -D^T R & C^T R \\
-CR & D^T R & A & -B^T R \\
-DR & -C^T R & B^T R & A
\end{pmatrix},   (9.48)

where R is the back-diagonal identity matrix and A, B, C, and D are cyclic (−1, +1) matrices of order n satisfying

A A^T + B B^T + C C^T + D D^T = 4n I_n.   (9.49)
If A, B, C, and D are cyclic symmetric (−1, +1) matrices, then the Williamson array results:

W = \begin{pmatrix}
A & B & C & D \\
-B & A & -D & C \\
-C & D & A & -B \\
-D & -C & B & A
\end{pmatrix}.   (9.50)

The Goethals–Seidel array is thus a generalization of the Williamson array.
For detailed discussions of more Goethals–Seidel arrays, we recommend Refs. 28, 30, 33, 39, 40, 92, and 97–99. In Ref. 26, the authors have constructed an infinite family of Goethals–Seidel arrays. In particular, they prove that if q = 4n − 1 ≡ 3 (mod 8) is a prime power, then there is a Hadamard matrix of order 4n of the Goethals–Seidel type.
Cooper and Wallis32 first defined T matrices of order t (see Definition 9.2.2) to construct OD(4t; t, t, t, t) (which at that time they called Hadamard arrays). The following important theorem is valid:
Theorem 9.4.1: (Cooper–Seberry–Turyn) Suppose there are T matrices T1, T2, T3, T4 of order t (assumed to be cyclic or block cyclic = type 1). Let a, b, c, d be commuting variables. Then,
A = aT1 + bT2 + cT3 + dT4,
B = −bT1 + aT2 + dT3 − cT4,
C = −cT1 − dT2 + aT3 + bT4,
D = −dT1 + cT2 − bT3 + aT4.
(9.51)
These can be used in the Goethals–Seidel array (or the Seberry Wallis–Whiteman array for block-cyclic, i.e., type 1 and type 2, matrices)

$$\begin{pmatrix}
A & BR & CR & DR\\
-BR & A & D^TR & -C^TR\\
-CR & -D^TR & A & B^TR\\
-DR & C^TR & -B^TR & A
\end{pmatrix} \qquad (9.52)$$

to form an OD(4t; t, t, t, t), where R is the permutation matrix that transforms cyclic to back-cyclic matrices, or type 1 to type 2 matrices.
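To make Theorem 9.4.1 concrete, here is a small numerical sketch (our illustration, not from the book): T1 = I2, T2 = circ(0, 1), T3 = T4 = 0 form T matrices of order t = 2, and plugging Eq. (9.51) into the array of Eq. (9.52) yields an OD(8; 2, 2, 2, 2):

```python
import numpy as np

# T matrices of order t = 2: disjoint (0, +1) circulants whose sum is the
# all-ones matrix and whose Gram sum is t * I_t
t = 2
T1 = np.eye(t, dtype=int)
T2 = np.array([[0, 1], [1, 0]])
T3 = np.zeros((t, t), dtype=int)
T4 = np.zeros((t, t), dtype=int)

a, b, c, d = 1, 2, 3, 4   # stand-ins for the commuting variables
A = a*T1 + b*T2 + c*T3 + d*T4
B = -b*T1 + a*T2 + d*T3 - c*T4
C = -c*T1 - d*T2 + a*T3 + b*T4
D = -d*T1 + c*T2 - b*T3 + a*T4

R = np.fliplr(np.eye(t, dtype=int))
GS = np.block([
    [ A,       B @ R,     C @ R,     D @ R],
    [-B @ R,   A,         D.T @ R,  -C.T @ R],
    [-C @ R,  -D.T @ R,   A,         B.T @ R],
    [-D @ R,   C.T @ R,  -B.T @ R,   A],
])
s = t * (a*a + b*b + c*c + d*d)
assert np.array_equal(GS @ GS.T, s * np.eye(4 * t, dtype=int))
print("OD(8; 2, 2, 2, 2) obtained")
```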
Theorem 9.4.2: (Cooper–Seberry–Turyn) Suppose there are T matrices T1, T2, T3, T4 of order t (assumed to be cyclic or block cyclic, i.e., type 1). Let A, B, C, D be Williamson matrices of order m. Then,

$$\begin{aligned}
X &= T_1 \otimes A + T_2 \otimes B + T_3 \otimes C + T_4 \otimes D,\\
Y &= -T_1 \otimes B + T_2 \otimes A + T_3 \otimes D - T_4 \otimes C,\\
Z &= -T_1 \otimes C - T_2 \otimes D + T_3 \otimes A + T_4 \otimes B,\\
W &= -T_1 \otimes D + T_2 \otimes C - T_3 \otimes B + T_4 \otimes A
\end{aligned} \qquad (9.53)$$
can be used in the Goethals–Seidel array

$$GS = \begin{pmatrix}
X & YR & ZR & WR\\
-YR & X & W^TR & -Z^TR\\
-ZR & -W^TR & X & Y^TR\\
-WR & Z^TR & -Y^TR & X
\end{pmatrix} \qquad (9.54)$$
to form a Hadamard matrix of order 4mt. Geramita and Seberry provide a number of 8 × 8 arrays that can be treated as part Williamson and part Goethals–Seidel (see Ref. 5, p. 102).
Definition 9.4.1:92 We will call a square matrix H(X1, X2, X3, X4) of order 4t a Goethals–Seidel array if the following conditions are satisfied:

(1) Each element of H has the form $\pm X_i$, $\pm X_i^T$, $\pm X_iB_n$, or $\pm X_i^TB_n$.
(2) In each row (column) of H, the elements $\pm X_i$, $\pm X_i^T$, $\pm X_iB_n$, $\pm X_i^TB_n$ are entered t times.
(3) $X_iX_j = X_jX_i$ and $X_iB_nX_j^T = X_jB_nX_i^T$.
(4) $H(X_1, X_2, X_3, X_4)\,H^T(X_1^T, X_2^T, X_3^T, X_4^T) = t\sum_{i=1}^{4} X_iX_i^T \otimes I_{4t}$.
Note that for t = 1 and $B_n = R$, the defined array coincides with Eq. (9.20), and for t = 1 and $B_n = R_m \otimes I_k$ (n = mk), the defined array coincides with the Wallis array,13

$$\begin{pmatrix}
A_1 \otimes B_1 & A_2R \otimes B_2 & A_3R \otimes B_3 & A_4R \otimes B_4\\
-A_2R \otimes B_2 & A_1 \otimes B_1 & -A_4^TR \otimes B_4 & A_3^TR \otimes B_3\\
-A_3R \otimes B_3 & A_4^TR \otimes B_4 & A_1 \otimes B_1 & -A_2^TR \otimes B_2\\
-A_4R \otimes B_4 & -A_3^TR \otimes B_3 & A_2^TR \otimes B_2 & A_1 \otimes B_1
\end{pmatrix}. \qquad (9.55)$$
Here we give an example of a Hadamard matrix of order 12 constructed using a Wallis array. Let

$$B_1 = B_2 = B_3 = B_4 = 1, \quad
A_1 = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad
A_2 = A_3 = A_4 = \begin{pmatrix} + & - & -\\ - & + & -\\ - & - & + \end{pmatrix}. \qquad (9.56)$$
Then, the following is the Wallis-type Hadamard matrix of order 12:

$$\begin{pmatrix}
+&+&+&-&-&+&-&-&+&-&-&+\\
+&+&+&-&+&-&-&+&-&-&+&-\\
+&+&+&+&-&-&+&-&-&+&-&-\\
+&+&-&+&+&+&+&+&-&-&-&+\\
+&-&+&+&+&+&+&-&+&-&+&-\\
-&+&+&+&+&+&-&+&+&+&-&-\\
+&+&-&-&-&+&+&+&+&+&+&-\\
+&-&+&-&+&-&+&+&+&+&-&+\\
-&+&+&+&-&-&+&+&+&-&+&+\\
+&+&-&+&+&-&-&-&+&+&+&+\\
+&-&+&+&-&+&-&+&-&+&+&+\\
-&+&+&-&+&+&+&-&-&+&+&+
\end{pmatrix}. \qquad (9.57)$$
Theorem 9.4.3: (Generalized Goethals–Seidel Theorem97) If there are Williamson-type matrices of order n and a Goethals–Seidel array of order 4t, then a Hadamard matrix of order 4tn exists.
Lemma 9.4.1: Let A, B, C, D be Williamson-type matrices of order k and let

$$\begin{pmatrix}
X & YG & ZG & WG\\
-YG & X & -W^TG & Z^TG\\
-ZG & W^TG & X & -Y^TG\\
-WG & -Z^TG & Y^TG & X
\end{pmatrix} \qquad (9.58)$$

be a Goethals–Seidel array of order 4. Then, the matrix

$$\begin{pmatrix}
X \otimes E & YG \otimes E & ZG \otimes E & WG \otimes E & X \otimes F & YG \otimes F & ZG \otimes F & WG \otimes F\\
-YG \otimes E & X \otimes E & -W^TG \otimes E & Z^TG \otimes E & YG \otimes F & -X \otimes F & -W^TG \otimes F & Z^TG \otimes F\\
-ZG \otimes E & W^TG \otimes E & X \otimes E & -Y^TG \otimes E & ZG \otimes F & W^TG \otimes F & -X \otimes F & -Y^TG \otimes F\\
-WG \otimes E & -Z^TG \otimes E & Y^TG \otimes E & X \otimes E & WG \otimes F & -Z^TG \otimes F & Y^TG \otimes F & -X \otimes F\\
-X \otimes F & -YG \otimes F & -ZG \otimes F & -WG \otimes F & X \otimes E & YG \otimes E & ZG \otimes E & WG \otimes E\\
-YG \otimes F & X \otimes F & -W^TG \otimes F & Z^TG \otimes F & -YG \otimes E & X \otimes E & -W^TG \otimes E & Z^TG \otimes E\\
-ZG \otimes F & W^TG \otimes F & X \otimes F & -Y^TG \otimes F & -ZG \otimes E & W^TG \otimes E & X \otimes E & -Y^TG \otimes E\\
-WG \otimes F & -Z^TG \otimes F & Y^TG \otimes F & X \otimes F & -WG \otimes E & -Z^TG \otimes E & Y^TG \otimes E & X \otimes E
\end{pmatrix} \qquad (9.59)$$

is the Goethals–Seidel array of order 16k, where

$$E = \begin{pmatrix} A & B\\ -B & A \end{pmatrix}, \qquad
F = \begin{pmatrix} C & D\\ D & -C \end{pmatrix}. \qquad (9.60)$$
Consider a (0, ±1) matrix $P = (p_{i,j})$ of order 4n with elements defined as

$$p_{2i-1,2i} = 1, \quad p_{2i,2i-1} = -1, \quad i = 1, 2, \ldots, 2n, \qquad (9.61)$$

and $p_{i,j} = 0$ for all other pairs (i, j). We can verify that $P^T = -P$ and $PP^T = I_{4n}$.
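For instance (an illustration of ours, taking n = 2), P is block diagonal with 2n copies of the 2 × 2 block [[0, 1], [−1, 0]], and both properties are immediate to verify:

```python
import numpy as np

n = 2                       # any positive n works; P has order 4n
m = 4 * n
P = np.zeros((m, m), dtype=int)
for i in range(1, m // 2 + 1):        # i = 1, ..., 2n
    P[2*i - 2, 2*i - 1] = 1           # p_{2i-1, 2i} = 1   (0-based indexing)
    P[2*i - 1, 2*i - 2] = -1          # p_{2i, 2i-1} = -1

assert np.array_equal(P.T, -P)                         # P^T = -P
assert np.array_equal(P @ P.T, np.eye(m, dtype=int))   # PP^T = I_{4n}
print("P^T = -P and PP^T = I hold")
```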
Lemma 9.4.2: If there is a Goethals–Seidel array of order 4t, then there are two Goethals–Seidel arrays H1 and H2 of order 4t satisfying the condition

$$H_1H_2^T + H_2H_1^T = 0. \qquad (9.62)$$

Proof: Let H(X1, X2, X3, X4) be a Goethals–Seidel array of order 4t. We prove that the matrices $H_1 = H(X_1, X_2, X_3, X_4)$ and $H_2 = H(X_1, X_2, X_3, X_4)(I_k \otimes P)$ satisfy the condition of Eq. (9.62), where the matrix P is defined by Eq. (9.61):

$$H_1H_2^T + H_2H_1^T = H(I_k \otimes P^T)H^T + H(I_k \otimes P)H^T
= -H(I_k \otimes P)H^T + H(I_k \otimes P)H^T = 0. \qquad (9.63)$$
Theorem 9.4.4: Let a Goethals–Seidel array of order 4t and Williamson-typematrices of order k exist. Then, there is a Goethals–Seidel array of order 4kt.
Proof: Let A, B, C, and D be Williamson matrices of order k and H(X1, X2, X3, X4) be a Goethals–Seidel array of order 4t. According to Lemma 9.4.2, there are two Goethals–Seidel arrays H1 and H2 satisfying the condition of Eq. (9.62). Consider the array

$$S(X_1, X_2, X_3, X_4) = X \otimes H_1 + Y \otimes H_2, \qquad (9.64)$$

where the matrices

$$X = \frac{1}{2}\begin{pmatrix} A + B & C + D\\ C + D & -A - B \end{pmatrix}, \qquad
Y = \frac{1}{2}\begin{pmatrix} A - B & C - D\\ -C + D & A - B \end{pmatrix} \qquad (9.65)$$

satisfy the conditions

$$\begin{aligned}
&X * Y = 0,\\
&X \pm Y \text{ is a } (+1, -1) \text{ matrix},\\
&XY^T = YX^T,\\
&XX^T + YY^T = 2kI_{2k}.
\end{aligned} \qquad (9.66)$$

We can prove that $S(X_1, X_2, X_3, X_4)$ is the Goethals–Seidel array of order 4kt, i.e., it satisfies the conditions of Definition 9.4.1. Let us check only the fourth condition:

$$\begin{aligned}
SS^T &= XX^T \otimes H_1H_1^T + YY^T \otimes H_2H_2^T + XY^T \otimes (H_1H_2^T + H_2H_1^T)\\
&= t(XX^T + YY^T) \otimes \sum_{i=1}^{4} X_iX_i^T \otimes I_{4t} + XY^T \otimes (H_1H_2^T + H_2H_1^T)\\
&= kt\sum_{i=1}^{4} X_iX_i^T \otimes I_{4kt}. \qquad (9.67)
\end{aligned}$$
In particular, from Theorem 9.4.4, the existence of Goethals–Seidel arrays of order 8n follows, where n ∈ {3, 5, 7, . . . , 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63}.
9.5 Plotkin Arrays
Plotkin's array100 is defined similarly to the Baumert–Hall array and depends on eight parameters. Plotkin's result is that if there is a Hadamard matrix of order 4n, then there are Plotkin arrays of orders 4n, 8n, and 16n depending on two, four, and eight symbols, respectively, obtained as follows. Let $H_{4n}$ be the Hadamard matrix of order 4n. Introduce the following (0, −1, +1) matrices of order 4n:

$$\begin{aligned}
S &= \frac{1}{2}\begin{pmatrix} I_{2n} & -I_{2n}\\ I_{2n} & I_{2n} \end{pmatrix}H_{4n}, &\quad
T &= \frac{1}{2}\begin{pmatrix} I_{2n} & I_{2n}\\ -I_{2n} & I_{2n} \end{pmatrix}H_{4n},\\
U &= \frac{1}{2}\begin{pmatrix} I_{2n} & -I_{2n}\\ -I_{2n} & -I_{2n} \end{pmatrix}H_{4n}, &\quad
V &= \frac{1}{2}\begin{pmatrix} I_{2n} & I_{2n}\\ I_{2n} & -I_{2n} \end{pmatrix}H_{4n}.
\end{aligned} \qquad (9.68)$$
Then, we obtain the following:

• Plotkin array of order 4n,

$$H_{4n}(a, b) = aS + bT; \qquad (9.69)$$

• Plotkin array of order 8n,

$$H_{8n}(a, b, c, d) = \begin{pmatrix} H_{4n}(a, b) & H_{4n}(c, d)\\ H_{4n}(-c, d) & H_{4n}(a, -b) \end{pmatrix}; \qquad (9.70)$$

• Plotkin array of order 16n,

$$H_{16n}(a, b, \ldots, g, h) = \begin{pmatrix} H_{8n}(a, b, c, d) & B_{8n}(e, f, g, h)\\ B_{8n}(-e, f, g, h) & -H_{8n}(-a, b, c, d) \end{pmatrix}, \qquad (9.71)$$

where

$$B_{8n}(a, b, c, d) = \begin{pmatrix} aS + bT & cU + dV\\ -cU - dV & aS + bT \end{pmatrix}. \qquad (9.72)$$
In Ref. 100, a Plotkin array of order 24 was presented, and the following conjecture was given.

Problem for exploration (Plotkin conjecture): There are Plotkin arrays of every order 8n, where n is a positive integer. Only two Plotkin arrays of order 8t are known at this time. These arrays, of orders 8 and 24, are given below.92,100
Example 9.5.1: (a) Plotkin (Williamson) array of order 8:

$$P_8(a, b, \ldots, h) = \begin{pmatrix}
a & b & c & d & e & f & g & h\\
-b & a & d & -c & f & -e & -h & g\\
-c & -d & a & b & g & h & -e & -f\\
-d & c & -b & a & h & -g & f & -e\\
-e & -f & -g & -h & a & b & c & d\\
-f & e & -h & g & -b & a & -d & c\\
-g & h & e & -f & -c & d & a & -b\\
-h & -g & f & e & -d & -c & b & a
\end{pmatrix}. \qquad (9.73)$$
(b) Plotkin array of order 24:

$$P_{24}(x_1, x_2, \ldots, x_8) = \begin{pmatrix}
A(x_1, x_2, x_3, x_4) & B(x_5, x_6, x_7, x_8)\\
B(-x_5, x_6, x_7, x_8) & -A(-x_1, x_2, x_3, x_4)
\end{pmatrix}, \qquad (9.74)$$

where $A(x_1, x_2, x_3, x_4)$ is from Section 9.2 (see Example 9.2.2), and
$$B(x, y, z, w) = \begin{pmatrix}
y & x & x & x & -w & w & z & y & -z & z & w & -y\\
-x & y & x & -x & -z & z & -w & -y & w & -w & z & -y\\
-x & -x & y & x & -y & -w & y & -w & -z & -z & w & z\\
-x & x & -x & y & w & w & -z & -w & -y & z & y & z\\
-w & -w & -z & -y & z & x & x & x & -y & -y & z & -w\\
y & y & -z & -w & -x & z & x & -x & -w & -w & -z & y\\
-w & w & -w & -y & -x & -x & z & x & z & y & y & z\\
z & -w & -w & z & -x & x & -x & z & y & -y & y & w\\
z & -z & y & -w & y & y & w & -z & w & x & x & x\\
y & -y & -z & -w & -z & -z & -w & -y & -x & w & x & -x\\
z & z & y & -z & w & -y & -y & w & -x & -x & w & x\\
-w & -z & w & -z & -y & y & -y & z & -x & x & -x & w
\end{pmatrix}. \qquad (9.75)$$
9.6 Welch Arrays
In this section, we describe a special subclass of OD(n; s1, s2, . . . , sk) known asWelch-type ODs. Welch arrays were originally defined over cyclic groups, and thedefinition was extended to matrices over finite Abelian groups.101 Next, we presenttwo examples of Welch arrays.
(a) A Welch array of order 20 (Ref. 97) has the following form, $[\text{Welch}]_{20} = (W_{i,j})_{i,j=1}^{4}$,
where
$$W_{1,1} = \begin{pmatrix}
-d & b & -c & -c & -b\\
-b & -d & b & -c & -c\\
-c & -b & -d & b & -c\\
-c & -c & -b & -d & b\\
b & -c & -c & -b & -d
\end{pmatrix}, \quad
W_{1,2} = \begin{pmatrix}
c & a & -d & -d & -a\\
-a & c & a & -d & -d\\
-d & -a & c & a & -d\\
-d & -d & -a & c & a\\
a & -d & -d & -a & c
\end{pmatrix}, \qquad (9.76a)$$

$$W_{1,3} = \begin{pmatrix}
-b & -a & c & -c & -a\\
-a & -b & -a & c & -c\\
-c & -a & -b & -a & c\\
c & -c & -a & -b & -a\\
-a & c & -c & -a & -b
\end{pmatrix}, \quad
W_{1,4} = \begin{pmatrix}
a & -b & -d & d & -b\\
-b & a & -b & -d & d\\
d & -b & a & -b & -d\\
-d & d & -b & a & -b\\
-b & -d & d & -b & a
\end{pmatrix}, \qquad (9.76b)$$

$$W_{2,1} = \begin{pmatrix}
-c & a & d & d & -a\\
-a & -c & a & d & d\\
d & -a & -c & a & d\\
d & d & -a & -c & a\\
a & d & d & -a & -c
\end{pmatrix}, \quad
W_{2,2} = \begin{pmatrix}
-d & -b & -c & -c & b\\
b & -d & -b & -c & -c\\
-c & b & -d & -b & -c\\
-c & -c & b & -d & -b\\
-b & -c & -c & b & -d
\end{pmatrix}, \qquad (9.76c)$$

$$W_{2,3} = \begin{pmatrix}
-a & b & -d & d & b\\
b & -a & b & -d & d\\
d & b & -a & b & -d\\
-d & d & b & -a & b\\
b & -d & d & b & -a
\end{pmatrix}, \quad
W_{2,4} = \begin{pmatrix}
-b & -a & -c & c & -a\\
-a & -b & -a & -c & c\\
c & -a & -b & -a & -c\\
-c & c & -a & -b & -a\\
-a & -c & c & -a & -b
\end{pmatrix}, \qquad (9.76d)$$
$$W_{3,1} = \begin{pmatrix}
-b & -a & -c & c & -d\\
-d & -b & -a & -c & c\\
c & -d & -b & -a & -c\\
-c & c & -d & -b & -a\\
-a & -c & c & -d & -b
\end{pmatrix}, \quad
W_{3,2} = \begin{pmatrix}
a & b & -d & d & b\\
b & a & b & -d & d\\
d & b & a & b & -d\\
-d & d & b & a & b\\
b & -d & d & b & a
\end{pmatrix}, \qquad (9.76e)$$

$$W_{3,3} = \begin{pmatrix}
-d & -b & c & c & b\\
b & -d & -b & c & c\\
c & b & -d & -b & c\\
c & c & b & -d & -b\\
-b & c & c & b & -d
\end{pmatrix}, \quad
W_{3,4} = \begin{pmatrix}
-c & a & -d & -d & -a\\
-a & -c & a & -d & -d\\
-d & -a & -c & a & -d\\
-d & -d & -a & -c & a\\
a & -d & -d & -a & -c
\end{pmatrix}, \qquad (9.76f)$$

$$W_{4,1} = \begin{pmatrix}
-a & -b & -d & d & -b\\
-b & -a & -b & -d & d\\
d & -b & -a & -b & -d\\
-d & d & -b & -a & -b\\
-b & -d & d & -b & -a
\end{pmatrix}, \quad
W_{4,2} = \begin{pmatrix}
b & -a & c & -c & -a\\
-a & b & -a & c & -c\\
-c & -a & b & -a & c\\
c & -c & -a & b & -a\\
-a & c & -c & -a & b
\end{pmatrix}, \qquad (9.76g)$$

$$W_{4,3} = \begin{pmatrix}
c & a & d & d & -a\\
-a & c & a & d & d\\
d & -a & c & a & d\\
d & d & -a & c & a\\
a & d & d & -a & c
\end{pmatrix}, \quad
W_{4,4} = \begin{pmatrix}
-d & b & c & c & -b\\
-b & -d & b & c & c\\
c & -b & -d & b & c\\
c & c & -b & -d & b\\
b & c & c & -b & -d
\end{pmatrix}. \qquad (9.76h)$$
(b) A Welch array of order 36, constructed by Ono, Sawade, and Yamamoto.92,97 This array has the following form:

$$\begin{pmatrix}
A_1 & A_2 & A_3 & A_4\\
B_1 & B_2 & B_3 & B_4\\
C_1 & C_2 & C_3 & C_4\\
D_1 & D_2 & D_3 & D_4
\end{pmatrix}, \qquad (9.77)$$
where
$$A_1 = \begin{pmatrix}
a & a & a & b & c & d & -b & -d & -c\\
a & a & a & d & b & c & -c & -b & -d\\
a & a & a & c & d & b & -d & -c & -b\\
-b & -d & -c & a & a & a & b & c & d\\
-c & -b & -d & a & a & a & d & b & c\\
-d & -c & -b & a & a & a & c & d & b\\
b & c & d & -b & -d & -c & a & a & a\\
d & b & c & -c & -b & -d & a & a & a\\
c & d & b & -d & -c & -b & a & a & a
\end{pmatrix}, \quad
A_2 = \begin{pmatrix}
b & -a & a & b & c & -d & b & d & -c\\
a & b & -a & -d & b & c & -c & b & d\\
-a & a & b & c & -d & b & d & -c & b\\
b & c & -d & b & -a & a & b & c & -d\\
-d & b & c & a & b & -a & -d & b & c\\
c & -d & b & -a & a & b & c & -d & b\\
b & c & -d & -a & a & b & b & -a & a\\
-d & b & c & b & -a & a & a & b & -a\\
c & -d & b & a & b & -a & -a & a & b
\end{pmatrix}, \qquad (9.78a)$$
$$A_3 = \begin{pmatrix}
c & -a & a & -b & c & d & b & -d & c\\
a & c & -a & d & -b & c & c & b & -d\\
-a & a & c & c & d & -b & -d & c & b\\
b & -d & c & c & -a & a & -b & c & d\\
c & b & -d & a & c & -a & d & -b & c\\
-d & c & b & -a & a & c & c & d & -b\\
-b & c & d & b & -d & c & c & -a & a\\
d & -b & c & c & b & -d & a & c & -a\\
c & d & -b & -d & c & b & -a & a & c
\end{pmatrix}, \quad
A_4 = \begin{pmatrix}
d & -a & a & b & -c & d & -b & d & c\\
a & d & -a & d & b & -c & c & -b & d\\
-a & a & d & -c & d & b & d & c & -b\\
-b & d & c & d & -a & a & b & -c & d\\
c & -b & d & a & d & -a & d & b & -c\\
d & c & -b & -a & a & d & -c & d & b\\
b & -c & d & -b & d & c & d & -a & a\\
d & b & -c & c & -b & d & a & d & -a\\
-c & d & b & d & c & -b & -a & a & d
\end{pmatrix}, \qquad (9.78b)$$

$$B_1 = \begin{pmatrix}
-b & a & -a & -b & c & -d & -b & d & -c\\
-a & -b & a & -d & -b & c & -c & -b & d\\
a & -a & -b & c & -d & -b & d & -c & -b\\
-b & d & -c & -b & a & -a & -b & c & -d\\
-c & -b & d & -a & -b & a & -d & -b & c\\
d & -c & -b & a & -a & -b & c & -d & -b\\
-b & c & -d & -b & d & -c & -b & a & -a\\
-d & -b & c & -c & -b & d & -a & -b & a\\
c & -d & -b & d & -c & -b & a & -a & -b
\end{pmatrix}, \quad
B_2 = \begin{pmatrix}
a & a & a & b & -c & -d & -b & d & -c\\
a & a & a & -d & b & -c & -c & -b & d\\
a & a & a & -c & -d & b & d & -c & -b\\
-b & d & -c & a & a & a & b & -c & -d\\
-c & -b & d & a & a & a & -d & b & -c\\
d & -c & -b & a & a & a & -c & -d & b\\
b & -c & -d & -b & d & -c & a & a & a\\
-d & b & -c & -c & -b & d & a & a & a\\
-c & -d & b & d & -c & -b & a & a & a
\end{pmatrix}, \qquad (9.78c)$$

$$B_3 = \begin{pmatrix}
-d & -a & a & b & -c & -d & -b & -d & -c\\
a & -d & -a & -d & b & -c & -c & -b & -d\\
-a & a & -d & -c & -d & b & -d & -c & -b\\
-b & -d & -c & -d & -a & a & b & c & -d\\
-c & -b & -d & a & -d & -a & -d & b & c\\
-d & -c & -b & -a & a & -d & c & -d & b\\
b & c & -d & -b & -d & -c & -d & -a & a\\
-d & b & c & -c & -b & -d & a & -d & -a\\
c & -d & b & -d & -c & -b & -a & a & -d
\end{pmatrix}, \quad
B_4 = \begin{pmatrix}
c & a & -a & b & c & d & -b & -d & c\\
-a & c & a & d & b & c & c & -b & -d\\
a & -a & c & c & d & b & -d & c & -b\\
-b & -d & c & c & a & -a & b & c & d\\
c & -b & -d & -a & c & a & d & b & c\\
-d & c & -b & a & -a & c & c & d & b\\
b & c & d & -b & -d & c & c & a & -a\\
d & b & c & c & -b & -d & -a & c & a\\
c & d & b & -d & c & -b & a & -a & c
\end{pmatrix}, \qquad (9.78d)$$

$$C_1 = \begin{pmatrix}
-c & a & -a & -b & -c & d & b & -d & -c\\
-a & -c & a & d & -b & -c & c & b & -d\\
a & -a & -c & -c & d & -b & -d & c & b\\
b & -d & -c & -c & a & -a & -b & -c & d\\
-c & b & -d & -a & -c & a & d & -b & -c\\
-d & -c & b & a & -a & -c & -c & d & -b\\
-b & -c & d & b & -d & -c & -c & a & -a\\
d & -b & -c & -c & b & -d & -a & -c & a\\
-c & d & -b & -d & -c & b & a & -a & -c
\end{pmatrix}, \quad
C_2 = \begin{pmatrix}
d & a & -a & b & c & d & -b & d & -c\\
-a & d & a & d & b & c & -c & -b & d\\
a & -a & d & c & d & b & d & -c & -b\\
-b & d & -c & d & a & -a & b & c & d\\
-c & -b & d & -a & d & a & d & b & c\\
d & -c & -b & a & -a & d & c & d & b\\
b & c & d & -b & d & -c & d & a & -a\\
d & b & c & -c & -b & d & -a & d & a\\
c & d & b & d & -c & -b & a & -a & d
\end{pmatrix}, \qquad (9.78e)$$
$$C_3 = \begin{pmatrix}
a & a & a & -b & c & -d & b & d & -c\\
a & a & a & -d & -b & c & -c & b & d\\
a & a & a & c & -d & -b & d & -c & b\\
b & d & -c & a & a & a & -b & c & -d\\
-c & b & d & a & a & a & -d & -b & c\\
d & -c & b & a & a & a & c & -d & -b\\
-b & c & -d & b & d & -c & a & a & a\\
-d & -b & c & -c & b & d & a & a & a\\
c & -d & -b & d & -c & b & a & a & a
\end{pmatrix}, \quad
C_4 = \begin{pmatrix}
-b & -a & a & -b & c & d & -b & -d & -c\\
a & -b & -a & d & -b & c & -c & -b & -d\\
-a & a & -b & c & d & -b & -d & -c & -b\\
-b & -d & -c & -b & -a & a & -b & c & d\\
-c & -b & -d & a & -b & -a & d & -b & c\\
-d & -c & -b & -a & a & -b & c & d & -b\\
-b & c & d & -b & -d & -c & -b & -a & a\\
d & -b & c & -c & -b & -d & a & -b & -a\\
c & d & -b & -d & -c & -b & -a & a & -b
\end{pmatrix}, \qquad (9.78f)$$

$$D_1 = \begin{pmatrix}
-d & a & -a & b & -c & -d & -b & -d & c\\
-a & -d & a & -d & b & -c & c & -b & -d\\
a & -a & -d & -c & -d & b & -d & c & -b\\
-b & -d & c & -d & a & -a & b & -c & -d\\
c & -b & -d & -a & -d & a & -d & b & -c\\
-d & c & -b & a & -a & -d & -c & -d & b\\
b & -c & -d & -b & -d & c & -d & a & -a\\
-d & b & -c & c & -b & -d & -a & -d & a\\
-c & -d & b & -d & c & -b & a & -a & -d
\end{pmatrix}, \quad
D_2 = \begin{pmatrix}
-c & -a & a & b & -c & d & -b & -d & -c\\
a & -c & -a & d & b & -c & -c & -b & -d\\
-a & a & -c & -c & d & b & -d & -c & -b\\
-b & -d & -c & -c & -a & a & b & -c & d\\
-c & -b & -d & a & -c & -a & d & b & -c\\
-d & c & -b & -a & a & -c & -c & d & b\\
b & -c & d & -b & -d & -c & -c & -a & a\\
d & b & -c & -c & -b & -d & a & -c & -a\\
-c & d & b & -d & -c & -b & -a & a & -c
\end{pmatrix}, \qquad (9.78g)$$

$$D_3 = \begin{pmatrix}
b & a & -a & b & c & d & b & -d & -c\\
-a & b & a & d & b & c & -c & b & -d\\
a & -a & b & c & d & b & -d & -c & b\\
b & -d & -c & b & a & -a & b & c & d\\
-c & b & -d & -a & b & a & d & b & c\\
-d & c & b & a & -a & b & c & d & b\\
b & c & d & b & -d & -c & b & a & -a\\
d & b & c & -c & b & -d & -a & b & a\\
c & d & b & -d & -c & b & a & -a & b
\end{pmatrix}, \quad
D_4 = \begin{pmatrix}
a & a & a & -b & -c & d & b & -d & c\\
a & a & a & d & -b & -c & c & b & -d\\
a & a & a & -c & d & -b & -d & c & b\\
b & -d & c & a & a & a & -b & -c & d\\
c & b & -d & a & a & a & d & -b & -c\\
-d & c & b & a & a & a & -c & d & -b\\
-b & -c & d & b & -d & c & a & a & a\\
d & -b & -c & c & b & -d & a & a & a\\
-c & d & -b & -d & c & b & a & a & a
\end{pmatrix}. \qquad (9.78h)$$
Now, if $X_i$, i = 1, 2, 3, 4, are T matrices of order k, then by substituting the matrices

$$A = \sum_{i=1}^{4} A_i \otimes X_i, \quad
B = \sum_{i=1}^{4} B_i \otimes X_i, \quad
C = \sum_{i=1}^{4} C_i \otimes X_i, \quad
D = \sum_{i=1}^{4} D_i \otimes X_i \qquad (9.79)$$

into the array in Eq. (9.20), we obtain the Baumert–Hall array of order 4 · 9k. Using the Welch array of order 20, we can similarly obtain the Baumert–Hall array of order 4 · 5k. Hence, from Remark 9.2.1 and Theorem 9.2.1, we have the following:
Corollary 9.6.1: There are Baumert–Hall arrays of orders 4k, 20k, and 36k,where k ∈ M.
Corollary 9.6.2: (a) The matrices in Eq. (9.79) are the generalized parametric Williamson matrices of order 9k.

(b) Matrices $\{A_i\}_{i=1}^{4}$, $\{B_i\}_{i=1}^{4}$, $\{C_i\}_{i=1}^{4}$, and $\{D_i\}_{i=1}^{4}$ are generalized parametric Williamson matrices of order 9. Furthermore, the array in Eq. (9.77), where $A_i$, $B_i$, $C_i$, $D_i$ are mutually commutative parametric matrices, will be called a Welch-type array.
Theorem 9.6.1: Let there be a Welch array of order 4k. Then, there is also a Welch array of order 4k(p + 1), where p ≡ 1 (mod 4) is a prime power.
Proof: Let Eq. (9.9) be a Welch array of order 4k, i.e., $\{A_i\}_{i=1}^{4}$, $\{B_i\}_{i=1}^{4}$, $\{C_i\}_{i=1}^{4}$, $\{D_i\}_{i=1}^{4}$ are parametric matrices of order k satisfying the conditions

$$\begin{aligned}
&PQ = QP, \quad P, Q \in \{A_i, B_i, C_i, D_i\},\\
&\sum_{i=1}^{4} A_iB_i^T = \sum_{i=1}^{4} A_iC_i^T = \sum_{i=1}^{4} A_iD_i^T
= \sum_{i=1}^{4} B_iC_i^T = \sum_{i=1}^{4} B_iD_i^T = \sum_{i=1}^{4} C_iD_i^T = 0,\\
&\sum_{i=1}^{4} A_iA_i^T = \sum_{i=1}^{4} B_iB_i^T = \sum_{i=1}^{4} C_iC_i^T
= \sum_{i=1}^{4} D_iD_i^T = k(a^2 + b^2 + c^2 + d^2)I_k.
\end{aligned} \qquad (9.80)$$
Now, let p ≡ 1 (mod 4) be a prime power. According to Ref. 8, there exist cyclic symmetric Williamson matrices of order (p + 1)/2 of the form I + A, I − A, B, B. Consider the matrices

$$X = \begin{pmatrix} I & B\\ B & -I \end{pmatrix}, \qquad
Y = \begin{pmatrix} A & 0\\ 0 & A \end{pmatrix}. \qquad (9.81)$$
We can verify that the (0, ±1) matrices X, Y of order p + 1 satisfy the conditions

$$\begin{aligned}
&X * Y = 0,\\
&X^T = X, \quad Y^T = Y,\\
&XY = YX,\\
&X \pm Y \text{ is a } (+1, -1) \text{ matrix},\\
&X^2 + Y^2 = (p + 1)I_{p+1}.
\end{aligned} \qquad (9.82)$$
Now we introduce the following matrices:

$$\begin{aligned}
X_1 &= X \otimes A_1 + Y \otimes A_2, & X_2 &= X \otimes A_2 - Y \otimes A_1, &
X_3 &= X \otimes A_3 + Y \otimes A_4, & X_4 &= X \otimes A_4 - Y \otimes A_3;\\
Y_1 &= X \otimes B_1 + Y \otimes B_2, & Y_2 &= X \otimes B_2 - Y \otimes B_1, &
Y_3 &= X \otimes B_3 + Y \otimes B_4, & Y_4 &= X \otimes B_4 - Y \otimes B_3;\\
Z_1 &= X \otimes C_1 + Y \otimes C_2, & Z_2 &= X \otimes C_2 - Y \otimes C_1, &
Z_3 &= X \otimes C_3 + Y \otimes C_4, & Z_4 &= X \otimes C_4 - Y \otimes C_3;\\
W_1 &= X \otimes D_1 + Y \otimes D_2, & W_2 &= X \otimes D_2 - Y \otimes D_1, &
W_3 &= X \otimes D_3 + Y \otimes D_4, & W_4 &= X \otimes D_4 - Y \otimes D_3.
\end{aligned} \qquad (9.83)$$
Let us prove that the parametric matrices in Eq. (9.83) of order k(p + 1) satisfy the conditions of Eq. (9.80). The first condition is evident. We will prove the second condition of Eq. (9.80):

$$\begin{aligned}
X_1Y_1^T &= X^2 \otimes A_1B_1^T + XY \otimes A_1B_2^T + YX \otimes A_2B_1^T + Y^2 \otimes A_2B_2^T,\\
X_2Y_2^T &= X^2 \otimes A_2B_2^T - XY \otimes A_2B_1^T - YX \otimes A_1B_2^T + Y^2 \otimes A_1B_1^T.
\end{aligned} \qquad (9.84)$$

Summing these expressions and taking into account the conditions of Eq. (9.82), we find that

$$X_1Y_1^T + X_2Y_2^T = (p + 1)(A_1B_1^T + A_2B_2^T) \otimes I_{p+1}. \qquad (9.85)$$

By similar calculations, we obtain

$$X_3Y_3^T + X_4Y_4^T = (p + 1)(A_3B_3^T + A_4B_4^T) \otimes I_{p+1}. \qquad (9.86)$$

Now, summing the last two equations, we have

$$\sum_{i=1}^{4} X_iY_i^T = (p + 1)\sum_{i=1}^{4} A_iB_i^T \otimes I_{p+1} = 0. \qquad (9.87)$$
Similarly, we can prove the validity of the second condition of Eq. (9.80) for all matrices $X_i$, $Y_i$, $Z_i$, $W_i$, i = 1, 2, 3, 4.

Now we prove the third condition of Eq. (9.80). We obtain

$$\begin{aligned}
X_1X_1^T &= X^2 \otimes A_1A_1^T + XY \otimes A_1A_2^T + YX \otimes A_2A_1^T + Y^2 \otimes A_2A_2^T,\\
X_2X_2^T &= X^2 \otimes A_2A_2^T - XY \otimes A_2A_1^T - YX \otimes A_1A_2^T + Y^2 \otimes A_1A_1^T.
\end{aligned} \qquad (9.88)$$

Summing, we obtain

$$X_1X_1^T + X_2X_2^T = (X^2 + Y^2) \otimes (A_1A_1^T + A_2A_2^T)
= (p + 1)(A_1A_1^T + A_2A_2^T) \otimes I_{p+1}. \qquad (9.89)$$

Then, we find that

$$X_3X_3^T + X_4X_4^T = (p + 1)(A_3A_3^T + A_4A_4^T) \otimes I_{p+1}. \qquad (9.90)$$

Hence, taking into account the third condition of Eq. (9.80), we have

$$\sum_{i=1}^{4} X_iX_i^T = k(p + 1)(a^2 + b^2 + c^2 + d^2)I_{k(p+1)}. \qquad (9.91)$$
Other conditions can be similarly checked.
Remark 9.6.1: There are Welch arrays of orders 20(p + 1) and 36(p + 1), wherep ≡ 1 (mod 4) is a prime power.
References
1. A. Hurwitz, “Über die Komposition der quadratischen Formen,” Math. Ann. 88 (5), 1–25 (1923).

2. J. Radon, “Lineare Scharen orthogonaler Matrizen,” Abhandlungen aus dem Mathematischen Seminar der Hamburgischen Universität 1, 1–14 (1922).

3. T.-K. Woo, “A novel complex orthogonal design for space–time coding in sensor networks,” Wireless Pers. Commun. 43, 1755–1759 (2007).

4. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of Combinatorial Designs, Ch. J. Colbourn and J. H. Dinitz, Eds., 370–377, CRC Press, Boca Raton (1996).

5. A. V. Geramita and J. Seberry, Orthogonal Designs: Quadratic Forms and Hadamard Matrices, in Lecture Notes in Pure and Applied Mathematics 45, Marcel Dekker, New York (1979).

6. A. V. Geramita, J. M. Geramita, and J. Seberry Wallis, “Orthogonal designs,” Linear Multilinear Algebra 3, 281–306 (1976).

7. A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays: Theory and Applications, Springer-Verlag, New York (1999).

8. R. J. Turyn, “An infinite class of Williamson matrices,” J. Combin. Theory, Ser. A 12, 319–321 (1972).

9. M. Hall Jr., Combinatorics, Blaisdell Publishing Co., Waltham, MA (1970).

10. R. J. Turyn, “Hadamard matrices, Baumert–Hall units, four-symbol sequences, pulse compression, and surface wave encoding,” J. Combin. Theory, Ser. A 16, 313–333 (1974).
11. http://www.uow.edu.au/∼jennie.
12. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and construction ofHadamard matrices,” Prob. Transmission Inf. 16 (3), 50–59 (1982).
13. J. S. Wallis, “On Hadamard matrices,” J. Comb. Theory Ser. A 18, 149–164(1975).
14. S. Agaian and H. Sarukhanyan, “On Plotkin’s hypothesis,” Dokladi NAS RALXVI (5), 11–15 (1978) (in Russian).
15. S. Agaian and H. Sarukhanyan, “Plotkin hypothesis about D(4k, 4) decomposition,” J. Cybernetics and Systems Analysis 18 (4), 420–428 (1982).

16. H. Sarukhanyan, “Construction of new Baumert–Hall arrays and Hadamard matrices,” J. of Contemporary Mathematical Analysis, NAS RA, Yerevan 32 (6), 47–58 (1997).
17. J. M. Goethals and J. J. Seidel, “Orthogonal matrices with zero diagonal,”Can. J. Math. 19, 1001–1010 (1967).
18. K. Sawade, “A Hadamard matrix of order 268,” Graphs Combin. 1, 185–187(1985).
19. J. Bell and D. Z. Djokovic, “Construction of Baumert–Hall–Welch arraysand T-matrices,” Australas. J. Combin. 14, 93–107 (1996).
20. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamard matrices,” Open Syst. Inf. Dyn. 13, 133–177 (2006).
21. H. G. Gadiyar, K. M. S. Maini, R. Padma, and H. S. Sharatchandra, “Entropyand Hadamard matrices,” J. Phys. Ser. A 36, 109–112 (2003).
22. W. B. Bengtsson, A. Ericsson, J. Larsson, W. Tadej and K. Zyczkowski, “Mutually unbiased bases and Hadamard matrices of order six,” J. Math. Phys. 48 (5), 21 (2007).
23. A. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays: Theory andApplications, Springer-Verlag, Berlin (1999).
24. M. Y. Xia, “Some new families of SDSS and Hadamard matrices,” ActaMath. Sci. 16, 153–161 (1996).
25. D. Ž Djokovic, “Ten Hadamard matrices of order 1852 of Goethals–Seideltype,” Eur. J. Combin. 13, 245–248 (1992).
26. J. M. Goethals and J. J. Seidel, “A skew Hadamard matrix of order 36,”J. Austral. Math. Soc. 11, 343–344 (1970).
27. J. Seberry and A. L. Whiteman, “New Hadamard matrices and conference matrices obtained via Mathon’s construction,” Graphs Combin. 4, 355–377 (1988).
28. S. Agaian and H. Sarukhanyan, Williamson type M-structures, in Proc.2nd Int. Workshop on Transforms and Filter Banks, Brandenburg, Germany,223–249 (1999).
29. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Construction ofWilliamson type matrices and Baumert–Hall, Welch and Plotkin arrays,in Proc. Int. Workshop on Spectral Transforms and Logic Design forFuture Digital Systems (SPECLOG-2000), Tampere, Finland, TICSP Ser. 10,189–205 (2000).
30. W. H. Holzmann and H. Kharaghani, “On the amicability of orthogonaldesigns,” J. Combin. Des. 17, 240–252 (2009).
31. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogonal designs and construction algorithms,” in DESIGNS 2002: Further Computational and Constructive Design Theory, W. D. Wallis, Ed., 133–205, Kluwer Academic Publishers, Dordrecht (2003).

32. J. Cooper and J. Wallis, “A construction for Hadamard arrays,” Bull. Austral. Math. Soc. 7, 269–278 (1972).

33. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and Hadamard matrices,” Combinatorial Mathematics VII, Lecture Notes in Mathematics, Springer 829, 220–223 (1980).
34. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and applications,” IEEE Trans. Inf. Theory 27 (6), 772–779 (1981).

35. M. Xia, T. Xia, J. Seberry, and J. Wu, “An infinite family of Goethals–Seidel arrays,” Discrete Appl. Math. 145 (3), 498–504 (2005).

36. Ch. Koukouvinos and J. Seberry, “Orthogonal designs of Kharaghani type: II,” Ars. Combin. 72, 23–32 (2004).

37. S. Georgiou, Ch. Koukouvinos, and J. Seberry, “On full orthogonal designs in order 56,” Ars. Combin. 65, 79–89 (2002).

38. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogonal designs and construction algorithms,” in DESIGNS 2002: Further Computational and Constructive Design Theory, W. D. Wallis, Ed., 133–205, Kluwer Academic Publishers, Dordrecht (2003).

39. L. D. Baumert, “Hadamard matrices of orders 116 and 232,” Bull. Am. Math. Soc. 72 (2), 237 (1966).
40. L. D. Baumert, Cyclic Difference Sets, Lecture Notes in Mathematics, 182,Springer-Verlag, Berlin (1971).
41. L. D. Baumert and M. Hall Jr., “Hadamard matrices of the Williamson type,”Math. Comput. 19, 442–447 (1965).
42. T. Chadjipantelis and S. Kounias, “Supplementary difference sets and D-optimal designs for n ≡ 2 (mod 4),” Discrete Math. 57, 211–216 (1985).

43. R. J. Fletcher, M. Gysin, and J. Seberry, “Application of the discrete Fourier transform to the search for generalized Legendre pairs and Hadamard matrices,” Australas. J. Combin. 23, 75–86 (2001).

44. S. Georgiou and C. Koukouvinos, “On multipliers of supplementary difference sets and D-optimal designs for n ≡ 2 (mod 4),” Utilitas Math. 56, 127–136 (1999).

45. S. Georgiou and C. Koukouvinos, “On amicable sets of matrices and orthogonal designs,” Int. J. Appl. Math. 4, 211–224 (2000).

46. S. Georgiou, C. Koukouvinos, M. Mitrouli, and J. Seberry, “Necessary and sufficient conditions for two variable orthogonal designs in order 44: addendum,” J. Combin. Math. Combin. Comput. 34, 59–64 (2000).

47. S. Georgiou, C. Koukouvinos, M. Mitrouli, and J. Seberry, “A new algorithm for computer searches for orthogonal designs,” J. Combin. Math. Combin. Comput. 39, 49–63 (2001).

48. M. Gysin and J. Seberry, “An experimental search and new combinatorial designs via a generalization of cyclotomy,” J. Combin. Math. Combin. Comput. 27, 143–160 (1998).

49. M. Gysin and J. Seberry, “On new families of supplementary difference sets over rings with short orbits,” J. Combin. Math. Combin. Comput. 28, 161–186 (1998).
50. W. H. Holzmann and H. Kharaghani, “On the Plotkin arrays,” Australas. J.Combin. 22, 287–299 (2000).
51. W. H. Holzmann and H. Kharaghani, “On the orthogonal designs of order 24,” Discrete Appl. Math. 102 (1–2), 103–114 (2000).

52. W. H. Holzmann and H. Kharaghani, “On the orthogonal designs of order 40,” J. Stat. Plan. Inference 96, 415–429 (2001).

53. N. Ito, J. S. Leon, and J. Q. Longyear, “Classification of 3-(24, 12, 5) designs and 24-dimensional Hadamard matrices,” J. Combin. Theory, Ser. A 31, 66–93 (1981).

54. Z. Janko, “The existence of a Bush-type Hadamard matrix of order 36 and two new infinite classes of symmetric designs,” J. Combin. Theory, Ser. A 95 (2), 360–364 (2001).
55. H. Kimura, “Classification of Hadamard matrices of order 28 with Hall sets,”Discrete Math. 128 (1–3), 257–268 (1994).
56. H. Kimura, “Classification of Hadamard matrices of order 28,” Discrete Math. 133 (1–3), 171–180 (1994).
57. C. Koukouvinos, “Some new orthogonal designs of order 36,” Utilitas Math.51, 65–71 (1997).
58. C. Koukouvinos, “Some new three and four variable orthogonal designs in order 36,” J. Stat. Plan. Inference 73, 21–27 (1998).

59. C. Koukouvinos, M. Mitrouli, and J. Seberry, “Necessary and sufficient conditions for some two variable orthogonal designs in order 44,” J. Combin. Math. Combin. Comput. 28, 267–286 (1998).

60. C. Koukouvinos, M. Mitrouli, J. Seberry, and P. Karabelas, “On sufficient conditions for some orthogonal designs and sequences with zero autocorrelation function,” Australas. J. Combin. 13, 197–216 (1996).

61. C. Koukouvinos and J. Seberry, “New weighing matrices and orthogonal designs constructed using two sequences with zero autocorrelation function—a review,” J. Statist. Plan. Inference 81, 153–182 (1999).

62. C. Koukouvinos and J. Seberry, “New orthogonal designs and sequences with two and three variables in order 28,” Ars. Combin. 54, 97–108 (2000).

63. C. Koukouvinos and J. Seberry, “Infinite families of orthogonal designs: I,” Bull. Inst. Combin. Appl. 33, 35–41 (2001).

64. C. Koukouvinos and J. Seberry, “Short amicable sets and Kharaghani type orthogonal designs,” Bull. Austral. Math. Soc. 64, 495–504 (2001).

65. C. Koukouvinos, J. Seberry, A. L. Whiteman, and M. Y. Xia, “Optimal designs, supplementary difference sets and multipliers,” J. Statist. Plan. Inference 62, 81–90 (1997).
66. S. Kounias, C. Koukouvinos, N. Nikolaou, and A. Kakos, “The non-equivalent circulant D-optimal designs for n ≡ 2 (mod4), n ≤ 54, n = 66,”J. Combin. Theory, Ser. A 65 (1), 26–38 (1994).
67. C. Lam, S. Lam, and V. D. Tonchev, “Bounds on the number of affine, symmetric, and Hadamard designs and matrices,” J. Combin. Theory, Ser. A 92 (2), 186–196 (2000).

68. J. Seberry and R. Craigen, “Orthogonal designs,” in Handbook of Combinatorial Designs, C. J. Colbourn and J. H. Dinitz, Eds., 400–406, CRC Press, Boca Raton (1996).
69. S. A. Tretter, Introduction to Discrete-time Signal Processing, John Wiley &Sons, Hoboken, NJ (1976).
70. R. J. Turyn, “An infinite class of Williamson matrices,” J. Combin. Theory,Ser. A 12, 319–321 (1972).
71. A. L. Whiteman, “An infinite family of Hadamard matrices of Williamsontype,” J. Combin. Theory, Ser. A 14, 334–340 (1973).
72. I. S. Kotsireas, C. Koukouvinos, and J. Seberry, “Hadamard ideals and Hadamard matrices with two circulant cores,” Eur. J. Combin. 27 (5), 658–668 (2006).

73. I. Bouyukliev, V. Fack, and J. Winne, “2-(31,15,7), 2-(35,17,8) and 2-(36,15,6) designs with automorphisms of odd prime order, and their related Hadamard matrices and codes,” Des. Codes Cryptog. 51 (2), 105–122 (2009).

74. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard Matrices, Orthogonal Designs and Construction Algorithms,” in DESIGNS 2002: Further Computational and Constructive Design Theory, W. D. Wallis, Ed., Kluwer Academic Publishers, Dordrecht (2003).

75. V. D. Tonchev, Combinatorial Configurations, Designs, Codes, Graphs, John Wiley & Sons, Hoboken, NJ (1988).

76. C. Koukouvinos and D. E. Simos, “Improving the lower bounds on inequivalent Hadamard matrices, through orthogonal designs and metaprogramming techniques,” Appl. Numer. Math. 60 (4), 370–377 (2010).
77. J. Seberry, K. Finlayson, S. S. Adams, T. A. Wysocki, T. Xia, and B. J.Wysocki, “The theory of quaternion orthogonal designs,” IEEE Trans. SignalProc. 56 (1), 256–265 (2009).
78. S. M. Alamouti, “A simple transmit diversity technique for wirelesscommunications,” IEEE J. Sel. Areas Commun. 16 (8), 1451–1458 (1998).
79. X.-B. Liang, “Orthogonal designs with maximal rates,” IEEE Trans. Inform.Theory 49 (10), 2468–2503 (2003).
80. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space-time block codesfrom orthogonal designs,” IEEE Trans. Inf. Theory 45 (5), 1456–1467 (1999).
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
306 Chapter 9
81. C. Yuen, Y. L. Guan and T. T. Tjhung, Orthogonal space-time block codefrom amicable complex orthogonal design, in Proc, of IEEE Int. Conf. onAcoustics, Speech, and Signal Processing (ICASSP), Vol. 4, pp. 469–472(2004).
82. Z. Chen, G. Zhu, J. Shen, and Y. Liu, “Differential space-time block codesfrom amicable orthogonal designs,” IEEE Wireless Commun. Networking 2,768–772 (2003).
83. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Space-time codesfrom Hadamard matrices, presented at URSI/FWCW’01, Finnish WirelessCommunications Workshop, 23–24 Oct., Tampere, Finland (2001).
84. S. L. Altmann, Rotations, Quaternions, and Double Groups, Clarendo Press,Oxford (1986).
85. A. R. Calderbank, S. Das, N. Al-Dhahir, and S. N. Diggavi, “Constructionand analysis of a new quaternionic space-time code for 4 transmit antennas,”Commun. Inf. Syst. 5 (1), 1–26 (2005).
86. B. S. Collins, “Polarization-diversity antennas for compact base stations,”Microwave J. 43 (1), 76–88 (2000).
87. C. Charnes, J. Pieprzyk and R. Safavi-Naini, Crypto Topics and ApplicationsII, Faculty of Informatics—Papers, 1999, http://en.scientificcommons.org/j_seberry.
88. I. Oppermann and B. S. Vucetic, “Complex spreading sequences with a widerange of correlation properties,” IEEE Trans. Commun. COM-45, 365–375(1997).
89. B. J. Wysocki and T. Wysocki, “Modified Walsh-Hadamard sequences forDS CDMA wireless systems,” Int. J. Adapt. Control Signal Process. 16,589–602 (2002).
90. S. Tseng and M. R. Bell, “Asynchronous multicarrier DS-CDMA usingmutually orthogonal complementary sets of sequences,” IEEE Trans.Commun. 48, 53–59 (2000).
91. L. C. Tran, Y. Wang, B. J. Wysocki, T. A. Wysocki, T. Xia and Y. Zhao, Twocomplex orthogonal space-time codes for eight transmit antennas, Faculty ofInformatics—Papers, 2004, http://en.scientificcommons.org/j_seberry.
92. J. Seberry and M. Yamada, Hadamard Matrices, Sequences and BlockDesigns. Surveys in Contemporary Design Theory, Wiley-Interscience Seriesin Discrete Mathematics, Wiley, Hoboken, NJ (1992).
93. J. Seberry, K. Finlayson, S. S. Adams, T. Wysocki, T. Xia, and B. Wysocki,“The theory of quaternion orthogonal designs,” IEEE Trans. Signal Process.56 (1), 256–265 (2008).
94. S. L. Altmann, Rotations, Quaternions, and Double Groups, Clarendo Press,Oxford (1986).
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Orthogonal Arrays 307
95. S. S. Adams, J. Seberry, N. Karst, J. Pollack, and T. Wysocki, “Quanternionorthogonal designs,” Linear Algebra Appl. 428 (4), 1056–1071 (2008).
96. B. J. Wysocki and T. A. Wysocki, “On an orthogonal space-time-polarizationblock code,” J. Commun. 4 (1), 20–25 (2009).
97. W. D. Wallis, A. P. Street and J. S. Wallis, Combinatorics: Room Squares,Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics 292,Springer, Berlin (1972).
98. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes inMathematics, Springer, Berlin 1168 (1985).
99. H. G. Sarukhanyan, “On Goethals–Seidel arrays,” Sci. Notes YSU, Armenia1, 12–19 (1979) (in Russian).
100. M. Plotkin, “Decomposition of Hadamard matrices,” J. Comb. Theory, Ser.A 12, 127–130 (1972).
101. J. Bell and D. Z. Djokovic, “Construction of Baumert–Hall-Welch arrays andT-matrices,” Australas. J. Combin. 14, 93–109 (1996).
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
Chapter 10
Higher-Dimensional Hadamard Matrices
High-dimensional Hadamard matrices can be found in nature; e.g., a typical model of a rock salt crystal is a 3D Hadamard matrix of order 4 (see Fig. 10.1).

Higher-dimensional Hadamard matrices were introduced several decades ago. Shlichta was the first to construct examples of n-dimensional Hadamard matrices.3,5 He proposed procedures for generating the simplest 3D, 4D, and 5D Hadamard matrices. In particular, he put special emphasis on the construction of "proper" matrices, which have a dimensional hierarchy of orthogonalities. This property is key for many applications, such as error-correction codes and security systems. Shlichta also suggested a number of unsolved problems and unproven conjectures, as follows:
• The algebraic approach to the derivation of 2D Hadamard matrices (see Chapters 1 and 4) suggests that a similar procedure may be feasible for 3D or higher matrices.
• Just as families of 2D Hadamard matrices (such as skew and Williamson matrices) have been defined, it may be possible to identify families of higher-dimensional matrices, especially families that extend over a range of dimensions as well as orders.
• An algorithm may be developed for deriving a completely proper n^3 (or n^m) Hadamard matrix from one that is n^2.
• Two-dimensional Hadamard matrices exist only in orders of 1, 2, or 4t. No such restriction has yet been established for higher dimensions. There may be absolutely improper n^3 or n^m Hadamard matrices of order n = 2s ≠ 4t.

Figure 10.1 Rock salt crystal: The black circles represent sodium atoms; the white circles represent chlorine atoms.1,2

Shlichta's work prompted a study of higher-dimensional Hadamard matrix designs. Several articles and books on higher-dimensional Hadamard matrices have been published.1,5–35 In our earlier work,1,5,6,8,27 we presented construction methods for higher-dimensional Williamson–Hadamard and generalized (including Butson) Hadamard matrices, and also introduced (λ, μ)-dimensional Hadamard matrices.
Several interesting methods to construct higher-dimensional Hadamard matrices have been developed,1–30 including the following:

• Agaian and de Launey presented a different way to construct an n-dimensional Hadamard matrix from a given 2D Hadamard matrix.8,25
• Agaian and Egiazarian35 presented (λ, μ)-dimensional Hadamard matrix construction methods.
• Hammer and Seberry developed a very interesting approach to constructing higher-dimensional orthogonal designs and weighing matrices.21,22
• de Launey et al.19 derived first principles of automorphism and equivalence for higher-dimensional Hadamard matrices. In addition, Ma9 investigated the equivalence classes of n-dimensional proper Hadamard matrices.
• de Launey et al.19 constructed proper higher-dimensional Hadamard matrices for all orders 4t ≤ 100, and conference matrices of order q + 1, where q is an odd prime power. We conjecture that such Hadamard matrices exist for all orders v ≡ 0 (mod 4).
The first application of 3D WHTs in signal processing was shown by Harmuth.2
Recently, in Ref. 18, Testoni and Costa created a fast embedded 3D Hadamard color video codec, developed to be executed by a set-top box device on a broadband network. This codec is best suited to systems with complexity and storage limitations, possibly using fixed-point processing, but enjoying high-bit-rate network connections (a low-cost codec that makes use of high-performance links). A survey of higher-dimensional Hadamard matrices and 3D Walsh transforms can be found in Refs. 14, 17, 19, 25, and 32.
This chapter is organized as follows: Section 10.1 presents the mathematical definition and properties of 3D Hadamard matrices; Section 10.2 provides the 3D Williamson–Hadamard matrix construction procedure; Section 10.3 gives a construction method for 3D Hadamard matrices of order 4n + 2; Section 10.4 presents a fast 3D WHT algorithm. Finally, Sections 10.5 and 10.6 cover 3D complex HT construction processes.
Figure 10.2 Three-dimensional Hadamard matrix of size (2 × 2 × 2).
10.1 3D Hadamard Matrices
Definition 10.1.1:3,4 The 3D matrix $H = (h_{i,j,k})_{i,j,k=1}^{n}$ is called a Hadamard matrix if all elements $h_{i,j,k} = \pm 1$ and

$$\sum_{i=1}^{n}\sum_{j=1}^{n} h_{i,j,a}\,h_{i,j,b} = \sum_{j=1}^{n}\sum_{k=1}^{n} h_{a,j,k}\,h_{b,j,k} = \sum_{i=1}^{n}\sum_{k=1}^{n} h_{i,a,k}\,h_{i,b,k} = n^{2}\,\delta_{a,b}, \qquad (10.1)$$

where $\delta_{a,b}$ is the Kronecker delta, i.e., $\delta_{a,a} = 1$ and $\delta_{a,b} = 0$ if $a \neq b$.
Later, Shlichta narrowed this definition to include only those matrices in which all 2D layers $(h_{i_0,j,k})_{j,k=1}^{n}$, $(h_{i,j_0,k})_{i,k=1}^{n}$, and $(h_{i,j,k_0})_{i,j=1}^{n}$, in all axis-normal orientations, are themselves Hadamard matrices of order n.
Definition 10.1.2:4,8,17,32 A 3D Hadamard matrix $H = (h_{i,j,k})_{i,j,k=1}^{n}$ of order n is called a regular 3D Hadamard matrix if the following conditions are satisfied:

$$\begin{aligned}
\sum_{i=1}^{n} h_{i,a,r}\,h_{i,b,r} &= \sum_{j=1}^{n} h_{a,j,r}\,h_{b,j,r} = n\,\delta_{a,b},\\
\sum_{i=1}^{n} h_{i,q,a}\,h_{i,q,b} &= \sum_{k=1}^{n} h_{a,q,k}\,h_{b,q,k} = n\,\delta_{a,b},\\
\sum_{j=1}^{n} h_{p,j,a}\,h_{p,j,b} &= \sum_{k=1}^{n} h_{p,a,k}\,h_{p,b,k} = n\,\delta_{a,b}.
\end{aligned} \qquad (10.2)$$

Matrices satisfying Eq. (10.2) are called "proper" or regular Hadamard matrices. Matrices satisfying Eq. (10.1) but not all of Eq. (10.2) are called "improper."
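The conditions of Eq. (10.2) amount to requiring that every axis-normal 2D layer be a Hadamard matrix of order n, which can be checked layer by layer. A minimal sketch (NumPy assumed; the helper name and the order-2 test cube are our own illustrative choices):

```python
import numpy as np

def is_proper_3d_hadamard(H):
    """Eq. (10.2): every axis-normal 2D layer of H is itself a
    Hadamard matrix of order n (for +-1 matrices, L L^T = n I
    also implies column orthogonality)."""
    n = H.shape[0]
    for axis in range(3):
        for idx in range(n):
            L = np.take(H, idx, axis=axis)   # extract one 2D layer
            if not np.array_equal(L @ L.T, n * np.eye(n, dtype=int)):
                return False
    return True

n = 2
H = np.array([[[(-1)**(i*j + i*k + j*k) for k in range(n)]
               for j in range(n)] for i in range(n)])
print(is_proper_3d_hadamard(H))  # True
```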
A 3D Hadamard matrix of order 2 [or size (2 × 2 × 2)] is presented in Fig. 10.2. Three-dimensional Hadamard matrices of order 2^m (see Figs. 10.3 and 10.4) can be generated as follows:

(1) from m − 1 successive direct products (see Appendix A.1) of 2^3 Hadamard matrices; or
(2) from the direct product of three 2D Hadamard matrices of order m in different orientations.5
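Method (1) can be sketched with NumPy's `np.kron`, which, applied to 3D arrays, takes the Kronecker (direct) product along all three axes at once; the order-2 seed cube used here is an assumed example, not the cube of Fig. 10.2:

```python
import numpy as np

n = 2
H2 = np.array([[[(-1)**(i*j + i*k + j*k) for k in range(n)]
                for j in range(n)] for i in range(n)])

# m - 1 successive direct products of the order-2 cube give a
# 3D Hadamard matrix of order 2^m.
m = 3
H = H2
for _ in range(m - 1):
    H = np.kron(H, H2)

# verify the orthogonality condition of Eq. (10.1) in each direction
N = H.shape[0]
for axis in range(3):
    layers = np.moveaxis(H, axis, 0).reshape(N, -1)
    assert np.array_equal(layers @ layers.T, N**2 * np.eye(N, dtype=int))
print("3D Hadamard matrix of order", N, "verified")
```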
Figure 10.3 Illustrative example of Hadamard matrices as direct products of 2^m Hadamard matrices (courtesy of IEEE5).

Figure 10.4 Illustrative example of generation of 4^3 improper Hadamard matrices by the direct product of three mutually perpendicular 2D Hadamard matrices (courtesy of IEEE5).
Shlichta noted that 3D matrices are 3D Hadamard matrices if the following hold:

• All layers in one direction are 2D Hadamard matrices that are orthogonal to each other.
• All layers in two directions are Hadamard matrices.
• In any direction, all layers are orthogonal in at least one layer direction, so that collectively there is at least one correlation vector in each axial direction.
Definition 10.1.3:5 A general n-dimensional Hadamard matrix $H = (h_{i,j,k,\ldots,m})$ is a binary matrix in which all parallel (m − 1)-dimensional sections are mutually orthogonal; that is, all $h_{i,j,k,\ldots,m} = \pm 1$ and

$$\sum_{p}\sum_{q}\sum_{r}\cdots\sum_{y} h_{pqr\ldots ya}\,h_{pqr\ldots yb} = n^{m-1}\,\delta_{a,b}, \qquad (10.3)$$

where $(pqr\ldots yz)$ represents all permutations of $(ijk\ldots m)$.
This means that a completely proper n-dimensional Hadamard matrix is one in which all 2D sections, in all possible axis-normal orientations, are Hadamard matrices. As a consequence, all intermediate-dimensional sections are also completely proper Hadamard matrices. Moreover, an m-dimensional Hadamard matrix is specified if either all (m − 1)-dimensional sections in one direction are Hadamard matrices and are also mutually orthogonal, or all (m − 1)-dimensional sections in two directions are Hadamard matrices.
10.2 3D Williamson–Hadamard Matrices

This section presents 3D Williamson–Hadamard matrices; first, we define the 3D Williamson array.

Definition 10.2.1:8,11 The 3D matrix $H(a, b, c, d) = (h_{i,j,k})_{i,j,k=1}^{4}$ is called the 3D Williamson array if all 2D matrices parallel to the planes (X, Y), (X, Z), and (Y, Z) are Williamson arrays.
The 3D Williamson array is given in Fig. 10.5. This matrix can also be represented in the following form:

$$\left\|\begin{array}{cccc|cccc|cccc|cccc}
-d & -c & b & a & -c & d & a & -b & -b & a & -d & c & a & b & c & d\\
c & -d & -a & b & -d & -c & b & a & -a & -b & -c & -d & -b & a & -d & c\\
-b & a & -d & c & -a & -b & -c & -d & d & c & -b & -a & -c & d & a & -b\\
-a & -b & -c & -d & b & -a & d & -c & -c & d & a & -b & -d & -c & b & a
\end{array}\right\| \qquad (10.4)$$
Example 10.2.1: It can be shown that from the 3D Williamson array (see Fig. 10.5) we obtain the following:

(1) The matrices parallel to plane (X, Y) are Williamson arrays:

$$A_{X,Y} = \begin{pmatrix} a & b & c & d\\ -b & a & -d & c\\ -c & d & a & -b\\ -d & -c & b & a \end{pmatrix}, \quad
B_{X,Y} = \begin{pmatrix} -b & a & -d & c\\ -a & -b & -c & -d\\ d & c & -b & -a\\ -c & d & a & -b \end{pmatrix}, \qquad (10.5a)$$

$$C_{X,Y} = \begin{pmatrix} -c & d & a & -b\\ -d & -c & b & a\\ -a & -b & -c & -d\\ b & -a & d & -c \end{pmatrix}, \quad
D_{X,Y} = \begin{pmatrix} -d & -c & b & a\\ c & -d & -a & b\\ -b & a & -d & c\\ -a & -b & -c & -d \end{pmatrix}. \qquad (10.5b)$$
(2) Similarly, the matrices parallel to plane (X, Z) are Williamson arrays:

$$A_{X,Z} = \begin{pmatrix} a & b & c & d\\ -b & a & -d & c\\ -c & d & a & -b\\ -d & -c & b & a \end{pmatrix}, \quad
B_{X,Z} = \begin{pmatrix} -b & a & -d & c\\ -a & -b & -c & -d\\ -d & -c & b & a\\ c & -d & -a & b \end{pmatrix}, \qquad (10.6a)$$

$$C_{X,Z} = \begin{pmatrix} -c & d & a & -b\\ d & c & -b & -a\\ -a & -b & -c & -d\\ -b & a & -d & c \end{pmatrix}, \quad
D_{X,Z} = \begin{pmatrix} -d & -c & b & a\\ -c & d & a & -b\\ b & -a & d & -c\\ -a & -b & -c & -d \end{pmatrix}. \qquad (10.6b)$$
(3) Similarly, the following matrices, which are parallel to plane (Y, Z), are Williamson arrays:

$$A_{Y,Z} = \begin{pmatrix} a & -b & -c & -d\\ -b & -a & d & -c\\ -c & -d & -a & b\\ -d & c & -b & -a \end{pmatrix}, \quad
B_{Y,Z} = \begin{pmatrix} b & a & d & -c\\ a & -b & c & d\\ d & -c & -b & -a\\ -c & -d & a & -b \end{pmatrix}, \qquad (10.7a)$$

$$C_{Y,Z} = \begin{pmatrix} c & -d & a & b\\ -d & -c & -b & a\\ a & b & -c & d\\ b & -a & -d & -c \end{pmatrix}, \quad
D_{Y,Z} = \begin{pmatrix} d & c & -b & a\\ c & -d & -a & -b\\ -b & a & -d & -c\\ a & b & c & -d \end{pmatrix}. \qquad (10.7b)$$
The 3D Williamson–Hadamard matrix of order 4 obtained from Fig. 10.5 by substituting a = b = c = d = 1 is given in Fig. 10.6.

Example 10.2.2: Sylvester–Hadamard matrices can be obtained from the 3D Sylvester–Hadamard matrix of order 4 (see Fig. 10.7).
Figure 10.5 A 3D Williamson array.
Figure 10.6 3D Williamson–Hadamard matrix of order 4.
Figure 10.7 3D Sylvester–Hadamard matrix of order 4.
Sylvester–Hadamard matrices parallel to planes (X, Y), (X, Z), and (Y, Z) have the following forms, respectively:

$$H^{1}_{X,Y} = H^{1}_{X,Z} = H^{1}_{Y,Z} = \begin{pmatrix} + & + & + & +\\ + & - & + & -\\ + & + & - & -\\ + & - & - & + \end{pmatrix}, \quad
H^{2}_{X,Y} = H^{2}_{X,Z} = H^{2}_{Y,Z} = \begin{pmatrix} + & - & + & -\\ - & - & - & -\\ + & - & - & +\\ - & - & + & + \end{pmatrix}, \qquad (10.8a)$$

$$H^{3}_{X,Y} = H^{3}_{X,Z} = H^{3}_{Y,Z} = \begin{pmatrix} + & + & - & -\\ + & - & - & +\\ - & - & - & -\\ - & + & - & + \end{pmatrix}, \quad
H^{4}_{X,Y} = H^{4}_{X,Z} = H^{4}_{Y,Z} = \begin{pmatrix} + & - & - & +\\ - & - & + & +\\ - & + & - & +\\ + & + & + & + \end{pmatrix}. \qquad (10.8b)$$
Definition 10.2.2:9,11 3D matrices A, B, C, and D of order n are called 3D Williamson-type matrices, or Williamson cubes, if all 2D matrices parallel to planes (X, Y), (X, Z), and (Y, Z) are Williamson-type matrices of order n, i.e.,

$$\begin{aligned}
&A_{X,Y},\; B_{X,Y},\; C_{X,Y},\; D_{X,Y};\\
&A_{X,Z},\; B_{X,Z},\; C_{X,Z},\; D_{X,Z};\\
&A_{Y,Z},\; B_{Y,Z},\; C_{Y,Z},\; D_{Y,Z}
\end{aligned} \qquad (10.9)$$

are Williamson-type matrices.
Illustrative Example 10.2.3: 2D Williamson-type matrices can be obtained from Williamson cubes (see Fig. 10.8).
Figure 10.8 Williamson cubes A, B = C = D of order 3.
(1) Williamson-type matrices parallel to plane (X, Y):

$$A^{1}_{X,Y} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{1}_{X,Y} = C^{1}_{X,Y} = D^{1}_{X,Y} = \begin{pmatrix} + & - & -\\ - & + & -\\ - & - & + \end{pmatrix};$$

$$A^{2}_{X,Y} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{2}_{X,Y} = C^{2}_{X,Y} = D^{2}_{X,Y} = \begin{pmatrix} - & + & -\\ - & - & +\\ + & - & - \end{pmatrix};$$

$$A^{3}_{X,Y} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{3}_{X,Y} = C^{3}_{X,Y} = D^{3}_{X,Y} = \begin{pmatrix} + & - & -\\ - & + & -\\ - & - & + \end{pmatrix}. \qquad (10.10)$$
(2) Williamson-type matrices parallel to plane (X, Z):

$$A^{1}_{X,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{1}_{X,Z} = C^{1}_{X,Z} = D^{1}_{X,Z} = \begin{pmatrix} + & - & -\\ - & + & -\\ - & - & + \end{pmatrix};$$

$$A^{2}_{X,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{2}_{X,Z} = C^{2}_{X,Z} = D^{2}_{X,Z} = \begin{pmatrix} - & + & -\\ - & - & +\\ + & - & - \end{pmatrix};$$

$$A^{3}_{X,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{3}_{X,Z} = C^{3}_{X,Z} = D^{3}_{X,Z} = \begin{pmatrix} - & - & +\\ + & - & -\\ - & + & - \end{pmatrix}. \qquad (10.11)$$
(3) Williamson-type matrices parallel to plane (Y, Z):

$$A^{1}_{Y,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{1}_{Y,Z} = C^{1}_{Y,Z} = D^{1}_{Y,Z} = \begin{pmatrix} + & - & -\\ - & + & -\\ - & - & + \end{pmatrix};$$

$$A^{2}_{Y,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{2}_{Y,Z} = C^{2}_{Y,Z} = D^{2}_{Y,Z} = \begin{pmatrix} - & + & -\\ + & - & -\\ - & - & + \end{pmatrix};$$

$$A^{3}_{Y,Z} = \begin{pmatrix} + & + & +\\ + & + & +\\ + & + & + \end{pmatrix}, \quad B^{3}_{Y,Z} = C^{3}_{Y,Z} = D^{3}_{Y,Z} = \begin{pmatrix} - & - & +\\ - & + & -\\ + & - & - \end{pmatrix}. \qquad (10.12)$$
Let us denote the 3D Williamson array by $S_3(a, b, c, d)$ (see Fig. 10.5). The following matrices are 3D Williamson–Hadamard matrices of order 4:

$$\begin{aligned}
P_0 &= S_3(-1,-1,-1,-1), & P_1 &= S_3(-1,-1,-1,+1),\\
P_2 &= S_3(-1,-1,+1,-1), & P_3 &= S_3(-1,-1,+1,+1),\\
P_4 &= S_3(-1,+1,-1,-1), & P_5 &= S_3(-1,+1,-1,+1),\\
P_6 &= S_3(-1,+1,+1,-1), & P_7 &= S_3(-1,+1,+1,+1),
\end{aligned} \qquad (10.13a)$$

$$\begin{aligned}
P_8 &= S_3(+1,-1,-1,-1), & P_9 &= S_3(+1,-1,-1,+1),\\
P_{10} &= S_3(+1,-1,+1,-1), & P_{11} &= S_3(+1,-1,+1,+1),\\
P_{12} &= S_3(+1,+1,-1,-1), & P_{13} &= S_3(+1,+1,-1,+1),\\
P_{14} &= S_3(+1,+1,+1,-1), & P_{15} &= S_3(+1,+1,+1,+1).
\end{aligned} \qquad (10.13b)$$
Let

$$\begin{aligned}
V_0 &= \left(R,\, U,\, U^2, \ldots, U^{n-2},\, U^{n-1}\right),\\
V_1 &= \left(U,\, U^2, \ldots, U^{n-1},\, R\right),\\
V_2 &= \left(U^2,\, U^3, \ldots, U^{n-1},\, R,\, U\right),\\
&\;\;\vdots\\
V_{n-1} &= \left(U^{n-1},\, R,\, U, \ldots, U^{n-3},\, U^{n-2}\right),
\end{aligned} \qquad (10.14)$$
or

$$V_0 = \left\|\,
\begin{matrix}
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 1
\end{matrix}
\;\middle|\;
\begin{matrix}
0 & 1 & 0 & \cdots & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 1\\
1 & 0 & 0 & \cdots & 0 & 0
\end{matrix}
\;\middle|\;\cdots\;\middle|\;
\begin{matrix}
0 & 0 & 0 & \cdots & 0 & 1\\
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0
\end{matrix}
\,\right\|. \qquad (10.15)$$
We can design the following matrix:

$$[SH]_{mn} = V_0 \otimes \Pi_0 + V_1 \otimes \Pi_1 + V_2 \otimes \Pi_2 + \cdots + V_{m-1} \otimes \Pi_{m-1}, \qquad (10.16)$$

where $\Pi_i$, $i = 0, 1, 2, \ldots, m-1$, are (+1, −1) matrices of order n. What conditions must the matrices $\Pi_i$ satisfy in order for $[SH]_{mn}$ to be a Hadamard matrix?
The following statement holds true:

Statement 10.2.1:4 Let m in Eq. (10.16) be an odd number, let $\Pi_i = \Pi_{m-i}$, $i = 1, 2, \ldots, m-1$, and let $\Pi_i \in \{P_0, P_1, \ldots, P_{15}\}$. If $\Pi_0 \in \mathcal{P}_1 = \{P_0, P_3, P_5, P_6, P_9, P_{10}, P_{12}, P_{15}\}$, then $\Pi_i \in \mathcal{P}_2 = \{P_1, P_2, P_4, P_7, P_8, P_{11}, P_{13}, P_{14}\}$, and vice versa; if $\Pi_0 \in \mathcal{P}_2$, then $\Pi_i \in \mathcal{P}_1$.
Theorem 10.2.1: (Generalized Williamson Theorem4,11) If there are spatial Williamson-type matrices A, B, C, and D of order m, then the matrix H(A, B, C, D) (see Fig. 10.9) is a spatial Williamson–Hadamard matrix of order 4m.
10.3 3D Hadamard Matrices of Order 4n + 2

Recently, Xian32 presented a very simple method for constructing a 3D Hadamard matrix of order n = 4k + 2, k ≥ 1. Here, we will use Xian's definitions and construction method.32 First, we prove the following:

Theorem 10.3.1:32 If $\{H(i, j, k)\}_{i,j,k=0}^{n-1}$ is a 3D Hadamard matrix, then n must be an even number.

Let $\{H(i, j, k)\}_{i,j,k=0}^{n-1}$ be a 3D Hadamard matrix, i.e., $H(i, j, k) = \pm 1$ and $H^2(i, j, k) = 1$. Then, using the orthogonality condition, we obtain
$$\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} H(i, j, 0)\,H(i, j, 1) = 0, \qquad
\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} H(i, j, 0)\,H(i, j, 0) = n^2. \qquad (10.17)$$
Figure 10.9 3D Williamson–Hadamard matrix of order 4m.
Thus,

$$\sum_{i=0}^{n-1}\sum_{j=0}^{n-1} H(i, j, 0)\left\{H(i, j, 0) + H(i, j, 1)\right\}
= \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} H^2(i, j, 0) + \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} H(i, j, 0)\,H(i, j, 1) = n^2. \qquad (10.18)$$

However, each term

$$H(i, j, 0) + H(i, j, 1) = (\pm 1) + (\pm 1) = \text{an even number}. \qquad (10.19)$$

Thus, $n^2$ is a sum of even numbers and must itself be even; hence, n is even.

Now we return to the construction of a 3D Hadamard matrix, in particular to an illustrative example of a 3D Hadamard matrix of order 6.
Definition 10.3.1:32 A (−1, +1) matrix A = [A(i, j)], 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1, is called a (2D) perfect binary array of dimension n × m, denoted PBA(n, m), if and only if its 2D periodic autocorrelation is a δ-function, i.e.,

$$R_A(s, t) = \sum_{i=0}^{n-1}\sum_{j=0}^{m-1} A(i, j)\,A\left[(i + s) \bmod n,\; (j + t) \bmod m\right] = 0, \quad (s, t) \neq (0, 0). \qquad (10.20)$$
An example of a PBA(6, 6) is given by (see Ref. 31)

$$A = \begin{pmatrix}
- & + & + & + & + & -\\
+ & - & + & + & + & -\\
+ & + & - & + & + & -\\
+ & + & + & - & + & -\\
+ & + & + & + & - & -\\
- & - & - & - & - & +
\end{pmatrix}. \qquad (10.21)$$
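Definition 10.3.1 can be tested numerically with cyclic shifts. The sketch below (NumPy assumed; the helper name `periodic_autocorrelation` is our own) checks that the array of Eq. (10.21) has vanishing periodic autocorrelation at every nonzero offset:

```python
import numpy as np

def periodic_autocorrelation(A, s, t):
    """R_A(s, t) of Eq. (10.20): correlate A with its cyclic shift."""
    return int(np.sum(A * np.roll(np.roll(A, -s, axis=0), -t, axis=1)))

# the PBA(6, 6) of Eq. (10.21); '+' -> 1, '-' -> -1
rows = ["-++++-", "+-+++-", "++-++-", "+++-+-", "++++--", "-----+"]
A = np.array([[1 if c == "+" else -1 for c in r] for r in rows])

n, m = A.shape
offsets = [(s, t) for s in range(n) for t in range(m) if (s, t) != (0, 0)]
print(all(periodic_autocorrelation(A, s, t) == 0 for s, t in offsets))  # True
```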
Theorem 10.3.2: (See Ref. 31 for more detail.) If A is a PBA(m, m), then $B = [B(i, j, k)]_{i,j,k=0}^{m-1}$ is a 3D Hadamard matrix of order m, where

$$B(i, j, k) = A\left[(k + i) \bmod m,\; (k + j) \bmod m\right], \quad 0 \leq i, j, k \leq m - 1. \qquad (10.22)$$

Now, using Theorem 10.3.2 and Eq. (10.21), we present a 3D Hadamard matrix of order 6. Because B(i, j, k) = B( j, i, k), the layers in the x and y directions are the same, so we need only give the layers in the z and y directions, as follows:
Layers in the z direction:

$$[B(i, j, 0)] = \begin{pmatrix}
- & + & + & + & + & -\\
+ & - & + & + & + & -\\
+ & + & - & + & + & -\\
+ & + & + & - & + & -\\
+ & + & + & + & - & -\\
- & - & - & - & - & +
\end{pmatrix}, \quad
[B(i, j, 1)] = \begin{pmatrix}
- & + & + & + & - & +\\
+ & - & + & + & - & +\\
+ & + & - & + & - & +\\
+ & + & + & - & - & +\\
- & - & - & - & + & -\\
+ & + & + & + & - & -
\end{pmatrix},$$

$$[B(i, j, 2)] = \begin{pmatrix}
- & + & + & - & + & +\\
+ & - & + & - & + & +\\
+ & + & - & - & + & +\\
- & - & - & + & - & -\\
+ & + & + & - & - & +\\
+ & + & + & - & + & -
\end{pmatrix}, \quad
[B(i, j, 3)] = \begin{pmatrix}
- & + & - & + & + & +\\
+ & - & - & + & + & +\\
- & - & + & - & - & -\\
+ & + & - & - & + & +\\
+ & + & - & + & - & +\\
+ & + & - & + & + & -
\end{pmatrix},$$

$$[B(i, j, 4)] = \begin{pmatrix}
- & - & + & + & + & +\\
- & + & - & - & - & -\\
+ & - & - & + & + & +\\
+ & - & + & - & + & +\\
+ & - & + & + & - & +\\
+ & - & + & + & + & -
\end{pmatrix}, \quad
[B(i, j, 5)] = \begin{pmatrix}
+ & - & - & - & - & -\\
- & - & + & + & + & +\\
- & + & - & + & + & +\\
- & + & + & - & + & +\\
- & + & + & + & - & +\\
- & + & + & + & + & -
\end{pmatrix}. \qquad (10.23)$$
Layers in the y direction:

$$[B(i, 0, k)] = \begin{pmatrix}
- & - & - & - & - & +\\
+ & + & + & + & - & -\\
+ & + & + & - & + & -\\
+ & + & - & + & + & -\\
+ & - & + & + & + & -\\
- & + & + & + & + & -
\end{pmatrix}, \quad
[B(i, 1, k)] = \begin{pmatrix}
+ & + & + & + & - & -\\
- & - & - & - & + & -\\
+ & + & + & - & - & +\\
+ & + & - & + & - & +\\
+ & - & + & + & - & +\\
- & + & + & + & - & +
\end{pmatrix},$$

$$[B(i, 2, k)] = \begin{pmatrix}
+ & + & + & - & + & -\\
+ & + & + & - & - & +\\
- & - & - & + & - & -\\
+ & + & - & - & + & +\\
+ & - & + & - & + & +\\
- & + & + & - & + & +
\end{pmatrix}, \quad
[B(i, 3, k)] = \begin{pmatrix}
+ & + & - & + & + & -\\
+ & + & - & + & - & +\\
+ & + & - & - & + & +\\
- & - & + & - & - & -\\
+ & - & - & + & + & +\\
- & + & - & + & + & +
\end{pmatrix},$$

$$[B(i, 4, k)] = \begin{pmatrix}
+ & - & + & + & + & -\\
+ & - & + & + & - & +\\
+ & - & + & - & + & +\\
+ & - & - & + & + & +\\
- & + & - & - & - & -\\
- & - & + & + & + & +
\end{pmatrix}, \quad
[B(i, 5, k)] = \begin{pmatrix}
- & + & + & + & + & -\\
- & + & + & + & - & +\\
- & + & + & - & + & +\\
- & + & - & + & + & +\\
- & - & + & + & + & +\\
+ & - & - & - & - & -
\end{pmatrix}. \qquad (10.24)$$
In Fig. 10.10, a 3D Hadamard matrix of size 6 × 6 × 6 obtained from Eq. (10.21) using Eq. (10.22) is given.
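The construction of Theorem 10.3.2 is short to express in code. The following sketch (NumPy assumed) builds the 6 × 6 × 6 cube from the PBA(6, 6) of Eq. (10.21) via Eq. (10.22) and checks the orthogonality condition of Eq. (10.1) in all three directions:

```python
import numpy as np

# PBA(6, 6) of Eq. (10.21); '+' -> 1, '-' -> -1
rows = ["-++++-", "+-+++-", "++-++-", "+++-+-", "++++--", "-----+"]
A = np.array([[1 if c == "+" else -1 for c in r] for r in rows])
m = 6

# Eq. (10.22): B(i, j, k) = A[(k + i) mod m, (k + j) mod m]
B = np.array([[[A[(k + i) % m, (k + j) % m] for k in range(m)]
               for j in range(m)] for i in range(m)])

# check the 3D orthogonality condition of Eq. (10.1) in every direction
for axis in range(3):
    layers = np.moveaxis(B, axis, 0).reshape(m, -1)
    assert np.array_equal(layers @ layers.T, m**2 * np.eye(m, dtype=int))
print("6 x 6 x 6 3D Hadamard matrix verified")
```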
Example 10.3.1:32 The following matrices $A_m$, m = 2, 4, 8, 12, are perfect binary arrays of size m × m:

$$A_2 = \begin{pmatrix} + & +\\ + & - \end{pmatrix}, \quad
A_4 = \begin{pmatrix} + & + & + & -\\ + & + & + & -\\ + & + & + & -\\ - & - & - & + \end{pmatrix}, \quad
A_8 = \begin{pmatrix}
- & + & + & + & - & + & + & +\\
+ & - & + & + & - & + & - & -\\
+ & + & + & - & + & + & + & -\\
+ & + & - & - & - & - & + & +\\
- & - & + & - & - & - & + & -\\
+ & + & + & - & - & - & - & +\\
+ & - & + & + & + & - & + & +\\
+ & - & - & + & - & + & + & -
\end{pmatrix}, \qquad (10.25a)$$

$$A_{12} = \begin{pmatrix}
+ & - & + & - & + & - & - & + & - & - & - & -\\
+ & - & - & - & + & - & - & - & - & - & + & +\\
+ & + & - & + & + & + & + & - & - & + & - & -\\
- & + & - & + & - & + & + & - & + & + & + & +\\
+ & - & - & - & + & - & - & - & - & - & + & +\\
+ & + & - & + & + & + & + & - & - & + & - & -\\
+ & - & + & - & + & - & - & + & - & - & - & -\\
- & + & + & + & - & + & + & + & + & + & - & -\\
+ & + & - & + & + & + & + & - & - & + & - & -\\
+ & - & + & - & + & - & - & + & - & - & - & -\\
+ & - & - & - & + & - & - & - & - & - & + & +\\
- & - & + & - & - & - & - & + & + & - & + & +
\end{pmatrix}. \qquad (10.25b)$$
Figure 10.10 3D Hadamard matrix of size 6 × 6 × 6 (dark circles denote +1).
To prove this statement, we can verify the condition of Eq. (10.20) only for the matrix $A_4$; i.e., we can prove that

$$R_A(s, t) = \sum_{i,j=0}^{3} A(i, j)\,A\left[(i + s) \bmod 4,\; (j + t) \bmod 4\right] = 0, \quad (s, t) \neq (0, 0). \qquad (10.26)$$
Let us consider the following cases:
Case for s = 0, t = 1, 2, 3:
RA(0, 1) = A(0, 0)A(0, 1) + A(0, 1)A(0, 2) + A(0, 2)A(0, 3) + A(0, 3)A(0, 0)
+ A(1, 0)A(1, 1) + A(1, 1)A(1, 2) + A(1, 2)A(1, 3) + A(1, 3)A(1, 0)
+ A(2, 0)A(2, 1) + A(2, 1)A(2, 2) + A(2, 2)A(2, 3) + A(2, 3)A(2, 0)
+ A(3, 0)A(3, 1) + A(3, 1)A(3, 2) + A(3, 2)A(3, 3) + A(3, 3)A(3, 0),
RA(0, 2) = A(0, 0)A(0, 2) + A(0, 1)A(0, 3) + A(0, 2)A(0, 0) + A(0, 3)A(0, 1)
+ A(1, 0)A(1, 2) + A(1, 1)A(1, 3) + A(1, 2)A(1, 0) + A(1, 3)A(1, 1)
+ A(2, 0)A(2, 2) + A(2, 1)A(2, 3) + A(2, 2)A(2, 0) + A(2, 3)A(2, 1)
+ A(3, 0)A(3, 2) + A(3, 1)A(3, 3) + A(3, 2)A(3, 0) + A(3, 3)A(3, 1),
RA(0, 3) = A(0, 0)A(0, 3) + A(0, 1)A(0, 0) + A(0, 2)A(0, 1) + A(0, 3)A(0, 2)
+ A(1, 0)A(1, 3) + A(1, 1)A(1, 0) + A(1, 2)A(1, 1) + A(1, 3)A(1, 2)
+ A(2, 0)A(2, 3) + A(2, 1)A(2, 0) + A(2, 2)A(2, 1) + A(2, 3)A(2, 2)
+ A(3, 0)A(3, 3) + A(3, 1)A(3, 0) + A(3, 2)A(3, 1) + A(3, 3)A(3, 2).
(10.27)
By substituting the elements of matrix A4 into these expressions, we obtain
RA(0, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 = 0,
RA(0, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 = 0,
RA(0, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 = 0.
(10.28)
Case for s = 1, t = 0, 1, 2, 3:
RA(1, 0) = A(0, 0)A(1, 0) + A(0, 1)A(1, 1) + A(0, 2)A(1, 2) + A(0, 3)A(1, 3)
+ A(1, 0)A(2, 0) + A(1, 1)A(2, 1) + A(1, 2)A(2, 2) + A(1, 3)A(2, 3)
+ A(2, 0)A(3, 0) + A(2, 1)A(3, 1) + A(2, 2)A(3, 2) + A(2, 3)A(3, 3)
+ A(3, 0)A(0, 0) + A(3, 1)A(0, 1) + A(3, 2)A(0, 2) + A(3, 3)A(0, 3),
RA(1, 1) = A(0, 0)A(1, 1) + A(0, 1)A(1, 2) + A(0, 2)A(1, 3) + A(0, 3)A(1, 0)
+ A(1, 0)A(2, 1) + A(1, 1)A(2, 2) + A(1, 2)A(2, 3) + A(1, 3)A(2, 0)
+ A(2, 0)A(3, 1) + A(2, 1)A(3, 2) + A(2, 2)A(3, 3) + A(2, 3)A(3, 0)
+ A(3, 0)A(0, 1) + A(3, 1)A(0, 2) + A(3, 2)A(0, 3) + A(3, 3)A(0, 0),
RA(1, 2) = A(0, 0)A(1, 2) + A(0, 1)A(1, 3) + A(0, 2)A(1, 0) + A(0, 3)A(1, 1)
+ A(1, 0)A(2, 2) + A(1, 1)A(2, 3) + A(1, 2)A(2, 0) + A(1, 3)A(2, 1)
+ A(2, 0)A(3, 2) + A(2, 1)A(3, 3) + A(2, 2)A(3, 0) + A(2, 3)A(3, 1)
+ A(3, 0)A(0, 2) + A(3, 1)A(0, 3) + A(3, 2)A(0, 0) + A(3, 3)A(0, 1),
RA(1, 3) = A(0, 0)A(1, 3) + A(0, 1)A(1, 0) + A(0, 2)A(1, 1) + A(0, 3)A(1, 2)
+ A(1, 0)A(2, 3) + A(1, 1)A(2, 0) + A(1, 2)A(2, 1) + A(1, 3)A(2, 2)
+ A(2, 0)A(3, 3) + A(2, 1)A(3, 0) + A(2, 2)A(3, 1) + A(2, 3)A(3, 2)
+ A(3, 0)A(0, 3) + A(3, 1)A(0, 0) + A(3, 2)A(0, 1) + A(3, 3)A(0, 2).
(10.29)
By substituting the elements of matrix A4 into these expressions, we obtain
RA(1, 0) = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 = 0,
RA(1, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 = 0,
RA(1, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 = 0,
RA(1, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 = 0.
(10.30)
Case for s = 2, t = 0, 1, 2, 3:
RA(2, 0) = A(0, 0)A(2, 0) + A(0, 1)A(2, 1) + A(0, 2)A(2, 2) + A(0, 3)A(2, 3)
+ A(1, 0)A(3, 0) + A(1, 1)A(3, 1) + A(1, 2)A(3, 2) + A(1, 3)A(3, 3)
+ A(2, 0)A(0, 0) + A(2, 1)A(0, 1) + A(2, 2)A(0, 2) + A(2, 3)A(0, 3)
+ A(3, 0)A(1, 0) + A(3, 1)A(1, 1) + A(3, 2)A(1, 2) + A(3, 3)A(1, 3),
RA(2, 1) = A(0, 0)A(2, 1) + A(0, 1)A(2, 2) + A(0, 2)A(2, 3) + A(0, 3)A(2, 0)
+ A(1, 0)A(3, 1) + A(1, 1)A(3, 2) + A(1, 2)A(3, 3) + A(1, 3)A(3, 0)
+ A(2, 0)A(0, 1) + A(2, 1)A(0, 2) + A(2, 2)A(0, 3) + A(2, 3)A(0, 0)
+ A(3, 0)A(1, 1) + A(3, 1)A(1, 2) + A(3, 2)A(1, 3) + A(3, 3)A(1, 0),
(10.31a)
RA(2, 2) = A(0, 0)A(2, 2) + A(0, 1)A(2, 3) + A(0, 2)A(2, 0) + A(0, 3)A(2, 1)
+ A(1, 0)A(3, 2) + A(1, 1)A(3, 3) + A(1, 2)A(3, 0) + A(1, 3)A(3, 1)
+ A(2, 0)A(0, 2) + A(2, 1)A(0, 3) + A(2, 2)A(0, 0) + A(2, 3)A(0, 1)
+ A(3, 0)A(1, 2) + A(3, 1)A(1, 3) + A(3, 2)A(1, 0) + A(3, 3)A(1, 1),
RA(2, 3) = A(0, 0)A(2, 3) + A(0, 1)A(2, 0) + A(0, 2)A(2, 1) + A(0, 3)A(2, 2)
+ A(1, 0)A(3, 3) + A(1, 1)A(3, 0) + A(1, 2)A(3, 1) + A(1, 3)A(3, 2)
+ A(2, 0)A(0, 3) + A(2, 1)A(0, 0) + A(2, 2)A(0, 1) + A(2, 3)A(0, 2)
+ A(3, 0)A(1, 3) + A(3, 1)A(1, 0) + A(3, 2)A(1, 1) + A(3, 3)A(1, 2).
(10.31b)
By substituting the elements of matrix A4 into these expressions, we obtain
RA(2, 0) = 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA(2, 1) = 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
RA(2, 2) = 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA(2, 3) = −1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.
(10.32)
Case for s = 3, t = 0, 1, 2, 3:
RA(3, 0) = A(0, 0)A(3, 0) + A(0, 1)A(3, 1) + A(0, 2)A(3, 2) + A(0, 3)A(3, 3)
+ A(1, 0)A(0, 0) + A(1, 1)A(0, 1) + A(1, 2)A(0, 2) + A(1, 3)A(0, 3)
+ A(2, 0)A(1, 0) + A(2, 1)A(1, 1) + A(2, 2)A(1, 2) + A(2, 3)A(1, 3)
+ A(3, 0)A(2, 0) + A(3, 1)A(2, 1) + A(3, 2)A(2, 2) + A(3, 3)A(2, 3),

RA(3, 1) = A(0, 0)A(3, 1) + A(0, 1)A(3, 2) + A(0, 2)A(3, 3) + A(0, 3)A(3, 0)
+ A(1, 0)A(0, 1) + A(1, 1)A(0, 2) + A(1, 2)A(0, 3) + A(1, 3)A(0, 0)
+ A(2, 0)A(1, 1) + A(2, 1)A(1, 2) + A(2, 2)A(1, 3) + A(2, 3)A(1, 0)
+ A(3, 0)A(2, 1) + A(3, 1)A(2, 2) + A(3, 2)A(2, 3) + A(3, 3)A(2, 0),

RA(3, 2) = A(0, 0)A(3, 2) + A(0, 1)A(3, 3) + A(0, 2)A(3, 0) + A(0, 3)A(3, 1)
+ A(1, 0)A(0, 2) + A(1, 1)A(0, 3) + A(1, 2)A(0, 0) + A(1, 3)A(0, 1)
+ A(2, 0)A(1, 2) + A(2, 1)A(1, 3) + A(2, 2)A(1, 0) + A(2, 3)A(1, 1)
+ A(3, 0)A(2, 2) + A(3, 1)A(2, 3) + A(3, 2)A(2, 0) + A(3, 3)A(2, 1),

RA(3, 3) = A(0, 0)A(3, 3) + A(0, 1)A(3, 0) + A(0, 2)A(3, 1) + A(0, 3)A(3, 2)
+ A(1, 0)A(0, 3) + A(1, 1)A(0, 0) + A(1, 2)A(0, 1) + A(1, 3)A(0, 2)
+ A(2, 0)A(1, 3) + A(2, 1)A(1, 0) + A(2, 2)A(1, 1) + A(2, 3)A(1, 2)
+ A(3, 0)A(2, 3) + A(3, 1)A(2, 0) + A(3, 2)A(2, 1) + A(3, 3)A(2, 2).
(10.33)
Figure 10.11 3D Hadamard matrix from PBA(4, 4).
By substituting the elements of matrix A4 into these expressions, we obtain
RA(3, 0) = −1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA(3, 1) = −1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
RA(3, 2) = −1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA(3, 3) = 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.
(10.34)
The 3D Hadamard matrix of order 4 obtained from PBA(4, 4) is given in Fig. 10.11.
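The case-by-case verification above can be compressed into a single loop over all shifts. A sketch (NumPy assumed; the names `R` and `checks` are our own) checking every $R_A(s, t)$ for the matrix $A_4$ of Eq. (10.25a):

```python
import numpy as np

# A_4 of Eq. (10.25a)
A4 = np.array([[ 1,  1,  1, -1],
               [ 1,  1,  1, -1],
               [ 1,  1,  1, -1],
               [-1, -1, -1,  1]])

def R(A, s, t):
    """Periodic autocorrelation of Eq. (10.20)."""
    return int(np.sum(A * np.roll(np.roll(A, -s, axis=0), -t, axis=1)))

checks = {(s, t): R(A4, s, t)
          for s in range(4) for t in range(4) if (s, t) != (0, 0)}
print(all(v == 0 for v in checks.values()))  # True
```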
10.4 Fast 3D WHTs
We have seen that transform techniques based on sinusoidal functions have been successfully applied in signal processing and communications over a considerable period of time. A mathematical reason for this success is the fact that these functions constitute the eigenfunctions of linear operators that are modeled by ordinary linear differential equations. One nonsinusoidal system is given by the Walsh functions (first studied in 1923 by Walsh) and their various modifications and generalizations. Chrestenson33 investigated the "generalized" Walsh functions. An original work on this topic is reported in Ahmed and Rao.26 The first application of 3D Walsh transforms in signal processing was given by Harmuth.2 Finally, surveys of 3D Walsh transforms can also be found in Refs. 14, 17, and 32.
In this section, we consider some higher-dimensional orthogonal transforms thatcan be used to process 2D (e.g., images) and/or 3D (e.g., seismic waves) digital
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
326 Chapter 10
signals. Let W = [W(i, j, k)], 0 ≤ i, j, k ≤ 2n − 1 be a 3D (−1, +1)-matrix of size2n × 2n × 2n such that
W−1 =12n
W. (10.35)
A 2D digital signal f = [f(i, j)], 0 ≤ i, j ≤ 2^n − 1, can be treated as a 3D matrix of size 2^n × 1 × 2^n. Thus,

F = W f,    f = (1/2^n) W F (10.36)
are a pair of orthogonal forward and inverse transforms. Equation (10.35) is satisfied by the following matrices:
W = [W(i, j, k)] = [(−1)^{〈i,j〉 + 〈i,k〉 + 〈j,k〉 + an + b(〈i,i〉 + 〈j,j〉) + c〈k,k〉}], (10.37)

where 0 ≤ i, j, k ≤ 2^n − 1; a, b, c ∈ {0, 1}; and 〈i, j〉 is the inner product of the vectors i = (i_0, i_1, ..., i_{n−1}) and j = (j_0, j_1, ..., j_{n−1}), the binary expansions of the integers i = Σ_{t=0}^{n−1} i_t 2^t and j = Σ_{t=0}^{n−1} j_t 2^t, respectively.
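Equation (10.37) is straightforward to evaluate numerically. The sketch below (our own illustration, not code from the text; the helper names are ours) builds W for n = 2 and checks the property behind Eq. (10.35): for the slice-by-slice product used in Eq. (10.36), every slice W(·, ·, k) must be an ordinary Hadamard matrix of order 2^n.

```python
import numpy as np

def inner(i, j):
    # <i, j>: inner product of the binary expansions of i and j
    return bin(i & j).count("1")

def make_W(n, a=0, b=0, c=0):
    # Build the 3D (-1, +1)-matrix of Eq. (10.37)
    N = 2 ** n
    W = np.empty((N, N, N), dtype=int)
    for i in range(N):
        for j in range(N):
            for k in range(N):
                e = (inner(i, j) + inner(i, k) + inner(j, k)
                     + a * n + b * (inner(i, i) + inner(j, j)) + c * inner(k, k))
                W[i, j, k] = (-1) ** e
    return W

W = make_W(2)
# Each slice W[:, :, k] is a (+-1)-matrix with orthogonal rows (a Hadamard
# matrix of order 4) -- exactly the condition W^{-1} = W / 2^n of Eq. (10.35).
for k in range(4):
    assert np.array_equal(W[:, :, k] @ W[:, :, k].T, 4 * np.eye(4, dtype=int))
print("all slices of W are Hadamard of order 4")
```

The same check passes for every choice of a, b, c in {0, 1}, since those terms only flip signs of whole rows, columns, or slices.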
Theorem 10.4.1:32 Let W = [W(i, j, k)] be the 3D Hadamard matrix of Eq. (10.37), and let f = [f(i, j)], 0 ≤ i, j ≤ 2^n − 1, be an image signal. Then, the transform pair

F = [F(i, j)] = W f,    f = (1/2^n) W F (10.38)

can be factorized as

F = W f = Π_{i=0}^{n−1} (I_{2^i} ⊗ A ⊗ I_{2^{n−1−i}}) f, (10.39)

where A = [A(p, q, r)], 0 ≤ p, q, r ≤ 1, is a 3D matrix of size 2 × 2 × 2 defined by

A(p, q, r) = (−1)^{pq + pr + qr + a + b(p+q) + cr}, (10.40)

and can be implemented using n·4^n addition operations.
Example 10.4.1: A 3D forward HT of images of size 4 × 4.
Let n = 2 and a = b = c = 0. From Eqs. (10.37) and (10.40), we obtain a 3D Hadamard matrix of size 4 × 4 × 4, given in Fig. 10.12.
Denote the image matrix of order 4 by f = [f(i, j)]. The transform F = W f can be realized using the equations given in Example 10.3.1:
Figure 10.12 3D Hadamard matrix of size 4 × 4 × 4 from Eq. (10.37).
F(0, 0) = f (0, 0) + f (1, 0) + f (2, 0) + f (3, 0),
F(0, 1) = f (0, 1) − f (1, 1) + f (2, 1) − f (3, 1),
F(0, 2) = f (0, 2) + f (1, 2) − f (2, 2) − f (3, 2),
F(0, 3) = f (0, 3) − f (1, 3) − f (2, 3) + f (3, 3);
F(1, 0) = f (0, 0) − f (1, 0) + f (2, 0) − f (3, 0),
F(1, 1) = − f (0, 1) − f (1, 1) − f (2, 1) − f (3, 1),
F(1, 2) = f (0, 2) − f (1, 2) − f (2, 2) + f (3, 2),
F(1, 3) = − f (0, 3) − f (1, 3) + f (2, 3) + f (3, 3);
F(2, 0) = f (0, 0) + f (1, 0) − f (2, 0) − f (3, 0),
F(2, 1) = f (0, 1) − f (1, 1) − f (2, 1) + f (3, 1),
F(2, 2) = − f (0, 2) − f (1, 2) − f (2, 2) − f (3, 2),
F(2, 3) = − f (0, 3) + f (1, 3) − f (2, 3) + f (3, 3);
F(3, 0) = f (0, 0) − f (1, 0) − f (2, 0) + f (3, 0),
F(3, 1) = − f (0, 1) − f (1, 1) + f (2, 1) + f (3, 1),
F(3, 2) = − f (0, 2) + f (1, 2) − f (2, 2) + f (3, 2),
F(3, 3) = f (0, 3) + f (1, 3) + f (2, 3) + f (3, 3).
(10.41)
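The sixteen equations above are the componentwise form of F = W f, namely F(i, k) = Σ_j W(i, j, k) f(j, k). The following sketch (our own illustration) applies this rule for n = 2 and confirms that the inverse transform of Eq. (10.36) recovers the image.

```python
import numpy as np

def inner(i, j):
    # <i, j>: inner product of binary expansions
    return bin(i & j).count("1")

n = 2
N = 2 ** n
# W from Eq. (10.37) with a = b = c = 0
W = np.array([[[(-1) ** (inner(i, j) + inner(i, k) + inner(j, k))
                for k in range(N)] for j in range(N)] for i in range(N)])

rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(N, N))          # test image of size 4 x 4

# Forward transform: F(i, k) = sum_j W(i, j, k) f(j, k)
F = np.einsum("ijk,jk->ik", W, f)
# Spot-check against the first equation of Eq. (10.41)
assert F[0, 0] == f[0, 0] + f[1, 0] + f[2, 0] + f[3, 0]

# Inverse transform of Eq. (10.36): f = (1/2^n) W F  (division is exact)
f_back = np.einsum("ijk,jk->ik", W, F) // N
assert np.array_equal(f_back, f)
print("forward/inverse 3D WHT verified")
```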
Example 10.4.2: Factorization of 3D Hadamard matrix of size 4.
Consider the case where n = 2 and a = b = c = 0. From Eqs. (10.37) and (10.40), we obtain
W = [W(i, j, k)] = [(−1)^{〈i,j〉 + 〈i,k〉 + 〈j,k〉}],    A = [A(p, q, r)] = [(−1)^{pq + pr + qr}]. (10.42)
From Theorem 10.4.1, we obtain
W = (I1 ⊗ A ⊗ I2) (I2 ⊗ A ⊗ I1) = (A ⊗ I2) (I2 ⊗ A) , (10.43)
where A, I2, A ⊗ I2 and I2 ⊗ A are given in Figs. 10.13 and 10.14.
Example 10.4.3: A 3D HT using factorization.
Let f = [f(i, j)] be an input image of size 4 × 4. A 3D HT of this image can be implemented as
F = W f = (A ⊗ I2) (I2 ⊗ A) f = W1W2 f = W1R, (10.44)
Figure 10.13 The structure of the matrix A ⊗ I2.
where W1 and W2 are taken from Figs. 10.13 and 10.14, respectively. From Example 10.3.1, we have R = {R(p, q)} = W2 f, where
R(0, 0) = f (2, 0) + f (3, 0), R(0, 1) = f (2, 1) − f (3, 1),
R(0, 2) = f (2, 2) + f (3, 2), R(0, 3) = f (2, 3) − f (3, 3);
R(1, 0) = f (2, 0) − f (3, 0), R(1, 1) = − f (2, 1) − f (3, 1),
R(1, 2) = f (2, 2) − f (3, 2), R(1, 3) = − f (2, 3) − f (3, 3);
R(2, 0) = f (0, 0) + f (1, 0), R(2, 1) = f (0, 1) − f (1, 1),
R(2, 2) = f (0, 2) + f (1, 2), R(2, 3) = f (0, 3) − f (1, 3);
R(3, 0) = f (0, 0) + f (1, 0), R(3, 1) = − f (0, 1) − f (1, 1),
R(3, 2) = f (0, 2) + f (1, 2), R(3, 3) = − f (0, 3) − f (1, 3).
(10.45)
Figure 10.14 The structure of the matrix I2 ⊗ A.
and F = {F(p, q)} = W1R,
F(0, 0) = R(1, 0) + R(3, 0), F(0, 1) = R(1, 1) + R(3, 1),
F(0, 2) = R(1, 2) − R(3, 2), F(0, 3) = R(1, 3) − R(3, 3);
F(1, 0) = R(0, 0) + R(2, 0), F(1, 1) = R(0, 1) + R(2, 1),
F(1, 2) = R(0, 2) − R(2, 2), F(1, 3) = R(0, 3) − R(2, 3);
F(2, 0) = R(1, 0) − R(3, 0), F(2, 1) = R(1, 1) − R(3, 1),
F(2, 2) = −R(1, 2) − R(3, 2), F(2, 3) = −R(1, 3) − R(3, 3);
F(3, 0) = R(0, 0) − R(2, 0), F(3, 1) = R(0, 1) − R(2, 1),
F(3, 2) = −R(0, 2) − R(2, 2), F(3, 3) = −R(0, 3) − R(2, 3).
(10.46)
10.5 Operations with Higher-Dimensional Complex Matrices
In this section, we define some useful operations with higher-dimensional complex matrices that can additionally be used for the realization of multidimensional complex HTs. Let A = [A(i1, i2, ..., in)] be an n-dimensional complex matrix of size m1 × m2 × ··· × mn, with 0 ≤ ik ≤ mk − 1 for k = 1, 2, ..., n. The complex matrix A can be represented as A = A1 + jA2, where A1, A2 are real n-dimensional matrices of size m1 × m2 × ··· × mn and j = √−1.
(1) Scalar multiplication. sA = [sA(i1, i2, ..., in)], where s is a complex scalar. Since s = s1 + js2, the real and imaginary parts of the matrix sA are s1A1 − s2A2 and s1A2 + s2A1, respectively.
(2) Equality of two complex matrices. A = B means A(i1, i2, ..., in) = B(i1, i2, ..., in), where A and B have the same size. If A = A1 + jA2 and B = B1 + jB2, then the equality A = B means A1 = B1 and A2 = B2.
(3) Addition of two complex matrices. C = A ± B = C1 + jC2, where
C1 = [A1(i1, i2, . . . , in) ± B1(i1, i2, . . . , in)],
C2 = [A2(i1, i2, . . . , in) ± B2(i1, i2, . . . , in)].
(4) Multiplication of two complex matrices. Let A = [A(i1, i2, ..., in)] and B = [B(i1, i2, ..., in)] be two n-dimensional complex matrices of sizes a1 × a2 × ··· × an and b1 × b2 × ··· × bn, respectively.

(a) If n = 2m and (a_{m+1}, a_{m+2}, ..., a_n) = (b1, b2, ..., bm), then the complex matrix C = [C(k1, k2, ..., kn)] = [C1(k1, k2, ..., kn) + jC2(k1, k2, ..., kn)] = AB of size a1 × a2 × ··· × am × b_{m+1} × ··· × bn is defined as
C1(k1, k2, ..., kn) = Σ_{t1=0}^{b1−1} Σ_{t2=0}^{b2−1} ··· Σ_{tm=0}^{bm−1} [A1(k1, ..., km, t1, ..., tm) B1(t1, ..., tm, k_{m+1}, ..., kn)
− A2(k1, ..., km, t1, ..., tm) B2(t1, ..., tm, k_{m+1}, ..., kn)],

C2(k1, k2, ..., kn) = Σ_{t1=0}^{b1−1} Σ_{t2=0}^{b2−1} ··· Σ_{tm=0}^{bm−1} [A1(k1, ..., km, t1, ..., tm) B2(t1, ..., tm, k_{m+1}, ..., kn)
+ A2(k1, ..., km, t1, ..., tm) B1(t1, ..., tm, k_{m+1}, ..., kn)]. (10.47)
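For n = 2m, Eq. (10.47) is a contraction over the m middle indices, with real and imaginary parts combined by the usual complex-multiplication rule. A sketch (with arbitrary illustrative sizes; m = 2):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2
# Trailing m dims of A must match leading m dims of B: (a3, a4) = (b1, b2)
shape_a = (2, 3, 4, 5)
shape_b = (4, 5, 6, 7)
A1, A2 = rng.standard_normal(shape_a), rng.standard_normal(shape_a)
B1, B2 = rng.standard_normal(shape_b), rng.standard_normal(shape_b)

# Eq. (10.47): contract the trailing m indices of A with the leading m of B.
C1 = np.tensordot(A1, B1, axes=m) - np.tensordot(A2, B2, axes=m)
C2 = np.tensordot(A1, B2, axes=m) + np.tensordot(A2, B1, axes=m)

# Cross-check against ordinary complex arithmetic: C = AB, A = A1 + jA2, B = B1 + jB2.
C = np.tensordot(A1 + 1j * A2, B1 + 1j * B2, axes=m)
assert np.allclose(C.real, C1) and np.allclose(C.imag, C2)
print("Eq. (10.47) agrees with complex tensordot; C has shape", C1.shape)
```

The result has size a1 × a2 × b3 × b4, i.e., (2, 3, 6, 7), as Definition (4a) prescribes.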
(b) If n = 2m + 1, (a_{m+1}, a_{m+2}, ..., a_{2m}) = (b1, b2, ..., bm), and a_n = b_n, then the complex matrix C = [C(k1, k2, ..., kn)] = [C1(k1, k2, ..., kn) + jC2(k1, k2, ..., kn)] = AB of size a1 × a2 × ··· × am × b_{m+1} × ··· × bn is defined as
C1(k1, k2, ..., kn) = Σ_{t1=0}^{b1−1} Σ_{t2=0}^{b2−1} ··· Σ_{tm=0}^{bm−1} [A1(k1, ..., km, t1, ..., tm, kn) B1(t1, ..., tm, k_{m+1}, ..., kn)
− A2(k1, ..., km, t1, ..., tm, kn) B2(t1, ..., tm, k_{m+1}, ..., kn)],

C2(k1, k2, ..., kn) = Σ_{t1=0}^{b1−1} Σ_{t2=0}^{b2−1} ··· Σ_{tm=0}^{bm−1} [A1(k1, ..., km, t1, ..., tm, kn) B2(t1, ..., tm, k_{m+1}, ..., kn)
+ A2(k1, ..., km, t1, ..., tm, kn) B1(t1, ..., tm, k_{m+1}, ..., kn)]. (10.48)
(5) Conjugate transpose of complex matrices. Let A = [A(i1, i2, ..., in)] be an n-dimensional matrix. The conjugate transpose of A is defined as follows:
(a) If n = 2m, then A* = [B(j1, j2, ..., jn)] = [A*(j_{m+1}, j_{m+2}, ..., j_n, j1, ..., jm)].
(b) If n = 2m + 1, then A* = [B(j1, j2, ..., jn)] = [A*(j_{m+1}, j_{m+2}, ..., j_{2m}, j1, ..., jm, jn)].
(6) Identity matrix. An n-dimensional matrix I of size a1 × a2 × ··· × an is called an identity matrix if IA = AI = A for any n-dimensional matrix A. Note the following:
(a) If n = 2m, then the size of matrix I should satisfy
(a1, a2, . . . , am) = (am+1, am+2, . . . , an). (10.49)
(b) If n = 2m + 1, then the size of matrix I should satisfy
(a1, a2, . . . , am) = (am+1, am+2, . . . , an−1). (10.50)
Note also that for every given integer n and size a1 × a2 × ··· × an satisfying conditions (a) and (b), there is one, and only one, identity matrix. That identity matrix is defined as follows:
(c) If n = 2m, then the identity matrix I = [I(i1, i2, ..., in)] of size a1 × ··· × am × a1 × ··· × am is defined as

I(i1, i2, ..., in) = 1 if (i1, i2, ..., im) = (i_{m+1}, i_{m+2}, ..., in), and 0 otherwise. (10.51)
(d) If n = 2m + 1, then the identity matrix I = [I(i1, i2, ..., in)] of size a1 × ··· × am × a1 × ··· × am × an is defined as

I(i1, i2, ..., in) = 1 if (i1, i2, ..., im) = (i_{m+1}, i_{m+2}, ..., i_{n−1}), and 0 otherwise. (10.52)
Let m and k be two integers, and let A, B, and C be n-dimensional complex matrices. The following identities hold:
A(B + C) = AB + AC,    (B + C)A = BA + CA,
(m + k)A = mA + kA,    m(A + B) = mA + mB,
m(AB) = (mA)B = A(mB),    A(BC) = (AB)C,
(A*)* = A,    (A + B)* = A* + B*,    (AB)* = B*A*. (10.53)
10.6 3D Complex HTs
Recall that from the definition of a 2D complex Hadamard matrix, it follows that a complex Hadamard matrix is one whose (2 − 1)-dimensional layers (rows or columns), in each normal orientation of the axes, are mutually orthogonal. Similarly, we can define a 3D complex Hadamard matrix as follows:
Definition 10.6.1: A 3D complex Hadamard matrix H = (h_{i,j,k})_{i,j,k=1}^{n} of order n is called a regular 3D complex Hadamard matrix if the following conditions are satisfied:
Σ_{i=1}^{n} h_{i,a,r} h*_{i,b,r} = Σ_{j=1}^{n} h_{a,j,r} h*_{b,j,r} = n δ_{a,b},

Σ_{i=1}^{n} h_{i,q,a} h*_{i,q,b} = Σ_{k=1}^{n} h_{a,q,k} h*_{b,q,k} = n δ_{a,b},

Σ_{j=1}^{n} h_{p,j,a} h*_{p,j,b} = Σ_{k=1}^{n} h_{p,a,k} h*_{p,b,k} = n δ_{a,b}, (10.54)
where h_{a,b,c} ∈ {−1, +1, −j, +j}, j = √−1, and δ_{a,b} is the Kronecker delta, i.e., δ_{a,a} = 1 and δ_{a,b} = 0 if a ≠ b.
From the conditions of Eq. (10.54), it follows that for fixed i0, j0, k0, the matrices (h_{i0,j,k})_{j,k=1}^{n}, (h_{i,j0,k})_{i,k=1}^{n}, and (h_{i,j,k0})_{i,j=1}^{n} are 2D complex Hadamard matrices of order n. Two 3D complex Hadamard matrices of size 2 × 2 × 2 are given in Fig. 10.15.
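The conditions of Eq. (10.54) are easy to verify numerically. Since real ±1 entries are admissible members of {−1, +1, −j, +j}, the sketch below uses the real 3D Hadamard matrix h(i, j, k) = (−1)^{ij+jk+ki} of order 2 as a degenerate example (the matrices of Fig. 10.15 also use ±j entries); the checking function itself applies to any candidate matrix.

```python
import numpy as np

n = 2
# A 2 x 2 x 2 matrix with entries in {-1, +1}, a degenerate member of {-1, +1, -j, +j}
H = np.array([[[(-1) ** (i * j + j * k + k * i) for k in range(n)]
               for j in range(n)] for i in range(n)]).astype(complex)

def check_regular_3d(H):
    # Verify all six orthogonality conditions of Eq. (10.54)
    n = H.shape[0]
    ok = True
    for a in range(n):
        for b in range(n):
            d = n * (a == b)          # n * delta(a, b)
            for r in range(n):
                ok &= np.isclose(np.sum(H[:, a, r] * H[:, b, r].conj()), d)
                ok &= np.isclose(np.sum(H[a, :, r] * H[b, :, r].conj()), d)
                ok &= np.isclose(np.sum(H[:, r, a] * H[:, r, b].conj()), d)
                ok &= np.isclose(np.sum(H[a, r, :] * H[b, r, :].conj()), d)
                ok &= np.isclose(np.sum(H[r, :, a] * H[r, :, b].conj()), d)
                ok &= np.isclose(np.sum(H[r, a, :] * H[r, b, :].conj()), d)
    return bool(ok)

assert check_regular_3d(H)
print("all six orthogonality conditions of Eq. (10.54) hold")
```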
Higher-order 3D complex Hadamard matrices can be obtained by the Kronecker product.

Figure 10.15 Two 3D complex Hadamard matrices.

Figure 10.16 3D complex Hadamard matrix of size 4 × 4 × 4.

The 3D complex Hadamard matrix of size 4 × 4 × 4 constructed as the Kronecker product of the 3D Hadamard matrix of size 2 × 2 × 2 (see Fig. 10.2) and the 3D complex Hadamard matrix of size 2 × 2 × 2 from Fig. 10.15 (left matrix) is given in Fig. 10.16.
Denote by C = [C(m, n, k)] the 3D complex Hadamard matrix of size 4 × 4 × 4 given in Fig. 10.16, and let f be a 2D image matrix of order 4, regarded as a 3D matrix f = [f(i, 0, k)] of size 4 × 1 × 4. With this convention, the matrix D = [D(d1, 0, d3)] = C f is obtained from

D(m, 0, k) = Σ_{n=0}^{3} C(m, n, k) f(n, 0, k),    m, k = 0, 1, 2, 3. (10.55)
From this equation, we obtain
D(000) = −j f(000) + f(100) + j f(200) + f(300),
D(001) = f(001) + j f(101) − f(201) − j f(301),
D(002) = −j f(002) − f(102) − j f(202) − f(302),
D(003) = f(003) + j f(103) + f(203) + j f(303);

D(100) = f(000) + j f(100) − f(200) − j f(300),
D(101) = −j f(001) − f(101) + j f(201) + f(301),
D(102) = f(002) + j f(102) + f(202) + j f(302),
D(103) = −j f(003) − f(103) − j f(203) − f(303);

D(200) = j f(000) + f(100) + j f(200) + f(300),
D(201) = −f(001) − j f(101) − f(201) − j f(301),
D(202) = −j f(002) − f(102) + j f(202) + f(302),
D(203) = f(003) + j f(103) − f(203) − j f(303);

D(300) = −f(000) − j f(100) − f(200) − j f(300),
D(301) = j f(001) + f(101) + j f(201) + f(301),
D(302) = f(002) + j f(102) − f(202) − j f(302),
D(303) = −j f(003) − f(103) + j f(203) + f(303).
(10.56)
Or, by ignoring the second coordinate, we obtain
D(0, 0) = [f(1, 0) + f(3, 0)] − j[f(0, 0) − f(2, 0)],
D(0, 1) = [f(0, 1) − f(2, 1)] + j[f(1, 1) − f(3, 1)],
D(0, 2) = −[f(1, 2) + f(3, 2)] − j[f(0, 2) + f(2, 2)],
D(0, 3) = [f(0, 3) + f(2, 3)] + j[f(1, 3) + f(3, 3)];

D(1, 0) = [f(0, 0) − f(2, 0)] + j[f(1, 0) − f(3, 0)],
D(1, 1) = −[f(1, 1) − f(3, 1)] − j[f(0, 1) − f(2, 1)],
D(1, 2) = [f(0, 2) + f(2, 2)] + j[f(1, 2) + f(3, 2)],
D(1, 3) = −[f(1, 3) + f(3, 3)] − j[f(0, 3) + f(2, 3)];

D(2, 0) = [f(1, 0) + f(3, 0)] + j[f(0, 0) + f(2, 0)],
D(2, 1) = −[f(0, 1) + f(2, 1)] − j[f(1, 1) + f(3, 1)],
D(2, 2) = −[f(1, 2) − f(3, 2)] − j[f(0, 2) − f(2, 2)],
D(2, 3) = [f(0, 3) − f(2, 3)] + j[f(1, 3) − f(3, 3)];

D(3, 0) = −[f(0, 0) + f(2, 0)] − j[f(1, 0) + f(3, 0)],
D(3, 1) = [f(1, 1) + f(3, 1)] + j[f(0, 1) + f(2, 1)],
D(3, 2) = [f(0, 2) − f(2, 2)] + j[f(1, 2) − f(3, 2)],
D(3, 3) = −[f(1, 3) − f(3, 3)] − j[f(0, 3) − f(2, 3)].
(10.57)
10.7 Construction of (λ, μ) High-Dimensional Generalized Hadamard Matrices
An n-dimensional matrix of order m is defined as
A = [Ai1,i2,...,in], i1, i2, . . . , in = 1, 2, . . . ,m. (10.58)
The set of elements of the matrix in Eq. (10.58) with fixed values i_{j1}, i_{j2}, ..., i_{jk} of the indices i_{j1}, i_{j2}, ..., i_{jk} (1 ≤ jr ≤ n, 1 ≤ k ≤ n − 1) defines a k-tuple section of orientation (i_{j1}, i_{j2}, ..., i_{jk}), and is given by an (n − k)-dimensional matrix of order m. The matrix
B_{i1,i2,...,in} = A_{i_{j1}, i_{j2}, ..., i_{jn}} (10.59)

is called a transposition of the matrix in Eq. (10.58) according to the substitution

( i1     i2     ···  in
  i_{j1} i_{j2} ···  i_{jn} ). (10.60)
The transposed matrix will be denoted by34

A ( i1     i2     ···  in
    i_{j1} i_{j2} ···  i_{jn} ). (10.61)
Let [A]n = [A_{i1,i2,...,in}] and [B]r = [B_{j1,j2,...,jr}] be n- and r-dimensional matrices of order m, respectively (i1, i2, ..., in, j1, ..., jr = 1, 2, ..., m).
Definition 10.7.1:34 The (λ, μ) convolute product of the matrix [A]n with the matrix [B]r, with decomposition by indices s and c, is the t-dimensional matrix [D]t of order m defined as

[D]t = [D_{l,s,k}] = (λ,μ)([A]n [B]r) = [ Σ_{(c)} A_{l,s,c} B_{c,s,k} ], (10.62)

where

n = k + λ + μ,    r = ν + λ + μ,    t = n + r − (λ + 2μ),
l = (l1, l2, ..., lk),    s = (s1, s2, ..., sλ),    c = (c1, c2, ..., cμ),    k = (k1, k2, ..., kν). (10.63)
Now we introduce the concept of a multidimensional generalized Hadamard matrix.

Definition 10.7.2:8,35 An n-dimensional matrix [H]n = [H_{i1,i2,...,in}] (i1, i2, ..., in = 1, 2, ..., m) whose elements are p'th roots of unity will be called an n-dimensional generalized Hadamard matrix of order m if all (n − 1)-dimensional parallel sections of orientation (il), 1 ≤ l ≤ n, are mutually orthogonal matrices, i.e.,

Σ_r ··· Σ_t ··· Σ_z H_{r,...,α,...,z} H*_{r,...,β,...,z} = m^{n−1} δ_{α,β}, (10.64)
where (r, ..., t, ..., z) runs over all arrangements of (i1, ..., il, ..., in), and δ_{α,β} is the Kronecker symbol.
Let H′ be an n-dimensional matrix of order m, and let H′′ be a conjugate transpose of the matrix H′ with respect to several given indices.
Definition 10.7.3:8,35 The matrix H′ of order m will be called (λ, μ) orthogonal by all axial-oriented directions with parameters λ, μ if, for fixed values λ, μ (μ ≠ 0), the conditions
(λ,μ)(H′_t H′′_t) = m^μ E(λ, k),    t = 1, 2, ..., N, (10.65)
where k = n − λ − μ, E(λ, k) is a (λ, k)-dimensional identity matrix,34 and
N = n!/(λ! μ! k!) if μ ≠ k, and N = n!/(2 λ! μ! k!) if μ = k (10.66)

are satisfied.
Remark 10.7.1: The concept of the multidimensional (λ, μ)-orthogonal matrix coincides with the concept of the multidimensional generalized Hadamard matrix [H(p,m)]n if λ + μ = n − 1.
We emphasize the following two special cases:
• For λ = 0 and μ = n − 1, we have a general n-dimensional generalized Hadamard matrix [H(p,m)]n. In this case, Eq. (10.65) can be rewritten as
(0,n−1)(H′_t H′′_t) = m^{n−1} E(0, 1),    t = 1, 2, ..., n, (10.67)
where

H′_t = (H′) ( i1 i2 ··· i_{t−1} i_t
              i2 i3 ··· i_t     i1 ),    H′′_t = (H′′) ( i_t i_{t+1} ··· i_{n−1} i_n
                                                         i_n i_t     ··· i_{n−2} i_{n−1} ). (10.68)
• For λ = n − 2 and μ = 1, we obtain a regular n-dimensional generalized Hadamard matrix [H(p,m)]n satisfying the following relationships:
(n−2,1)(H′_{t,q} H′′_{t,q}) = m E(n − 2, 1),    t = 1, 2, ..., n,  q = t + 1, t + 2, ..., n, (10.69)
where

H′_{t,q} = (H′) ( i1 ··· i_{t−1} i_t i_q i_{q+1} ··· i_n
                  i2 ··· i_t     i1  i_n i_q     ··· i_{n−1} ),

H′′_{t,q} = (H′′) ( i1 ··· i_{t−1} i_t i_q i_{q+1} ··· i_n
                    i2 ··· i_t     i1  i_n i_q     ··· i_{n−1} ). (10.70)
Theorem 10.7.1:35 If a generalized Hadamard matrix H(p,m) exists, then there is a 3D generalized Hadamard matrix [H(p,m)]3.
Proof: First, we recall the definition of a generalized Hadamard matrix. A square matrix H(p,m) of order m with elements that are p'th roots of unity is called a generalized Hadamard matrix if HH* = H*H = mI_m, where H* is the conjugate-transpose matrix of H (for more details about such matrices, see Chapter 11).
Now, let H1 = H(p,m) be a generalized Hadamard matrix,

H1 = {h_{i,j}} = {γ_p^{φ(i,j)}},    i, j = 0, 1, 2, ..., m − 1. (10.71)
According to this definition, we have

Σ_{j=0}^{m−1} γ_p^{φ(i1,j) − φ(i2,j)} = m if i1 = i2, and 0 if i1 ≠ i2. (10.72)
The matrix H1^(2) = H1 ⊗ H1 = {h^(2)_{i,j}}_{i,j=0}^{m^2−1} can be defined as

{h^(2)_{i,j}} = {h^(2)_{m i1 + i0, m j1 + j0}} = {h_{i1,j1} h_{i0,j0}} = {γ_p^{φ(i1,j1) + φ(i0,j0)}}, (10.73)
where i, j = 0, 1, ..., m^2 − 1 and i0, i1, j0, j1 = 0, 1, ..., m − 1.
Now, consider the 3D matrix A = [H(p,m)]3 with elements A_{i1,i2,i3} (i1, i2, i3 = 0, 1, ..., m − 1),

A = {A_{i1,i2,i3}} = {h^(2)_{i1(m+1), m i2 + i3}} = {γ_p^{φ(i1,i2) + φ(i1,i3)}}. (10.74)
In other words, any section of the matrix A = {A_{i1,i2,i3}} of orientation i1 is the i1(m + 1)'th row of the matrix H1^(2). We now prove that A is the 3D generalized Hadamard matrix [H(p,m)]3. To do so, we check the matrix system in Eq. (10.67), which can be represented as
(0,2)(A^1_t A^2_t) = m^2 E(0, 1),    t = 1, 2, 3, (10.75)
where E(0, 1) is an identity matrix of order m, and
A^1_1 = A,    A^2_1 = (A*) ( i1 i2 i3
                             i3 i1 i2 ),

A^1_2 = A ( i1 i2
            i2 i1 ),    A^2_2 = (A*) ( i2 i3
                                       i3 i2 ),

A^1_3 = A ( i1 i2 i3
            i2 i3 i1 ),    A^2_3 = A*. (10.76)
Now we check the system in Eq. (10.75) for the matrix A defined by Eq. (10.74).
(1) (0,2)( (AA*) ( i1 i2 i3
                   i3 i1 i2 ) ) = m^2 E(0, 1), i.e.,

Σ_{i2=0}^{m−1} Σ_{i3=0}^{m−1} A_{i1,i2,i3} A*_{j1,i2,i3} = m^2 δ_{i1,j1}, (10.77)

or, according to Eqs. (10.72) and (10.74),

Σ_{i2=0}^{m−1} Σ_{i3=0}^{m−1} γ_p^{φ(i1,i2) + φ(i1,i3) − φ(j1,i2) − φ(j1,i3)} = m^2 δ_{i1,j1}. (10.78)
(2) (0,2)( A ( i1 i2
               i2 i1 ) (A*) ( i2 i3
                              i3 i2 ) ) = m^2 E(0, 1), i.e.,

Σ_{i1=0}^{m−1} Σ_{i3=0}^{m−1} A_{i1,i2,i3} A*_{i1,j2,i3} = m^2 δ_{i2,j2}, (10.79)

or, according to Eqs. (10.72) and (10.74),

Σ_{i1=0}^{m−1} Σ_{i3=0}^{m−1} γ_p^{φ(i1,i2) + φ(i1,i3) − φ(i1,j2) − φ(i1,i3)} = m^2 δ_{i2,j2}. (10.80)
(3) (0,2)( A ( i1 i2 i3
               i2 i3 i1 ) A* ) = m^2 E(0, 1), i.e.,

Σ_{i1=0}^{m−1} Σ_{i2=0}^{m−1} A_{i1,i2,i3} A*_{i1,i2,j3} = m^2 δ_{i3,j3}, (10.81)

or, according to Eqs. (10.72) and (10.74),

Σ_{i1=0}^{m−1} Σ_{i2=0}^{m−1} γ_p^{φ(i1,i2) + φ(i1,i3) − φ(i1,i2) − φ(i1,j3)} = m^2 δ_{i3,j3}. (10.82)
Hence, the matrix A defined by Eq. (10.74) is the 3D generalized Hadamard matrix [H(p,m)]3.
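The construction of Theorem 10.7.1 can be tested numerically. Taking H(3, 3) to be the Fourier matrix, so that φ(i, j) = ij mod 3 and γ = e^{2πj/3} (our illustrative choice), the sketch below builds A by Eq. (10.74) and checks the three orthogonality sums of Eqs. (10.78), (10.80), and (10.82):

```python
import numpy as np

m = 3
gamma = np.exp(2j * np.pi / m)
phi = lambda i, j: (i * j) % m          # H(3, 3) as the Fourier matrix F3

# Eq. (10.74): A(i1, i2, i3) = gamma^(phi(i1, i2) + phi(i1, i3))
A = np.array([[[gamma ** (phi(i1, i2) + phi(i1, i3))
                for i3 in range(m)] for i2 in range(m)] for i1 in range(m)])

delta = np.eye(m)
# Eq. (10.78): sum over (i2, i3) of A[i1] conj(A[j1]) = m^2 delta(i1, j1)
S1 = np.einsum("abc,dbc->ad", A, A.conj())
# Eq. (10.80): sum over (i1, i3) = m^2 delta(i2, j2)
S2 = np.einsum("abc,adc->bd", A, A.conj())
# Eq. (10.82): sum over (i1, i2) = m^2 delta(i3, j3)
S3 = np.einsum("abc,abd->cd", A, A.conj())
for S in (S1, S2, S3):
    assert np.allclose(S, m ** 2 * delta)
print("A is a 3D generalized Hadamard matrix [H(3, 3)]_3")
```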
The generalized Hadamard matrices contained in the 3D generalized Hadamard matrix [H(3, 3)]3 of order 3 are given below.
• Generalized Hadamard matrices parallel to the plane (X, Y) (rows separated by semicolons):

(1 1 1; 1 x1 x2; 1 x2 x1),  (x1 x2 1; x2 x1 1; 1 1 1),  (x1 1 x2; 1 1 1; x2 1 x1). (10.83)
• Generalized Hadamard matrices parallel to the plane (X, Z):

(1 1 1; x1 x2 1; x1 1 x2),  (1 x1 x2; x2 x1 1; 1 1 1),  (1 x2 x1; 1 1 1; x2 1 x1). (10.84)
• Generalized Hadamard matrices parallel to the plane (Y, Z):

(1 1 1; x2 x1 1; x2 1 x1),  (x2 x1 1; 1 x1 x2; 1 1 1),  (x1 x2 1; 1 1 1; x1 1 x2). (10.85)
References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in Mathematics 1168, Springer-Verlag, Berlin (1985).
2. H. Harmuth, Sequency Theory, Foundations and Applications, Academic Press, New York (1977).
3. P. J. Shlichta, "Three- and four-dimensional Hadamard matrices," Bull. Am. Phys. Soc., Ser. 11 16, 825–826 (1971).
4. S. S. Agaian, "On three-dimensional Hadamard matrix of Williamson type," (Russian–Armenian summary) Akad. Nauk Armenia SSR Dokl. 72, 131–134 (1981).
5. P. J. Shlichta, "Higher dimensional Hadamard matrices," IEEE Trans. Inf. Theory IT-25 (5), 566–572 (1979).
6. S. S. Agaian, "A new method for constructing Hadamard matrices and the solution of the Shlichta problem," in Proc. of 6th Hungarian Coll. Comb., Budapest, Hungary, 6–11, pp. 2–3 (1981).
7. A. M. Trachtman and B. A. Trachtman, Foundation of the Theory of Discrete Signals on Finite Intervals (in Russian), Nauka, Moscow (1975).
8. S. S. Agaian, "Two and high dimensional block Hadamard matrices," (in Russian) Math. Prob. Comput. Sci. 12, 5–50, Yerevan, Armenia (1984).
9. K. Ma, "Equivalence classes of n-dimensional proper Hadamard matrices," Austral. J. Comb. 25, 3–17 (2002).
10. Y. X. Yang, "The proofs and some conjectures on higher dimensional Hadamard matrices," Kexue Tongbao 31, 1662–1667 (1986) (English trans.).
11. S. S. Agaian and H. Sarukhanyan, "Three dimensional Hadamard matrices," in Proc. of CSIT-2003, 271–274, NAS RA, Yerevan, Armenia (2003).
12. W. de Launey, "A note on n-dimensional Hadamard matrices of order 2t and Reed–Muller codes," IEEE Trans. Inf. Theory 37 (3), 664–667 (1991).
13. X.-B. Liang, "Orthogonal designs with maximal rates," IEEE Trans. Inf. Theory 49 (10), 2468–2503 (2003).
14. K. J. Horadam, Hadamard Matrices and Their Applications, Princeton University Press, Princeton (2006).
15. Q. K. Trinh, P. Fan, and E. M. Gabidulin, "Multilevel Hadamard matrices and zero correlation zone sequences," Electron. Lett. 42 (13), 748–750 (2006).
16. H. M. Gastineau-Hills and J. Hammer, "Kronecker products of systems of higher-dimensional orthogonal designs," in Combinatorial Mathematics X, Adelaide, 1982, Lecture Notes in Math. 1036, 206–216, Springer, Berlin (1983).
17. X. Yang and Y. X. Yang, Theory and Applications of Higher-Dimensional Hadamard Matrices, Kluwer, Dordrecht (2001).
18. V. Testoni and M. H. M. Costa, "Fast embedded 3D-Hadamard color video codec," presented at XXV Simpósio Brasileiro de Telecomunicações—SBrT'2007, Recife, PE, Brazil (Sept. 2007).
19. W. de Launey and R. M. Stafford, "Automorphisms of higher-dimensional Hadamard matrices," J. Combin. Des. 16 (6), 507–544 (2008).
20. W. de Launey, "(0, G)-designs and applications," Ph.D. thesis, University of Sydney (1987).
21. J. Hammer and J. Seberry, "Higher dimensional orthogonal designs and Hadamard matrices," Congr. Numer. 31, 95–108 (1981).
22. J. Seberry, "Higher dimensional orthogonal designs and Hadamard matrices," in Combinatorial Mathematics VII, Lecture Notes in Math. 829, 220–223, Springer, New York (1980).
23. Y. X. Yang, "On the classification of 4-dimensional 2 order Hadamard matrices," preprint (in English) (1986).
24. W. de Launey, "On the construction of n-dimensional designs from 2-dimensional designs," Australas. J. Combin. 1, 67–81 (1990).
25. K. Nyberg and M. Hermelin, "Multidimensional Walsh transform and a characterization of bent functions," in Proc. of IEEE Information Theory Workshop, Information Theory for Wireless Networks, July 1–6, pp. 1–4 (2007).
26. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer-Verlag, Berlin (1975).
27. N. J. Vilenkin, "On a class of complete orthogonal systems," (in Russian) Izv. AN SSSR 11, 363–400 (1947).
28. S. S. Agaian and A. Matevosian, "Fast Hadamard transform," Math. Prob. Cybern. Comput. Technol. 10, 73–90 (1982).
29. K. Egiazarian, J. Astola, and S. Agaian, "Orthogonal transforms based on generalized Fibonacci recursions," in Proc. of Workshop on Spectral Transform and Logic Design for Future Digital Systems, June, Tampere, Finland, pp. 455–475 (2000).
30. S. S. Agaian and H. Sarukhanyan, "Williamson type M-structure," in Proc. of 2nd Int. Workshop on Transforms and Filter Banks, Brandenburg an der Havel, Germany, TICSP Ser. 4, pp. 223–249 (2000).
31. J. Astola, K. Egiazarian, K. Öktem, and S. Agaian, "Binary polynomial transforms for nonlinear signal processing," in Proc. of IEEE Workshop on Nonlinear Signal and Image Processing, Sept., Mackinac Island, MI, pp. 132–141 (1997).
32. X. Yang and Y. X. Yang, Theory and Applications of Higher-Dimensional Hadamard Matrices, Kluwer Academic Publishers, Dordrecht (2001).
33. H. E. Chrestenson, "A class of generalized Walsh functions," Pacific J. Math. 5 (1), 17–31 (1955).
34. N. P. Sokolov, Introduction in Multidimensional Matrix Theory (in Russian), Naukova Dumka, Kiev (1972).
35. S. S. Agaian and K. O. Egiazarian, "Generalized Hadamard matrices," (in Russian) Math. Prob. Comput. Sci. 12, 51–88, Yerevan, Armenia (1984).
Chapter 11
Extended Hadamard Matrices
11.1 Generalized Hadamard Matrices
Generalized Hadamard matrices were introduced by Butson in 1962.1
Generalized Hadamard matrices arise naturally in the study of error-correcting codes, orthogonal arrays, and affine designs (see Refs. 2–4). In digital signal/image processing, they are used in the form of fast transforms based on the Walsh, Fourier, and Vilenkin–Chrestenson–Kronecker systems. Surveys of generalized Hadamard matrix constructions can be found in Refs. 2 and 5–12.
11.1.1 Introduction and statement of problems
Definition 11.1.1.1:1 A square matrix H(p,N) of order N with elements that are p'th roots of unity is called a generalized Hadamard matrix if
HH* = H*H = NI_N,
where H∗ is the conjugate-transpose matrix of H.
Remarks: Generalized Hadamard matrices include the following special cases:
• A Sylvester–Hadamard matrix if p = 2, N = 2^n.13
• A real Hadamard matrix if p = 2, N = 4t.5
• A complex Hadamard matrix if p = 4, N = 2t.14
• A Fourier matrix if p = N.
Note: Vilenkin–Kronecker systems are generalized Hadamard H(p, p) and H(p, p^n) matrices, respectively.8
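The remark that the Fourier matrix is the case p = N can be checked directly; a sketch for the illustrative order N = 5:

```python
import numpy as np

N = 5
gamma = np.exp(2j * np.pi / N)                        # primitive N'th root of unity
F = gamma ** (np.outer(np.arange(N), np.arange(N)))   # Fourier matrix F_N

# Definition 11.1.1.1: H H* = H* H = N I_N
assert np.allclose(F @ F.conj().T, N * np.eye(N))
assert np.allclose(F.conj().T @ F, N * np.eye(N))
print("F_5 is a generalized Hadamard matrix H(5, 5)")
```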
Example 11.1.1.1: A generalized Hadamard matrix H(3, 6) has the following form:

H(3, 6) =
[ x2 x0 x1 x1 x0 x2
  x0 x2 x1 x0 x1 x2
  x0 x0 x0 x0 x0 x0
  x2 x0 x2 x0 x1 x1
  x0 x2 x2 x1 x0 x1
  x2 x2 x0 x1 x1 x0 ], (11.1)
where x0 = 1, x1 = −(1/2) + j(√3/2), x2 = −(1/2) − j(√3/2), j = √−1. A generalized Hadamard matrix H(p,N) with the first row and first column of the form (1 1 ... 1) is called a normalized matrix.
For example, from H(3, 6), one can generate a normalized matrix in two stages. First, multiplying columns 1, 3, 4, and 6 of the matrix H(3, 6) by x1, x2, x2, and x1, respectively, we obtain the matrix
H1(3, 6) =
[ 1  1  1  1  1  1
  x1 x2 1  x2 x1 1
  x1 1  x2 x2 1  x1
  1  1  x1 x2 x1 x2
  x1 x2 x1 1  1  x2
  1  x2 x2 1  x1 x1 ]. (11.2)
Then, multiplying rows 2, 3, and 5 of the matrix H1(3, 6) by x2, we obtain the normalized matrix corresponding to the generalized Hadamard matrix H(3, 6):
Hn(3, 6) =
[ 1  1  1  1  1  1
  1  x1 x2 x1 1  x2
  1  x2 x1 x1 x2 1
  1  1  x1 x2 x1 x2
  1  x1 1  x2 x2 x1
  1  x2 x2 1  x1 x1 ]. (11.3)
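The two-stage normalization above generalizes to any generalized Hadamard matrix: scale each column by the conjugate of its first-row entry, then each row by the conjugate of its (new) first-column entry; since the entries are unimodular, multiplying by the conjugate is the same as dividing by the entry. A sketch (our own helper names), starting from a phase-scrambled Fourier matrix H(3, 3):

```python
import numpy as np

def is_gh(H):
    # Definition 11.1.1.1: H H* = N I_N
    N = H.shape[0]
    return np.allclose(H @ H.conj().T, N * np.eye(N))

def normalize(H):
    # Stage 1: scale each column by the conjugate of its first-row entry.
    H = H * H[0, :].conj()
    # Stage 2: scale each row by the conjugate of its (new) first-column entry.
    H = H * H[:, [0]].conj()
    return H

m = 3
w = np.exp(2j * np.pi / m)
F3 = w ** (np.outer(np.arange(m), np.arange(m)))
rng = np.random.default_rng(2)
# Scramble F3 with arbitrary cube-root phases on rows and columns; the result
# is still a generalized Hadamard matrix H(3, 3), but no longer normalized.
H = (w ** rng.integers(0, m, m))[:, None] * F3 * (w ** rng.integers(0, m, m))

Hn = normalize(H)
assert is_gh(Hn)
assert np.allclose(Hn[0, :], 1) and np.allclose(Hn[:, 0], 1)
print("normalized generalized Hadamard matrix obtained")
```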
Note that generalized Hadamard matrices can also be written in terms of a single primitive p'th root of unity. For example, the matrix Hn(3, 6) can be represented as follows:
Hn(3, 6) =
[ 1  1   1   1   1   1
  1  x   x^2 x   1   x^2
  1  x^2 x   x   x^2 1
  1  1   x   x^2 x   x^2
  1  x   1   x^2 x^2 x
  1  x^2 x^2 1   x   x ], (11.4)

where x = −(1/2) + j(√3/2).
In Refs. 1 and 13, it was proven that for any prime p, nonnegative integer m, and natural number k (m ≤ k), there exists an H(p^{2m}, p^k) matrix. If an H(2, N) matrix exists, then for any nonzero natural number p, an H(2p, N) matrix exists.
The Kronecker product of two generalized Hadamard matrices is also a generalized Hadamard matrix. For example,
H(3, 3) ⊗ H(3, 3) =
[ 1 1   1
  1 x   x^2
  1 x^2 x ] ⊗ [ 1 1   1
                1 x   x^2
                1 x^2 x ]

=
[ 1 1   1   1   1   1   1   1   1
  1 x   x^2 1   x   x^2 1   x   x^2
  1 x^2 x   1   x^2 x   1   x^2 x
  1 1   1   x   x   x   x^2 x^2 x^2
  1 x   x^2 x   x^2 1   x^2 1   x
  1 x^2 x   x   1   x^2 x^2 x   1
  1 1   1   x^2 x^2 x^2 x   x   x
  1 x   x^2 x^2 1   x   x   x^2 1
  1 x^2 x   x^2 x   1   x   1   x^2 ]. (11.5)
• If p is prime, then the generalized Hadamard matrix H(p, N) can exist only for N = pt, where t is a natural number.
• If p = 2, then the generalized Hadamard matrix H(p, 2p) can exist.
• If p^n is a prime power, then a generalized Hadamard matrix H(p^n, N) can exist only for N = pt, where t is a positive integer.
Problems for exploration: The inverse problem, i.e., the problem of constructing or proving the existence of the generalized Hadamard matrix H(p, pt) for any prime p, remains open.
More complete information about construction methods and applications of generalized Hadamard matrices can be found in Refs. 2, 11, and 15–30.
Definition 11.1.1.2:12 A square matrix H of order N with elements H_{jk} is called a complex Hadamard matrix if

• |H_{jk}| = 1 (unimodularity);
• HH* = H*H = NI_N (orthogonality), where H* is the conjugate-transpose matrix of H.
Definition 11.1.1.3: A square matrix H(p,N) of order N with elements x_k e^{jα_k} is called a parametric generalized Hadamard matrix if

HH* = H*H = NI_N,

where x_k is a p'th root of unity, α_k is a parameter, and H* is the conjugate-transpose matrix of H.
Problems for exploration: Investigate the construction of parametric generalized Hadamard matrices. The parametric generalized Hadamard matrices may play a key role in the theory of quantum information and encryption systems.
Example: N = 4,

H4 =
[ 1  1         1   1
  1  je^{jα}  −1  −je^{jα}
  1  −1        1  −1
  1  −je^{jα} −1   je^{jα} ],    where α ∈ [0, π), j = √−1. (11.6)
Any complex Hadamard matrix is equivalent to a dephased Hadamard matrix, in which all elements in the first row and first column are equal to unity. For N = 2, 3, and 5, all complex Hadamard matrices are equivalent to the Fourier matrix F_N. For N = 4, there is a continuous one-parameter family of inequivalent complex Hadamard matrices.
11.1.2 Some necessary conditions for the existence of generalized Hadamard matrices
First, we give some useful properties of generalized Hadamard matrices from Ref. 1.

Properties:
• The condition HH* = NI_N is equivalent to the condition H*H = NI_N; i.e., orthogonality of the distinct rows of the matrix H implies orthogonality of its distinct columns.
• Permuting the rows (columns) of H(p, N), or multiplying rows (columns) by a fixed root of unity, does not change the property of being a generalized Hadamard matrix. Note that if H1 = H(p1, N) is a generalized Hadamard matrix and r_p2 is a primitive p2'th root of unity, then H2 = r_p2 H1 = H(p3, N), where p3 = l.c.m.(p1, p2) (l.c.m. stands for the least common multiple).
• If H = (h_i,j), i, j = 1, 2, . . . , N, is a normalized generalized Hadamard matrix, then

Σ_(i=1..N) h_i,j = Σ_(i=1..N) h*_i,j = 0,   j = 2, 3, . . . , N,
Σ_(j=1..N) h_i,j = Σ_(j=1..N) h*_i,j = 0,   i = 2, 3, . . . , N.      (11.7)
• Let H1 = H(p1, N1) and H2 = H(p2, N2) be generalized Hadamard matrices; then, the matrix
H(p3,N3) = H(p1,N1) ⊗ H(p2,N2) (11.8)
is the generalized Hadamard matrix of order N3 = N1N2, where p3 = l.c.m.(p1, p2).
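The Kronecker-product property of Eq. (11.8) is easy to verify numerically; a sketch (helper names are ours), using the Fourier matrices F_p as the simplest generalized Hadamard matrices H(p, p):

```python
import numpy as np

def fourier_ghm(p):
    """F_p: the simplest generalized Hadamard matrix H(p, p)."""
    w = np.exp(2j * np.pi / p)
    return w ** np.outer(np.arange(p), np.arange(p))

def is_ghm(H):
    """Check HH* = N I_N for a square matrix H of order N."""
    N = H.shape[0]
    return np.allclose(H @ H.conj().T, N * np.eye(N))

H1, H2 = fourier_ghm(2), fourier_ghm(3)
H3 = np.kron(H1, H2)     # Eq. (11.8): an H(6, 6), since l.c.m.(2, 3) = 6
assert is_ghm(H1) and is_ghm(H2) and is_ghm(H3)
```

The elements of the product matrix are products of 2nd and 3rd roots of unity, hence 6th roots of unity, matching p3 = l.c.m.(p1, p2).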
Now, we want to construct a generalized Hadamard matrix H(p, N) for any nonprime number p. Note that if H(p, N) is a normalized generalized Hadamard matrix and p is a prime number, then N = pt; but, similar to the classical Hadamard matrix H(2, N), the following conditions are not necessary:
Extended Hadamard Matrices 347
{ N = p^2 t,         { N = 2pt,
{ N = p        or    { N = p.      (11.9)
The normalized generalized Hadamard matrices H(6, 10) and H(6, 14) are given as follows [z = 1/2 + j(√3/2)]:
H(6, 10) = [ 1  1    1    1    1    1    1    1    1    1
             1  z^4  z    z^5  z^3  z    z^3  z^3  z^5  z
             1  z    z^2  z^3  z^5  z^5  z    z^3  z^5  z^3
             1  z^5  z^3  z^2  z    z^5  z^3  z^5  z^3  z
             1  z^3  z^5  z    z^4  z    z    z^5  z^3  z^3
             1  z^3  z^3  z^3  z^3  z^3  1    1    1    1
             1  z    z    z^5  z^5  z^4  z^3  1    z^2  z^4
             1  z    z^5  z^3  z^3  z^2  z^4  z^3  z^2  1
             1  z^5  z^3  z^5  z^5  z^2  1    z^2  z^3  z^4
             1  z^3  z^5  z    z    z^4  z^4  z^2  1    z^3 ],      (11.10a)
H(6, 14) = [ 1  1    1    1    1    1    1    1    1    1    1    1    1    1
             1  1    z^5  z^5  z^3  z    z    z^4  z^2  z^3  z    z^3  z^5  z^3
             1  z^5  z^4  z^3  z    z^3  z    1    z^5  z^4  z^3  z^3  z    z
             1  z^5  z^3  z^2  z^5  z    z^3  z^4  z    z    z^4  z^5  z    z^3
             1  z^3  z    z^5  z^2  z^3  z^5  z^4  z^3  z    z^5  z^4  z    z
             1  z    z^3  z    z^3  z^4  z^5  1    z    z    z^3  z^3  z^4  z^5
             1  z    z    z^3  z^5  z^5  1    z^4  z^3  z^5  z^3  z    z^3  z^2
             1  z^2  z^4  z^4  z^4  z^4  z^2  z^3  z^5  z    z    z    z    z^5
             1  z^4  z^5  z    z^5  z^3  z^3  z    z^3  1    1    z^2  z^4  z^2
             1  z^3  1    z    z    z^3  z    z^3  1    z^3  z^4  1    z^4  z^4
             1  z^5  z    z^4  z^3  z    z^3  z    z^4  z^2  z^3  1    z^4  1
             1  z^3  z    z^3  z^4  z    z^5  z    1    z^4  1    z^3  z^2  z^4
             1  z    z^3  z    z    1    z^3  z^3  z^4  z^4  1    z^4  z^3  1
             1  z^3  z^3  z^5  z    z^5  z^4  z    z^2  z^4  z^2  1    1    z^3 ].      (11.10b)
11.1.3 Construction of generalized Hadamard matrices of new orders

Now we consider a recurrent algorithm for the construction of the generalized Hadamard matrix H(p^n, p^n), where p is a prime and n is a natural number.
Definition 11.1.3.1:11 We call square matrices X and Y of order k, with elements zero and p'th roots of unity, a generalized two-element hyperframe, denoted Γ(p, k) = {X(p, k), Y(p, k)}, if the following conditions are satisfied:

X ∗ Y = 0,
X ± Y is a matrix with elements p'th roots of unity,
XY^H = YX^H,
XX^H + YY^H = kI_k,      (11.11)
where ∗ denotes the Hadamard (pointwise) product, and the superscript H denotes the Hermitian transpose.
Lemma 11.1.3.1: If there is a generalized Hadamard matrix H(p, 2m), then there is also a generalized two-element hyperframe Γ(2p, 2m).
Proof: Represent the matrix H(p, 2m) as

H(p, 2m) = [ A  B
             D  C ]      (11.12)

and denote

X = [ A  0        Y = [  0  B
      0  C ],         −D  0 ].      (11.13)
Prove that {X, Y} is a generalized hyperframe. The first two conditions of Eq. (11.11) are evident. We now prove the next two conditions. Because H(p, 2m) is a generalized Hadamard matrix, the following conditions hold:
AA^H + BB^H = DD^H + CC^H = 2mI_m,
AD^H + BC^H = 0.      (11.14)
Now it is not difficult to check the remaining two conditions. This completes the proof.
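The block construction of Eqs. (11.12) and (11.13) can be exercised directly; a sketch (the helper name is ours), using the order-4 Sylvester matrix as H(2, 4):

```python
import numpy as np

def hyperframe_from_ghm(H):
    """Split H(p, 2m) into blocks A, B, D, C and form X, Y of Eq. (11.13)."""
    m = H.shape[0] // 2
    A, B = H[:m, :m], H[:m, m:]
    D, C = H[m:, :m], H[m:, m:]
    Z = np.zeros((m, m), dtype=complex)
    X = np.block([[A, Z], [Z, C]])
    Y = np.block([[Z, B], [-D, Z]])
    return X, Y

H = np.kron([[1, 1], [1, -1]], [[1, 1], [1, -1]])   # H(2, 4), so 2m = 4
X, Y = hyperframe_from_ghm(H.astype(complex))
k = H.shape[0]
assert np.allclose(X * Y, 0)                          # pointwise product is zero
assert np.allclose(X @ Y.conj().T, Y @ X.conj().T)    # XY^H = YX^H
assert np.allclose(X @ X.conj().T + Y @ Y.conj().T, k * np.eye(k))
```

The third condition reduces to AD^H + BC^H = 0, which is exactly the row-block orthogonality of Eq. (11.14).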
Lemma 11.1.3.2: If there is a generalized Hadamard matrix H(p, n), then there is a generalized two-element hyperframe Γ(p, n).
Proof: Let H(p, n) = (h_i,j), i, j = 0, 1, . . . , n − 1. Consider the following matrices:

X = [ a_0 h_0              Y = [ b_0 h_0
      a_1 h_1                    b_1 h_1
      ...                        ...
      a_(n−1) h_(n−1) ],         b_(n−1) h_(n−1) ],      (11.15)

where h_i is the i'th row of the matrix H(p, n), and the numbers a_i, b_i satisfy the conditions

a_i b_i = 0,  a_i + b_i = 1,  a_i, b_i ∈ {0, 1},  i = 0, 1, . . . , n − 1.      (11.16)
Now it is not difficult to prove that X and Y satisfy the conditions in Eq. (11.11); i.e., Γ(p, n) = {X, Y} is a generalized hyperframe.
Now, let H and G be generalized Hadamard matrices H(p, m) of the following form:
H = [ h_0            G = [ h_1
      h_1                  −h_0
      h_2                  h_3
      h_3                  −h_2
      ...                  ...
      h_(m−2)              h_(m−1)
      h_(m−1) ],           −h_(m−2) ].      (11.17)
It is evident that
HG^H + GH^H = 0.      (11.18)
Theorem 11.1.3.1: Let H0 = H(p1, m) and G0 = H(p1, m) be generalized Hadamard matrices satisfying the condition of Eq. (11.18), and let Γ(p2, k) = {X, Y} be a generalized hyperframe. Then, the matrices

H_n = X ⊗ H_(n−1) + Y ⊗ G_(n−1),
G_n = X ⊗ G_(n−1) − Y ⊗ H_(n−1),   n ≥ 1      (11.19)

are

• generalized Hadamard matrices H(2p, mk^n), where p = l.c.m.(p1, p2), if l.c.m.(p1, 2) = 1 and l.c.m.(p2, 2) = 1;
• generalized Hadamard matrices H(p, mk^n), where p = l.c.m.(p1, p2), if p1 and/or p2 are even.
Proof: First, let us prove that H_n H_n^H = mk^n I_(mk^n). Using the properties of the Kronecker product, from Eq. (11.19) we obtain

H_n H_n^H = (X ⊗ H_(n−1) + Y ⊗ G_(n−1))(X^H ⊗ H_(n−1)^H + Y^H ⊗ G_(n−1)^H)
  = XX^H ⊗ H_(n−1)H_(n−1)^H + XY^H ⊗ H_(n−1)G_(n−1)^H + YX^H ⊗ G_(n−1)H_(n−1)^H + YY^H ⊗ G_(n−1)G_(n−1)^H
  = (XX^H + YY^H) ⊗ mk^(n−1) I_(mk^(n−1)) + XY^H ⊗ (H_(n−1)G_(n−1)^H + G_(n−1)H_(n−1)^H)
  = kI_k ⊗ mk^(n−1) I_(mk^(n−1)) = mk^n I_(mk^n).      (11.20)
Similarly, we can show that G_n G_n^H = mk^n I_(mk^n). Now we prove that H_n G_n^H + G_n H_n^H = 0. Indeed,

H_n G_n^H + G_n H_n^H = XX^H ⊗ (H_(n−1)G_(n−1)^H + G_(n−1)H_(n−1)^H) − XY^H ⊗ (H_(n−1)H_(n−1)^H − G_(n−1)G_(n−1)^H)
  + YX^H ⊗ (G_(n−1)G_(n−1)^H − H_(n−1)H_(n−1)^H) − YY^H ⊗ (H_(n−1)G_(n−1)^H + G_(n−1)H_(n−1)^H)
  = (XX^H − YY^H) ⊗ (H_(n−1)G_(n−1)^H + G_(n−1)H_(n−1)^H)
  − (XY^H + YX^H) ⊗ (H_(n−1)H_(n−1)^H − G_(n−1)G_(n−1)^H) = 0.      (11.21)
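The recursion of Eq. (11.19) can be exercised numerically. Below is a sketch for p1 = p2 = 2: H0 and G0 are built from the rows of the order-2 Hadamard matrix as in Eq. (11.17), and the hyperframe Γ(2, 2) is built as in Lemma 11.1.3.2 with (a_0, a_1) = (1, 0):

```python
import numpy as np

F2 = np.array([[1., 1.], [1., -1.]])
h0, h1 = F2
H0 = np.vstack([h0, h1])      # H_0 of Eq. (11.17)
G0 = np.vstack([h1, -h0])     # G_0: satisfies H0 G0^H + G0 H0^H = 0

# Hyperframe Γ(2, 2) per Lemma 11.1.3.2: X keeps row 0, Y keeps row 1.
X = np.vstack([h0, np.zeros(2)])
Y = np.vstack([np.zeros(2), h1])

H, G = H0, G0
for _ in range(3):            # three steps of the recursion (11.19)
    H, G = np.kron(X, H) + np.kron(Y, G), np.kron(X, G) - np.kron(Y, H)

N = H.shape[0]                # order mk^n = 2 * 2^3 = 16
assert np.allclose(H @ H.conj().T, N * np.eye(N))
assert np.allclose(H @ G.conj().T + G @ H.conj().T, 0)
```

Both invariants checked at the end are exactly the ones propagated in the proofs of Eqs. (11.20) and (11.21).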
From Theorem 11.1.3.1, the following is evident:
Corollary 11.1.3.1: Let p_i, i = 1, 2, . . . , k be prime numbers; then, there is a generalized Hadamard matrix H(2 p_1^(r_1) p_2^(r_2) · · · p_k^(r_k), m k_1^(t_1) k_2^(t_2) · · · k_k^(t_k)), where r_i, t_i are natural numbers.
Theorem 11.1.3.2: Let H_n, defined as in Theorem 11.1.3.1, satisfy the conditions of Eq. (11.19). Then H_n can be factorized into sparse matrices as

H_n = M_1 M_2 · · · M_(n+1),      (11.22)
where

M_(n+1) = I_(k^n) ⊗ H_0,
M_i = I_(k^(n−i)) ⊗ (X ⊗ I_(mk^(i−1)) + Y ⊗ P_(mk^(i−1))),
P_(mk^(i−1)) = I_(mk^(i−1)/2) ⊗ [  0  1
                                  −1  0 ],   i = 1, 2, . . . , n.      (11.23)
It is easy to show that H(p, p^n) exists where p is a prime number. Indeed, these matrices can be constructed using the Kronecker product. Let us give an example. Let p = 3; then, we have

H(3, 3) = [ 1  1    1
            1  a    a^2
            1  a^2  a ],

H(3, 9) = [ 1  1    1    1    1    1    1    1    1
            1  a    a^2  1    a    a^2  1    a    a^2
            1  a^2  a    1    a^2  a    1    a^2  a
            1  1    1    a    a    a    a^2  a^2  a^2
            1  a    a^2  a    a^2  1    a^2  1    a
            1  a^2  a    a    1    a^2  a^2  a    1
            1  1    1    a^2  a^2  a^2  a    a    a
            1  a    a^2  a^2  1    a    a    a^2  1
            1  a^2  a    a^2  a    1    a    1    a^2 ].      (11.24)
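The Kronecker construction of Eq. (11.24) extends to any power of 3; a short sketch:

```python
import numpy as np

a = np.exp(2j * np.pi / 3)
H3 = np.array([[1, 1, 1], [1, a, a**2], [1, a**2, a]])   # H(3, 3) of Eq. (11.24)

H = H3.copy()
for _ in range(2):            # H(3, 27) = H3 ⊗ H3 ⊗ H3
    H = np.kron(H, H3)

N = H.shape[0]
assert N == 27
assert np.allclose(H @ H.conj().T, N * np.eye(N))
```

H(3, 9) in Eq. (11.24) is exactly the first Kronecker step H3 ⊗ H3.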
11.1.4 Generalized Yang matrices and construction of generalized Hadamard matrices
In this section, we introduce the concept of generalized Yang matrices and consider the generalized case of the Yang theorem. First, we give a definition.
Definition 11.1.4.1:11 Square matrices A(p, n) and B(p, n) of order n with elements p'th roots of unity are called generalized Yang matrices if

AB^H = BA^H,
AA^H + BB^H = 2nI_n.      (11.25)
Note that for p = 2, generalized Yang matrices coincide with the classical Yang matrices.11 Now, let us construct generalized Yang matrices. We search for A and B as cyclic (circulant) matrices, i.e.,

A = a_0 U^0 + a_1 U^1 + a_2 U^2 + · · · + a_(n−1) U^(n−1),
B = b_0 U^0 + b_1 U^1 + b_2 U^2 + · · · + b_(n−1) U^(n−1),      (11.26)

where U is the cyclic shift matrix of order n.
We see that

A^H = a*_0 U^0 + a*_(n−1) U^1 + a*_(n−2) U^2 + · · · + a*_1 U^(n−1),
B^H = b*_0 U^0 + b*_(n−1) U^1 + b*_(n−2) U^2 + · · · + b*_1 U^(n−1).      (11.27)
It can be shown that Eq. (11.25) is equivalent to

Σ_(i=0..n−1) (a_i a*_i + b_i b*_i) = 2n,
Σ_(i=0..n−1) [a_i a*_((i−t) mod n) + b_i b*_((i−t) mod n)] = 0,   t = 1, 2, . . . , ⌊n/2⌋.      (11.28)

Therefore, the condition of Eq. (11.25) or (11.28) is a necessary and sufficient condition for the existence of generalized Yang matrices.
Examples of cyclic generalized Yang matrices (only the first rows of the matrices are presented) are as follows [a = −1/2 + j(√3/2)]:

• A(3, 3): (a, 1, a), B(3, 3): (a^2, a^2, 1);
• A(3, 6): (1, a^2, a^2, a, a^2, a^2), B(3, 6): (1, a^2, a, a, a, a^2);
• A(4, 4): (1, j, j, −1), B(4, 4): (−1, j, j, 1), where j = √−1;
• A(6, 5): (1, a, a^2, a^2, a), B(6, 5): (1, −a, −a^2, −a^2, −a).
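Because first rows of circulant examples are easy to mistranscribe, a small checker for the conditions of Eq. (11.25) is convenient; the sketch below (function names are ours) demonstrates it on a trivially valid p = 2 instance, namely a Hadamard matrix paired with itself:

```python
import numpy as np

def circulant(first_row):
    """Cyclic matrix A = a_0 U^0 + a_1 U^1 + ... as in Eq. (11.26)."""
    c = np.asarray(first_row, dtype=complex)
    n = len(c)
    return np.array([np.roll(c, k) for k in range(n)])

def is_yang_pair(A, B):
    """Check the generalized Yang conditions of Eq. (11.25)."""
    n = A.shape[0]
    c1 = np.allclose(A @ B.conj().T, B @ A.conj().T)
    c2 = np.allclose(A @ A.conj().T + B @ B.conj().T, 2 * n * np.eye(n))
    return c1 and c2

H2 = np.array([[1., 1.], [1., -1.]])
assert is_yang_pair(H2, H2)     # AB^H = BA^H = 2I, AA^H + BB^H = 4I = 2nI_n
```

Any candidate first rows, such as those listed above, can be passed through `circulant` and `is_yang_pair` to validate them before use.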
Theorem 11.1.4.1: Let A_0 = A_0(p1, n) and B_0 = B_0(p1, n) be generalized Yang matrices, and let Γ(p2, k) = {X(p2, k), Y(p2, k)} be a generalized hyperframe. Then, the following matrices

A_(i+1) = X ⊗ A_i + Y ⊗ B_i,
B_(i+1) = X ⊗ B_i − Y ⊗ A_i,   i = 0, 1, . . .      (11.29)

are generalized Yang matrices A(2p, nk^(i+1)) and B(2p, nk^(i+1)), where p = l.c.m.(p1, p2), if l.c.m.(p1, 2) = 1 and l.c.m.(p2, 2) = 1; and generalized Yang matrices A(p, nk^(i+1)) and B(p, nk^(i+1)), where p = l.c.m.(p1, p2), if p1 and/or p2 are even.
Corollary 11.1.4.1: The following matrix is a generalized Hadamard matrix:

H(2p, 2nk^i) = [  A_i  B_i
                 −B_i  A_i ].      (11.30)
11.2 Chrestenson Transform
11.2.1 Rademacher functions
Figure 11.1 Dr. Hans Rademacher (from www.apprendre-math.info/hindi/historyDetail.htm).

The significance of the Rademacher function system (see Fig. 11.1) among various discontinuous functions is that it is a subsystem of the Walsh function system, and the latter plays an important role in Walsh–Fourier analysis. The Rademacher functions {r_n(x)} form an incomplete set of orthogonal, normalized, periodic square-wave functions with period equal to 1. Using the Rademacher system functions, one may generate the Walsh–Hadamard, Walsh–Paley, and Harmuth function systems, as well as the Walsh–Rademacher function systems. The Rademacher functions are defined in Refs. 2, 5–8, 31, and 32.
Definition: The n'th Rademacher function r_n(x) is defined as follows:

r_0(x) = 1,  x ∈ [0, 1),

r_n(x) = {  1, if x ∈ [2i/2^n, (2i + 1)/2^n),
         { −1, if x ∈ [(2i + 1)/2^n, (2i + 2)/2^n),      (11.31)

where i = 0, 1, 2, . . . , 2^(n−1) − 1.
The sequence {r_n(x)} will be called the system of Rademacher functions, or the Rademacher function system. Alternatively, the Rademacher functions may be defined as a family of functions {r_n(x)} on the unit interval by the formula

{ r_0(x) = 1, if x ∈ [0, 1),
{ r_n(x) = (−1)^(i+1), if x ∈ [(i − 1)/2^n, i/2^n),      (11.32)

where n = 0, 1, 2, 3, . . . and i = 1, 2, . . . , 2^n, which means

r_n(x) = { +1, for i odd,
         { −1, for i even.      (11.33)
Rademacher function systems {r_n(x)} may also be defined by the formula

r_n(x) = sign{sin(2^n πx)},      (11.34)

where x is taken over the continuous interval 0 to 1.

Selected properties:
(1) |r_n(x)| ≤ 1.
(2) r_n(x) = r_n(x + 1).
(3) r_n(x) = r_1(2^(n−1) x), where r_0(x) = 1 and

    r_1(x) = { +1, if x ∈ [0, 1/2),
             { −1, if x ∈ [1/2, 1).

(4) r_(n+m)(x) = r_n(2^m x) = r_m(2^n x),  m, n = 0, 1, 2, . . . .
(5) The Rademacher functions may be constructed as

    r_n(x) = exp(jπx_(n+1)) = (e^jπ)^(x_(n+1)) = [cos(π) + j sin(π)]^(x_(n+1)) = (−1)^(x_(n+1)),

    where j = √−1 and x_n are the digits of the binary expansion of x.
(6) All Rademacher functions except r_0(x) are odd, i.e., r_n(−x) = −r_n(x). Thus, it is impossible to represent even functions by any combination of the Rademacher functions, which means that the Rademacher functions do not form a complete set with respect to the L^2 norm.
(7) For the representation of even functions by combinations of the Rademacher functions, we may introduce even Rademacher functions

    r_even(n, x) = sgn[cos(2^n πx)],  n = 1, 2, 3, . . . ,
    r_even(0, x) = r_1(x).

However, now it is impossible to represent odd functions by any combination of the even Rademacher functions. It is also easy to check that the discrete Rademacher functions Rad(j, k) may be generated by sampling the Rademacher functions at x = 0, 1/N, 2/N, . . . , (N − 1)/N.
11.2.2 Example of Rademacher matrices

For n = 3, 4 and N = 2^n, the Rademacher matrices R_3,8 and R_4,16 are taken to be

R_3,8 = [ + + + + + + + +      ⇒ Rad(0, k)
          + + + + − − − −      ⇒ Rad(1, k)
          + + − − + + − −      ⇒ Rad(2, k)
          + − + − + − + − ],   ⇒ Rad(3, k)      (11.35)

R_4,16 = [ + + + + + + + + + + + + + + + +      ⇒ Rad(0, k)
           + + + + + + + + − − − − − − − −      ⇒ Rad(1, k)
           + + + + − − − − + + + + − − − −      ⇒ Rad(2, k)
           + + − − + + − − + + − − + + − −      ⇒ Rad(3, k)
           + − + − + − + − + − + − + − + − ],   ⇒ Rad(4, k)

where + and − indicate +1 and −1, respectively.
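The matrices above can be generated by sampling r_j(x) at x = k/2^n, as noted in property (7); a sketch (the function name is ours):

```python
import numpy as np

def rademacher_matrix(n):
    """Rows Rad(0,k)..Rad(n,k): samples of r_j(x) at x = k/2^n."""
    N = 2 ** n
    k = np.arange(N)
    rows = [np.ones(N, dtype=int)]          # Rad(0, k)
    for j in range(1, n + 1):
        # r_j is +1 on the first half of each period of length 2^{1-j}
        rows.append(1 - 2 * ((k * 2 ** j // N) % 2))
    return np.vstack(rows)

R = rademacher_matrix(3)
assert R.shape == (4, 8)
assert (R[1] == [1, 1, 1, 1, -1, -1, -1, -1]).all()
assert (R[3] == [1, -1, 1, -1, 1, -1, 1, -1]).all()
```

The rows reproduce R_3,8 of Eq. (11.35) exactly.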
Properties:
• The Rademacher matrix is a rectangular (n + 1) × 2^n matrix with (+1, −1) elements whose first row consists of +1 elements, and

R_n,N R_n,N^T = 2^n I_(n+1),      (11.36)

where I_(n+1) is the identity matrix of order n + 1.
• The Rademacher matrix uniquely determines the system of Rademacher functions on the interval [0, 1).
There are several other ways to construct new Rademacher function systems. For example, take any solution of the following equations:

{ α(x + 1) = α(x),
{ α(x) + α(x + 1/2) = 0,  where x ∈ [0, 1),
{ α(x) ∈ L^2(0, 1),      (11.37)

then define the Rademacher functions Rad(n, x) via the dilation operation α(2^n x).
11.2.2.1 Generalized Rademacher functions
The well-known system of Rademacher functions was generalized by Levy31 to a system of functions whose values are ω^k, k = 0, 1, 2, . . . , a − 1, where a is a natural number and ω is a primitive a'th root of 1; the latter was extended to a complete orthonormal system, i.e., the W* system of generalized Walsh functions, by Chrestenson.32 These systems are known to preserve some essential properties of the original functions. An analogous generalization has been performed for the Haar system.
Now, we may extend the definition of the Rademacher functions from the case p = 2 to an arbitrary p:

R_n(x) = exp( (j2π/p) x_(n+1) ),      (11.38)

where x_n are the digits of the p-ary expansion of x.
Let p be an integer, p ≥ 2, and w = exp[j(2π/p)]. Then the Rademacher functions of order p are defined by

ϕ_0(x) = w^k,  x ∈ [k/p, (k + 1)/p),  k = 0, 1, 2, . . . , p − 1,      (11.39)
and for n ≠ 0, ϕ_n(x) = ϕ_n(x + 1) = ϕ_0(p^n x). These functions form a set of orthonormal functions. The Walsh functions of order p are defined by

φ_0(x) = 1,
φ_n(x) = ϕ_(n_1)^(a_1)(x) ϕ_(n_2)^(a_2)(x) · · · ϕ_(n_m)^(a_m)(x),      (11.40)

where n = Σ_(k=1..m) a_k p^(n_k), 0 < a_k < p, and n_1 > n_2 > · · · > n_m.
11.2.2.2 The Rademacher–Walsh transforms
The Walsh functions33–35 are a closed set of two-valued orthogonal functions given by

Wal(j, k) = Π_(t=0..n−1) (−1)^((k_(n−t) + k_(n−1−t)) j_t),      (11.41)

where j_t, k_t are determined by the binary expansions of j and k, respectively, with j, k = 0, 1, . . . , 2^n − 1, j = j_(n−1)2^(n−1) + j_(n−2)2^(n−2) + · · · + j_1 2^1 + j_0 2^0, k = k_(n−1)2^(n−1) + k_(n−2)2^(n−2) + · · · + k_1 2^1 + k_0 2^0, and the convention k_n = 0.
The Walsh transform matrix, with the corresponding Walsh functions, is given below:

k\j  000 001 010 011 100 101 110 111
000  +1  +1  +1  +1  +1  +1  +1  +1   Wal(0, k)
001  +1  +1  +1  +1  −1  −1  −1  −1   Wal(1, k)
010  +1  +1  −1  −1  −1  −1  +1  +1   Wal(2, k)
011  +1  +1  −1  −1  +1  +1  −1  −1   Wal(3, k)
100  +1  −1  −1  +1  +1  −1  −1  +1   Wal(4, k)
101  +1  −1  −1  +1  −1  +1  +1  −1   Wal(5, k)
110  +1  −1  +1  −1  −1  +1  −1  +1   Wal(6, k)
111  +1  −1  +1  −1  +1  −1  +1  −1   Wal(7, k)      (11.42)
Note that the 2^n Walsh functions for any n constitute a closed set of orthogonal functions; the multiplication of any two functions always generates a function within this set. However, the Rademacher functions are an incomplete set of n + 1 orthogonal functions, which is a subset of the Walsh functions, and from which all 2^n Walsh functions can be generated by multiplication. The Rademacher functions may be defined as follows:

Rad(j, k) = sign{sin(2^j πk)},      (11.43)

where k is taken over the continuous interval 0 to 1.
1, k). Taking the Rademacher functions as a basis set, the complete set of Walshfunctions is generated in an alternative order from the original Walsh order, as iden-tified below (∗ indicates Hadamard product or element by element multiplication).∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣
+1 +1 +1 +1 +1 +1 +1 +1+1 +1 +1 +1 _1 −1 −1 −1+1 +1 −1 −1 +1 +1 −1 −1+1 −1 +1 −1 +1 −1 +1 −1+1 +1 −1 −1 −1 −1 +1 +1+1 −1 +1 −1 −1 +1 −1 +1+1 −1 −1 +1 +1 −1 −1 +1+1 −1 −1 +1 −1 +1 +1 −1
∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣∣
Rad(0, k)Rad(1, k)Rad(2, 0)Rad(3, k)Rad(1, k) ∗ Rad(2, k)Rad(1, k) ∗ Rad(3, k)Rad(2, k) ∗ Rad(3, k)Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k).
(11.44)
We see that

Rad(0, k) = Wal(0, k),
Rad(1, k) = Wal(1, k),
Rad(2, k) = Wal(3, k),
Rad(3, k) = Wal(7, k),
Rad(1, k) ∗ Rad(2, k) = Wal(2, k),
Rad(1, k) ∗ Rad(3, k) = Wal(6, k),
Rad(2, k) ∗ Rad(3, k) = Wal(4, k),
Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k) = Wal(5, k).      (11.45)
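That the products of Rad(1), Rad(2), and Rad(3) generate the complete order-8 Walsh set can be verified directly; the sketch below checks that the eight element-by-element products coincide, as a set, with the eight rows of the natural-ordered Hadamard matrix:

```python
import numpy as np
from itertools import combinations

N = 8
k = np.arange(N)
rad = {j: 1 - 2 * ((k * 2 ** j // N) % 2) for j in (1, 2, 3)}  # sampled Rad(j, k)

# All element-wise products of subsets of {Rad(1), Rad(2), Rad(3)}
walsh_rows = set()
for r in range(4):
    for subset in combinations((1, 2, 3), r):
        row = np.ones(N, dtype=int)
        for j in subset:
            row = row * rad[j]
        walsh_rows.add(tuple(row))

# They reproduce exactly the 8 rows of the order-8 Sylvester-Hadamard matrix
H2 = [[1, 1], [1, -1]]
H8 = np.kron(np.kron(H2, H2), H2)
assert walsh_rows == {tuple(r) for r in np.asarray(H8)}
```

The set comparison deliberately ignores row order, since the products appear in a different order than the Walsh (sequency) order, as Eq. (11.45) shows.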
It can also be shown that the natural-ordered Walsh–Hadamard matrix of order n, the sequency-ordered Walsh matrix of order n, the dyadic-ordered Paley matrix of order n, and the Cal–Sal-ordered Hadamard matrix of order n can be represented by discrete Rademacher function systems. For example, the natural-ordered Walsh–Hadamard matrix can be represented by discrete Rademacher function systems using the following rules:
Hh(8) = [ + + + + + + + +     Rad(0, k)
          + − + − + − + −     Rad(3, k)
          + + − − + + − −     Rad(2, k)
          + − − + + − − +     Rad(2, k) ∗ Rad(3, k)
          + + + + − − − −     Rad(1, k)
          + − + − − + − +     Rad(1, k) ∗ Rad(3, k)
          + + − − − − + +     Rad(1, k) ∗ Rad(2, k)
          + − − + − + + − ]   Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k)      (11.46)

Natural-ordered Walsh–Hadamard matrix.
The dyadic-ordered Paley matrix can be represented by discrete Rademacher function systems using the following rules:

Hp(8) = [ + + + + + + + +     Rad(0, k)
          + + + + − − − −     Rad(1, k)
          + + − − + + − −     Rad(2, k)
          + + − − − − + +     Rad(1, k) ∗ Rad(2, k)
          + − + − + − + −     Rad(3, k)
          + − + − − + − +     Rad(1, k) ∗ Rad(3, k)
          + − − + + − − +     Rad(2, k) ∗ Rad(3, k)
          + − − + − + + − ]   Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k)      (11.47)

Dyadic-ordered Paley matrix.
The Cal–Sal-ordered Hadamard matrix can be represented by discrete Rademacher function systems using the following rules:

Hcs(8) = [ + + + + + + + +     Rad(0, k)
           + + − − − − + +     Rad(1, k) ∗ Rad(2, k)
           + − − + + − − +     Rad(2, k) ∗ Rad(3, k)
           + − + − − + − +     Rad(1, k) ∗ Rad(3, k)
           + − + − + − + −     Rad(3, k)
           + − − + − + + −     Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k)
           + + − − + + − −     Rad(2, k)
           + + + + − − − − ]   Rad(1, k)      (11.48)

Cal–Sal-ordered Hadamard matrix.
11.2.2.3 Chrestenson functions and matrices

Chrestenson functions are orthogonal p-valued functions defined over the interval [0, p^n) by

Ch^(p)(k, t) = exp{ j(2π/p) C(k, t) },   C(k, t) = Σ_(m=0..n−1) k_m t_m,      (11.49)

where k_m and t_m are the p-ary expansions of k and t, respectively, i.e.,

k = k_(n−1) p^(n−1) + k_(n−2) p^(n−2) + · · · + k_0 p^0,   t = t_(n−1) p^(n−1) + t_(n−2) p^(n−2) + · · · + t_0 p^0.

For p = 3 and n = 1, the Chrestenson matrix of order 3 has the form

Ch^(1)_3 = [ 1  1    1
             1  a    a^2
             1  a^2  a ].      (11.50)
For p = 3, n = 2, we obtain the following transform matrix (rows and columns are indexed by the ternary pairs 00, 01, 02, 10, 11, 12, 20, 21, 22):

Ch^(2)_3 = [ 1  1    1    1    1    1    1    1    1
             1  a    a^2  1    a    a^2  1    a    a^2
             1  a^2  a    1    a^2  a    1    a^2  a
             1  1    1    a    a    a    a^2  a^2  a^2
             1  a    a^2  a    a^2  1    a^2  1    a
             1  a^2  a    a    1    a^2  a^2  a    1
             1  1    1    a^2  a^2  a^2  a    a    a
             1  a    a^2  a^2  1    a    a    a^2  1
             1  a^2  a    a^2  a    1    a    1    a^2 ].      (11.51)
The important characteristics of this complete orthogonal matrix are as follows:

• Its dimensions are p^n × p^n.
• It is symmetric, i.e., (Ch^(n)_p)^T = Ch^(n)_p.
• Its inverse is given by (Ch^(n)_p)^(−1) = (1/p^n)(Ch^(n)_p)^H, where the superscript H indicates the transposed conjugate (Hermitian) of Ch^(n)_p.
• It has a recursive structure; for the ternary case,

Ch^(n)_3 = [ Ch^(n−1)_3     Ch^(n−1)_3     Ch^(n−1)_3
             Ch^(n−1)_3     a Ch^(n−1)_3   a^2 Ch^(n−1)_3
             Ch^(n−1)_3     a^2 Ch^(n−1)_3 a Ch^(n−1)_3 ],      (11.52)

where Ch^(0)_3 = 1.
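The recursive structure of Eq. (11.52) is equivalent to a Kronecker product with Ch^(1)_3, which can be checked against the direct definition of Eq. (11.49); a sketch (the function name is ours):

```python
import numpy as np

def chrestenson(p, n):
    """Ch^(n)_p from the definition (11.49): entry (k,t) is w^{sum k_m t_m}."""
    w = np.exp(2j * np.pi / p)
    idx = np.arange(p ** n)
    digits = np.array([(idx // p ** m) % p for m in range(n)])  # p-ary digits
    C = digits.T @ digits                                       # C(k, t) matrix
    return w ** C

Ch1 = chrestenson(3, 1)
Ch2 = chrestenson(3, 2)
assert np.allclose(Ch2, np.kron(Ch1, Ch1))   # recursive structure of Eq. (11.52)
N = 9
assert np.allclose(Ch2 @ Ch2.conj().T, N * np.eye(N))   # orthogonality
assert np.allclose(Ch2, Ch2.T)                          # symmetry
```

The Kronecker identity holds because C(k, t) in Eq. (11.49) pairs digits of equal significance.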
An alternative definition of the Chrestenson functions yields the same complete set, but in a different order:

Ch^(p)(k, t) = exp{ j(2π/p) C(k, t) },   C(k, t) = Σ_(m=0..n−1) k_m t_(n−1−m),      (11.53)
where k_m and t_m are the p-ary expansions of k and t, respectively. This definition gives the alternative transform matrix for p = 3, n = 2 (columns indexed 00, 10, 20, 01, 11, 21, 02, 12, 22):

Ch^(2)_3 = [ 1  1    1    1    1    1    1    1    1
             1  1    1    a    a    a    a^2  a^2  a^2
             1  1    1    a^2  a^2  a^2  a    a    a
             1  a    a^2  1    a    a^2  1    a    a^2
             1  a    a^2  a    a^2  1    a^2  1    a
             1  a    a^2  a^2  1    a    a    a^2  1
             1  a^2  a    1    a^2  a    1    a^2  a
             1  a^2  a    a    1    a^2  a^2  a    1
             1  a^2  a    a^2  a    1    a    1    a^2 ].      (11.54)
Finally, we can also consider a subset of the Chrestenson functions for any p, n that constitutes the generalization of the Rademacher functions, and from which the complete set of orthogonal functions for the given p, n can be generated by element-by-element multiplication. The generalized Rademacher functions are defined as

Rad^(p)(k, t) = exp{ j(2π/p) C′(k, t) },   C′(k, t) = Σ_(m=0..n−1) k′_m t_m,      (11.55)
where k′ here runs over a subset of the indices k, whose decimal identification numbers are 0, 1, and all higher values of k that are divisible by a power of p. The closed set of Chrestenson functions for p = 3, n = 2 generated from the reduced set can be represented as follows:

Ch^(3)(0, t) = Rad^(3)(0, t),
Ch^(3)(1, t) = Rad^(3)(1, t),
Ch^(3)(2, t) = Rad^(3)(1, t) ∗ Rad^(3)(1, t),
Ch^(3)(3, t) = Rad^(3)(3, t),
Ch^(3)(4, t) = Rad^(3)(1, t) ∗ Rad^(3)(3, t),
Ch^(3)(5, t) = Rad^(3)(1, t) ∗ Rad^(3)(1, t) ∗ Rad^(3)(3, t),
Ch^(3)(6, t) = Rad^(3)(3, t) ∗ Rad^(3)(3, t),
Ch^(3)(7, t) = Rad^(3)(1, t) ∗ Rad^(3)(3, t) ∗ Rad^(3)(3, t),
Ch^(3)(8, t) = Rad^(3)(1, t) ∗ Rad^(3)(1, t) ∗ Rad^(3)(3, t) ∗ Rad^(3)(3, t).      (11.56)
11.3 Chrestenson Transform Algorithms

11.3.1 Chrestenson transform of order 3^n

Now we will compute the complexity of the Chrestenson transform of order 3^n. First, we calculate the complexity of the C^1_3 transform [see Eq. (11.50)]. Let Z^T = (x_0 + jy_0, x_1 + jy_1, x_2 + jy_2) be a complex-valued vector of length 3, a = exp[j(2π/3)] = cos(2π/3) + j sin(2π/3), and j = √−1.
A 1D forward Chrestenson transform of order 3 can be performed as follows:

C^1_3 Z = [ 1  1    1      [ z_0       [ i_3       [ z_0       [ v_0
            1  a    a^2      z_1    =    b_3         z_1    =    v_1
            1  a^2  a ]      z_2 ]       b*_3 ]      z_2 ]       v_2 ],      (11.57)

where i_3 = (1, 1, 1) and b_3 = (1, a, a^2) denote the rows of C^1_3,
and

v_0 = (x_0 + x_1 + x_2) + j(y_0 + y_1 + y_2),

v_1 = x_0 + (x_1 + x_2) cos(2π/3) − (y_1 − y_2) sin(2π/3)
      + j[ y_0 + (x_1 − x_2) sin(2π/3) + (y_1 + y_2) cos(2π/3) ],

v_2 = x_0 + (x_1 + x_2) cos(2π/3) + (y_1 − y_2) sin(2π/3)
      + j[ y_0 − (x_1 − x_2) sin(2π/3) + (y_1 + y_2) cos(2π/3) ].      (11.58)
We can see that

C_+(i_3 Z) = 4,   C_×(i_3 Z) = 0,
C_+(b_3 Z) = 6,   C_×(b_3 Z) = 4,
C_+(b*_3 Z) = 3,  C_×(b*_3 Z) = 0.      (11.59)
Therefore, the complexity of the C^1_3 transform is C_+(C^1_3) = 13, C_×(C^1_3) = 4, where C_+ and C_× denote the numbers of real additions and multiplications, respectively. Now, let Z^T = (x_i + jy_i), i = 0, 1, . . . , N − 1, be a complex-valued vector of length N = 3^n (n > 1). We introduce the following notation: P_i denotes a (0, 1) column vector of length N/3 whose only nonzero element is the i'th, equal to 1 (i = 0, 1, . . . , N/3 − 1), and Z_i = (x_3i + jy_3i, x_(3i+1) + jy_(3i+1), x_(3i+2) + jy_(3i+2)).
The 1D forward Chrestenson transform of order N can be performed as follows [see Eq. (11.52)]:

C^n_3 Z = [ (C^(n−1)_3 ⊗ i_3)Z
            (C^(n−1)_3 ⊗ b_3)Z
            (C^(n−1)_3 ⊗ b*_3)Z ].      (11.60)
Using the above notations, we have

(C^(n−1)_3 ⊗ i_3)Z = (C^(n−1)_3 ⊗ i_3)(P_0 ⊗ Z_0 + P_1 ⊗ Z_1 + · · · + P_(N/3−1) ⊗ Z_(N/3−1))
  = C^(n−1)_3 P_0 ⊗ i_3 Z_0 + C^(n−1)_3 P_1 ⊗ i_3 Z_1 + · · · + C^(n−1)_3 P_(N/3−1) ⊗ i_3 Z_(N/3−1)
  = C^(n−1)_3 P_0 (z_0 + z_1 + z_2) + C^(n−1)_3 P_1 (z_3 + z_4 + z_5) + · · · + C^(n−1)_3 P_(N/3−1) (z_(N−3) + z_(N−2) + z_(N−1))

  = C^(n−1)_3 [ z_0 + z_1 + z_2
                z_3 + z_4 + z_5
                ...
                z_(N−3) + z_(N−2) + z_(N−1) ].      (11.61)
Then, we can write

C_+(C^(n−1)_3 ⊗ i_3) = C_+(C^(n−1)_3) + 4 · 3^(n−1),
C_×(C^(n−1)_3 ⊗ i_3) = C_×(C^(n−1)_3).      (11.62)
Now, let us compute the complexity of the C^(n−1)_3 ⊗ b_3 transform:

(C^(n−1)_3 ⊗ b_3)Z = (C^(n−1)_3 ⊗ b_3)(P_0 ⊗ Z_0 + P_1 ⊗ Z_1 + · · · + P_(N/3−1) ⊗ Z_(N/3−1))
  = C^(n−1)_3 (P_0 ⊗ b_3 Z_0 + P_1 ⊗ b_3 Z_1 + · · · + P_(N/3−1) ⊗ b_3 Z_(N/3−1)).      (11.63)
From Eq. (11.58), it follows that C_+(b_3 Z_i) = 6 and C_×(b_3 Z_i) = 4. Then we obtain

C_+(C^(n−1)_3 ⊗ b_3) = C_+(C^(n−1)_3) + 6 · 3^(n−1),
C_×(C^(n−1)_3 ⊗ b_3) = C_×(C^(n−1)_3) + 4 · 3^(n−1).      (11.64)
Similarly, we obtain

C_+(C^(n−1)_3 ⊗ b*_3) = C_+(C^(n−1)_3) + 3 · 3^(n−1),
C_×(C^(n−1)_3 ⊗ b*_3) = C_×(C^(n−1)_3).      (11.65)
Finally, the complexity of the C^n_3 transform can be calculated as follows:

C_+(C^n_3) = 3 · C_+(C^(n−1)_3) + 13 · 3^(n−1),
C_×(C^n_3) = 3 · C_×(C^(n−1)_3) + 4 · 3^(n−1),      (11.66)

or

C_+(C^n_3) = 13 · 3^(n−1) n,
C_×(C^n_3) = 4 · 3^(n−1) n,   n ≥ 1.      (11.67)

For example, we have C_+(C^2_3) = 78, C_×(C^2_3) = 24, C_+(C^3_3) = 351, and C_×(C^3_3) = 108.
11.3.2 Chrestenson transform of order 5^n

Let us introduce the following notations:

i_5 = (1, 1, 1, 1, 1),  a = exp(j2π/5) = cos(2π/5) + j sin(2π/5),  j = √−1,
a_1 = (1, a, a^2, a^3, a^4),  a_2 = (1, a^2, a^4, a, a^3).      (11.68)

From the relations in Eq. (11.49), we obtain the Chrestenson transform matrix of order 5:

C^1_5 = [ 1  1    1    1    1        [ i_5
          1  a    a^2  a^3  a^4        a_1
          1  a^2  a^4  a    a^3   =    a_2
          1  a^3  a    a^4  a^2        a*_2
          1  a^4  a^3  a^2  a ]        a*_1 ].      (11.69)
A Chrestenson matrix of order 5^n can be generated recursively as follows:

C^n_5 = [ C^(n−1)_5    C^(n−1)_5      C^(n−1)_5      C^(n−1)_5      C^(n−1)_5
          C^(n−1)_5    a C^(n−1)_5    a^2 C^(n−1)_5  a^3 C^(n−1)_5  a^4 C^(n−1)_5
          C^(n−1)_5    a^2 C^(n−1)_5  a^4 C^(n−1)_5  a C^(n−1)_5    a^3 C^(n−1)_5
          C^(n−1)_5    a^3 C^(n−1)_5  a C^(n−1)_5    a^4 C^(n−1)_5  a^2 C^(n−1)_5
          C^(n−1)_5    a^4 C^(n−1)_5  a^3 C^(n−1)_5  a^2 C^(n−1)_5  a C^(n−1)_5 ].      (11.70)
Now we compute the complexity of this transform. First, we calculate the complexity of the C^1_5 transform [see Eq. (11.69)]. Let Z^T = (z_0, z_1, . . . , z_4) be a complex-valued vector of length 5. The 1D forward Chrestenson transform of order 5 can be performed as follows:

C^1_5 Z = [ 1  1    1    1    1      [ z_0       [ i_5       [ z_0       [ v_0
            1  a    a^2  a^3  a^4      z_1         a_1         z_1         v_1
            1  a^2  a^4  a    a^3      z_2    =    a_2         z_2    =    v_2
            1  a^3  a    a^4  a^2      z_3         a*_2        z_3         v_3
            1  a^4  a^3  a^2  a ]      z_4 ]       a*_1 ]      z_4 ]       v_4 ].      (11.71)
Using the relations a^3 = (a^2)*, a^4 = a*, we obtain

v_0 = x_0 + (x_1 + x_4) + (x_2 + x_3) + j[ y_0 + (y_1 + y_4) + (y_2 + y_3) ],

v_1 = x_0 + (x_1 + x_4) cos(2π/5) + (x_2 + x_3) cos(4π/5) − (y_1 − y_4) sin(2π/5) − (y_2 − y_3) sin(4π/5)
      + j[ y_0 + (x_1 − x_4) sin(2π/5) + (x_2 − x_3) sin(4π/5) + (y_1 + y_4) cos(2π/5) + (y_2 + y_3) cos(4π/5) ],

v_2 = x_0 + (x_1 + x_4) cos(4π/5) + (x_2 + x_3) cos(2π/5) − (y_1 − y_4) sin(4π/5) + (y_2 − y_3) sin(2π/5)
      + j[ y_0 + (x_1 − x_4) sin(4π/5) − (x_2 − x_3) sin(2π/5) + (y_1 + y_4) cos(4π/5) + (y_2 + y_3) cos(2π/5) ],      (11.72)

v_3 = x_0 + (x_1 + x_4) cos(4π/5) + (x_2 + x_3) cos(2π/5) + (y_1 − y_4) sin(4π/5) − (y_2 − y_3) sin(2π/5)
      + j[ y_0 − (x_1 − x_4) sin(4π/5) + (x_2 − x_3) sin(2π/5) + (y_1 + y_4) cos(4π/5) + (y_2 + y_3) cos(2π/5) ],

v_4 = x_0 + (x_1 + x_4) cos(2π/5) + (x_2 + x_3) cos(4π/5) + (y_1 − y_4) sin(2π/5) + (y_2 − y_3) sin(4π/5)
      + j[ y_0 − (x_1 − x_4) sin(2π/5) − (x_2 − x_3) sin(4π/5) + (y_1 + y_4) cos(2π/5) + (y_2 + y_3) cos(4π/5) ].
Now we precompute the following quantities:

t_1 = x_1 + x_4,  t_2 = x_2 + x_3,  t_3 = y_1 + y_4,  t_4 = y_2 + y_3,
b_1 = x_1 − x_4,  b_2 = x_2 − x_3,  b_3 = y_1 − y_4,  b_4 = y_2 − y_3,
c_1 = b_1 sin(2π/5),  c_2 = b_2 sin(4π/5),  c_3 = b_3 sin(2π/5),  c_4 = b_4 sin(4π/5),
d_1 = t_1 cos(2π/5),  d_2 = t_2 cos(4π/5),  d_3 = t_3 cos(2π/5),  d_4 = t_4 cos(4π/5),
e_1 = t_1 cos(4π/5),  e_2 = t_2 cos(2π/5),  e_3 = t_3 cos(4π/5),  e_4 = t_4 cos(2π/5),
f_1 = b_1 sin(4π/5),  f_2 = b_2 sin(2π/5),  f_3 = b_3 sin(4π/5),  f_4 = b_4 sin(2π/5),
A_1 = x_0 + d_1 + d_2,  A_2 = c_3 + c_4,  A_3 = y_0 + d_3 + d_4,  A_4 = c_1 + c_2,
B_1 = x_0 + e_1 + e_2,  B_2 = f_3 − f_4,  B_3 = y_0 + e_3 + e_4,  B_4 = f_1 − f_2.      (11.73)
Then, Eq. (11.72) can be rewritten as follows:

v_0 = x_0 + t_1 + t_2 + j(y_0 + t_3 + t_4),
v_1 = A_1 − A_2 + j(A_3 + A_4),
v_2 = B_1 − B_2 + j(B_3 + B_4),
v_3 = B_1 + B_2 + j(B_3 − B_4),
v_4 = A_1 + A_2 + j(A_3 − A_4).      (11.74)
Subsequently, we can calculate the complexities of the following terms:

C_+(i_5 Z = v_0) = 8,   C_×(i_5 Z) = 0,
C_+(a_1 Z = v_1) = 8,   C_×(a_1 Z) = 8,
C_+(a_2 Z = v_2) = 8,   C_×(a_2 Z) = 8,      (11.75)
C_+(a*_2 Z = v_3) = 2,  C_×(a*_2 Z) = 0,
C_+(a*_1 Z = v_4) = 2,  C_×(a*_1 Z) = 0.
Then, the complexity of the C^1_5 transform is

C_+(C^1_5) = 28,  C_×(C^1_5) = 16.      (11.76)
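The grouped real arithmetic of Eqs. (11.73) and (11.74) can be validated against a direct matrix-vector product with C^1_5; a sketch (the function name is ours; note that B_1 must be based on x_0, not y_0, for the identity to hold):

```python
import numpy as np

W5 = np.exp(2j * np.pi / 5)
C5 = W5 ** np.outer(np.arange(5), np.arange(5))   # matrix of Eq. (11.69)

def chrestenson5_point(z):
    """Evaluate C^1_5 z via the grouped arithmetic of Eqs. (11.73)-(11.74)."""
    x, y = z.real, z.imag
    c2, c4 = np.cos(2 * np.pi / 5), np.cos(4 * np.pi / 5)
    s2, s4 = np.sin(2 * np.pi / 5), np.sin(4 * np.pi / 5)
    t1, t2, t3, t4 = x[1] + x[4], x[2] + x[3], y[1] + y[4], y[2] + y[3]
    b1, b2, b3, b4 = x[1] - x[4], x[2] - x[3], y[1] - y[4], y[2] - y[3]
    A1, A2 = x[0] + t1 * c2 + t2 * c4, b3 * s2 + b4 * s4
    A3, A4 = y[0] + t3 * c2 + t4 * c4, b1 * s2 + b2 * s4
    B1, B2 = x[0] + t1 * c4 + t2 * c2, b3 * s4 - b4 * s2   # B1 built from x0
    B3, B4 = y[0] + t3 * c4 + t4 * c2, b1 * s4 - b2 * s2
    return np.array([
        x[0] + t1 + t2 + 1j * (y[0] + t3 + t4),
        A1 - A2 + 1j * (A3 + A4),
        B1 - B2 + 1j * (B3 + B4),
        B1 + B2 + 1j * (B3 - B4),
        A1 + A2 + 1j * (A3 - A4),
    ])

z = np.array([1 + 2j, -0.5 + 1j, 3 - 1j, 0.25 + 0.5j, -2 - 2j])
assert np.allclose(chrestenson5_point(z), C5 @ z)
```

Counting the real operations in this routine reproduces the totals of Eq. (11.76): 28 additions and 16 multiplications.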
Now, let Z^T = (z_0, z_1, . . . , z_(N−1)) be a complex-valued vector of length N = 5^n (n > 1). We introduce the following notation: P_i is a (0, 1) column vector of length N/5 whose only nonzero element is the i'th, equal to 1 (i = 0, 1, . . . , N/5 − 1), and (Z_i)^T = (z_5i, z_(5i+1), z_(5i+2), z_(5i+3), z_(5i+4)).
The 1D forward Chrestenson transform of order N can be performed as follows [see Eq. (11.70)]:

C^n_5 Z = [ (C^(n−1)_5 ⊗ i_5)Z
            (C^(n−1)_5 ⊗ a_1)Z
            (C^(n−1)_5 ⊗ a_2)Z
            (C^(n−1)_5 ⊗ a*_2)Z
            (C^(n−1)_5 ⊗ a*_1)Z ].      (11.77)
Using the above notations, we have

(C^(n−1)_5 ⊗ i_5)Z = (C^(n−1)_5 ⊗ i_5)(P_0 ⊗ Z_0 + P_1 ⊗ Z_1 + · · · + P_(N/5−1) ⊗ Z_(N/5−1))
  = C^(n−1)_5 P_0 ⊗ i_5 Z_0 + C^(n−1)_5 P_1 ⊗ i_5 Z_1 + · · · + C^(n−1)_5 P_(N/5−1) ⊗ i_5 Z_(N/5−1)
  = C^(n−1)_5 P_0 (z_0 + · · · + z_4) + C^(n−1)_5 P_1 (z_5 + · · · + z_9) + · · · + C^(n−1)_5 P_(N/5−1) (z_(N−5) + · · · + z_(N−1))

  = C^(n−1)_5 [ z_0 + z_1 + · · · + z_4
                z_5 + z_6 + · · · + z_9
                ...
                z_(N−5) + z_(N−4) + · · · + z_(N−1) ].      (11.78)
Then, we can write

C_+(C^(n−1)_5 ⊗ i_5) = C_+(C^(n−1)_5) + 8 · 5^(n−1),   C_×(C^(n−1)_5 ⊗ i_5) = C_×(C^(n−1)_5).      (11.79)
Now, compute the complexity of the (C^(n−1)_5 ⊗ a_1)Z transform:

(C^(n−1)_5 ⊗ a_1)Z = (C^(n−1)_5 ⊗ a_1)(P_0 ⊗ Z_0 + P_1 ⊗ Z_1 + · · · + P_(N/5−1) ⊗ Z_(N/5−1))
  = C^(n−1)_5 (P_0 ⊗ a_1 Z_0 + P_1 ⊗ a_1 Z_1 + · · · + P_(N/5−1) ⊗ a_1 Z_(N/5−1)).      (11.80)
Then, from Eq. (11.75), we obtain

C_+(C^(n−1)_5 ⊗ a_1) = C_+(C^(n−1)_5) + 8 · 5^(n−1),
C_×(C^(n−1)_5 ⊗ a_1) = C_×(C^(n−1)_5) + 8 · 5^(n−1).      (11.81)
Similarly, we can verify

C_+(C^(n−1)_5 ⊗ a_2) = C_+(C^(n−1)_5) + 8 · 5^(n−1),   C_×(C^(n−1)_5 ⊗ a_2) = C_×(C^(n−1)_5) + 8 · 5^(n−1),
C_+(C^(n−1)_5 ⊗ a*_2) = C_+(C^(n−1)_5) + 2 · 5^(n−1),  C_×(C^(n−1)_5 ⊗ a*_2) = C_×(C^(n−1)_5),
C_+(C^(n−1)_5 ⊗ a*_1) = C_+(C^(n−1)_5) + 2 · 5^(n−1),  C_×(C^(n−1)_5 ⊗ a*_1) = C_×(C^(n−1)_5).      (11.82)
Finally, the complexity of the C^n_5 transform can be calculated as follows:

C_+(C^n_5) = 5 · C_+(C^(n−1)_5) + 28 · 5^(n−1),   C_×(C^n_5) = 5 · C_×(C^(n−1)_5) + 16 · 5^(n−1),

or

C_+(C^n_5) = 28n · 5^(n−1),   C_×(C^n_5) = 16n · 5^(n−1),   n = 1, 2, . . . .      (11.83)
The numerical results of the complexities of the Chrestenson transforms are given in Table 11.1.

Table 11.1 Results of the complexities of the Chrestenson transforms.

Size N     n    Addition              Multiplication
$C_3^n$         $13n \cdot 3^{n-1}$   $4n \cdot 3^{n-1}$
3          1    13                    4
9          2    78                    24
27         3    351                   108
81         4    1404                  432
$C_5^n$         $28n \cdot 5^{n-1}$   $16n \cdot 5^{n-1}$
5          1    28                    16
25         2    280                   160
125        3    2100                  1200
625        4    14,000                8000
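As a quick sanity check, the closed forms of Eq. (11.83), together with the analogous radix-3 counts $13n \cdot 3^{n-1}$ and $4n \cdot 3^{n-1}$, reproduce every entry of the table. A minimal sketch (the helper name `chrestenson_counts` is ours, not from the text):

```python
# Closed-form operation counts for the radix-3 and radix-5 Chrestenson
# transforms: Eq. (11.83) for C_5^n and the radix-3 counts of Table 11.1.
def chrestenson_counts(p, n):
    """Return (additions, multiplications) for the order-p^n transform."""
    if p == 3:
        return 13 * n * 3 ** (n - 1), 4 * n * 3 ** (n - 1)
    if p == 5:
        return 28 * n * 5 ** (n - 1), 16 * n * 5 ** (n - 1)
    raise ValueError("only p = 3 and p = 5 are tabulated here")

# Reproduce Table 11.1.
for p in (3, 5):
    for n in range(1, 5):
        adds, mults = chrestenson_counts(p, n)
        print(f"N = {p ** n:4d}: {adds:6d} additions, {mults:5d} multiplications")
```

Both recursions grow linearly in $n$ per transform point, i.e., $O(N \log_p N)$ operations for $N = p^n$.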
Extended Hadamard Matrices 365
11.4 Fast Generalized Haar Transforms
The evolution of imaging and audio/video applications over the past two decades has pushed data storage and transmission technologies beyond their previous limits. One of the main and most important steps in data compression, as well as in various pattern-recognition and communication tasks, is the application of discrete orthogonal (spectral) transforms to the input signals and images. This step transforms the original signals into the much less redundant spectral domain, so that the actual compression/recognition is performed on spectral coefficients rather than on the original signals.36–43 Developed in the 1960s and 1970s, fast trigonometric transforms such as the FFT and the discrete cosine transform (DCT) made such techniques practical for a variety of efficient data-representation problems. In particular, DCT-based algorithms have become the industry standard (JPEG/MPEG) in digital image/video compression systems.41
Here, we consider the generalized Haar transform, develop the corresponding fast algorithms, and evaluate their complexities.36,39,44,45
11.4.1 Generalized Haar functions
The generalized Haar functions for any $p$, $n$ ($p$ is a prime power) are defined as follows:
$$
\begin{aligned}
H^{0,0}_{0,0}(k) &= 1, \quad 0 \le k < 1, \\
H^{q,r}_{i,t}(k) &= (\sqrt{p})^{\,i-1} \exp\!\left(j\frac{2\pi}{p}(t-1)r\right), \quad
\frac{q + (t-1)/p}{p^{i-1}} \le k < \frac{q + t/p}{p^{i-1}}, \\
H^{q,r}_{i,t}(k) &= 0 \quad \text{at all other points,}
\end{aligned} \quad (11.84)
$$
where $j = \sqrt{-1}$, $i = 1, 2, \ldots, n$, $r = 1, 2, \ldots, p - 1$, $q = 0, 1, 2, \ldots, p^{i-1} - 1$, and $t = 1, 2, \ldots, p$.

For $p = 2$, Eq. (11.84) yields the definition of the classical Haar functions ($r = 1$):
$$
\begin{aligned}
H^{0}_{0,0}(k) &= 1, \quad 0 \le k < 1, \\
H^{q}_{i,1}(k) &= (\sqrt{2})^{\,i-1}, \quad \frac{2q}{2^i} \le k < \frac{2q+1}{2^i}, \\
H^{q}_{i,2}(k) &= -(\sqrt{2})^{\,i-1}, \quad \frac{2q+1}{2^i} \le k < \frac{2q+2}{2^i}, \\
H^{q}_{i,t}(k) &= 0 \quad \text{at all other points,}
\end{aligned} \quad (11.85)
$$
from which we generate a classical Haar transform matrix of order $2^n$ (see previous chapters in this book).
Example: Observe the generation of the generalized Haar transform matrix for $p = 3$ and $n = 2$; we use the following notations: $a = \exp[j(2\pi/3)]$, $s = \sqrt{3}$:
$$
\begin{aligned}
\text{Row 1:}\quad & H^{0,1}_{1,1}(k) = 1,\ 0 \le k < \tfrac13; \quad H^{0,1}_{1,2}(k) = a,\ \tfrac13 \le k < \tfrac23; \quad H^{0,1}_{1,3}(k) = a^2,\ \tfrac23 \le k < 1; \\
\text{Row 2:}\quad & H^{0,2}_{1,1}(k) = 1,\ 0 \le k < \tfrac13; \quad H^{0,2}_{1,2}(k) = a^2,\ \tfrac13 \le k < \tfrac23; \quad H^{0,2}_{1,3}(k) = a,\ \tfrac23 \le k < 1; & (11.86\text{a}) \\
\text{Row 3:}\quad & H^{0,1}_{2,1}(k) = s,\ 0 \le k < \tfrac19; \quad H^{0,1}_{2,2}(k) = sa,\ \tfrac19 \le k < \tfrac29; \quad H^{0,1}_{2,3}(k) = sa^2,\ \tfrac29 \le k < \tfrac13; \\
\text{Row 4:}\quad & H^{1,1}_{2,1}(k) = s,\ \tfrac13 \le k < \tfrac49; \quad H^{1,1}_{2,2}(k) = sa,\ \tfrac49 \le k < \tfrac59; \quad H^{1,1}_{2,3}(k) = sa^2,\ \tfrac59 \le k < \tfrac23; & (11.86\text{b}) \\
\text{Row 5:}\quad & H^{2,1}_{2,1}(k) = s,\ \tfrac23 \le k < \tfrac79; \quad H^{2,1}_{2,2}(k) = sa,\ \tfrac79 \le k < \tfrac89; \quad H^{2,1}_{2,3}(k) = sa^2,\ \tfrac89 \le k < 1; \\
\text{Row 6:}\quad & H^{0,2}_{2,1}(k) = s,\ 0 \le k < \tfrac19; \quad H^{0,2}_{2,2}(k) = sa^2,\ \tfrac19 \le k < \tfrac29; \quad H^{0,2}_{2,3}(k) = sa,\ \tfrac29 \le k < \tfrac13; & (11.86\text{c}) \\
\text{Row 7:}\quad & H^{1,2}_{2,1}(k) = s,\ \tfrac13 \le k < \tfrac49; \quad H^{1,2}_{2,2}(k) = sa^2,\ \tfrac49 \le k < \tfrac59; \quad H^{1,2}_{2,3}(k) = sa,\ \tfrac59 \le k < \tfrac23; \\
\text{Row 8:}\quad & H^{2,2}_{2,1}(k) = s,\ \tfrac23 \le k < \tfrac79; \quad H^{2,2}_{2,2}(k) = sa^2,\ \tfrac79 \le k < \tfrac89; \quad H^{2,2}_{2,3}(k) = sa,\ \tfrac89 \le k < 1. & (11.86\text{d})
\end{aligned}
$$
Therefore, the complete orthogonal generalized Haar transform matrix for $p = 3$, $n = 2$ has the following form:
$$
H_9 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & a & a & a & a^2 & a^2 & a^2 \\
1 & 1 & 1 & a^2 & a^2 & a^2 & a & a & a \\
s & sa & sa^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa & sa^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa & sa^2 \\
s & sa^2 & sa & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa^2 & sa & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa^2 & sa
\end{pmatrix}. \quad (11.87)
$$
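The orthogonality of this matrix is easy to confirm numerically by assembling it from its three row blocks, $H_3 \otimes i_3$, $sI_3 \otimes b_3$, and $sI_3 \otimes b_3^*$ (with $i_3 = (1,1,1)$ and $b_3 = (1, a, a^2)$), and checking $H_9 H_9^H = 9 I_9$. A sketch in NumPy; the variable names are ours:

```python
import numpy as np

# Assemble H9 of Eq. (11.87) from its three row blocks and confirm
# orthogonality: H9 @ H9^H = 9 * I9.
a = np.exp(2j * np.pi / 3)
s = np.sqrt(3)
i3 = np.ones((1, 3))
b3 = np.array([[1, a, a ** 2]])
H3 = np.vstack([i3, b3, b3.conj()])
H9 = np.vstack([np.kron(H3, i3),                    # rows 1-3: global rows
                s * np.kron(np.eye(3), b3),         # rows 4-6: local blocks
                s * np.kron(np.eye(3), b3.conj())]) # rows 7-9: conjugate blocks
print(np.allclose(H9 @ H9.conj().T, 9 * np.eye(9)))
```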
The complete orthogonal generalized Haar transform matrix for $p = 4$, $n = 2$ has the following form:
$$
H_{16} = \begin{pmatrix}
1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\\
1&1&1&1&j&j&j&j&-1&-1&-1&-1&-j&-j&-j&-j\\
1&1&1&1&-1&-1&-1&-1&1&1&1&1&-1&-1&-1&-1\\
1&1&1&1&-j&-j&-j&-j&-1&-1&-1&-1&j&j&j&j\\
2&2j&-2&-2j&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2&2j&-2&-2j&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2&2j&-2&-2j&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2&2j&-2&-2j\\
2&-2&2&-2&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2&-2&2&-2&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2&-2&2&-2&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2&-2&2&-2\\
2&-2j&-2&2j&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2&-2j&-2&2j&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2&-2j&-2&2j&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2&-2j&-2&2j
\end{pmatrix}. \quad (11.88)
$$
From the above-given generalized Haar transform matrices, we can see that the Haar transform is globally sensitive for the first $p$ of the $p^n$ row vectors, but locally sensitive for all subsequent row vectors.
11.4.2 2^n-point Haar transform

Let us introduce the following notations: $i_2 = (1, 1)$, $j_2 = (1, -1)$. From the relations in Eq. (11.85), we obtain
for $n = 1$,
$$
H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
= \begin{pmatrix} i_2 \\ j_2 \end{pmatrix}; \quad (11.89)
$$
for $n = 2$,
$$
H_4 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
\sqrt{2} & -\sqrt{2} & 0 & 0 \\
0 & 0 & \sqrt{2} & -\sqrt{2}
\end{pmatrix}
= \begin{pmatrix} H_2 \otimes i_2 \\ \sqrt{2}\, I_2 \otimes j_2 \end{pmatrix}; \quad (11.90)
$$
for $n = 3$,
$$
H_8 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
\sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} \\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}
= \begin{pmatrix} H_4 \otimes i_2 \\ 2 I_4 \otimes j_2 \end{pmatrix}. \quad (11.91)
$$
Continuing this process, we obtain a recursive representation of the Haar matrices of any order $2^n$:
$$
H_{2^n} = \begin{pmatrix} H_{2^{n-1}} \otimes i_2 \\ (\sqrt{2})^{\,n-1} I_{2^{n-1}} \otimes j_2 \end{pmatrix},
\quad H_1 = 1, \quad n = 1, 2, \ldots. \quad (11.92)
$$
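The recursion of Eq. (11.92) translates directly into code. A short NumPy sketch (the function name `haar_matrix` is ours) that builds $H_{2^n}$ and confirms that the rows are orthogonal with squared norm $2^n$:

```python
import numpy as np

# Build H_{2^n} from the recursion of Eq. (11.92):
# H_{2^n} = [ H_{2^{n-1}} (x) i2 ; (sqrt(2))^{n-1} I_{2^{n-1}} (x) j2 ].
def haar_matrix(n):
    i2 = np.array([[1.0, 1.0]])
    j2 = np.array([[1.0, -1.0]])
    H = np.array([[1.0]])                      # H_1 = 1
    for k in range(1, n + 1):
        top = np.kron(H, i2)
        bottom = np.sqrt(2) ** (k - 1) * np.kron(np.eye(2 ** (k - 1)), j2)
        H = np.vstack([top, bottom])
    return H

H8 = haar_matrix(3)
# Rows are mutually orthogonal and each has squared norm 2^n = 8.
print(np.allclose(H8 @ H8.T, 8 * np.eye(8)))
```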
Now, we will compute the complexity of a Haar transform of order $2^n$. Note that for $n = 1$, we have $C_+(H_2) = 2$, $C_\times(H_2) = 0$. To calculate the complexity of the $H_4$ transform given above, let $X^T = (x_0, x_1, x_2, x_3)$ be a real-valued vector of length 4. The 1D forward Haar transform of order 4 can be performed as follows:
$$
H_4 X = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 \\
\sqrt{2} & -\sqrt{2} & 0 & 0 \\
0 & 0 & \sqrt{2} & -\sqrt{2}
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix}, \quad (11.93)
$$
where
$$
\begin{aligned}
y_0 &= (x_0 + x_1) + (x_2 + x_3), \\
y_1 &= (x_0 + x_1) - (x_2 + x_3), \\
y_2 &= \sqrt{2}\,(x_0 - x_1), \\
y_3 &= \sqrt{2}\,(x_2 - x_3).
\end{aligned} \quad (11.94)
$$
Then, the complexity of the $H_4$ transform is $C_+(H_4) = 6$, $C_\times(H_4) = 2$.

Now, let $X^T = (x_0, x_1, \ldots, x_{N-1})$ be a real-valued vector of length $N = 2^n$. We introduce the following notations: $P_i$ is a $(0,1)$ column vector of length $N/2$ whose only $i$'th ($i = 1, 2, \ldots, N/2$) element equals 1, and $(X^i)^T = (x_{2i-2}, x_{2i-1})$. The 1D forward Haar transform of order $N$ can be performed as follows:
$$
H_N X = \begin{pmatrix}
(H_{N/2} \otimes i_2) X \\
\left[(\sqrt{2})^{\,n-1} I_{2^{n-1}} \otimes j_2\right] X
\end{pmatrix}. \quad (11.95)
$$
Using the above-given notations, we have
$$
\begin{aligned}
(H_{N/2} \otimes i_2) X
&= (H_{N/2} \otimes i_2)\left(P_1 \otimes X^1 + P_2 \otimes X^2 + \cdots + P_{N/2} \otimes X^{N/2}\right) \\
&= H_{N/2} P_1 \otimes i_2 X^1 + H_{N/2} P_2 \otimes i_2 X^2 + \cdots + H_{N/2} P_{N/2} \otimes i_2 X^{N/2} \\
&= H_{N/2} P_1 (x_0 + x_1) + H_{N/2} P_2 (x_2 + x_3) + \cdots + H_{N/2} P_{N/2} (x_{N-2} + x_{N-1}) \\
&= H_{N/2} \begin{pmatrix} x_0 + x_1 \\ x_2 + x_3 \\ \vdots \\ x_{N-2} + x_{N-1} \end{pmatrix}. \quad (11.96)
\end{aligned}
$$
Then, we can write
$$
C_+(H_{N/2} \otimes i_2) = C_+(H_{N/2}) + \frac{N}{2}, \qquad
C_\times(H_{N/2} \otimes i_2) = C_\times(H_{N/2}). \quad (11.97)
$$
Now we compute the complexity of the $\left[(\sqrt{2})^{\,n-1} I_{N/2} \otimes j_2\right] X$ transform:
$$
\begin{aligned}
\left[(\sqrt{2})^{\,n-1} I_{N/2} \otimes j_2\right] X
&= \left[(\sqrt{2})^{\,n-1} I_{N/2} \otimes j_2\right]\left(P_1 \otimes X^1 + P_2 \otimes X^2 + \cdots + P_{N/2} \otimes X^{N/2}\right) \\
&= (\sqrt{2})^{\,n-1}\left[P_1 (x_0 - x_1) + P_2 (x_2 - x_3) + \cdots + P_{N/2} (x_{N-2} - x_{N-1})\right] \\
&= (\sqrt{2})^{\,n-1} \begin{pmatrix} x_0 - x_1 \\ x_2 - x_3 \\ \vdots \\ x_{N-2} - x_{N-1} \end{pmatrix}, \quad (11.98)
\end{aligned}
$$
from which we obtain
$$
C_+\!\left[(\sqrt{2})^{\,n-1} I_{N/2} \otimes j_2\right] = \frac{N}{2}, \qquad
C_\times\!\left[(\sqrt{2})^{\,n-1} I_{N/2} \otimes j_2\right] = \frac{N}{2}. \quad (11.99)
$$
Finally, the complexity of the $H_N$ transform can be calculated as follows:
$$
C_+(H_{2^n}) = 2^{n+1} - 2, \qquad C_\times(H_{2^n}) = 2^n - 2, \qquad n = 1, 2, 3, \ldots. \quad (11.100)
$$
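The derivation above amounts to a simple recursive algorithm: pairwise sums are transformed recursively by $H_{N/2}$, and pairwise differences are scaled by $(\sqrt{2})^{n-1}$ and output directly. A direct implementation (a sketch; the function name `fast_haar` is ours), cross-checked against the explicit $H_4$ of Eq. (11.93):

```python
import numpy as np

# Fast 2^n-point Haar transform following Eqs. (11.95)-(11.98):
# sums x_{2i} + x_{2i+1} are fed back into H_{N/2}; differences are
# scaled by (sqrt(2))^{n-1} and emitted as the second output half.
def fast_haar(x):
    x = np.asarray(x, dtype=float)
    N = x.size
    if N == 1:
        return x.copy()
    n = int(round(np.log2(N)))
    sums = x[0::2] + x[1::2]      # N/2 additions, Eq. (11.96)
    diffs = x[0::2] - x[1::2]     # N/2 additions, Eq. (11.98)
    return np.concatenate([fast_haar(sums), np.sqrt(2) ** (n - 1) * diffs])

# Cross-check against the explicit H4 of Eq. (11.93).
r2 = np.sqrt(2)
H4 = np.array([[1, 1, 1, 1],
               [1, 1, -1, -1],
               [r2, -r2, 0, 0],
               [0, 0, r2, -r2]])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(fast_haar(x), H4 @ x))
```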
11.4.3 3^n-point generalized Haar transform

Let us introduce the following notations: $i_3 = (1, 1, 1)$, $b_3 = (1, a, a^2)$, where $a = \exp[j(2\pi/3)]$, $j = \sqrt{-1}$. From the relations in Eq. (11.84), we obtain (below, $s = \sqrt{3}$):
for $n = 1$,
$$
H_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{pmatrix}
= \begin{pmatrix} i_3 \\ b_3 \\ b_3^* \end{pmatrix}; \quad (11.101)
$$
for $n = 2$,
$$
H_9 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & a & a & a & a^2 & a^2 & a^2 \\
1 & 1 & 1 & a^2 & a^2 & a^2 & a & a & a \\
s & sa & sa^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa & sa^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa & sa^2 \\
s & sa^2 & sa & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & sa^2 & sa & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & s & sa^2 & sa
\end{pmatrix}
= \begin{pmatrix} H_3 \otimes i_3 \\ s I_3 \otimes b_3 \\ s I_3 \otimes b_3^* \end{pmatrix}; \quad (11.102)
$$
for $n = 3$,
$$
H_{27} = \begin{pmatrix} H_9 \otimes i_3 \\ s^2 I_9 \otimes b_3 \\ s^2 I_9 \otimes b_3^* \end{pmatrix}. \quad (11.103)
$$
Continuing this process, we obtain a recursive representation of the generalized Haar matrices of any order $3^n$:
$$
H_{3^n} = \begin{pmatrix}
H_{3^{n-1}} \otimes i_3 \\
s^{n-1} I_{3^{n-1}} \otimes b_3 \\
s^{n-1} I_{3^{n-1}} \otimes b_3^*
\end{pmatrix}. \quad (11.104)
$$
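The same recursive pattern as for the $2^n$ case carries over; a NumPy sketch (names are ours) that builds $H_{3^n}$ from Eq. (11.104) and verifies $H_{3^n} H_{3^n}^H = 3^n I_{3^n}$:

```python
import numpy as np

# Recursive construction of the generalized Haar matrix H_{3^n}
# following Eq. (11.104).
a = np.exp(2j * np.pi / 3)
i3 = np.ones((1, 3))
b3 = np.array([[1, a, a ** 2]])

def gen_haar3(n):
    H = np.array([[1.0 + 0j]])
    for k in range(1, n + 1):
        s = np.sqrt(3) ** (k - 1)
        I = np.eye(3 ** (k - 1))
        H = np.vstack([np.kron(H, i3),
                       s * np.kron(I, b3),
                       s * np.kron(I, b3.conj())])
    return H

H27 = gen_haar3(3)
print(np.allclose(H27 @ H27.conj().T, 27 * np.eye(27)))
```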
Now we compute the complexity of the generalized Haar transform of order $3^n$. First, we calculate the complexity of the $H_3$ transform. Let $Z^T = (z_0, z_1, z_2) = (x_0 + jy_0, x_1 + jy_1, x_2 + jy_2)$ be a complex-valued vector of length 3, $a = \exp[j(2\pi/3)]$, $j = \sqrt{-1}$. Because the generalized Haar transform matrix $H_3$ is identical to the Chrestenson matrix of order 3, the 1D forward generalized Haar transform of order 3 can be performed as shown in Section 11.3.1 [see Eqs. (11.57) and (11.58)], and has the complexity
$$
\begin{aligned}
C_+(i_3 Z) &= 4, & C_\times(i_3 Z) &= 0, \\
C_+(b_3 Z) &= 6, & C_\times(b_3 Z) &= 4, \\
C_+(b_3^* Z) &= 1, & C_\times(b_3^* Z) &= 0.
\end{aligned} \quad (11.105)
$$
That is, $C_+(H_3) = 11$, $C_\times(H_3) = 4$.

Now, let $Z^T = (z_0, z_1, \ldots, z_{N-1})$ be a complex-valued vector of length $N = 3^n$. We introduce the following notations: $P_i$ denotes a $(0,1)$ column vector of length $N/3$ whose only $i$'th element is equal to 1 ($i = 0, 1, \ldots, N/3 - 1$), and $(Z^i)^T = (z_{3i}, z_{3i+1}, z_{3i+2})$. The 1D forward generalized Haar transform of order $N$ can be performed as follows:
$$
H_{3^n} Z = \begin{pmatrix}
(H_{3^{n-1}} \otimes i_3) Z \\
(s^{n-1} I_{3^{n-1}} \otimes b_3) Z \\
(s^{n-1} I_{3^{n-1}} \otimes b_3^*) Z
\end{pmatrix}. \quad (11.106)
$$
Using the above-given notations, we have
$$
\begin{aligned}
(H_{3^{n-1}} \otimes i_3) Z
&= (H_{3^{n-1}} \otimes i_3)\left(P_0 \otimes Z^0 + P_1 \otimes Z^1 + \cdots + P_{N/3-1} \otimes Z^{N/3-1}\right) \\
&= H_{3^{n-1}} P_0 \otimes i_3 Z^0 + H_{3^{n-1}} P_1 \otimes i_3 Z^1 + \cdots + H_{3^{n-1}} P_{N/3-1} \otimes i_3 Z^{N/3-1} \\
&= H_{3^{n-1}} P_0 (z_0 + z_1 + z_2) + H_{3^{n-1}} P_1 (z_3 + z_4 + z_5) \\
&\quad + \cdots + H_{3^{n-1}} P_{N/3-1} (z_{N-3} + z_{N-2} + z_{N-1}) \\
&= H_{3^{n-1}} \begin{pmatrix} z_0 + z_1 + z_2 \\ z_3 + z_4 + z_5 \\ \vdots \\ z_{N-3} + z_{N-2} + z_{N-1} \end{pmatrix}. \quad (11.107)
\end{aligned}
$$
Then, we can write
$$
C_+(H_{3^{n-1}} \otimes i_3) = C_+(H_{3^{n-1}}) + 4 \cdot 3^{n-1}, \qquad
C_\times(H_{3^{n-1}} \otimes i_3) = C_\times(H_{3^{n-1}}). \quad (11.108)
$$
Now, we compute the complexity of the $(s^{n-1} I_{3^{n-1}} \otimes b_3) Z$ transform:
$$
\begin{aligned}
(s^{n-1} I_{3^{n-1}} \otimes b_3) Z
&= s^{n-1}(I_{3^{n-1}} \otimes b_3)\left(P_0 \otimes Z^0 + P_1 \otimes Z^1 + \cdots + P_{N/3-1} \otimes Z^{N/3-1}\right) \\
&= s^{n-1}\left(P_0 \otimes b_3 Z^0 + P_1 \otimes b_3 Z^1 + \cdots + P_{N/3-1} \otimes b_3 Z^{N/3-1}\right) \\
&= s^{n-1} \begin{pmatrix} z_0 + a z_1 + a^2 z_2 \\ z_3 + a z_4 + a^2 z_5 \\ \vdots \\ z_{N-3} + a z_{N-2} + a^2 z_{N-1} \end{pmatrix}. \quad (11.109)
\end{aligned}
$$
We obtain a similar result for the $(s^{n-1} I_{3^{n-1}} \otimes b_3^*) Z$ transform. Hence, using Eq. (11.105), we obtain
$$
\begin{aligned}
C_+(s^{n-1} I_{3^{n-1}} \otimes b_3) &= 7 \cdot 3^{n-1}, & C_\times(s^{n-1} I_{3^{n-1}} \otimes b_3) &= 5 \cdot 3^{n-1}; \\
C_+(s^{n-1} I_{3^{n-1}} \otimes b_3^*) &= 3^{n-1}, & C_\times(s^{n-1} I_{3^{n-1}} \otimes b_3^*) &= 3^{n-1}.
\end{aligned} \quad (11.110)
$$
Finally, the complexity of the $H_{3^n}$ transform can be calculated as follows:
$$
C_+(H_{3^n}) = C_+(H_{3^{n-1}}) + 12 \cdot 3^{n-1}, \qquad
C_\times(H_{3^n}) = C_\times(H_{3^{n-1}}) + 6 \cdot 3^{n-1}; \quad (11.111)
$$
or
$$
C_+(H_{3^n}) = 6(3^n - 3) + 11, \qquad C_\times(H_{3^n}) = 3(3^n - 3) + 4. \quad (11.112)
$$
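The closed forms of Eq. (11.112) are easy to check against the recurrence of Eq. (11.111). A minimal sketch (the helper names are ours):

```python
# Verify that the closed forms of Eq. (11.112) solve the recurrence of
# Eq. (11.111) with the base case C+(H_3) = 11, Cx(H_3) = 4.
def counts_recursive(n):
    add, mul = 11, 4
    for k in range(2, n + 1):
        add += 12 * 3 ** (k - 1)
        mul += 6 * 3 ** (k - 1)
    return add, mul

def counts_closed(n):
    return 6 * (3 ** n - 3) + 11, 3 * (3 ** n - 3) + 4

for n in range(1, 8):
    assert counts_recursive(n) == counts_closed(n)
print("Eq. (11.112) matches Eq. (11.111) for n = 1..7")
```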
11.4.4 4^n-point generalized Haar transform

Let us introduce the following notations: $i_4 = (1, 1, 1, 1)$, $a_1 = (1, j, -1, -j)$, $a_2 = (1, -1, 1, -1)$, where $j = \sqrt{-1}$. From the relations in Eq. (11.84), we obtain
$$
H_4 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & j & -1 & -j \\
1 & -1 & 1 & -1 \\
1 & -j & -1 & j
\end{pmatrix}
= \begin{pmatrix} i_4 \\ a_1 \\ a_2 \\ a_1^* \end{pmatrix}, \qquad
H_{16} = \begin{pmatrix}
H_4 \otimes i_4 \\
2 I_4 \otimes a_1 \\
2 I_4 \otimes a_2 \\
2 I_4 \otimes a_1^*
\end{pmatrix}. \quad (11.113)
$$
Continuing this process, we obtain a recursive representation of the generalized Haar matrices of any order $4^n$:
$$
H_{4^n} = \begin{pmatrix}
H_{4^{n-1}} \otimes i_4 \\
2^{n-1} I_{4^{n-1}} \otimes a_1 \\
2^{n-1} I_{4^{n-1}} \otimes a_2 \\
2^{n-1} I_{4^{n-1}} \otimes a_1^*
\end{pmatrix}. \quad (11.114)
$$
Now we will compute the complexity of a generalized Haar transform of order $4^n$. First, we calculate the complexity of the $H_4$ transform. Let $Z^T = (z_0, z_1, z_2, z_3)$ be a complex-valued vector of length 4. The 1D forward generalized Haar transform of order 4 can be performed as follows:
$$
H_4 Z = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & j & -1 & -j \\
1 & -1 & 1 & -1 \\
1 & -j & -1 & j
\end{pmatrix}
\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \end{pmatrix}
= \begin{pmatrix} i_4 \\ a_1 \\ a_2 \\ a_1^* \end{pmatrix}
\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \end{pmatrix}
= \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}, \quad (11.115)
$$
where
$$
\begin{aligned}
v_0 &= (x_0 + x_2) + (x_1 + x_3) + j[(y_0 + y_2) + (y_1 + y_3)], \\
v_1 &= (x_0 - x_2) - (y_1 - y_3) + j[(y_0 - y_2) + (x_1 - x_3)], \\
v_2 &= (x_0 + x_2) - (x_1 + x_3) + j[(y_0 + y_2) - (y_1 + y_3)], \\
v_3 &= (x_0 - x_2) + (y_1 - y_3) + j[(y_0 - y_2) - (x_1 - x_3)].
\end{aligned} \quad (11.116)
$$
Then, the complexity of the $H_4$ transform is $C_+(H_4) = 16$, and no multiplications are needed.
Now, let $Z^T = (z_0, z_1, \ldots, z_{N-1})$ be a complex-valued vector of length $N = 4^n$, $n > 1$. We introduce the following notations: $P_i$ is a $(0,1)$ column vector of length $N/4$ whose only $i$'th ($i = 1, 2, \ldots, N/4$) element equals 1, and $(Z^i)^T = (z_{4i-4}, z_{4i-3}, z_{4i-2}, z_{4i-1})$. The 1D forward generalized Haar transform of order $N = 4^n$ can be performed as follows:
$$
H_{4^n} Z = \begin{pmatrix}
(H_{4^{n-1}} \otimes i_4) Z \\
(2^{n-1} I_{4^{n-1}} \otimes a_1) Z \\
(2^{n-1} I_{4^{n-1}} \otimes a_2) Z \\
(2^{n-1} I_{4^{n-1}} \otimes a_1^*) Z
\end{pmatrix}. \quad (11.117)
$$
Using the above-given notations, we have
$$
\begin{aligned}
(H_{4^{n-1}} \otimes i_4) Z
&= (H_{4^{n-1}} \otimes i_4)\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/4} \otimes Z^{N/4}\right) \\
&= H_{4^{n-1}} P_1 \otimes i_4 Z^1 + H_{4^{n-1}} P_2 \otimes i_4 Z^2 + \cdots + H_{4^{n-1}} P_{N/4} \otimes i_4 Z^{N/4} \\
&= H_{4^{n-1}} P_1 (z_0 + z_1 + z_2 + z_3) + H_{4^{n-1}} P_2 (z_4 + \cdots + z_7) \\
&\quad + \cdots + H_{4^{n-1}} P_{N/4} (z_{N-4} + \cdots + z_{N-1}) \\
&= H_{4^{n-1}} \begin{pmatrix} z_0 + z_1 + z_2 + z_3 \\ z_4 + z_5 + z_6 + z_7 \\ \vdots \\ z_{N-4} + z_{N-3} + z_{N-2} + z_{N-1} \end{pmatrix}. \quad (11.118)
\end{aligned}
$$
Because
$$
z_{4i-4} + \cdots + z_{4i-1} = (x_{4i-4} + x_{4i-2}) + (x_{4i-3} + x_{4i-1})
+ j[(y_{4i-4} + y_{4i-2}) + (y_{4i-3} + y_{4i-1})], \quad (11.119)
$$
we can write
$$
C_+(H_{4^{n-1}} \otimes i_4) = C_+(H_{4^{n-1}}) + 6 \cdot 4^{n-1}, \qquad
C_\times(H_{4^{n-1}} \otimes i_4) = C_\text{shift}(H_{4^{n-1}} \otimes i_4) = 0. \quad (11.120)
$$
Now we compute the complexity of the $(2^{n-1} I_{4^{n-1}} \otimes a_1) Z$ transform:
$$
\begin{aligned}
(2^{n-1} I_{4^{n-1}} \otimes a_1) Z
&= (2^{n-1} I_{4^{n-1}} \otimes a_1)\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/4} \otimes Z^{N/4}\right) \\
&= 2^{n-1}\left(P_1 \otimes a_1 Z^1 + P_2 \otimes a_1 Z^2 + \cdots + P_{N/4} \otimes a_1 Z^{N/4}\right). \quad (11.121)
\end{aligned}
$$
Because
$$
\begin{aligned}
a_1 Z^i &= \begin{pmatrix} 1 & j & -1 & -j \end{pmatrix}
\begin{pmatrix} z_{4i-4} \\ z_{4i-3} \\ z_{4i-2} \\ z_{4i-1} \end{pmatrix}
= \begin{pmatrix} 1 & j & -1 & -j \end{pmatrix}
\begin{pmatrix} x_{4i-4} + j y_{4i-4} \\ x_{4i-3} + j y_{4i-3} \\ x_{4i-2} + j y_{4i-2} \\ x_{4i-1} + j y_{4i-1} \end{pmatrix} \\
&= (x_{4i-4} - x_{4i-2}) - (y_{4i-3} - y_{4i-1}) + j[(y_{4i-4} - y_{4i-2}) + (x_{4i-3} - x_{4i-1})], \quad (11.122)
\end{aligned}
$$
we obtain
$$
C_+(2^{n-1} I_{4^{n-1}} \otimes a_1) = 6 \cdot 4^{n-1}, \qquad
C_\text{shift}(2^{n-1} I_{4^{n-1}} \otimes a_1) = 2 \cdot 4^{n-1}. \quad (11.123)
$$
Similarly, we find that
$$
\begin{aligned}
(2^{n-1} I_{4^{n-1}} \otimes a_2) Z
&= (2^{n-1} I_{4^{n-1}} \otimes a_2)\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/4} \otimes Z^{N/4}\right) \\
&= 2^{n-1}\left(P_1 \otimes a_2 Z^1 + P_2 \otimes a_2 Z^2 + \cdots + P_{N/4} \otimes a_2 Z^{N/4}\right), \quad (11.124)
\end{aligned}
$$
and
$$
a_2 Z^i = (x_{4i-4} + x_{4i-2}) - (x_{4i-3} + x_{4i-1})
+ j[(y_{4i-4} + y_{4i-2}) - (y_{4i-3} + y_{4i-1})]. \quad (11.125)
$$
Now, taking into account Eq. (11.119), we obtain
$$
C_+(2^{n-1} I_{4^{n-1}} \otimes a_2) = 2 \cdot 4^{n-1}, \qquad
C_\text{shift}(2^{n-1} I_{4^{n-1}} \otimes a_2) = 2 \cdot 4^{n-1}. \quad (11.126)
$$
The $(2^{n-1} I_{4^{n-1}} \otimes a_1^*) Z$ transform has the form
$$
\begin{aligned}
(2^{n-1} I_{4^{n-1}} \otimes a_1^*) Z
&= (2^{n-1} I_{4^{n-1}} \otimes a_1^*)\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/4} \otimes Z^{N/4}\right) \\
&= 2^{n-1}\left(P_1 \otimes a_1^* Z^1 + P_2 \otimes a_1^* Z^2 + \cdots + P_{N/4} \otimes a_1^* Z^{N/4}\right), \quad (11.127)
\end{aligned}
$$
because
$$
a_1^* Z^i = (x_{4i-4} - x_{4i-2}) + (y_{4i-3} - y_{4i-1})
+ j[(y_{4i-4} - y_{4i-2}) - (x_{4i-3} - x_{4i-1})]. \quad (11.128)
$$
Now, taking into account Eq. (11.122), we obtain
$$
C_+(2^{n-1} I_{4^{n-1}} \otimes a_1^*) = 2 \cdot 4^{n-1}, \qquad
C_\text{shift}(2^{n-1} I_{4^{n-1}} \otimes a_1^*) = 2 \cdot 4^{n-1}. \quad (11.129)
$$
Finally, the complexity of the $H_{4^n}$ transform can be calculated as follows:
$$
C_+(H_4) = 16, \quad C_\text{shift}(H_4) = 0; \qquad
C_+(H_{4^n}) = \frac{16(4^n - 1)}{3}, \quad C_\text{shift}(H_{4^n}) = 6 \cdot 4^{n-1}, \quad n \ge 2. \quad (11.130)
$$
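Because every nonzero entry of $H_{4^n}$ is a power of 2 times a fourth root of unity, this transform needs no true multiplications. The recursion of Eq. (11.114) can be checked numerically; a sketch (names are ours):

```python
import numpy as np

# Recursive construction of H_{4^n} per Eq. (11.114). All nonzero
# entries are powers of 2 times fourth roots of unity, so the transform
# is multiplier-free: only additions and binary shifts are needed.
i4 = np.ones((1, 4))
a1 = np.array([[1, 1j, -1, -1j]])
a2 = np.array([[1, -1, 1, -1]])

def gen_haar4(n):
    H = np.array([[1.0 + 0j]])
    for k in range(1, n + 1):
        w = 2.0 ** (k - 1)
        I = np.eye(4 ** (k - 1))
        H = np.vstack([np.kron(H, i4),
                       w * np.kron(I, a1),
                       w * np.kron(I, a2),
                       w * np.kron(I, a1.conj())])
    return H

H16 = gen_haar4(2)
print(np.allclose(H16 @ H16.conj().T, 16 * np.eye(16)))
```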
11.4.5 5^n-point generalized Haar transform

Let us introduce the following notations: $i_5 = (1, 1, 1, 1, 1)$, $a_1 = (1, a, a^2, a^3, a^4)$, $a_2 = (1, a^2, a^4, a, a^3)$, and $a = \exp(j\,2\pi/5)$, where $j = \sqrt{-1}$. From Eq. (11.84), we obtain
$$
H_5 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & a & a^2 & a^3 & a^4 \\
1 & a^2 & a^4 & a & a^3 \\
1 & a^3 & a & a^4 & a^2 \\
1 & a^4 & a^3 & a^2 & a
\end{pmatrix}
= \begin{pmatrix} i_5 \\ a_1 \\ a_2 \\ a_2^* \\ a_1^* \end{pmatrix}, \qquad
H_{25} = \begin{pmatrix}
H_5 \otimes i_5 \\
\sqrt{5}\, I_5 \otimes a_1 \\
\sqrt{5}\, I_5 \otimes a_2 \\
\sqrt{5}\, I_5 \otimes a_2^* \\
\sqrt{5}\, I_5 \otimes a_1^*
\end{pmatrix}. \quad (11.131)
$$
Continuing this process, we obtain a recursive representation of the generalized Haar matrices of any order $5^n$:
$$
H_{5^n} = \begin{pmatrix}
H_{5^{n-1}} \otimes i_5 \\
(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1 \\
(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2 \\
(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2^* \\
(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1^*
\end{pmatrix}, \quad H_1 = 1, \quad n = 1, 2, \ldots. \quad (11.132)
$$
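As for the other orders, Eq. (11.132) maps directly onto a few lines of NumPy; a sketch (names are ours) that builds $H_{5^n}$ and checks $H_{5^n} H_{5^n}^H = 5^n I_{5^n}$:

```python
import numpy as np

# Recursive construction of H_{5^n} following Eq. (11.132).
a = np.exp(2j * np.pi / 5)
i5 = np.ones((1, 5))
a1 = a ** np.arange(5).reshape(1, 5)                # (1, a, a^2, a^3, a^4)
a2 = a ** ((2 * np.arange(5)) % 5).reshape(1, 5)    # (1, a^2, a^4, a, a^3)

def gen_haar5(n):
    H = np.array([[1.0 + 0j]])
    for k in range(1, n + 1):
        w = np.sqrt(5) ** (k - 1)
        I = np.eye(5 ** (k - 1))
        H = np.vstack([np.kron(H, i5),
                       w * np.kron(I, a1),
                       w * np.kron(I, a2),
                       w * np.kron(I, a2.conj()),
                       w * np.kron(I, a1.conj())])
    return H

H25 = gen_haar5(2)
print(np.allclose(H25 @ H25.conj().T, 25 * np.eye(25)))
```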
Now we will compute the complexity of the generalized Haar transform of order $5^n$. First, we calculate the complexity of the $H_5$ transform. Let $Z^T = (z_0, z_1, \ldots, z_4)$ be a complex-valued vector of length 5; then,
$$
H_5 Z = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & a & a^2 & a^3 & a^4 \\
1 & a^2 & a^4 & a & a^3 \\
1 & a^3 & a & a^4 & a^2 \\
1 & a^4 & a^3 & a^2 & a
\end{pmatrix}
\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \end{pmatrix}
= \begin{pmatrix} i_5 \\ a_1 \\ a_2 \\ a_2^* \\ a_1^* \end{pmatrix}
\begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ z_3 \\ z_4 \end{pmatrix}
= \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix}, \quad (11.133)
$$
where
$$
\begin{aligned}
v_0^r &= x_0 + (x_1 + x_4) + (x_2 + x_3), \qquad
v_0^i = y_0 + (y_1 + y_4) + (y_2 + y_3), \\
v_1^r &= x_0 + (x_1 + x_4)\cos\tfrac{2\pi}{5} - (x_2 + x_3)\cos\tfrac{\pi}{5}
- \left[(y_1 - y_4)\sin\tfrac{2\pi}{5} + (y_2 - y_3)\sin\tfrac{\pi}{5}\right], \\
v_1^i &= y_0 + (y_1 + y_4)\cos\tfrac{2\pi}{5} - (y_2 + y_3)\cos\tfrac{\pi}{5}
+ \left[(x_1 - x_4)\sin\tfrac{2\pi}{5} + (x_2 - x_3)\sin\tfrac{\pi}{5}\right], \\
v_4^r &= x_0 + (x_1 + x_4)\cos\tfrac{2\pi}{5} - (x_2 + x_3)\cos\tfrac{\pi}{5}
+ \left[(y_1 - y_4)\sin\tfrac{2\pi}{5} + (y_2 - y_3)\sin\tfrac{\pi}{5}\right], \\
v_4^i &= y_0 + (y_1 + y_4)\cos\tfrac{2\pi}{5} - (y_2 + y_3)\cos\tfrac{\pi}{5}
- \left[(x_1 - x_4)\sin\tfrac{2\pi}{5} + (x_2 - x_3)\sin\tfrac{\pi}{5}\right], \\
v_2^r &= x_0 - (x_1 + x_4)\cos\tfrac{\pi}{5} + (x_2 + x_3)\cos\tfrac{2\pi}{5}
- \left[(y_1 - y_4)\sin\tfrac{\pi}{5} - (y_2 - y_3)\sin\tfrac{2\pi}{5}\right], \\
v_2^i &= y_0 - (y_1 + y_4)\cos\tfrac{\pi}{5} + (y_2 + y_3)\cos\tfrac{2\pi}{5}
+ \left[(x_1 - x_4)\sin\tfrac{\pi}{5} - (x_2 - x_3)\sin\tfrac{2\pi}{5}\right], \\
v_3^r &= x_0 - (x_1 + x_4)\cos\tfrac{\pi}{5} + (x_2 + x_3)\cos\tfrac{2\pi}{5}
+ \left[(y_1 - y_4)\sin\tfrac{\pi}{5} - (y_2 - y_3)\sin\tfrac{2\pi}{5}\right], \\
v_3^i &= y_0 - (y_1 + y_4)\cos\tfrac{\pi}{5} + (y_2 + y_3)\cos\tfrac{2\pi}{5}
- \left[(x_1 - x_4)\sin\tfrac{\pi}{5} - (x_2 - x_3)\sin\tfrac{2\pi}{5}\right].
\end{aligned} \quad (11.134)
$$
Now we introduce the following notations (an overbar denotes a difference, as opposed to a sum):
$$
\begin{aligned}
X_1 &= x_1 + x_4, & X_2 &= x_2 + x_3, & \bar{X}_1 &= x_1 - x_4, & \bar{X}_2 &= x_2 - x_3, \\
Y_1 &= y_1 + y_4, & Y_2 &= y_2 + y_3, & \bar{Y}_1 &= y_1 - y_4, & \bar{Y}_2 &= y_2 - y_3; \\
C_1 &= X_1 \cos\tfrac{2\pi}{5}, & C_2 &= X_2 \cos\tfrac{\pi}{5}, & C_3 &= Y_1 \cos\tfrac{2\pi}{5}, & C_4 &= Y_2 \cos\tfrac{\pi}{5}, \\
T_1 &= X_1 \cos\tfrac{\pi}{5}, & T_2 &= X_2 \cos\tfrac{2\pi}{5}, & T_3 &= Y_1 \cos\tfrac{\pi}{5}, & T_4 &= Y_2 \cos\tfrac{2\pi}{5}; \\
S_1 &= \bar{X}_1 \sin\tfrac{2\pi}{5}, & S_2 &= \bar{X}_2 \sin\tfrac{\pi}{5}, & S_3 &= \bar{Y}_1 \sin\tfrac{2\pi}{5}, & S_4 &= \bar{Y}_2 \sin\tfrac{\pi}{5}, \\
R_1 &= \bar{Y}_1 \sin\tfrac{\pi}{5}, & R_2 &= \bar{Y}_2 \sin\tfrac{2\pi}{5}, & R_3 &= \bar{X}_1 \sin\tfrac{\pi}{5}, & R_4 &= \bar{X}_2 \sin\tfrac{2\pi}{5}.
\end{aligned} \quad (11.135)
$$
Using the above-given notations, Eq. (11.134) can be represented as
$$
\begin{aligned}
v_0 &= x_0 + X_1 + X_2 + j(y_0 + Y_1 + Y_2), \\
v_1 &= (x_0 + C_1 - C_2) - (S_3 + S_4) + j[(y_0 + C_3 - C_4) + (S_1 + S_2)], \\
v_4 &= (x_0 + C_1 - C_2) + (S_3 + S_4) + j[(y_0 + C_3 - C_4) - (S_1 + S_2)], \\
v_2 &= (x_0 - T_1 + T_2) - (R_1 - R_2) + j[(y_0 - T_3 + T_4) + (R_3 - R_4)], \\
v_3 &= (x_0 - T_1 + T_2) + (R_1 - R_2) + j[(y_0 - T_3 + T_4) - (R_3 - R_4)].
\end{aligned} \quad (11.136)
$$
Now, it is not difficult to find that $C_+(i_5) = C_+(a_1) = C_+(a_2) = 8$, $C_+(a_1^*) = C_+(a_2^*) = 2$, $C_\times(i_5) = C_\times(a_1^*) = C_\times(a_2^*) = 0$, and $C_\times(a_1) = C_\times(a_2) = 8$. Therefore, we obtain $C_+(H_5) = 28$, $C_\times(H_5) = 16$.
Now, let $Z^T = (z_0, z_1, \ldots, z_{N-1})$ be a complex-valued vector of length $N = 5^n$ ($n > 1$). We introduce the following notations: $P_i$ is a $(0,1)$ column vector of length $N/5$ whose only $i$'th element equals 1 ($i = 1, 2, \ldots, N/5$), and $(Z^i)^T = (z_{5i-5}, z_{5i-4}, z_{5i-3}, z_{5i-2}, z_{5i-1})$. The 1D forward generalized Haar transform of order $N = 5^n$ can be performed as follows:
$$
H_{5^n} Z = \begin{pmatrix}
(H_{5^{n-1}} \otimes i_5) Z \\
\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right] Z \\
\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2\right] Z \\
\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2^*\right] Z \\
\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1^*\right] Z
\end{pmatrix}. \quad (11.137)
$$
Using the above-given notations, we have
$$
\begin{aligned}
(H_{5^{n-1}} \otimes i_5) Z
&= (H_{5^{n-1}} \otimes i_5)\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/5} \otimes Z^{N/5}\right) \\
&= H_{N/5} P_1 (z_0 + \cdots + z_4) + H_{N/5} P_2 (z_5 + \cdots + z_9) \\
&\quad + \cdots + H_{N/5} P_{N/5} (z_{N-5} + \cdots + z_{N-1}) \\
&= H_{N/5} \begin{pmatrix} z_0 + z_1 + \cdots + z_4 \\ z_5 + z_6 + \cdots + z_9 \\ \vdots \\ z_{N-5} + \cdots + z_{N-1} \end{pmatrix}. \quad (11.138)
\end{aligned}
$$
Then, we can write
$$
C_+(H_{5^{n-1}} \otimes i_5) = C_+(H_{5^{n-1}}) + 8 \cdot 5^{n-1}, \qquad
C_\times(H_{5^{n-1}} \otimes i_5) = C_\times(H_{5^{n-1}}). \quad (11.139)
$$
Now we compute the complexity of the $\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right] Z$ transform:
$$
\begin{aligned}
\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right] Z
&= \left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right]\left(P_1 \otimes Z^1 + P_2 \otimes Z^2 + \cdots + P_{N/5} \otimes Z^{N/5}\right) \\
&= (\sqrt{5})^{\,n-1}\left(P_1 \otimes a_1 Z^1 + P_2 \otimes a_1 Z^2 + \cdots + P_{N/5} \otimes a_1 Z^{N/5}\right), \quad (11.140)
\end{aligned}
$$
from which we obtain
$$
\begin{aligned}
C_+\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right] &= 5^{n-1} C_+(a_1) = 8 \cdot 5^{n-1}, \\
C_\times\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1\right] &= 5^{n-1} + 5^{n-1} C_\times(a_1) = 9 \cdot 5^{n-1}.
\end{aligned} \quad (11.141)
$$
Table 11.2 Results of the complexities of the generalized Haar transforms.

Size N     n    Addition            Multiplication     Shift
$2^n$           $2^{n+1}-2$         $2^n-2$            0
2          1    2                   0                  0
4          2    6                   2                  0
8          3    14                  6                  0
16         4    30                  14                 0
$3^n$           $6(3^n-3)+11$       $3(3^n-3)+4$       0
3          1    11                  4                  0
9          2    47                  22                 0
27         3    155                 76                 0
81         4    479                 238                0
$4^n$           $16(4^n-1)/3$       0                  $6 \cdot 4^{n-2}$
4          1    16                  0                  0
16         2    80                  0                  6
64         3    336                 0                  24
256        4    1360                0                  96
$5^n$           $7(5^n-1)$          $6 \cdot 5^n-14$   0
5          1    28                  16                 0
25         2    168                 136                0
125        3    968                 736                0
625        4    4368                3736               0
Similarly, we find that
$$
\begin{aligned}
C_+\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2\right] &= 5^{n-1} C_+(a_2) = 8 \cdot 5^{n-1}, \\
C_\times\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2\right] &= 5^{n-1} + 5^{n-1} C_\times(a_2) = 9 \cdot 5^{n-1}; \\
C_+\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2^*\right] &= 5^{n-1} C_+(a_2^*) = 2 \cdot 5^{n-1}, \\
C_\times\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_2^*\right] &= 5^{n-1} + 5^{n-1} C_\times(a_2^*) = 3 \cdot 5^{n-1}; \\
C_+\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1^*\right] &= 5^{n-1} C_+(a_1^*) = 2 \cdot 5^{n-1}, \\
C_\times\!\left[(\sqrt{5})^{\,n-1} I_{5^{n-1}} \otimes a_1^*\right] &= 5^{n-1} + 5^{n-1} C_\times(a_1^*) = 3 \cdot 5^{n-1}.
\end{aligned} \quad (11.142)
$$
Finally, the complexity of the $H_{5^n}$ transform can be calculated as follows:
$$
C_+(H_{5^n}) = 7(5^n - 1), \qquad C_\times(H_{5^n}) = 6 \cdot 5^n - 14, \qquad n = 1, 2, \ldots. \quad (11.143)
$$
The numerical results of the complexities of the generalized Haar transforms are given in Table 11.2.
References
1. A. T. Butson, “Generalized Hadamard matrices,” Proc. Am. Math. Soc. 13,894–898 (1962).
2. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes inMathematics, 1168, Springer, New York (1985).
3. C. Mackenzie and J. Seberry, “Maximal q-ary codes and Plotkin’s bound,”Ars Combin 26B, 37–50 (1988).
4. D. Jungnickel and H. Lenz, Design Theory, Cambridge University Press,Cambridge, UK (1993).
5. S. S. Agaian, “Advances and problems of fast orthogonal transforms forsignal-images processing applications, Part 1,” in Ser. Pattern Recognition,Classification, Forecasting Yearbook, The Russian Academy of Sciences,146–215 Nauka, Moscow (1990).
6. G. Beauchamp, Walsh Functions and their Applications, Academic Press,London (1980).
7. I. J. Good, “The interaction algorithm and practical Fourier analysis,” J. R. Stat. Soc. Ser. B 20, 361–372 (1958).
8. A. M. Trachtman and B. A. Trachtman, Fundamentals of the Theory ofDiscrete Signals on Finite Intervals, Sov. Radio, Moscow (1975) (in Russian).
9. P. J. Shlichta, “Higher dimensional Hadamard matrices,” IEEE Trans. Inf. Theory IT-25 (5), 566–572 (1979).
10. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs andapplications,” IEEE Trans. Inf. Theory IT-27, 772–779 (1981).
11. S. S. Agaian and K. O. Egiazarian, “Generalized Hadamard matrices,” Math.Prob. Comput. Sci. 12, 51–88 (1984) (in Russian), Yerevan.
12. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamardmatrices,” Open Syst. Inf. Dyn. 13, 133–177 (2006).
13. T. Butson, “Relations among generalized Hadamard matrices, relativedifference sets, and maximal length linear recurring sequences,” Can. J. Math.15, 42–48 (1963).
14. R. J. Turyn, Complex Hadamard Matrices. Combinatorial Structures and theirApplications, Gordon and Breach, New York (1970) pp. 435–437.
15. C. H. Yang, “Maximal binary matrices and sum of two squares,” Math.Comput 30 (133), 148–153 (1976).
16. C. Watari, “A generalization of Haar functions,” Tohoku Math. J. 8 (3),286–290 (1956).
17. S. Agaian, J. Astola, and K. Egiazarian, Binary Polynomial Transforms and Nonlinear Digital Filters, Marcel Dekker, New York (1995).
18. V. M. Sidel’nikov, “Generalized Hadamard matrices and their applications,”Tr. Diskr. Mat. 3, 249–268 (2000) Fizmatlit, Moscow.
19. B. W. Brock, “Hermitian congruences and the existence and completion ofgeneralized Hadamard matrices,” J. Combin. Theory A 49, 233–261 (1988).
20. W. De Launey, “Generalised Hadamard matrices whose rows and columnsform a group,” in Combinatorial Mathematics X, L. R. A Casse, Ed., LectureNotes in Mathematics, Springer, Berlin (1983).
21. T. P. McDonough, V. C. Mavron, and C. A. Pallikaros, “GeneralisedHadamard matrices and translations,” J. Stat. Planning Inference 86, 527–533(2000).
22. X. Jiang, M. H. Lee, R. P. Paudel, and T. C. Shin, “Codes from generalizedHadamard matrices,” in Proc. of Int. Conf. on Systems and NetworksCommunication, ICSNC’06, 67. IEEE Computer Society, Washington, DC(2006).
23. K. J. Horadam, “An introduction to cocyclic generalized Hadamard matrices,”Discrete Appl. Math. 102, 115–131 (2000).
24. S. A. Stepanov, “Nonlinear codes from modified Butson–Hadamardmatrices,” Discrete Math. Appl. 16 (5), 429–438 (2006).
25. J. H. Beder, “Conjectures about Hadamard matrices,” J. Stat. Plan. Inference72, 7–14 (1998).
26. D. A. Drake, “Partial geometries and generalized Hadamard matrices,” Can.J. Math. 31, 617–627 (1979).
27. J. L. Hayden, “Generalized Hadamard matrices,” Des. Codes Cryptog. 12,69–73 (1997).
28. A. Pererra and K. J. Horadam, “Cocyclic generalized Hadamard matrices andcentral relative difference sets,” Des. Codes Cryptog. 15, 187–200 (1998).
29. A. Winterhof, “On the non-existence of generalized Hadamard matrices,”J. Stat. Plan. Inference 84, 337–342 (2000).
30. D. A. Drake, “Partial geometries and generalized Hadamard matrices,” Can.J. Math. 31, 617–627 (1979).
31. P. Levy, “Sur une generalisation des fonctions orthogonales de M. Rade-macher,” Commun. Math. Helv. 16, 146–152 (1944).
32. H. E. Chrestenson, “A class of generalized Walsh functions,” Pacific J. Math.5, 17–31 (1955).
33. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital SignalProcessing, Springer-Verlag, Berlin (1975).
34. M. G. Karpovsty, Finite Orthogonal Series in the Designs of Digital Devices,John Wiley & Sons, Hoboken, NJ (1976).
35. S. L. Hurst, M. D. Miller, and J. C. Muzio, Spectral Techniques in DigitalLogic, Academic Press, New York (1985).
36. K. R. Rao and R. K. Narasimham, “A family of discrete Haar transforms,”Comput. Elec. Eng. 2, 367–368 (1975).
37. S. S. Agaian, K. O. Egiazarian, and N. A. Babaian, “A family of fast orthogonal transforms reflecting psychophysical properties of vision,” Pattern Recogn. Image Anal. 2 (1), 1–8 (1992).
38. J. Seberry and X. M. Zhang, “Some orthogonal designs and complexHadamard matrices by using two Hadamard matrices,” Austral. J. Combin.Theory 4, 93–102 (1991).
39. S. Agaian and H. Bajadian, “Generalized orthogonal Haar systems: synthesis, metric and computing properties,” in Proc. of Haar Memorial Conf., Colloq. Math. Soc. János Bolyai, 1, 97–113, North-Holland, Amsterdam (1987).
40. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital SignalProcessing, Springer-Verlag, New York (1975).
41. A. Haar, “Zur Theorie der orthogonalen Funktionensysteme,” Math. Ann. 69, 331–371 (1910).
42. K. R. Rao and R. K. Narasimham, “A family of discrete Haar transforms,”Comput. Elec. Eng. 2, 367–368 (1975).
43. J. Seberry and X. M. Zhang, “Some orthogonal designs and complexHadamard matrices by using two Hadamard matrices,” Austral. J. Combin.Theory 4, 93–102 (1991).
44. Z. Mingyong, Z. Lui, and H. Hama, “A resolution-controllable harmonicalretrieval approach on the Chrestenson discrete space,” IEEE Trans. SignalProcess 42 (5), 1281–1284 (1994).
45. H. G. Sarukhanyan, “Fast generalized Haar transforms,” Math. Prob. Comput.Sci. 31, 79–89 (2008) Yerevan, Armenia.
Chapter 12
Jacket Hadamard Matrices
In this chapter, we present variations of the HT called centered-weighted HTs: the reverse jacket transform (RJT), the complex RJT (CRJT), the extended CRJT (ECRJT), the extended CRJT over finite fields, and the generalized RJT. Centered-weighted HTs have found several interesting applications in image processing, communication sequencing, and cryptology.1–26 These transforms have a simplicity similar to that of the HT, but offer a better quality of representation over the same region of the image.2 The development of this theory is motivated by two facts: (1) the human visual system is most sensitive to the special (in general, midspatial) fragments, and (2) the same part of the data sequences, or the middle range of frequency components, is more important.
First, we present the recursive generation of the real weighted HT matrices. Thenwe introduce the methods of generating complex weighted Hadamard matrices.
12.1 Introduction to Jacket Matrices
Definition 12.1.1: The square matrix $A = (a_{i,j})$ of order $n$, whose entries are nonzero and real, complex, or from a finite field, is called a jacket matrix if it satisfies
$$
AB = BA = I_n, \quad (12.1)
$$
where $I_n$ is the identity matrix of order $n$, $B = \frac{1}{n}\left(a_{i,j}^{-1}\right)^T$, and $T$ denotes the transpose of the matrix. In other words, the inverse of a jacket matrix is determined by its elementwise or blockwise inverse. The definition above may also be expressed as
$$
\sum_{i=1}^{n} a_{i,j}^{-1}\, a_{i,t} =
\begin{cases} n, & \text{if } j = t, \\ 0, & \text{if } j \ne t, \end{cases}
\qquad j, t = 1, 2, \ldots, n. \quad (12.2)
$$
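This defining property is straightforward to test numerically: form $(1/n)$ times the transposed elementwise inverse of $A$ and check that it is a two-sided inverse. A sketch (the helper name `is_jacket` is ours):

```python
import numpy as np

# Numerical check of Definition 12.1.1: A is a jacket matrix when
# B = (1/n) * (elementwise inverse of A)^T satisfies A B = B A = I.
def is_jacket(A, tol=1e-10):
    n = A.shape[0]
    B = (1.0 / A).T / n
    return (np.allclose(A @ B, np.eye(n), atol=tol)
            and np.allclose(B @ A, np.eye(n), atol=tol))

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])        # Hadamard matrix of order 2
w = np.exp(2j * np.pi / 5)
F5 = w ** np.outer(np.arange(5), np.arange(5))  # Fourier matrix of order 5
print(is_jacket(H2), is_jacket(F5))
```

For real $\pm 1$ matrices the elementwise inverse equals the matrix itself, so the check reduces to the Hadamard condition $A A^T = n I_n$.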
12.1.1 Examples of jacket matrices

(1)
$$
J_2 = \begin{pmatrix} 1 & 1 \\ 1 & \alpha \end{pmatrix}, \qquad
J_2^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & \alpha \end{pmatrix}, \quad (12.3)
$$
where $1 + \alpha = 0$, $\alpha^2 = 1$, which means that $\alpha = -1$; i.e., $J_2$ is the Hadamard matrix of order 2.

(2)
$$
A = \begin{pmatrix} a & \sqrt{ac} \\ \sqrt{ac} & -c \end{pmatrix}, \qquad
A^{-1} = \frac{1}{2}\begin{pmatrix} \dfrac{1}{a} & \dfrac{1}{\sqrt{ac}} \\[1ex] \dfrac{1}{\sqrt{ac}} & -\dfrac{1}{c} \end{pmatrix}; \quad (12.4)
$$
hence, $A$ is a jacket matrix for all nonzero $a$ and $c$, and when $a = c = 1$, it is a Hadamard matrix.
(3) In Ref. 9, the kernel jacket matrix of order 2 is defined as
$$
J_2 = \begin{pmatrix} a & b \\ b^T & -c \end{pmatrix}, \quad (12.5)
$$
where $a$, $b$, $c$ are all nonzero real numbers. For $J_2$ to be orthogonal, we should have
$$
J_2 J_2^T = 2 I_2 = \begin{pmatrix} a & b \\ b^T & -c \end{pmatrix}
\begin{pmatrix} a & b^T \\ b & -c \end{pmatrix}
= \begin{pmatrix} a^2 + b^2 & a b^T - b c \\ b^T a - c b & (b^T)^2 + c^2 \end{pmatrix}. \quad (12.6)
$$
Therefore, we have $b^T = b$, $c = a$; the orthogonal $J_2$ can then be rewritten as
$$
J_2 = \begin{pmatrix} a & b \\ b & -a \end{pmatrix}. \quad (12.7)
$$
According to Definition 12.1.1, the inverse of $J_2$ should be rewritten as
$$
J_2^{-1} = \begin{pmatrix} \dfrac{1}{a} & \dfrac{1}{b} \\[1ex] \dfrac{1}{b} & -\dfrac{1}{a} \end{pmatrix}, \quad (12.8)
$$
where we must have $a = b$. Clearly, the result is a classical Hadamard matrix of order 2,
$$
J_2 = a H_2 = \begin{pmatrix} a & a \\ a & -a \end{pmatrix}. \quad (12.9)
$$
(4) Let $\alpha^2 + \alpha + 1 = 0$, $\alpha^3 = 1$; then, we have
$$
J_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 \\ 1 & \alpha^2 & \alpha \end{pmatrix}, \qquad
J_3^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 \\ 1 & \alpha^2 & \alpha \\ 1 & \alpha & \alpha^2 \end{pmatrix}. \quad (12.10)
$$
(5) Let $w$ be a third root of unity, i.e., $w = \exp(j\,2\pi/3) = \cos\frac{2\pi}{3} + j\sin\frac{2\pi}{3}$, $j = \sqrt{-1}$; then, we have
$$
B_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & w & w^2 \\ 1 & w^2 & w \end{pmatrix}, \qquad
B_3^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 \\[0.5ex] 1 & \dfrac{1}{w} & \dfrac{1}{w^2} \\[1ex] 1 & \dfrac{1}{w^2} & \dfrac{1}{w} \end{pmatrix}. \quad (12.11)
$$
(6) Let $w$ be a fourth root of unity, i.e., $w = \exp(j\,2\pi/4) = \cos\frac{2\pi}{4} + j\sin\frac{2\pi}{4} = j$; then, we have a complex Hadamard matrix, which is a jacket matrix as well:
$$
B_4 = (j^{nk})_{n,k=0}^{3} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & j & -1 & -j \\
1 & -1 & 1 & -1 \\
1 & -j & -1 & j
\end{pmatrix}, \qquad
B_4^{-1} = \frac{1}{4}\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -j & -1 & j \\
1 & -1 & 1 & -1 \\
1 & j & -1 & -j
\end{pmatrix}. \quad (12.12)
$$
(7) Let $w$ be a fifth root of unity, i.e., $w = \exp(j\,2\pi/5) = \cos\frac{2\pi}{5} + j\sin\frac{2\pi}{5}$; then, we have the Fourier matrix of order 5, which is a jacket matrix as well:
$$
B_5 = (w^{nk})_{n,k=0}^{4} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & w & w^2 & w^3 & w^4 \\
1 & w^2 & w^4 & w & w^3 \\
1 & w^3 & w & w^4 & w^2 \\
1 & w^4 & w^3 & w^2 & w
\end{pmatrix}, \qquad
B_5^{-1} = \frac{1}{5}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & w^4 & w^3 & w^2 & w \\
1 & w^3 & w & w^4 & w^2 \\
1 & w^2 & w^4 & w & w^3 \\
1 & w & w^2 & w^3 & w^4
\end{pmatrix}. \quad (12.13)
$$
12.1.2 Properties of jacket matrices

Some preliminary properties of jacket matrices are given below.
(1) The matrix $A = (a_{i,j})_{i,j=0}^{n-1}$, over a field F, is a jacket matrix if and only if
\[
\sum_{i=0}^{n-1} \frac{a_{j,i}}{a_{i,k}} = \sum_{i=0}^{n-1} \frac{a_{i,k}}{a_{j,i}} = 0, \quad \text{for all } j \neq k, \; j, k = 0, 1, \ldots, n-1. \qquad (12.14)
\]
A proof follows from the definition of jacket matrices.
(2) For any integer n, there exists a jacket matrix of order n.
Indeed, let $A = (a_{t,j})_{t,j=0}^{n-1}$ be a matrix with elements $a_{t,j} = \exp\{i\,\frac{2\pi}{n}\,tj\}$, where $i = \sqrt{-1}$. It is easy to show that
\[
\sum_{t=0}^{n-1} \frac{a_{j,t}}{a_{k,t}} = \sum_{t=0}^{n-1} \exp\left\{i\,\frac{2\pi}{n}(j-k)t\right\} = 0 \quad \text{for all } j \neq k. \qquad (12.15)
\]
Hence, $A = (a_{t,j})_{t,j=0}^{n-1}$ is a jacket matrix of order n.
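The jacket condition is easy to test numerically. The sketch below (our illustration, not code from the book) builds the matrix $a_{t,j} = \exp(i\,2\pi tj/n)$ and verifies that dividing the transposed elementwise reciprocal by n yields its inverse, i.e., that it is a jacket matrix for several orders n:

```python
import cmath

def matmul(A, B):
    # Plain complex matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def jacket_inverse(A):
    # Candidate jacket inverse: transpose of the elementwise reciprocals, divided by n.
    n = len(A)
    return [[1 / (n * A[j][i]) for j in range(n)] for i in range(n)]

def is_identity(M, tol=1e-9):
    return all(abs(M[i][k] - (i == k)) < tol
               for i in range(len(M)) for k in range(len(M)))

def fourier_jacket(n):
    # a_{t,j} = exp(i 2 pi t j / n), the Fourier matrix of order n.
    return [[cmath.exp(2j * cmath.pi * t * j / n) for j in range(n)]
            for t in range(n)]

for n in range(2, 8):
    A = fourier_jacket(n)
    assert is_identity(matmul(A, jacket_inverse(A)))  # A is a jacket matrix of order n
```

For n = 4 and n = 5, this reproduces the matrices $B_4$ and $B_5$ of Eqs. (12.12) and (12.13).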
(3) Let $A = (a_{t,j})_{t,j=0}^{n-1}$ be a jacket matrix of order n. Then, it can be shown that
(a) If $|a_{t,j}| = 1$ for all $t, j = 0, 1, \ldots, n-1$, then A is a complex Hadamard matrix.
(b) If $a_{t,j}$ is real and $a_{t,j}^2 = 1$ for all $t, j = 0, 1, \ldots, n-1$, then A is a Hadamard matrix.
(4) Let $A = (a_{t,j})_{t,j=0}^{n-1}$ be a jacket matrix of order n. Then, it can be shown that
(a) $A^T$, $A^{-1}$, $A^H$ are also jacket matrices $\{A^H = [(1/a_{i,j})]^T\}$.
(b) $(\det A)(\det A^H) = n^n$.
(c) Proof: Because A is a jacket matrix, $AA^H = nI_n$. Thus, $A^{-1} = (1/n)A^H$. Hence, $A^H(A^H)^H = A^HA = nI_n$. Thus, $A^H$ is a jacket matrix. Similarly, $A^{-1}$ and $A^T$ are also jacket matrices. Item (b) follows from $n^n = \det(AA^H) = (\det A)(\det A^H)$.
(d) Let $A = (a_{t,j})_{t,j=0}^{n-1}$ be a jacket matrix of order n and let P and Q both be diagonal or permutation matrices. Then, PAQ is also a jacket matrix.
(5) The Kronecker product of two jacket matrices is also a jacket matrix.
Proof: Let A and B be jacket matrices of order n and m, respectively. Then, $AA^H = nI_n$ and $BB^H = mI_m$. Hence,
\[
(A \otimes B)(A \otimes B)^H = (A \otimes B)(A^H \otimes B^H) = AA^H \otimes BB^H = mnI_{mn}. \qquad (12.16)
\]
Thus, $A \otimes B$ is a jacket matrix of order mn.
(6) Let A and B be jacket matrices of order n, and let α be a nonzero number.
Then
\[
\begin{pmatrix} A & \alpha B \\ A & -\alpha B \end{pmatrix} \qquad (12.17)
\]
is also a jacket matrix of order 2n. For example, let
\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & w & w^2 \\ 1 & w^2 & w \end{pmatrix}, \qquad
B = \begin{pmatrix} 1 & 1 & 1 \\ 1 & w^2 & w \\ 1 & w & w^2 \end{pmatrix}, \qquad (12.18)
\]
where w is a third root of unity. Then, the following matrix is the jacket matrix dependent on a parameter α:
\[
J_6(\alpha) = \begin{pmatrix}
1 & 1 & 1 & \alpha & \alpha & \alpha \\
1 & w & w^2 & \alpha & \alpha w^2 & \alpha w \\
1 & w^2 & w & \alpha & \alpha w & \alpha w^2 \\
1 & 1 & 1 & -\alpha & -\alpha & -\alpha \\
1 & w & w^2 & -\alpha & -\alpha w^2 & -\alpha w \\
1 & w^2 & w & -\alpha & -\alpha w & -\alpha w^2
\end{pmatrix}, \qquad (12.19a)
\]
\[
J_6^{-1}(\alpha) = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\[2pt]
1 & \frac{1}{w} & \frac{1}{w^2} & 1 & \frac{1}{w} & \frac{1}{w^2} \\[2pt]
1 & \frac{1}{w^2} & \frac{1}{w} & 1 & \frac{1}{w^2} & \frac{1}{w} \\[2pt]
\frac{1}{\alpha} & \frac{1}{\alpha} & \frac{1}{\alpha} & -\frac{1}{\alpha} & -\frac{1}{\alpha} & -\frac{1}{\alpha} \\[2pt]
\frac{1}{\alpha} & \frac{1}{\alpha w^2} & \frac{1}{\alpha w} & -\frac{1}{\alpha} & -\frac{1}{\alpha w^2} & -\frac{1}{\alpha w} \\[2pt]
\frac{1}{\alpha} & \frac{1}{\alpha w} & \frac{1}{\alpha w^2} & -\frac{1}{\alpha} & -\frac{1}{\alpha w} & -\frac{1}{\alpha w^2}
\end{pmatrix}. \qquad (12.19b)
\]
The jacket matrices and their inverse matrices of order 6 for various values of α are given below (remember that w is a third root of unity):
(1) α = 2:
\[
J_6(2) = \begin{pmatrix}
1 & 1 & 1 & 2 & 2 & 2 \\
1 & w & w^2 & 2 & 2w^2 & 2w \\
1 & w^2 & w & 2 & 2w & 2w^2 \\
1 & 1 & 1 & -2 & -2 & -2 \\
1 & w & w^2 & -2 & -2w^2 & -2w \\
1 & w^2 & w & -2 & -2w & -2w^2
\end{pmatrix}, \qquad (12.20a)
\]
\[
J_6^{-1}(2) = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\[2pt]
1 & \frac{1}{w} & \frac{1}{w^2} & 1 & \frac{1}{w} & \frac{1}{w^2} \\[2pt]
1 & \frac{1}{w^2} & \frac{1}{w} & 1 & \frac{1}{w^2} & \frac{1}{w} \\[2pt]
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\[2pt]
\frac{1}{2} & \frac{1}{2w^2} & \frac{1}{2w} & -\frac{1}{2} & -\frac{1}{2w^2} & -\frac{1}{2w} \\[2pt]
\frac{1}{2} & \frac{1}{2w} & \frac{1}{2w^2} & -\frac{1}{2} & -\frac{1}{2w} & -\frac{1}{2w^2}
\end{pmatrix}. \qquad (12.20b)
\]
(2) α = 1/2:
\[
J_6\!\left(\tfrac{1}{2}\right) = \begin{pmatrix}
1 & 1 & 1 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\[2pt]
1 & w & w^2 & \frac{1}{2} & \frac{w^2}{2} & \frac{w}{2} \\[2pt]
1 & w^2 & w & \frac{1}{2} & \frac{w}{2} & \frac{w^2}{2} \\[2pt]
1 & 1 & 1 & -\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\[2pt]
1 & w & w^2 & -\frac{1}{2} & -\frac{w^2}{2} & -\frac{w}{2} \\[2pt]
1 & w^2 & w & -\frac{1}{2} & -\frac{w}{2} & -\frac{w^2}{2}
\end{pmatrix}, \qquad
J_6^{-1}\!\left(\tfrac{1}{2}\right) = \frac{1}{6}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\[2pt]
1 & \frac{1}{w} & \frac{1}{w^2} & 1 & \frac{1}{w} & \frac{1}{w^2} \\[2pt]
1 & \frac{1}{w^2} & \frac{1}{w} & 1 & \frac{1}{w^2} & \frac{1}{w} \\[2pt]
2 & 2 & 2 & -2 & -2 & -2 \\[2pt]
2 & \frac{2}{w^2} & \frac{2}{w} & -2 & -\frac{2}{w^2} & -\frac{2}{w} \\[2pt]
2 & \frac{2}{w} & \frac{2}{w^2} & -2 & -\frac{2}{w} & -\frac{2}{w^2}
\end{pmatrix}. \qquad (12.21)
\]
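The construction of Eq. (12.17) and the inverse pattern of Eq. (12.19b) can be checked numerically; the sketch below (our illustration, not from the book) assembles $J_6(\alpha)$ from the matrices A and B of Eq. (12.18) and tests that the transposed elementwise reciprocal, scaled by 1/6, really inverts it:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # third root of unity
A = [[1, 1, 1], [1, w, w * w], [1, w * w, w]]
B = [[1, 1, 1], [1, w * w, w], [1, w, w * w]]

def J6(alpha):
    # Eq. (12.17): stack [A, alpha*B] over [A, -alpha*B].
    top = [rowA + [alpha * b for b in rowB] for rowA, rowB in zip(A, B)]
    bottom = [rowA + [-alpha * b for b in rowB] for rowA, rowB in zip(A, B)]
    return top + bottom

def matmul(M, N):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*N)] for row in M]

def jacket_inverse(M):
    # Eq. (12.19b) pattern: transpose of elementwise reciprocals over the order.
    n = len(M)
    return [[1 / (n * M[j][i]) for j in range(n)] for i in range(n)]

def is_identity(M, tol=1e-9):
    return all(abs(M[i][k] - (i == k)) < tol
               for i in range(len(M)) for k in range(len(M)))

for alpha in (2, 0.5, 3 + 1j):
    M = J6(alpha)
    assert is_identity(matmul(M, jacket_inverse(M)))  # J6(alpha) is a jacket matrix
```

The values α = 2 and α = 1/2 reproduce Eqs. (12.20a)–(12.21); the check also passes for complex α, since the α-factors cancel in the jacket product.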
Consider the jacket matrix of order n of the following form:
\[
J_n = \begin{pmatrix} 1 & e^T \\ e & A \end{pmatrix}, \qquad (12.22)
\]
where e is the column vector of all 1 elements of length n − 1. The matrix A of order n − 1 we call the core of the matrix $J_n$. Lee (Ref. 10) proves the following theorem:
Theorem 12.1.2.1: (Ref. 10) Let $A_1$, $B_1$, $C_1$, and $D_1$ be the cores of jacket matrices A, B, C, and D of order n, respectively. Then, $AC^H = BD^H$ if and only if
\[
X_{2n} = \begin{pmatrix}
1 & e^T & e^T & 1 \\
e & A_1 & B_1 & e \\
e & C_1 & -D_1 & -e \\
1 & e^T & -e^T & -1
\end{pmatrix} \qquad (12.23)
\]
is a jacket matrix of order 2n, where e is the column vector of all 1 elements of length n − 1 (remember that if $A = (a_{i,j})$, then $A^H = [(1/a_{i,j})]^T$).
Proof: Because A is a jacket matrix of order n with a core $A_1$, we have
\[
AA^H = A^HA = \begin{pmatrix} 1 & e^T \\ e & A_1 \end{pmatrix}\begin{pmatrix} 1 & e^T \\ e & A_1^H \end{pmatrix}
= \begin{pmatrix} 1 & e^T \\ e & A_1^H \end{pmatrix}\begin{pmatrix} 1 & e^T \\ e & A_1 \end{pmatrix} = nI_n. \qquad (12.24)
\]
From Eq. (12.24), we obtain
\[
e^T + e^TA_1 = 0, \qquad (I_{n-1} + A_1)e = 0, \qquad
ee^T + A_1A_1^H = ee^T + A_1^HA_1 = nI_{n-1}. \qquad (12.25)
\]
Similarly, we find that
\[
\begin{aligned}
&e^T + e^TB_1 = 0, \quad (I_{n-1} + B_1)e = 0, \quad ee^T + B_1B_1^H = ee^T + B_1^HB_1 = nI_{n-1}, \\
&e^T + e^TC_1 = 0, \quad (I_{n-1} + C_1)e = 0, \quad ee^T + C_1C_1^H = ee^T + C_1^HC_1 = nI_{n-1}, \\
&e^T + e^TD_1 = 0, \quad (I_{n-1} + D_1)e = 0, \quad ee^T + D_1D_1^H = ee^T + D_1^HD_1 = nI_{n-1}.
\end{aligned} \qquad (12.26)
\]
Using Eqs. (12.25) and (12.26), we obtain
\[
AC^H = \begin{pmatrix} n & 0 \\ 0 & ee^T + A_1C_1^H \end{pmatrix}, \qquad
BD^H = \begin{pmatrix} n & 0 \\ 0 & ee^T + B_1D_1^H \end{pmatrix}. \qquad (12.27)
\]
From Eq. (12.27), it follows that $AC^H = BD^H$ if and only if $A_1C_1^H = B_1D_1^H$. On the other hand, from the fourth property of jacket matrices given above, we obtain
$C^HC = nI_n$ and $D^HD = nI_n$. Therefore, we have
\[
AC^HCA^H = A(C^HC)A^H = n^2I_n, \qquad
BD^HDB^H = B(D^HD)B^H = n^2I_n, \qquad (12.28)
\]
from which it follows that $AC^H = BD^H$ if and only if $CA^H = DB^H$. Finally, we find that
\[
AC^H = BD^H \Leftrightarrow A_1C_1^H = B_1D_1^H \quad \text{and} \quad CA^H = DB^H \Leftrightarrow XX^H = 2nI_{2n}.
\]
Example: Let
\[
A = B = C = D = \begin{pmatrix} 1 & 1 & 1 \\ 1 & w & w^2 \\ 1 & w^2 & w \end{pmatrix},
\]
where w is the third root of unity.
Then, by Theorem 12.1.2.1, we obtain the following jacket matrix of order 6:
\[
J_6 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & w & w^2 & w & w^2 & 1 \\
1 & w^2 & w & w^2 & w & 1 \\
1 & w & w^2 & -w & -w^2 & -1 \\
1 & w^2 & w & -w^2 & -w & -1 \\
1 & 1 & 1 & -1 & -1 & -1
\end{pmatrix}. \qquad (12.29)
\]
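Theorem 12.1.2.1 can also be checked directly. The sketch below (our illustration) takes the core of the order-3 Fourier jacket matrix for all four cores, assembles $X_{2n}$ as in Eq. (12.23), and verifies $XX^H = 2nI_{2n}$ in the jacket sense:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)       # third root of unity
core = [[w, w * w], [w * w, w]]        # core A1 of the order-3 matrix above
A1 = B1 = C1 = D1 = core
m = len(core)                          # m = n - 1 = 2

def build_X(A1, B1, C1, D1):
    # Eq. (12.23) with border vectors e = (1, ..., 1) of length n - 1.
    rows = [[1] + [1] * m + [1] * m + [1]]
    rows += [[1] + A1[i] + B1[i] + [1] for i in range(m)]
    rows += [[1] + C1[i] + [-x for x in D1[i]] + [-1] for i in range(m)]
    rows += [[1] + [1] * m + [-1] * m + [-1]]
    return rows

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def jacket_inverse(M):
    n = len(M)
    return [[1 / (n * M[j][i]) for j in range(n)] for i in range(n)]

def is_identity(M, tol=1e-9):
    return all(abs(M[i][k] - (i == k)) < tol
               for i in range(len(M)) for k in range(len(M)))

X = build_X(A1, B1, C1, D1)            # reproduces the matrix of Eq. (12.29)
assert is_identity(matmul(X, jacket_inverse(X)))   # X X^H = 2n I_{2n}
```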
12.2 Weighted Sylvester–Hadamard Matrices
In this section, we present weighted Sylvester–Hadamard matrices and their simple decomposition, which is then used to develop a fast algorithm. The decomposition has the form of a product of Hadamard matrices and successively lower-order coefficient matrices.
The main property of weighted Sylvester–Hadamard matrices is that their inverses can be obtained very easily and have a special structure. Using the orthogonality of Hadamard matrices, a generalized weighted Hadamard matrix called a reverse jacket matrix with a reverse geometric structure was constructed in Refs. 2 and 4–6.
Note that the lowest order of a weighted Sylvester–Hadamard matrix is 4, and the matrix is defined as follows (see Ref. 7):
\[
[SW]_4 = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -2 & 2 & -1 \\
1 & 2 & -2 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix} = \frac{1}{4}\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}\begin{pmatrix}
4 & 0 & 0 & 0 \\
0 & 6 & -2 & 0 \\
0 & -2 & 6 & 0 \\
0 & 0 & 0 & 4
\end{pmatrix}. \qquad (12.30)
\]
The inverse of Eq. (12.30) is
\[
[SW]_4^{-1} = \frac{1}{8}\begin{pmatrix}
2 & 2 & 2 & 2 \\
2 & -1 & 1 & -2 \\
2 & 1 & -1 & -2 \\
2 & -2 & -2 & 2
\end{pmatrix} = \frac{1}{16}\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}\begin{pmatrix}
4 & 0 & 0 & 0 \\
0 & 3 & 1 & 0 \\
0 & 1 & 3 & 0 \\
0 & 0 & 0 & 4
\end{pmatrix}. \qquad (12.31)
\]
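Both factorizations above can be verified with exact rational arithmetic. The following sketch (an illustration, not book code) multiplies out Eqs. (12.30) and (12.31) and confirms the inverse:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def scale(s, M):
    return [[s * x for x in row] for row in M]

H4 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
SW4 = [[1, 1, 1, 1], [1, -2, 2, -1], [1, 2, -2, -1], [1, -1, -1, 1]]
C_fwd = [[4, 0, 0, 0], [0, 6, -2, 0], [0, -2, 6, 0], [0, 0, 0, 4]]
C_inv = [[4, 0, 0, 0], [0, 3, 1, 0], [0, 1, 3, 0], [0, 0, 0, 4]]

# Eq. (12.30): [SW]_4 = (1/4) H_4 C_fwd.
assert scale(F(1, 4), matmul(H4, C_fwd)) == SW4

# Eq. (12.31): [SW]_4^{-1} = (1/16) H_4 C_inv, and it really inverts [SW]_4.
SW4_inv = scale(F(1, 16), matmul(H4, C_inv))
I4 = [[int(i == k) for k in range(4)] for i in range(4)]
assert matmul(SW4, SW4_inv) == I4
```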
We can see that the matrix $[SW]_4$ is derived by doubling the elements of the inner 2 × 2 submatrix of the Sylvester–Hadamard matrix. Such matrices are called weighted Hadamard or centered matrices.4 As for the Sylvester–Hadamard matrix, a recursive relation governs the generation of higher orders of weighted Sylvester–Hadamard matrices and their inverses, i.e.,
\[
[SW]_{2^k} = [SW]_{2^{k-1}} \otimes H_2, \qquad
[SW]_{2^k}^{-1} = \frac{1}{2}\,[SW]_{2^{k-1}}^{-1} \otimes H_2, \qquad k = 3, 4, \ldots, \qquad (12.32)
\]
where $H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$.
The forward and inverse weighted Sylvester–Hadamard transform matrices are given below (Fig. 12.1).
\[
[SW]_8 = [SW]_4 \otimes H_2 = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -2 & -2 & 2 & 2 & -1 & -1 \\
1 & -1 & -2 & 2 & 2 & -2 & -1 & 1 \\
1 & 1 & 2 & 2 & -2 & -2 & -1 & -1 \\
1 & -1 & 2 & -2 & -2 & 2 & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}, \qquad (12.33a)
\]
\[
[SW]_8^{-1} = \frac{1}{2}\,[SW]_4^{-1} \otimes H_2 = \frac{1}{16}\begin{pmatrix}
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
2 & -2 & 2 & -2 & 2 & -2 & 2 & -2 \\
2 & 2 & -1 & -1 & 1 & 1 & -2 & -2 \\
2 & -2 & -1 & 1 & 1 & -1 & -2 & 2 \\
2 & 2 & 1 & 1 & -1 & -1 & -2 & -2 \\
2 & -2 & 1 & -1 & -1 & 1 & -2 & 2 \\
2 & 2 & -2 & -2 & -2 & -2 & 2 & 2 \\
2 & -2 & -2 & 2 & -2 & 2 & 2 & -2
\end{pmatrix}. \qquad (12.33b)
\]
Let us introduce the weighted coefficient matrix as7
\[
[RC]_{2^n} = H_{2^n}[SW]_{2^n}. \qquad (12.34)
\]
The expression in Eq. (12.34) can be represented as
\[
\begin{aligned}
[RC]_{2^n} &= (H_{2^{n-1}} \otimes H_2)([SW]_{2^{n-1}} \otimes H_2) \\
&= (H_{2^{n-1}}[SW]_{2^{n-1}}) \otimes (H_2H_2) \\
&= (H_{2^{n-1}}[SW]_{2^{n-1}}) \otimes (2I_2) \\
&= 2[RC]_{2^{n-1}} \otimes I_2.
\end{aligned} \qquad (12.35)
\]
Figure 12.1 The first (a) four and (b) eight continuous weighted Sylvester–Hadamard functions in the interval (0, 1).
Therefore, we have
\[
[RC]_{2^{n-1}} = 2[RC]_{2^{n-2}} \otimes I_2. \qquad (12.36)
\]
Hence, continuation of this process gives
\[
[RC]_{2^n} = 2^{n-2}[RC]_4 \otimes I_{2^{n-2}}. \qquad (12.37)
\]
It can be shown that $[RC]_4$ has the following form:
\[
[RC]_4 = \begin{pmatrix}
4 & 0 & 0 & 0 \\
0 & 6 & -2 & 0 \\
0 & -2 & 6 & 0 \\
0 & 0 & 0 & 4
\end{pmatrix}. \qquad (12.38)
\]
Therefore, the weighted coefficient matrix $[RC]_{2^n}$ can be presented as
\[
[RC]_{2^n} = 2^{n-2}\begin{pmatrix}
4I_{2^{n-2}} & O_{2^{n-2}} & O_{2^{n-2}} & O_{2^{n-2}} \\
O_{2^{n-2}} & 6I_{2^{n-2}} & -2I_{2^{n-2}} & O_{2^{n-2}} \\
O_{2^{n-2}} & -2I_{2^{n-2}} & 6I_{2^{n-2}} & O_{2^{n-2}} \\
O_{2^{n-2}} & O_{2^{n-2}} & O_{2^{n-2}} & 4I_{2^{n-2}}
\end{pmatrix}, \qquad (12.39)
\]
where $O_m$ is the zero matrix of order m. The 8 × 8 weighted coefficient matrix has the following form:
\[
[RC]_8 = 2\begin{pmatrix}
4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 6 & 0 & -2 & 0 & 0 & 0 \\
0 & 0 & 0 & 6 & 0 & -2 & 0 & 0 \\
0 & 0 & -2 & 0 & 6 & 0 & 0 & 0 \\
0 & 0 & 0 & -2 & 0 & 6 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}. \qquad (12.40)
\]
Because $[RC]_4$ is a symmetric matrix and has at most two nonzero elements in each row and column, from Eq. (12.37) it follows that the same is true for $[RC]_{2^n}$ ($n \geq 2$). Note that from Eq. (12.34) it follows that
\[
[SW]_{2^n} = \frac{1}{2^n}\,H_{2^n}[RC]_{2^n}. \qquad (12.41)
\]
Using Eq. (12.37), from Eq. (12.34), we can find that
\[
[SW]_{2^n} = [SW]_4 \otimes H_{2^{n-2}}, \qquad
[SW]_{2^n}^{-1} = \frac{1}{2^{n-2}}\,[SW]_4^{-1} \otimes H_{2^{n-2}}. \qquad (12.42)
\]
Note that the weighted Sylvester–Hadamard matrix $[SW]_{2^n}$ is a symmetric matrix, i.e., $[SW]_{2^n} = [SW]_{2^n}^T$.
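The recursion of Eq. (12.32), the closed-form inverse of Eq. (12.42), and the symmetry claim can all be checked together. The sketch below (our illustration, exact rational arithmetic) builds $[SW]_{32}$ and its inverse:

```python
from fractions import Fraction as F

H2 = [[1, 1], [1, -1]]

def kron(A, B):
    # Kronecker product of two matrices.
    return [[a * b for a in rowA for b in rowB] for rowA in A for rowB in B]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def scale(s, M):
    return [[s * x for x in row] for row in M]

SW4 = [[1, 1, 1, 1], [1, -2, 2, -1], [1, 2, -2, -1], [1, -1, -1, 1]]
SW4_inv = scale(F(1, 8), [[2, 2, 2, 2], [2, -1, 1, -2], [2, 1, -1, -2], [2, -2, -2, 2]])

n = 5                          # build [SW]_{2^5} = [SW]_32
M = SW4
for _ in range(n - 2):
    M = kron(M, H2)            # Eq. (12.32), forward recursion

H = H2
for _ in range(n - 3):
    H = kron(H, H2)            # H_{2^{n-2}}
M_inv = scale(F(1, 2 ** (n - 2)), kron(SW4_inv, H))   # Eq. (12.42)

N = 2 ** n
assert matmul(M, M_inv) == [[int(i == k) for k in range(N)] for i in range(N)]
assert M == [list(col) for col in zip(*M)]             # [SW]_{2^n} is symmetric
```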
12.3 Parametric Reverse Jacket Matrices
References 2, 4, 6, and 8 introduced and investigated reverse jacket matrices depending only on one and three parameters. In this section, we introduce a parametric reverse jacket matrix that is more general than the above-mentioned matrices.
Definition 12.3.1: Let $[RJ]_n$ be a real parametric matrix of order n with elements $x_1, x_2, \ldots, x_r$ and their linear superpositions, and let $H_n$ be a Hadamard matrix of
order n. A matrix $[RJ]_n$ with the property
\[
[RJ]_n = \frac{1}{n}\,H_n^T[RJ]_nH_n \qquad (12.43)
\]
is called a parametric reverse jacket matrix. Furthermore, we will consider the case when $H_n$ is a Sylvester–Hadamard matrix of order $n = 2^k$; i.e., Eq. (12.43) takes the following form: $[RJ]_n = (1/n)H_n[RJ]_nH_n$.
Examples of parametric reverse jacket matrices of order 4 with one and two parameters are given as follows:
\[
[RJ]_4(a) = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -a & a & -1 \\
1 & a & -a & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}, \qquad
[RJ]_4(a, b) = \begin{pmatrix}
b & 1 & 1 & b \\
1 & -a & a & -1 \\
1 & a & -a & -1 \\
b & -1 & -1 & b
\end{pmatrix},
\]
\[
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix}
1 & 1 & 1 & 1 \\[2pt]
1 & -\frac{1}{a} & \frac{1}{a} & -1 \\[2pt]
1 & \frac{1}{a} & -\frac{1}{a} & -1 \\[2pt]
1 & -1 & -1 & 1
\end{pmatrix}, \qquad
[RJ]_4^{-1}(a, b) = \frac{1}{4}\begin{pmatrix}
\frac{1}{b} & 1 & 1 & \frac{1}{b} \\[2pt]
1 & -\frac{1}{a} & \frac{1}{a} & -1 \\[2pt]
1 & \frac{1}{a} & -\frac{1}{a} & -1 \\[2pt]
\frac{1}{b} & -1 & -1 & \frac{1}{b}
\end{pmatrix}. \qquad (12.44)
\]
Hence, we can formulate the following theorem.
Theorem 12.3.1: The matrix $[RJ]_n$ is a parametric reverse jacket matrix if and only if
\[
[RJ]_nH_n = H_n[RJ]_n. \qquad (12.45)
\]
Note that if $[RJ]_n$ is a reverse jacket matrix, then the matrix $([RJ]_n)^k$ (k is an integer) is a reverse jacket matrix, too. Indeed, we have
\[
([RJ]_n)^2H_n = [RJ]_n([RJ]_nH_n) = [RJ]_n(H_n[RJ]_n) = ([RJ]_nH_n)[RJ]_n = H_n([RJ]_n)^2. \qquad (12.46)
\]
We can check that
\[
[RJ]_2(a, b) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix} \qquad (12.47)
\]
is a parametric reverse jacket matrix of order 2 because we have
\[
\begin{pmatrix} + & + \\ + & - \end{pmatrix}\begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix}
= \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix}\begin{pmatrix} + & + \\ + & - \end{pmatrix}. \qquad (12.48)
\]
For $a^2 - 2ab - b^2 \neq 0$, we obtain
\[
[RJ]_2^{-1}(a, b) = \frac{1}{a^2 - 2ab - b^2}\begin{pmatrix} a - 2b & -b \\ -b & a \end{pmatrix}. \qquad (12.49)
\]
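Both facts, the commutation of Eq. (12.48) and the inverse of Eq. (12.49), can be confirmed with exact arithmetic (our illustrative sketch, not from the book):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def RJ2(a, b):
    # Eq. (12.47).
    return [[a, b], [b, a - 2 * b]]

H2 = [[1, 1], [1, -1]]
a, b = F(3), F(2)
M = RJ2(a, b)

# Eq. (12.48): [RJ]_2(a,b) commutes with H_2.
assert matmul(H2, M) == matmul(M, H2)

# Eq. (12.49): closed-form inverse, valid when a^2 - 2ab - b^2 != 0.
det = a * a - 2 * a * b - b * b
M_inv = [[(a - 2 * b) / det, -b / det], [-b / det, a / det]]
assert matmul(M, M_inv) == [[1, 0], [0, 1]]
```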
Note that the matrix $[RJ]_2(a, b)$ is the unique parametric reverse jacket matrix of order 2.
12.3.1 Properties of parametric reverse jacket matrices
(1) The Kronecker product of two parametric reverse jacket matrices satisfies Eq. (12.45). Indeed, let $[RJ]_n(x_0, \ldots, x_{k-1})$ and $[RJ]_m(y_0, \ldots, y_{r-1})$ be parametric reverse jacket matrices and $H_n$, $H_m$ be Hadamard matrices of order n and m, respectively; then, we have
\[
\begin{aligned}
([RJ]_n \otimes [RJ]_m)H_{mn} &= ([RJ]_n \otimes [RJ]_m)(H_n \otimes H_m) \\
&= [RJ]_nH_n \otimes [RJ]_mH_m \\
&= H_n[RJ]_n \otimes H_m[RJ]_m \\
&= (H_n \otimes H_m)([RJ]_n \otimes [RJ]_m) \\
&= H_{mn}([RJ]_n \otimes [RJ]_m).
\end{aligned} \qquad (12.50)
\]
(2) The Kronecker product of a parametric reverse jacket matrix with a (nonparametric) reverse jacket matrix is a parametric reverse jacket matrix. Indeed, let $[RJ]_n(x_0, \ldots, x_{k-1})$ and $[RJ]_m$ be parametric and nonparametric reverse jacket matrices, and $H_n$, $H_m$ be Hadamard matrices of order n and m, respectively; then, we have
\[
\begin{aligned}
([RJ]_n(x_0, \ldots, x_{k-1}) \otimes [RJ]_m)H_{mn} &= ([RJ]_n(x_0, \ldots, x_{k-1}) \otimes [RJ]_m)(H_n \otimes H_m) \\
&= ([RJ]_n(x_0, \ldots, x_{k-1})H_n) \otimes ([RJ]_mH_m) \\
&= (H_n[RJ]_n(x_0, \ldots, x_{k-1})) \otimes (H_m[RJ]_m) \\
&= (H_n \otimes H_m)([RJ]_n(x_0, \ldots, x_{k-1}) \otimes [RJ]_m) \\
&= H_{mn}([RJ]_n(x_0, \ldots, x_{k-1}) \otimes [RJ]_m).
\end{aligned} \qquad (12.51)
\]
Some examples of jacket matrices using the above-given properties are as follows:
\[
[RJ]_2(a, b) \otimes [RJ]_2(1, 1) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}
= \begin{pmatrix}
a & a & b & b \\
a & -a & b & -b \\
b & b & a - 2b & a - 2b \\
b & -b & a - 2b & -a + 2b
\end{pmatrix}. \qquad (12.52)
\]
\[
[RJ]_2(a, b) \otimes [RJ]_2(1, 2) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix} \otimes \begin{pmatrix} 1 & 2 \\ 2 & -3 \end{pmatrix}
= \begin{pmatrix}
a & 2a & b & 2b \\
2a & -3a & 2b & -3b \\
b & 2b & a - 2b & 2a - 4b \\
2b & -3b & 2a - 4b & -3a + 6b
\end{pmatrix}. \qquad (12.53)
\]
\[
[RJ]_2(a, b) \otimes [RJ]_2(2, 1) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix} \otimes \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix}
2a & a & 2b & b \\
a & 0 & b & 0 \\
2b & b & 2a - 4b & a - 2b \\
b & 0 & a - 2b & 0
\end{pmatrix}. \qquad (12.54)
\]
A Sylvester-like construction also holds for parametric reverse jacket matrices; i.e., if $P_2 = [RJ]_2(a, b)$ is a parametric reverse jacket matrix of order 2, then the matrix
\[
P_{2^n} = \begin{pmatrix} P_{2^{n-1}} & P_{2^{n-1}} \\ P_{2^{n-1}} & -P_{2^{n-1}} \end{pmatrix} \qquad (12.55)
\]
is a parametric reverse jacket matrix of order $2^n$, $n = 2, 3, \ldots$.
Here we provide an example. Let $P_2 = \begin{pmatrix} 1 & 2 \\ 2 & -3 \end{pmatrix}$ be a reverse jacket matrix. Then, the following matrix is also a reverse jacket matrix:
\[
P_4 = \begin{pmatrix} P_2 & P_2 \\ P_2 & -P_2 \end{pmatrix} = \begin{pmatrix}
1 & 2 & 1 & 2 \\
2 & -3 & 2 & -3 \\
1 & 2 & -1 & -2 \\
2 & -3 & -2 & 3
\end{pmatrix}. \qquad (12.56)
\]
Note that the matrix in Eq. (12.56) satisfies the condition of Eq. (12.45) for a Hadamard matrix of the following form:
\[
H_4 = \begin{pmatrix} H_2 & H_2 \\ H_2 & -H_2 \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}. \qquad (12.57)
\]
One can show that
\[
\begin{pmatrix} H_2 & H_2R \\ RH_2 & -RH_2R \end{pmatrix}\begin{pmatrix} P_2 & P_2R \\ RP_2 & -RP_2R \end{pmatrix}
= \begin{pmatrix} P_2 & P_2R \\ RP_2 & -RP_2R \end{pmatrix}\begin{pmatrix} H_2 & H_2R \\ RH_2 & -RH_2R \end{pmatrix}, \qquad (12.58)
\]
where $R = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. This equality means that the matrix
\[
\begin{pmatrix} P_2 & P_2R \\ RP_2 & -RP_2R \end{pmatrix} = \begin{pmatrix}
1 & 2 & 2 & 1 \\
2 & -3 & -3 & 2 \\
2 & -3 & 3 & -2 \\
1 & 2 & -2 & -1
\end{pmatrix} \qquad (12.59)
\]
is the jacket matrix according to the following Hadamard matrix:
\[
\begin{pmatrix} H_2 & H_2R \\ RH_2 & -RH_2R \end{pmatrix} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & -1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1
\end{pmatrix}. \qquad (12.60)
\]
The inverse matrix of Eq. (12.59) has the form
\[
\frac{1}{14}\begin{pmatrix}
3 & 2 & 2 & 3 \\
2 & -1 & -1 & 2 \\
2 & -1 & 1 & -2 \\
3 & 2 & -2 & -3
\end{pmatrix}. \qquad (12.61)
\]
By substituting the matrix $[RJ]_2(a, b) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix}$ for $P_2$ into Eq. (12.59), we obtain the parametric reverse jacket matrix of order 4 depending on two parameters,
\[
\begin{pmatrix}
a & b & b & a \\
b & a - 2b & a - 2b & b \\
b & a - 2b & -a + 2b & -b \\
a & b & -b & -a
\end{pmatrix}. \qquad (12.62)
\]
It is not difficult to check that if A, B, C are invertible matrices of order n, then the matrix
\[
Q = \frac{1}{2}\begin{pmatrix}
A & B & B & A \\
B & -C & C & -B \\
B & C & -C & -B \\
A & -B & -B & A
\end{pmatrix} \qquad (12.63)
\]
is also an invertible matrix, and its inverse matrix is
\[
Q^{-1} = \frac{1}{2}\begin{pmatrix}
A^{-1} & B^{-1} & B^{-1} & A^{-1} \\
B^{-1} & -C^{-1} & C^{-1} & -B^{-1} \\
B^{-1} & C^{-1} & -C^{-1} & -B^{-1} \\
A^{-1} & -B^{-1} & -B^{-1} & A^{-1}
\end{pmatrix}. \qquad (12.64)
\]
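The block inverse of Eq. (12.64) can be verified for arbitrary invertible blocks; the sketch below (illustrative, with blocks chosen as $[RJ]_2$ matrices) assembles Q and Q⁻¹ and multiplies them:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    # Inverse of an invertible 2x2 matrix.
    (p, q), (r, s) = M
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

def neg(M):
    return [[-x for x in row] for row in M]

def assemble(A, B, C):
    # The 4x4 block pattern of Eqs. (12.63)/(12.64), scaled by 1/2.
    pattern = [[A, B, B, A],
               [B, neg(C), C, neg(B)],
               [B, C, neg(C), neg(B)],
               [A, neg(B), neg(B), A]]
    half = F(1, 2)
    return [[half * x for blk in block_row for x in blk[r]]
            for block_row in pattern for r in range(2)]

def RJ2(a, b):
    return [[F(a), F(b)], [F(b), F(a - 2 * b)]]

A, B, C = RJ2(1, 2), RJ2(3, 1), RJ2(2, 5)
Q = assemble(A, B, C)                          # Eq. (12.63)
Q_inv = assemble(inv2(A), inv2(B), inv2(C))    # Eq. (12.64)
I8 = [[int(i == k) for k in range(8)] for i in range(8)]
assert matmul(Q, Q_inv) == I8
```

The product reduces block-wise to sums such as $\frac{1}{4}(AA^{-1} + 2BB^{-1} + AA^{-1}) = I$, which is exactly why the pattern of Eq. (12.64) works.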
Theorem 12.3.1.1: Let A, B, and C be parametric reverse jacket matrices of order n. Then, the matrix Q from Eq. (12.63) is a parametric reverse jacket matrix of order 4n.
Note that if A, B, and C are nonzero matrices of order 1, i.e., they are nonzero numbers a, b, c, then the matrices in Eqs. (12.63) and (12.64) take the following forms, respectively:
\[
Q_1(a, b, c) = \frac{1}{2}\begin{pmatrix}
a & b & b & a \\
b & -c & c & -b \\
b & c & -c & -b \\
a & -b & -b & a
\end{pmatrix}, \qquad
Q_1^{-1}(a, b, c) = \frac{1}{2}\begin{pmatrix}
\frac{1}{a} & \frac{1}{b} & \frac{1}{b} & \frac{1}{a} \\[2pt]
\frac{1}{b} & -\frac{1}{c} & \frac{1}{c} & -\frac{1}{b} \\[2pt]
\frac{1}{b} & \frac{1}{c} & -\frac{1}{c} & -\frac{1}{b} \\[2pt]
\frac{1}{a} & -\frac{1}{b} & -\frac{1}{b} & \frac{1}{a}
\end{pmatrix}. \qquad (12.65)
\]
Note that the parametric reverse jacket matrix in Eq. (12.65) was introduced in Ref. 6.
For a = c = 1, b = 2, we have
\[
Q_1(1, 2, 1) = \frac{1}{2}\begin{pmatrix}
1 & 2 & 2 & 1 \\
2 & -1 & 1 & -2 \\
2 & 1 & -1 & -2 \\
1 & -2 & -2 & 1
\end{pmatrix}, \qquad
Q_1^{-1}(1, 2, 1) = \frac{1}{2}\begin{pmatrix}
1 & \frac{1}{2} & \frac{1}{2} & 1 \\[2pt]
\frac{1}{2} & -1 & 1 & -\frac{1}{2} \\[2pt]
\frac{1}{2} & 1 & -1 & -\frac{1}{2} \\[2pt]
1 & -\frac{1}{2} & -\frac{1}{2} & 1
\end{pmatrix}. \qquad (12.66)
\]
For a = 1, b = 2, c = 3, we have the following matrices:
\[
Q_1(1, 2, 3) = \frac{1}{2}\begin{pmatrix}
1 & 2 & 2 & 1 \\
2 & -3 & 3 & -2 \\
2 & 3 & -3 & -2 \\
1 & -2 & -2 & 1
\end{pmatrix}, \qquad
Q_1^{-1}(1, 2, 3) = \frac{1}{2}\begin{pmatrix}
1 & \frac{1}{2} & \frac{1}{2} & 1 \\[2pt]
\frac{1}{2} & -\frac{1}{3} & \frac{1}{3} & -\frac{1}{2} \\[2pt]
\frac{1}{2} & \frac{1}{3} & -\frac{1}{3} & -\frac{1}{2} \\[2pt]
1 & -\frac{1}{2} & -\frac{1}{2} & 1
\end{pmatrix}. \qquad (12.67)
\]
Let
\[
A = [RJ]_2(a, b) = \begin{pmatrix} a & b \\ b & a - 2b \end{pmatrix}, \qquad B = [RJ]_2(c, d), \qquad C = [RJ]_2(e, f);
\]
then, from Eq. (12.63), we find the following reverse jacket matrix of order 8 depending on six parameters:
\[
[RJ]_8 = \frac{1}{2}\begin{pmatrix}
\begin{bmatrix} a & b \\ b & a-2b \end{bmatrix} &
\begin{bmatrix} c & d \\ d & c-2d \end{bmatrix} &
\begin{bmatrix} c & d \\ d & c-2d \end{bmatrix} &
\begin{bmatrix} a & b \\ b & a-2b \end{bmatrix} \\[8pt]
\begin{bmatrix} c & d \\ d & c-2d \end{bmatrix} &
\begin{bmatrix} -e & -f \\ -f & -e+2f \end{bmatrix} &
\begin{bmatrix} e & f \\ f & e-2f \end{bmatrix} &
\begin{bmatrix} -c & -d \\ -d & -c+2d \end{bmatrix} \\[8pt]
\begin{bmatrix} c & d \\ d & c-2d \end{bmatrix} &
\begin{bmatrix} e & f \\ f & e-2f \end{bmatrix} &
\begin{bmatrix} -e & -f \\ -f & -e+2f \end{bmatrix} &
\begin{bmatrix} -c & -d \\ -d & -c+2d \end{bmatrix} \\[8pt]
\begin{bmatrix} a & b \\ b & a-2b \end{bmatrix} &
\begin{bmatrix} -c & -d \\ -d & -c+2d \end{bmatrix} &
\begin{bmatrix} -c & -d \\ -d & -c+2d \end{bmatrix} &
\begin{bmatrix} a & b \\ b & a-2b \end{bmatrix}
\end{pmatrix}. \qquad (12.68)
\]
Using Theorem 12.3.1.1 and the parametric reverse jacket matrix $Q_1(a, b, c)$ from Eq. (12.65), we can construct a reverse jacket matrix of order 16 depending on nine parameters. This matrix has the following form:
\[
\frac{1}{2}\begin{pmatrix}
a & b & b & a & d & e & e & d & d & e & e & d & a & b & b & a \\
b & -c & c & -b & e & -f & f & -e & e & -f & f & -e & b & -c & c & -b \\
b & c & -c & -b & e & f & -f & -e & e & f & -f & -e & b & c & -c & -b \\
a & -b & -b & a & d & -e & -e & d & d & -e & -e & d & a & -b & -b & a \\
d & e & e & d & -g & -h & -h & -g & g & h & h & g & -d & -e & -e & -d \\
e & -f & f & -e & -h & q & -q & h & h & -q & q & -h & -e & f & -f & e \\
e & f & -f & -e & -h & -q & q & h & h & q & -q & -h & -e & -f & f & e \\
d & -e & -e & d & -g & h & h & -g & g & -h & -h & g & -d & e & e & -d \\
d & e & e & d & g & h & h & g & -g & -h & -h & -g & -d & -e & -e & -d \\
e & -f & f & -e & h & -q & q & -h & -h & q & -q & h & -e & f & -f & e \\
e & f & -f & -e & h & q & -q & -h & -h & -q & q & h & -e & -f & f & e \\
d & -e & -e & d & g & -h & -h & g & -g & h & h & -g & -d & e & e & -d \\
a & b & b & a & -d & -e & -e & -d & -d & -e & -e & -d & a & b & b & a \\
b & -c & c & -b & -e & f & -f & e & -e & f & -f & e & b & -c & c & -b \\
b & c & -c & -b & -e & -f & f & e & -e & -f & f & e & b & c & -c & -b \\
a & -b & -b & a & -d & e & e & -d & -d & e & e & -d & a & -b & -b & a
\end{pmatrix}. \qquad (12.69)
\]
Remark: (1) If a = b = c = d = e = f = 1, then the parametric reverse jacket matrix from Eq. (12.68) is a Sylvester–Hadamard matrix of order 8, i.e.,
\[
[RJ]_8(1, 1, \ldots, 1) = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}. \qquad (12.70)
\]
(2) If a = b = c = d = 1 and e = f = 2, then the parametric reverse jacket matrix from Eq. (12.68) is the reverse jacket matrix of order 8 (see Ref. 7), i.e.,
\[
[RJ]_8(1, 1, 1, 1, 2, 2) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -2 & -2 & 2 & 2 & -1 & -1 \\
1 & -1 & -2 & 2 & 2 & -2 & -1 & 1 \\
1 & 1 & 2 & 2 & -2 & -2 & -1 & -1 \\
1 & -1 & 2 & -2 & -2 & 2 & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}. \qquad (12.71)
\]
(3) If a = b = c = 1 and d = e = f = 2, then the parametric reverse jacket matrix from Eq. (12.68) gives the following reverse jacket matrix of order 8:
\[
[RJ]_8(1, 1, 1, 2, 2, 2) = \begin{pmatrix}
1 & 1 & 1 & 2 & 1 & 2 & 1 & 1 \\
1 & -1 & 2 & -3 & 2 & -3 & 1 & -1 \\
1 & 2 & -2 & -2 & 2 & 2 & -1 & -2 \\
2 & -3 & -2 & 2 & 2 & -2 & -2 & 3 \\
1 & 2 & 2 & 2 & -2 & -2 & -1 & -2 \\
2 & -3 & 2 & -2 & -2 & 2 & -2 & 3 \\
1 & 1 & -1 & -2 & -1 & -2 & 1 & 1 \\
1 & -1 & -2 & 3 & -2 & 3 & 1 & -1
\end{pmatrix}. \qquad (12.72)
\]
12.4 Construction of Special-Type Parametric Reverse Jacket Matrices
In this section, we consider a special type of parametric reverse jacket matrix. Some elements of such matrices are fixed integers, and others are parameters.
It is well known that a Walsh–Hadamard or Sylvester transform matrix of order $N = 2^n$ can be presented as
\[
H_N = \left[(-1)^{\langle i, j\rangle}\right]_{i,j=0}^{N-1}, \qquad (12.73)
\]
where $\langle i, j\rangle = i_{n-1}j_{n-1} + i_{n-2}j_{n-2} + \cdots + i_1j_1 + i_0j_0$.
The parametric reverse jacket matrix of order N depending on one parameter
can be represented as8
\[
[RJ]_N(a) = \left[(-1)^{\langle i, j\rangle}\,a^{(i_{n-1}\oplus i_{n-2})(j_{n-1}\oplus j_{n-2})}\right]_{i,j=0}^{N-1}, \qquad (12.74)
\]
where ⊕ is the sign of modulo-2 addition.
Note that $[RJ]_N(1)$ is the WHT matrix. The parametric reverse jacket matrices and their inverse matrices of order 4 and 8 corresponding to the weight a are,
respectively,
\[
[RJ]_4(a) = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -a & a & -1 \\
1 & a & -a & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}, \qquad
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix}
1 & 1 & 1 & 1 \\[2pt]
1 & -\frac{1}{a} & \frac{1}{a} & -1 \\[2pt]
1 & \frac{1}{a} & -\frac{1}{a} & -1 \\[2pt]
1 & -1 & -1 & 1
\end{pmatrix}, \qquad (12.75a)
\]
\[
[RJ]_8(a) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -a & -a & a & a & -1 & -1 \\
1 & -1 & -a & a & a & -a & -1 & 1 \\
1 & 1 & a & a & -a & -a & -1 & -1 \\
1 & -1 & a & -a & -a & a & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}, \qquad (12.75b)
\]
\[
[RJ]_8^{-1}(a) = \frac{1}{8}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\[2pt]
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\[2pt]
1 & 1 & -\frac{1}{a} & -\frac{1}{a} & \frac{1}{a} & \frac{1}{a} & -1 & -1 \\[2pt]
1 & -1 & -\frac{1}{a} & \frac{1}{a} & \frac{1}{a} & -\frac{1}{a} & -1 & 1 \\[2pt]
1 & 1 & \frac{1}{a} & \frac{1}{a} & -\frac{1}{a} & -\frac{1}{a} & -1 & -1 \\[2pt]
1 & -1 & \frac{1}{a} & -\frac{1}{a} & -\frac{1}{a} & \frac{1}{a} & -1 & 1 \\[2pt]
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\[2pt]
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}. \qquad (12.75c)
\]
Now, we introduce the following formulas:
\[
\begin{aligned}
[RJ]_4(a; i, j) &= (-1)^{\langle i, j\rangle}\,a^{(i_1\oplus i_0) + (j_1\oplus j_0)}, \quad i_1, i_0, j_1, j_0 = 0, 1, \\
[RJ]_8(a; i, j) &= (-1)^{\langle i, j\rangle}\,a^{(i_2\oplus i_1) + (j_2\oplus j_1)}, \quad i_2, i_1, j_2, j_1 = 0, 1.
\end{aligned} \qquad (12.76)
\]
One can check that the elements of the inverse matrices of Eq. (12.76) can be defined as
\[
\begin{aligned}
[RJ]_4^{-1}(a; i, j) &= \frac{1}{4}(-1)^{\langle i, j\rangle}\,a^{-[(i_1\oplus i_0) + (j_1\oplus j_0)]}, \quad i_1, i_0, j_1, j_0 = 0, 1, \\
[RJ]_8^{-1}(a; i, j) &= \frac{1}{8}(-1)^{\langle i, j\rangle}\,a^{-[(i_2\oplus i_1) + (j_2\oplus j_1)]}, \quad i_2, i_1, j_2, j_1 = 0, 1.
\end{aligned} \qquad (12.77)
\]
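Both parametric families can be generated directly from the bit-index formulas. The sketch below (our illustration of Eqs. (12.74), (12.76), and (12.77), not book code) builds the order-4 and order-8 matrices and confirms the stated inverses, including the 1/N scaling:

```python
from fractions import Fraction as F

def bit(x, k):
    return (x >> k) & 1

def inner(i, j, n):
    # <i, j> of Eq. (12.73): bitwise inner product mod 2.
    return sum(bit(i, k) * bit(j, k) for k in range(n)) % 2

def e(i, n):
    # i_{n-1} xor i_{n-2}, the bit pattern used in Eqs. (12.74) and (12.76).
    return bit(i, n - 1) ^ bit(i, n - 2)

def rj_mult(a, N, n):
    # Eq. (12.74): multiplicative exponent e(i)*e(j).
    return [[(-1) ** inner(i, j, n) * a ** (e(i, n) * e(j, n))
             for j in range(N)] for i in range(N)]

def rj_add(a, N, n):
    # Eq. (12.76): additive exponent e(i) + e(j).
    return [[(-1) ** inner(i, j, n) * a ** (e(i, n) + e(j, n))
             for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def scale(s, M):
    return [[s * x for x in row] for row in M]

def is_I(M):
    return all(M[i][k] == (1 if i == k else 0)
               for i in range(len(M)) for k in range(len(M)))

a = F(3)
for build in (rj_mult, rj_add):
    for N, n in ((4, 2), (8, 3)):
        M = build(a, N, n)
        M_inv = scale(F(1, N), build(1 / a, N, n))   # Eqs. (12.75a)-(12.75c), (12.77)
        assert is_I(matmul(M, M_inv))
```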
It can be shown that the matrices generated by these formulas are also reverse jacket matrices, examples of which are given as follows:
\[
[RJ]_4(a) = \begin{pmatrix}
1 & a & a & 1 \\
a & -a^2 & a^2 & -a \\
a & a^2 & -a^2 & -a \\
1 & -a & -a & 1
\end{pmatrix}, \qquad
[RJ]_4^{-1}(a) = \frac{1}{4}\begin{pmatrix}
1 & \frac{1}{a} & \frac{1}{a} & 1 \\[2pt]
\frac{1}{a} & -\frac{1}{a^2} & \frac{1}{a^2} & -\frac{1}{a} \\[2pt]
\frac{1}{a} & \frac{1}{a^2} & -\frac{1}{a^2} & -\frac{1}{a} \\[2pt]
1 & -\frac{1}{a} & -\frac{1}{a} & 1
\end{pmatrix}, \qquad (12.78a)
\]
\[
[RJ]_8(a) = \begin{pmatrix}
1 & 1 & a & a & a & a & 1 & 1 \\
1 & -1 & a & -a & a & -a & 1 & -1 \\
a & a & -a^2 & -a^2 & a^2 & a^2 & -a & -a \\
a & -a & -a^2 & a^2 & a^2 & -a^2 & -a & a \\
a & a & a^2 & a^2 & -a^2 & -a^2 & -a & -a \\
a & -a & a^2 & -a^2 & -a^2 & a^2 & -a & a \\
1 & 1 & -a & -a & -a & -a & 1 & 1 \\
1 & -1 & -a & a & -a & a & 1 & -1
\end{pmatrix}, \qquad (12.78b)
\]
\[
[RJ]_8^{-1}(a) = \frac{1}{8}\begin{pmatrix}
1 & 1 & \frac{1}{a} & \frac{1}{a} & \frac{1}{a} & \frac{1}{a} & 1 & 1 \\[2pt]
1 & -1 & \frac{1}{a} & -\frac{1}{a} & \frac{1}{a} & -\frac{1}{a} & 1 & -1 \\[2pt]
\frac{1}{a} & \frac{1}{a} & -\frac{1}{a^2} & -\frac{1}{a^2} & \frac{1}{a^2} & \frac{1}{a^2} & -\frac{1}{a} & -\frac{1}{a} \\[2pt]
\frac{1}{a} & -\frac{1}{a} & -\frac{1}{a^2} & \frac{1}{a^2} & \frac{1}{a^2} & -\frac{1}{a^2} & -\frac{1}{a} & \frac{1}{a} \\[2pt]
\frac{1}{a} & \frac{1}{a} & \frac{1}{a^2} & \frac{1}{a^2} & -\frac{1}{a^2} & -\frac{1}{a^2} & -\frac{1}{a} & -\frac{1}{a} \\[2pt]
\frac{1}{a} & -\frac{1}{a} & \frac{1}{a^2} & -\frac{1}{a^2} & -\frac{1}{a^2} & \frac{1}{a^2} & -\frac{1}{a} & \frac{1}{a} \\[2pt]
1 & 1 & -\frac{1}{a} & -\frac{1}{a} & -\frac{1}{a} & -\frac{1}{a} & 1 & 1 \\[2pt]
1 & -1 & -\frac{1}{a} & \frac{1}{a} & -\frac{1}{a} & \frac{1}{a} & 1 & -1
\end{pmatrix}. \qquad (12.78c)
\]
Substituting $a = j = \sqrt{-1}$ into the matrices of Eqs. (12.75a)–(12.75c) and (12.78a)–(12.78c), we obtain the following complex reverse jacket matrices, which are
also complex Hadamard matrices:
\[
[RJ]_4(j) = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -j & j & -1 \\
1 & j & -j & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}, \qquad
[RJ]_4^{-1}(j) = \frac{1}{4}\begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & j & -j & -1 \\
1 & -j & j & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}, \qquad (12.79a)
\]
\[
[RJ]_8(j) = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -j & -j & j & j & -1 & -1 \\
1 & -1 & -j & j & j & -j & -1 & 1 \\
1 & 1 & j & j & -j & -j & -1 & -1 \\
1 & -1 & j & -j & -j & j & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}, \qquad (12.79b)
\]
\[
[RJ]_8^{-1}(j) = \frac{1}{8}\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & j & j & -j & -j & -1 & -1 \\
1 & -1 & j & -j & -j & j & -1 & 1 \\
1 & 1 & -j & -j & j & j & -1 & -1 \\
1 & -1 & -j & j & j & -j & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}, \qquad (12.79c)
\]
\[
[RJ]_4(j) = \begin{pmatrix}
1 & j & j & 1 \\
j & 1 & -1 & -j \\
j & -1 & 1 & -j \\
1 & -j & -j & 1
\end{pmatrix}, \qquad
[RJ]_4^{-1}(j) = \frac{1}{4}\begin{pmatrix}
1 & -j & -j & 1 \\
-j & 1 & -1 & j \\
-j & -1 & 1 & j \\
1 & j & j & 1
\end{pmatrix}, \qquad (12.79d)
\]
\[
[RJ]_8(j) = \begin{pmatrix}
1 & 1 & j & j & j & j & 1 & 1 \\
1 & -1 & j & -j & j & -j & 1 & -1 \\
j & j & 1 & 1 & -1 & -1 & -j & -j \\
j & -j & 1 & -1 & -1 & 1 & -j & j \\
j & j & -1 & -1 & 1 & 1 & -j & -j \\
j & -j & -1 & 1 & 1 & -1 & -j & j \\
1 & 1 & -j & -j & -j & -j & 1 & 1 \\
1 & -1 & -j & j & -j & j & 1 & -1
\end{pmatrix}, \qquad (12.79e)
\]
\[
[RJ]_8^{-1}(j) = \frac{1}{8}\begin{pmatrix}
1 & 1 & -j & -j & -j & -j & 1 & 1 \\
1 & -1 & -j & j & -j & j & 1 & -1 \\
-j & -j & 1 & 1 & -1 & -1 & j & j \\
-j & j & 1 & -1 & -1 & 1 & j & -j \\
-j & -j & -1 & -1 & 1 & 1 & j & j \\
-j & j & -1 & 1 & 1 & -1 & j & -j \\
1 & 1 & j & j & j & j & 1 & 1 \\
1 & -1 & j & -j & j & -j & 1 & -1
\end{pmatrix}. \qquad (12.79f)
\]
We now introduce the notation $[RJ]_8(a, b, c, d, e, f) = (R_{i,j})_{i,j=0}^3$, where $R_{i,j}$ is a parametric reverse jacket matrix of order 2 [see Eq. (12.68)], i.e.,
\[
\begin{aligned}
R_{0,0} &= R_{0,3} = [RJ]_2(a, b), & R_{0,1} &= R_{0,2} = [RJ]_2(c, d), \\
R_{1,0} &= -R_{1,3} = [RJ]_2(c, d), & R_{1,1} &= -R_{1,2} = -[RJ]_2(e, f), \\
R_{2,0} &= -R_{2,3} = [RJ]_2(c, d), & R_{2,1} &= -R_{2,2} = [RJ]_2(e, f), \\
R_{3,0} &= R_{3,3} = [RJ]_2(a, b), & R_{3,1} &= R_{3,2} = -[RJ]_2(c, d).
\end{aligned} \qquad (12.80)
\]
Let us consider the following matrices:
\[
\begin{aligned}
R^{(1)}_8(a, b, \ldots, e, f; w) &= \left(w^{(i_1\oplus i_0)(j_1\oplus j_0)}\,R_{i,j}\right)_{i,j=0}^3, \\
R^{(2)}_8(a, b, \ldots, e, f; w) &= \left(w^{(i_1\oplus i_0) + (j_1\oplus j_0)}\,R_{i,j}\right)_{i,j=0}^3, \\
R^{(3)}_8(a, b, \ldots, e, f; w) &= \left(w^{i_1\oplus i_0\oplus j_1\oplus j_0}\,R_{i,j}\right)_{i,j=0}^3.
\end{aligned} \qquad (12.81)
\]
The inverse matrices of Eq. (12.81) can be presented as
\[
\begin{aligned}
\left(R^{(1)}_8\right)^{-1}(a, b, \ldots, e, f; w) &= \left(w^{-(i_1\oplus i_0)(j_1\oplus j_0)}\,R^{-1}_{i,j}\right)_{i,j=0}^3, \\
\left(R^{(2)}_8\right)^{-1}(a, b, \ldots, e, f; w) &= \left(w^{-(i_1\oplus i_0) - (j_1\oplus j_0)}\,R^{-1}_{i,j}\right)_{i,j=0}^3, \\
\left(R^{(3)}_8\right)^{-1}(a, b, \ldots, e, f; w) &= \left(w^{-(i_1\oplus i_0\oplus j_1\oplus j_0)}\,R^{-1}_{i,j}\right)_{i,j=0}^3.
\end{aligned} \qquad (12.82)
\]
One can show that the matrices of Eq. (12.81) and their inverse matrices in Eq. (12.82) are parametric reverse jacket matrices and have the following forms, respectively:
\[
R^{(1)}_8 = \begin{pmatrix}
R_{0,0} & R_{0,1} & R_{0,1} & R_{0,0} \\
R_{0,1} & -wR_{1,1} & wR_{1,1} & -R_{0,1} \\
R_{0,1} & wR_{1,1} & -wR_{1,1} & -R_{0,1} \\
R_{0,0} & -R_{0,1} & -R_{0,1} & R_{0,0}
\end{pmatrix}, \qquad (12.83a)
\]
\[
\left(R^{(1)}_8\right)^{-1} = \begin{pmatrix}
R^{-1}_{0,0} & R^{-1}_{0,1} & R^{-1}_{0,1} & R^{-1}_{0,0} \\[2pt]
R^{-1}_{0,1} & -\frac{1}{w}R^{-1}_{1,1} & \frac{1}{w}R^{-1}_{1,1} & -R^{-1}_{0,1} \\[2pt]
R^{-1}_{0,1} & \frac{1}{w}R^{-1}_{1,1} & -\frac{1}{w}R^{-1}_{1,1} & -R^{-1}_{0,1} \\[2pt]
R^{-1}_{0,0} & -R^{-1}_{0,1} & -R^{-1}_{0,1} & R^{-1}_{0,0}
\end{pmatrix}, \qquad (12.83b)
\]
\[
R^{(2)}_8 = \begin{pmatrix}
R_{0,0} & wR_{0,1} & wR_{0,1} & R_{0,0} \\
wR_{0,1} & -w^2R_{1,1} & w^2R_{1,1} & -wR_{0,1} \\
wR_{0,1} & w^2R_{1,1} & -w^2R_{1,1} & -wR_{0,1} \\
R_{0,0} & -wR_{0,1} & -wR_{0,1} & R_{0,0}
\end{pmatrix}, \qquad (12.83c)
\]
\[
\left(R^{(2)}_8\right)^{-1} = \begin{pmatrix}
R^{-1}_{0,0} & \frac{1}{w}R^{-1}_{0,1} & \frac{1}{w}R^{-1}_{0,1} & R^{-1}_{0,0} \\[2pt]
\frac{1}{w}R^{-1}_{0,1} & -\frac{1}{w^2}R^{-1}_{1,1} & \frac{1}{w^2}R^{-1}_{1,1} & -\frac{1}{w}R^{-1}_{0,1} \\[2pt]
\frac{1}{w}R^{-1}_{0,1} & \frac{1}{w^2}R^{-1}_{1,1} & -\frac{1}{w^2}R^{-1}_{1,1} & -\frac{1}{w}R^{-1}_{0,1} \\[2pt]
R^{-1}_{0,0} & -\frac{1}{w}R^{-1}_{0,1} & -\frac{1}{w}R^{-1}_{0,1} & R^{-1}_{0,0}
\end{pmatrix}, \qquad (12.83d)
\]
\[
R^{(3)}_8 = \begin{pmatrix}
R_{0,0} & wR_{0,1} & wR_{0,1} & R_{0,0} \\
wR_{0,1} & -R_{1,1} & R_{1,1} & -wR_{0,1} \\
wR_{0,1} & R_{1,1} & -R_{1,1} & -wR_{0,1} \\
R_{0,0} & -wR_{0,1} & -wR_{0,1} & R_{0,0}
\end{pmatrix}, \qquad (12.83e)
\]
\[
\left(R^{(3)}_8\right)^{-1} = \begin{pmatrix}
R^{-1}_{0,0} & \frac{1}{w}R^{-1}_{0,1} & \frac{1}{w}R^{-1}_{0,1} & R^{-1}_{0,0} \\[2pt]
\frac{1}{w}R^{-1}_{0,1} & -R^{-1}_{1,1} & R^{-1}_{1,1} & -\frac{1}{w}R^{-1}_{0,1} \\[2pt]
\frac{1}{w}R^{-1}_{0,1} & R^{-1}_{1,1} & -R^{-1}_{1,1} & -\frac{1}{w}R^{-1}_{0,1} \\[2pt]
R^{-1}_{0,0} & -\frac{1}{w}R^{-1}_{0,1} & -\frac{1}{w}R^{-1}_{0,1} & R^{-1}_{0,0}
\end{pmatrix}. \qquad (12.83f)
\]
12.5 Fast Parametric Reverse Jacket Transform
As follows from Theorem 12.3.1.1, the general parametric reverse jacket matrix has the following form [see also Eq. (12.63)]:
\[
Q = \frac{1}{2}\begin{pmatrix}
A & B & B & A \\
B & -C & C & -B \\
B & C & -C & -B \\
A & -B & -B & A
\end{pmatrix}. \qquad (12.84)
\]
Here A, B, and C are also parametric reverse jacket matrices of order n. Let $X = (x_0, x_1, \ldots, x_{N-1})^T$ be an input signal-vector column of length N = 4n. We split this vector into four parts as follows:
\[
X = (X_0, X_1, X_2, X_3)^T, \qquad (12.85)
\]
where
\[
\begin{aligned}
X_0^T &= (x_0, x_1, \ldots, x_{n-1}), & X_1^T &= (x_n, x_{n+1}, \ldots, x_{2n-1}), \\
X_2^T &= (x_{2n}, x_{2n+1}, \ldots, x_{3n-1}), & X_3^T &= (x_{3n}, x_{3n+1}, \ldots, x_{4n-1}).
\end{aligned} \qquad (12.86)
\]
Figure 12.2 Flow graph of a Q transform [see Eq. (12.87)].
Now the parametric RJT can be presented as (the coefficient 1/2 is omitted)
\[
QX = \begin{pmatrix}
A & B & B & A \\
B & -C & C & -B \\
B & C & -C & -B \\
A & -B & -B & A
\end{pmatrix}\begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix}
= \begin{pmatrix}
A(X_0 + X_3) + B(X_1 + X_2) \\
B(X_0 - X_3) - C(X_1 - X_2) \\
B(X_0 - X_3) + C(X_1 - X_2) \\
A(X_0 + X_3) - B(X_1 + X_2)
\end{pmatrix}. \qquad (12.87)
\]
A flow graph of the Eq. (12.87) transform is given in Fig. 12.2. It is not difficult to calculate that the number of required operations for the Eq. (12.87) transform is given by
\[
C^+_Q(N) = 2N + C^+_A(n) + 2C^+_B(n) + C^+_C(n), \qquad
C^\times_Q(N) = C^\times_A(n) + 2C^\times_B(n) + C^\times_C(n), \qquad (12.88)
\]
where $C^+_P(n)$ and $C^\times_P(n)$ denote the numbers of additions and multiplications of the jacket transform P. Below, we present in detail the reverse jacket transforms for some small orders.
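The butterfly of Eq. (12.87) is straightforward to implement with pluggable sub-transforms. The sketch below (illustrative, with the 1/2 factor of Eq. (12.84) included) uses scalar sub-transforms, n = 1, which gives exactly the $Q_1(a, b, c)$ transform of Eq. (12.65), and checks the result against direct matrix multiplication:

```python
from fractions import Fraction as F

def fast_q(x0, x1, x2, x3, A, B, C):
    # Eq. (12.87): 2N additions plus sub-transforms A (once), B (twice), C (once).
    s0 = [u + v for u, v in zip(x0, x3)]
    s1 = [u + v for u, v in zip(x1, x2)]
    d0 = [u - v for u, v in zip(x0, x3)]
    d1 = [u - v for u, v in zip(x1, x2)]
    As0, Bs1, Bd0, Cd1 = A(s0), B(s1), B(d0), C(d1)
    half = F(1, 2)
    return ([half * (u + v) for u, v in zip(As0, Bs1)] +
            [half * (u - v) for u, v in zip(Bd0, Cd1)] +
            [half * (u + v) for u, v in zip(Bd0, Cd1)] +
            [half * (u - v) for u, v in zip(As0, Bs1)])

# Scalar sub-transforms (n = 1): the Q1(a, b, c) transform.
a, b, c = F(1), F(2), F(3)
A = lambda v: [a * v[0]]
B = lambda v: [b * v[0]]
C = lambda v: [c * v[0]]

x = [F(1), F(2), F(3), F(4)]
y = fast_q([x[0]], [x[1]], [x[2]], [x[3]], A, B, C)

Q1 = [[a, b, b, a], [b, -c, c, -b], [b, c, -c, -b], [a, -b, -b, a]]
direct = [F(1, 2) * sum(q * xi for q, xi in zip(row, x)) for row in Q1]
assert y == direct
```

Counting operations in `fast_q` for scalar blocks reproduces Eq. (12.88): eight additions and four multiplications, in agreement with the discussion of Eq. (12.98) below.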
12.5.1 Fast 4 × 4 parametric reverse jacket transform
12.5.1.1 One-parameter case
(1) Let $X = (x_0, x_1, x_2, x_3)$ be a column vector. Consider the parametric reverse jacket matrix with one parameter given in Eq. (12.74). The forward 1D parametric RJT depending on one parameter can be calculated as
\[
[RJ]_4(a)X = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & -a & a & -1 \\
1 & a & -a & -1 \\
1 & -1 & -1 & 1
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix}
(x_0 + x_3) + (x_1 + x_2) \\
(x_0 - x_3) - a(x_1 - x_2) \\
(x_0 - x_3) + a(x_1 - x_2) \\
(x_0 + x_3) - (x_1 + x_2)
\end{pmatrix}. \qquad (12.89)
\]
We see that the parametric RJT needs eight addition and one multiplication operations. The higher-order parametric RJT matrix generated by the
Figure 12.3 Flow graph of an 8-point parametric reverse jacket transform.
Kronecker product has the following form ($N = 2^n$):
\[
[RJ]_N(a) = [RJ]_4(a) \otimes H_{N/4}. \qquad (12.90)
\]
Taking into account Eq. (12.88), we find that
\[
C^+_{[RJ]_N(a)} = N\log_2 N, \qquad C^\times_{[RJ]_N(a)} = \frac{N}{4}. \qquad (12.91)
\]
From Eq. (12.75a), it follows that the inverse 1D parametric RJT has the same complexity. Note that if a is a power of 2, then we have
\[
C^+_{[RJ]_N(a)} = N\log_2 N, \qquad C^{shift}_{[RJ]_N(a)} = \frac{N}{4}, \qquad (12.92)
\]
where
\[
\begin{aligned}
&H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \quad X^T = (x_0, x_1, \ldots, x_7), \quad Y^T = (y_0, y_1, \ldots, y_7), \\
&X_0^T = (x_0, x_1), \quad X_1^T = (x_2, x_3), \quad X_2^T = (x_4, x_5), \quad X_3^T = (x_6, x_7), \\
&Y_0^T = (y_0, y_1), \quad Y_1^T = (y_2, y_3), \quad Y_2^T = (y_4, y_5), \quad Y_3^T = (y_6, y_7).
\end{aligned} \qquad (12.93)
\]
A flow graph of an 8-point $[RJ]_8(a)X = ([RJ]_4(a) \otimes H_2)X = Y$ transform is given in Fig. 12.3.
(2) Consider the parametric RJT from Eq. (12.78a). Let $X = (x_0, x_1, x_2, x_3)$ be a column vector. The forward 1D parametric RJT depending on one parameter can be calculated as
\[
[RJ]_4(a)X = \begin{pmatrix}
1 & a & a & 1 \\
a & -a^2 & a^2 & -a \\
a & a^2 & -a^2 & -a \\
1 & -a & -a & 1
\end{pmatrix}\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix}
(x_0 + x_3) + a(x_1 + x_2) \\
a(x_0 - x_3) - a^2(x_1 - x_2) \\
a(x_0 - x_3) + a^2(x_1 - x_2) \\
(x_0 + x_3) - a(x_1 + x_2)
\end{pmatrix}. \qquad (12.94)
\]
Figure 12.4 Flow graph of the 8-point parametric RJT in Eq. (12.94).
We see that this transform needs only eight addition and three multiplication operations. The higher-order parametric RJT matrix generated by the Kronecker product has the following form (N = 2^n):
$$[RJ]_N(a) = [RJ]_4(a) \otimes H_{N/4}. \quad (12.95)$$
Taking into account Eq. (12.88), we find that
$$C^+_{[RJ]_N(a)} = N \log_2 N, \qquad C^\times_{[RJ]_N(a)} = \frac{3N}{4}. \quad (12.96)$$
The same complexity is required for the inverse parametric RJT from Eq. (12.78a). Note that if a is a power of 2, then we have
$$C^+_{[RJ]_N(a)} = N \log_2 N, \qquad C^{\mathrm{shift}}_{[RJ]_N(a)} = \frac{3N}{4}. \quad (12.97)$$
A flow graph of an 8-point $[RJ]_8(a)X = ([RJ]_4(a) \otimes H_2)X = Y$ transform [see Eqs. (12.93) and (12.94)] is given in Fig. 12.4.
12.5.1.2 Case of three parameters
Let X = (x0, x1, x2, x3) be a column vector. Consider the parametric reverse jacket matrix with three parameters given in Eq. (12.65). The forward 1D parametric RJT depending on three parameters can be calculated as
$$
Q_1(a, b, c)X =
\begin{pmatrix}
a & b & b & a \\
b & -c & c & -b \\
b & c & -c & -b \\
a & -b & -b & a
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix}
a(x_0 + x_3) + b(x_1 + x_2) \\
b(x_0 - x_3) - c(x_1 - x_2) \\
b(x_0 - x_3) + c(x_1 - x_2) \\
a(x_0 + x_3) - b(x_1 + x_2)
\end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix}. \quad (12.98)
$$
Figure 12.5 Flow graph of the 4-point transform in Eq. (12.98).
Figure 12.6 Flow graph of the N-point transform in Eq. (12.99).
From Eq. (12.98), we can see that the forward 1D parametric RJT of order 4 requires eight addition and four multiplication operations. A flow graph of the 4-point transform in Eq. (12.98) is given in Fig. 12.5.
Note that if a, b, and c are powers of 2, then the forward 1D parametric RJT of order 4 can be performed without multiplication operations. It requires only eight addition and four shift operations.
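The fast form of Eq. (12.98) can be sketched as follows (the function name is ours); when a, b, and c are powers of 2, the four products become shifts:

```python
def q1_fast(x, a, b, c):
    """Fast 4-point three-parameter RJT of Eq. (12.98):
    8 additions/subtractions and 4 multiplications."""
    x0, x1, x2, x3 = x
    s03, d03 = x0 + x3, x0 - x3    # 4 additions
    s12, d12 = x1 + x2, x1 - x2
    as03, bs12 = a * s03, b * s12  # 4 multiplications
    bd03, cd12 = b * d03, c * d12  # (shifts when a, b, c are powers of 2)
    return [as03 + bs12, bd03 - cd12, bd03 + cd12, as03 - bs12]  # 4 additions
```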
A parametric reverse jacket matrix of higher order N = 2^k (k > 2) can be generated recursively as
$$[RJ]_N(a, b, c) = [RJ]_4(a, b, c) \otimes H_{N/4}. \quad (12.99)$$
It can be shown that the complexity of the three-parameter RJT in Eq. (12.99) is equal to
$$C^+_N(a, b, c) = N \log_2 N, \qquad C^\times_N(a, b, c) = N. \quad (12.100)$$
A flow graph of an N-point transform in Eq. (12.99) is given in Fig. 12.6.
12.5.2 Fast 8 × 8 parametric reverse jacket transform
In this section, we will consider a parametric reverse jacket matrix [RJ]_8(a, b, c, d, e, f) [see Eq. (12.68)] with a varying number of parameters.
12.5.2.1 Case of two parameters
Let a = b = c = d and e = f, and let X and Y be the input and output vectors. From the matrix in Eq. (12.68), we find that
$$
[RJ]_8(a, e)X =
\begin{pmatrix}
a & a & a & a & a & a & a & a \\
a & -a & a & -a & a & -a & a & -a \\
a & a & -e & -e & e & e & -a & -a \\
a & -a & -e & e & e & -e & -a & a \\
a & a & e & e & -e & -e & -a & -a \\
a & -a & e & -e & -e & e & -a & a \\
a & a & -a & -a & -a & -a & a & a \\
a & -a & -a & a & -a & a & a & -a
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}. \quad (12.101)
$$
Hence, the 8-point transform [RJ]8(a, e)X = Y can be computed as
y0 = a[(x0 + x7) + (x1 + x6)] + a[(x2 + x5) + (x3 + x4)],
y6 = a[(x0 + x7) + (x1 + x6)] − a[(x2 + x5) + (x3 + x4)],
y1 = a[(x0 − x7) − (x1 − x6)] + a[(x2 − x5) − (x3 − x4)],
y7 = a[(x0 − x7) − (x1 − x6)] − a[(x2 − x5) − (x3 − x4)],
y2 = a[(x0 − x7) + (x1 − x6)] − e[(x2 − x5) + (x3 − x4)],
y4 = a[(x0 − x7) + (x1 − x6)] + e[(x2 − x5) + (x3 − x4)],
y3 = a[(x0 + x7) − (x1 + x6)] − e[(x2 + x5) − (x3 + x4)],
y5 = a[(x0 + x7) − (x1 + x6)] + e[(x2 + x5) − (x3 + x4)].
(12.102)
From Eq. (12.102), it follows that an 8-point parametric RJT with two parameters needs 24 addition and eight multiplication operations. A flow graph of this transform is given in Fig. 12.7.
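The 24-addition, 8-multiplication schedule of Eq. (12.102) can be checked against a direct matrix product. The following is an illustrative sketch (names and staging are ours):

```python
import numpy as np

def rjt8_two_param(x, a, e):
    """Fast 8-point two-parameter RJT of Eq. (12.102):
    24 additions/subtractions and 8 multiplications."""
    x0, x1, x2, x3, x4, x5, x6, x7 = x
    # stage 1: butterflies on mirrored pairs (8 additions)
    s07, d07 = x0 + x7, x0 - x7
    s16, d16 = x1 + x6, x1 - x6
    s25, d25 = x2 + x5, x2 - x5
    s34, d34 = x3 + x4, x3 - x4
    # stage 2: butterflies on the stage-1 outputs (8 additions)
    u0, u1 = s07 + s16, s07 - s16
    u2, u3 = d07 - d16, d07 + d16
    v0, v1 = s25 + s34, s25 - s34
    v2, v3 = d25 - d34, d25 + d34
    # 8 multiplications by the parameters
    au0, au1, au2, au3 = a * u0, a * u1, a * u2, a * u3
    av0, av2, ev1, ev3 = a * v0, a * v2, e * v1, e * v3
    # stage 3: final butterflies (8 additions), ordered y0..y7
    return np.array([au0 + av0, au2 + av2, au3 - ev3, au1 - ev1,
                     au3 + ev3, au1 + ev1, au0 - av0, au2 - av2])
```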
12.5.2.2 Case of three parameters
(1) Let a = b, c = d, and e = f . From the matrix in Eq. (12.68), we find that
$$
[RJ]_8(a, c, e)X =
\begin{pmatrix}
a & a & c & c & c & c & a & a \\
a & -a & c & -c & c & -c & a & -a \\
c & c & -e & -e & e & e & -c & -c \\
c & -c & -e & e & e & -e & -c & c \\
c & c & e & e & -e & -e & -c & -c \\
c & -c & e & -e & -e & e & -c & c \\
a & a & -c & -c & -c & -c & a & a \\
a & -a & -c & c & -c & c & a & -a
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}. \quad (12.103)
$$
Figure 12.7 Flow graph of the 8-point transform in Eq. (12.102).
From Eq. (12.103), we obtain
y0 = a[(x0 + x7) + (x1 + x6)] + c[(x2 + x5) + (x3 + x4)],
y6 = a[(x0 + x7) + (x1 + x6)] − c[(x2 + x5) + (x3 + x4)],
y1 = a[(x0 − x7) − (x1 − x6)] + c[(x2 − x5) − (x3 − x4)],
y7 = a[(x0 − x7) − (x1 − x6)] − c[(x2 − x5) − (x3 − x4)],
y2 = c[(x0 − x7) + (x1 − x6)] − e[(x2 − x5) + (x3 − x4)],
y4 = c[(x0 − x7) + (x1 − x6)] + e[(x2 − x5) + (x3 − x4)],
y3 = c[(x0 + x7) − (x1 + x6)] − e[(x2 + x5) − (x3 + x4)],
y5 = c[(x0 + x7) − (x1 + x6)] + e[(x2 + x5) − (x3 + x4)].
(12.104)
From Eq. (12.104), it follows that an 8-point parametric RJT with three parameters needs 24 addition and eight multiplication operations. A flow graph of this transform is given in Fig. 12.8.
(2) Let a = b = c = d. From the matrix in Eq. (12.68), we find (below, r = e − 2f) that
$$
[RJ]_8(a, e, f)X =
\begin{pmatrix}
a & a & a & a & a & a & a & a \\
a & -a & a & -a & a & -a & a & -a \\
a & a & -e & -f & e & f & -a & -a \\
a & -a & -f & -r & f & r & -a & a \\
a & a & e & f & -e & -f & -a & -a \\
a & -a & f & r & -f & -r & -a & a \\
a & a & -a & -a & -a & -a & a & a \\
a & -a & -a & a & -a & a & a & -a
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}, \quad (12.105)
$$
Figure 12.8 Flow graph of the 8-point transform in Eq. (12.103).
where
y0 = a[(x0 + x7) + (x1 + x6)] + a[(x2 + x4) + (x3 + x5)],
y1 = a[(x0 − x7) − (x1 − x6)] + a[(x2 + x4) − (x3 + x5)],
y2 = a[(x0 − x7) + (x1 − x6)] − e(x2 − x4) − f(x3 − x5),
y3 = a[(x0 + x7) − (x1 + x6)] − f(x2 − x4) − r(x3 − x5),
y4 = a[(x0 − x7) + (x1 − x6)] + e(x2 − x4) + f(x3 − x5),
y5 = a[(x0 + x7) − (x1 + x6)] + f(x2 − x4) + r(x3 − x5),
y6 = a[(x0 + x7) + (x1 + x6)] − a[(x2 + x4) + (x3 + x5)],
y7 = a[(x0 − x7) − (x1 − x6)] − a[(x2 + x4) − (x3 + x5)].
(12.106)
We see that the 8-point parametric RJT in Eq. (12.105) with three parameters needs 24 addition and 10 multiplication operations. A flow graph of this transform is given in Fig. 12.9.
12.5.2.3 Case of four parameters
(1) Let a = b, c = d. From the matrix in Eq. (12.68), we find that
$$
[RJ]_8(a, c, e, f)X =
\begin{pmatrix}
a & a & c & c & c & c & a & a \\
a & -a & c & -c & c & -c & a & -a \\
c & c & -e & -f & e & f & -c & -c \\
c & -c & -f & -r & f & r & -c & c \\
c & c & e & f & -e & -f & -c & -c \\
c & -c & f & r & -f & -r & -c & c \\
a & a & -c & -c & -c & -c & a & a \\
a & -a & -c & c & -c & c & a & -a
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}, \quad (12.107)
$$
Figure 12.9 Flow graph of the 8-point transform in Eq. (12.105).
where r = e − 2f and
y0 = a[(x0 + x7) + (x1 + x6)] + c[(x2 + x4) + (x3 + x5)],
y1 = a[(x0 − x7) − (x1 − x6)] + c[(x2 + x4) − (x3 + x5)],
y2 = c[(x0 − x7) + (x1 − x6)] − [e(x2 − x4) + f(x3 − x5)],
y3 = c[(x0 + x7) − (x1 + x6)] − [f(x2 − x4) + r(x3 − x5)],
y4 = c[(x0 − x7) + (x1 − x6)] + [e(x2 − x4) + f(x3 − x5)],
y5 = c[(x0 + x7) − (x1 + x6)] + [f(x2 − x4) + r(x3 − x5)],
y6 = a[(x0 + x7) + (x1 + x6)] − c[(x2 + x4) + (x3 + x5)],
y7 = a[(x0 − x7) − (x1 − x6)] − c[(x2 + x4) − (x3 + x5)].
(12.108)
From Eq. (12.108), it follows that an 8-point parametric RJT with four parameters needs 24 addition and 10 multiplication operations. A flow graph of this transform is given in Fig. 12.10.
(2) Let e = f and c = d. From the matrix in Eq. (12.68), we find
$$
[RJ]_8(a, b, c, e)X =
\begin{pmatrix}
a & b & c & c & c & c & a & b \\
b & p & c & -c & c & -c & b & p \\
c & c & -e & -e & e & e & -c & -c \\
c & -c & -e & e & e & -e & -c & c \\
c & c & e & e & -e & -e & -c & -c \\
c & -c & e & -e & -e & e & -c & c \\
a & b & -c & -c & -c & -c & a & b \\
b & p & -c & c & -c & c & b & p
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}, \quad (12.109)
$$
Figure 12.10 Flow graph of the 8-point transform in Eq. (12.107).
where p = a − 2b and
y0 = a(x0 + x6) + b(x1 + x7) + c[(x2 + x4) + (x3 + x5)],
y1 = b(x0 + x6) + p(x1 + x7) + c[(x2 + x4) − (x3 + x5)],
y2 = c[(x0 − x6) + (x1 − x7)] − e[(x2 − x4) + (x3 − x5)],
y3 = c[(x0 − x6) − (x1 − x7)] − e[(x2 − x4) − (x3 − x5)],
y4 = c[(x0 − x6) + (x1 − x7)] + e[(x2 − x4) + (x3 − x5)],
y5 = c[(x0 − x6) − (x1 − x7)] + e[(x2 − x4) − (x3 − x5)],
y6 = a(x0 + x6) + b(x1 + x7) − c[(x2 + x4) + (x3 + x5)],
y7 = b(x0 + x6) + p(x1 + x7) − c[(x2 + x4) − (x3 + x5)].
(12.110)
From Eq. (12.110), it follows that an 8-point parametric RJT with four parameters needs 24 addition and 10 multiplication operations. A flow graph of this transform is given in Fig. 12.11.
12.5.2.4 Case of five parameters
(1) Let e = f . From the matrix in Eq. (12.68), we find that
$$
[RJ]_8(a, b, c, d, e)X =
\begin{pmatrix}
a & b & c & d & c & d & a & b \\
b & p & d & q & d & q & b & p \\
c & d & -e & -e & e & e & -c & -d \\
d & q & -e & e & e & -e & -d & -q \\
c & d & e & e & -e & -e & -c & -d \\
d & q & e & -e & -e & e & -d & -q \\
a & b & -c & -d & -c & -d & a & b \\
b & p & -d & -q & -d & -q & b & p
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}, \quad (12.111)
$$
Figure 12.11 Flow graph of the 8-point transform in Eq. (12.109).
where p = a − 2b, q = c − 2d, and
y0 = [a(x0 + x6) + b(x1 + x7)] + [c(x2 + x4) + d(x3 + x5)],
y1 = [b(x0 + x6) + p(x1 + x7)] + [d(x2 + x4) + q(x3 + x5)],
y2 = [c(x0 − x6) + d(x1 − x7)] − [e(x2 − x4) + e(x3 − x5)],
y3 = [d(x0 − x6) + q(x1 − x7)] − [e(x2 − x4) − e(x3 − x5)],
y4 = [c(x0 − x6) + d(x1 − x7)] + [e(x2 − x4) + e(x3 − x5)],
y5 = [d(x0 − x6) + q(x1 − x7)] + [e(x2 − x4) − e(x3 − x5)],
y6 = [a(x0 + x6) + b(x1 + x7)] − [c(x2 + x4) + d(x3 + x5)],
y7 = [b(x0 + x6) + p(x1 + x7)] − [d(x2 + x4) + q(x3 + x5)].
(12.112)
From Eq. (12.112), it follows that an 8-point parametric RJT with five parameters needs 24 addition and 14 multiplication operations. A flow graph of this transform is given in Fig. 12.12.
12.5.2.5 Case of six parameters
Let X = (x0, x1, . . . , x7) be a column vector. The forward 1D parametric reverse jacket transform depending on six parameters [see Eq. (12.68)] can be realized as
follows:

Figure 12.12 Flow graph of the 8-point transform in Eq. (12.111).
$$
[RJ]_8(a, b, c, d, e, f)X =
\begin{pmatrix}
a & b & c & d & c & d & a & b \\
b & p & d & q & d & q & b & p \\
c & d & -e & -f & e & f & -c & -d \\
d & q & -f & -r & f & r & -d & -q \\
c & d & e & f & -e & -f & -c & -d \\
d & q & f & r & -f & -r & -d & -q \\
a & b & -c & -d & -c & -d & a & b \\
b & p & -d & -q & -d & -q & b & p
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{pmatrix}, \quad (12.113)
$$
where p = a − 2b, q = c − 2d, r = e − 2f, and yi is defined as

y0 = [a(x0 + x6) + b(x1 + x7)] + [c(x2 + x4) + d(x3 + x5)],
y1 = [b(x0 + x6) + p(x1 + x7)] + [d(x2 + x4) + q(x3 + x5)],
y2 = [c(x0 − x6) + d(x1 − x7)] − [e(x2 − x4) + f(x3 − x5)],
y3 = [d(x0 − x6) + q(x1 − x7)] − [f(x2 − x4) + r(x3 − x5)],
y4 = [c(x0 − x6) + d(x1 − x7)] + [e(x2 − x4) + f(x3 − x5)],
y5 = [d(x0 − x6) + q(x1 − x7)] + [f(x2 − x4) + r(x3 − x5)],
y6 = [a(x0 + x6) + b(x1 + x7)] − [c(x2 + x4) + d(x3 + x5)],
y7 = [b(x0 + x6) + p(x1 + x7)] − [d(x2 + x4) + q(x3 + x5)].
(12.114)
Figure 12.13 Flow graph of the 8-point transform in Eq. (12.113).
From Eq. (12.114), we can see that a forward 1D parametric RJT of order 8 requires 24 addition and 16 multiplication operations. A flow graph of this transform is given in Fig. 12.13.
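The full six-parameter schedule of Eq. (12.114) can likewise be checked against the matrix of Eq. (12.113). The sketch below (names ours) uses 24 additions and 16 multiplications:

```python
import numpy as np

def rjt8_six_param(x, a, b, c, d, e, f):
    """Fast 8-point six-parameter RJT of Eq. (12.114):
    24 additions/subtractions and 16 multiplications."""
    p, q, r = a - 2 * b, c - 2 * d, e - 2 * f
    x0, x1, x2, x3, x4, x5, x6, x7 = x
    # stage 1: butterflies on mirrored pairs (8 additions)
    s06, d06 = x0 + x6, x0 - x6
    s17, d17 = x1 + x7, x1 - x7
    s24, d24 = x2 + x4, x2 - x4
    s35, d35 = x3 + x5, x3 - x5
    # stage 2: 2x2 parametric rotations (16 multiplications, 8 additions)
    t0 = a * s06 + b * s17
    t1 = b * s06 + p * s17
    t2 = c * d06 + d * d17
    t3 = d * d06 + q * d17
    u0 = c * s24 + d * s35
    u1 = d * s24 + q * s35
    u2 = e * d24 + f * d35
    u3 = f * d24 + r * d35
    # stage 3: final butterflies (8 additions)
    return np.array([t0 + u0, t1 + u1, t2 - u2, t3 - u3,
                     t2 + u2, t3 + u3, t0 - u0, t1 - u1])
```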
References
1. S. S. Agaian, K. O. Egiazarian, and N. A. Babaian, "A family of fast orthogonal transforms reflecting psychophysical properties of vision," Pattern Recogn. Image Anal. 2 (1), 1–8 (1992).

2. M. Lee and D. Kim, "Weighted Hadamard transformation for S/N ratio enhancement in image transformation," in Proc. IEEE Int. Symp. Circuits and Syst., Vol. 1, Montreal, 65–68 (May 1984).

3. D. M. Khuntsariya, "The use of the weighted Walsh transform in problems of effective image signal coding," GPI Trans. Tbilisi. 10 (352), 59–62 (1989).

4. M. H. Lee, J. Y. Park, M. W. Kwon, and S. R. Lee, "The inverse jacket matrix of weighted Hadamard transform for multidimensional signal processing," in Proc. 7th IEEE Int. Symp. Personal, Indoor and Mobile Radio Communications, PIMRC'96, 482–486 (Oct. 1996).

5. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ (1993).

6. M. H. Lee and S. R. Lee, "On the reverse jacket matrix for weighted Hadamard transform," IEEE Trans. Circuits Syst. 45, 436–441 (1998).

7. M. H. Lee, "A new reverse jacket transform and its fast algorithm," IEEE Trans. Circuits Syst. 47 (1), 39–47 (2000).

8. M. Lee, B. Sundar Rajan, and J. Y. Park, "A generalized reverse jacket transform," IEEE Trans. Circuits Syst. II 48 (7), 684–690 (2001).

9. J. Hou, J. Liu, and M. H. Lee, "Doubly stochastic processing on jacket matrices," in Proc. IEEE Region 10 Conference: TENCON, Vol. 1, 681–684 (Nov. 2004).

10. M. H. Lee, "Jacket matrix and its fast algorithms for cooperative wireless signal processing," Report, 92 (July 2008).

11. M. H. Lee, "The center weighted Hadamard transform," IEEE Trans. Circuits Syst. 36 (9), 1247–1249 (1989).

12. K. J. Horadam, "The jacket matrix construction," in Hadamard Matrices and Their Applications, 85–91, Princeton University Press, London (2007), Chapter 4.5.1.

13. W. P. Ma and M. H. Lee, "Fast reverse jacket transform algorithms," Electron. Lett. 39 (18), 47–48 (2003).

14. M. H. Lee, "A new reverse jacket transform and its fast algorithm," IEEE Trans. Circuits Syst. II 47 (1), 39–47 (2000).

15. M. H. Lee, B. S. Rajan, and J. Y. Park, "A generalized reverse jacket transform," IEEE Trans. Circuits Syst. II 48 (7), 684–691 (2001).

16. G. L. Feng and M. H. Lee, "An explicit construction of co-cyclic Jacket matrices with any size," in Proc. 5th Shanghai Conf. on Combinatorics, Shanghai (May 2005).

17. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, New York (1991).

18. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier, Amsterdam (1988).

19. E. Viscito and P. Allebach, "The analysis and design of multidimensional FIR perfect reconstruction filter banks for arbitrary sampling lattices," IEEE Trans. Circuits Syst. 38, 29–41 (1991).

20. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ (1993).

21. S. R. Lee and M. H. Lee, "On the reverse jacket matrix for weighted Hadamard transform," Schriftenreihe des Fachbereichs Math., SM-DU-352, Duisburg (1996).

22. M. H. Lee, "Fast complex reverse jacket transform," in Proc. 22nd Symp. on Information Theory and Its Applications: SITA99, Yuzawa, Niigata, Japan (Nov.–Dec. 1999).

23. M. H. Lee, B. S. Rajan, and J. Y. Park, "A generalized reverse jacket transform," IEEE Trans. Circuits Syst. II 48 (7), 684–690 (2001).

24. M. G. Parker and M. H. Lee, "Optimal bipolar sequences for the complex reverse jacket transform," in Proc. Int. Symp. on Information Theory and Applications, Honolulu, Hawaii, Vol. 1, 425–428 (2000).

25. C. P. Fan and J.-F. Yang, "Fast center weighted Hadamard transform algorithms," IEEE Trans. Circuits Syst. II 45 (3), 429–432 (1998).

26. M. H. Lee and M. Kaven, "Fast Hadamard transform based on a simple matrix factorization," IEEE Trans. Acoust. Speech Signal Process. 34, 1666–1668 (1986).
Chapter 13
Applications of Hadamard Matrices in Communication Systems
Modern communication systems and digital signal processing (signal modeling),1,2 image compression and image encoding,3 and digital signal processing systems4 are heavily reliant on statistical techniques to recover information in the presence of noise and interference. One of the mathematical structures used to achieve this goal is the Hadamard matrix.4–17 Historically, Plotkin18 first showed the error-correcting capabilities of codes generated from Hadamard matrices. Later, Bose and Shrikhande19 found the connection between Hadamard matrices and symmetrical block code designs. In this chapter, we will discuss some of these applications in error-control coding and in CDMA systems.
13.1 Hadamard Matrices and Communication Systems
13.1.1 Hadamard matrices and error-correction codes

The storage and transmission of digital data lies at the heart of modern computer and communications systems. When a message is transmitted, it has the potential to become scrambled by noise. The goal of this section is to provide a brief introduction to the basic definitions, goals, and constructions in coding theory. We describe some of the classical algebraic constructions of error-correcting codes, including the Hadamard codes. The Hadamard codes are relatively easy to decode; they were the first large class of codes to correct more than a single error. A Hadamard code was used in the Mariner and Voyager space probes to encode information transmitted back to the Earth when the probes visited Mars and the outer planets of the solar system from 1969 to 1976.20 Mariner 9 was a space probe whose mission was to fly to Mars and transmit pictures back to Earth. Figure 13.1 shows one of the pictures transmitted by Mariner 9. Six-bit pixels were encoded using a 32-bit-long Hadamard code that could correct up to seven errors.
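For illustration, a binary Hadamard code of this kind can be built from the rows of a Sylvester-type Hadamard matrix and their negations. The construction below is our own sketch (function name ours), not the book's notation; for n = 32 it yields 64 codewords with minimum distance 16, hence correcting up to ⌊(16 − 1)/2⌋ = 7 errors:

```python
import numpy as np

def hadamard_code(n=32):
    """Rows of the Sylvester Hadamard matrix H_n and their negations,
    mapped +1 -> 0 and -1 -> 1: a binary code with 2n codewords of length n.
    For n = 32, this gives a 64-codeword code with minimum distance 16."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    rows = np.vstack([H, -H])
    return ((1 - rows) // 2).astype(int)
```

The minimum distance can be verified by checking all pairs of codewords.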
13.1.2 Overview of error-correcting codes

The basic communication scenario between a sender and receiver is that the sender wants to send k message symbols over a noisy channel by encoding the k message
Figure 13.1 Part of the Grand Canyon on Mars. This photograph was transmitted by Mariner 9 (from Ref. 20).
symbols into n symbols. The receiver obtains a received word consisting of n symbols and tries to decode and recover the original k message symbols, even if pieces are corrupted. The receiver wants to correct as many errors as possible. In order for the receiver to recover (decode) the correct message, even after the channel corrupts the transmitted k message symbols, the sender, instead of sending the k-bit message, encodes the message by adding several redundancy bits and sends an n-bit encoding of it across the channel. The encoding is chosen in such a way that a decoding algorithm exists to recover the message from a "codeword" that has not been too badly corrupted by the channel.
Formally, a code C is specified by an injective map E : Σ^k → Σ^n that maps k-symbol messages to n-symbol codewords, where Σ is the underlying set of symbols called the alphabet. In this example, we will only consider the binary alphabet (i.e., Σ = {0, 1}). The map E is called the encoding. The image of E is the set of codewords of the code C. Sometimes, we abuse notation and refer to the map E : {0, 1}^k → {0, 1}^n as the code, where k is referred to as the message length of the code C, while n is called the block length.
An error-correcting code is a "smart" technique of representing data so that one can recover the original information even if parts of it are corrupted. Ideally, we would like a code that is capable of correcting all errors that are due to noise; we do not want to waste time sending extraneous data. It is natural that the more errors a code needs to correct per message digit, the less efficient the transmission. In addition, such a code will likely require more complicated encoding and decoding schemes.
One of the key goals in coding theory is the design of optimal error-correcting codes. In addition, we would also like to have easy encoding/decoding systems that can be implemented very easily in hardware.
Figure 13.2 A digital channel for error-control coding.
Figure 13.3 Shannon (Bell Laboratories) and Hamming (AT&T Bell Laboratories) (from http://www-groups.dcs.st-and.ac.uk/history/Bioglndex.html).
The key components of a typical communication system and the relationships between those components are depicted graphically in Fig. 13.2. This is outlined as follows:
• The sender takes the k message symbols and uses E (the encoder) to convert them into n symbols (codewords) suitable for transmission. These are transmitted over the channel (air waves, microwaves, radio waves, telephone lines, etc.).
• The receiver obtains the signal and converts it back into a form useful for the receiver (the decoding process).
• A message sink often tries to detect and correct problems (reception errors) caused by noise.
The fundamental questions that communication systems theory investigates are as follows:

• How much information passes through a channel?
• How can one detect and correct errors introduced by the channel?
• How can one easily encode and decode systems?
• How can one achieve better reliability of the transmission?
Major impetus for the development of coding theory came from Shannon and Hamming (see Fig. 13.3). A mathematical theory of communication was developed
in the 1940s by C. Shannon of Bell Laboratories. In 1948, Shannon21 discovered that it is possible to have the best of both worlds, i.e., good error correction and a fast transmission rate. However, Shannon's theorem does not tell us how to design such codes. The first progress was made by Hamming (see Ref. 22).
In telecommunications, a redundancy check is extra data added to a message for the purposes of error detection. Next, we will examine how to construct an error-correcting code.
Example 13.1.2.1: Let us assume that we want to send the message 1101. Suppose that we receive 10101. Is there an error? If so, what is the correct bit pattern? To answer these questions, we add a 0 or 1 to the end of this message so that the resulting message has an even number of 1s. Thus, we may encode 1101 as 11011. If the original message were 1001, we would encode that as 10010, because the original message already had an even number of 1s. Now, consider receiving the message 10101. Because the number of 1s in the message is odd, we know that an error has been made in transmission. However, we do not know how many errors occurred in transmission or which digit or digits were affected. Thus, a parity check scheme detects errors, but does not locate them for correction. The number of extra symbols is called the redundancy of the code.
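The even-parity scheme of Example 13.1.2.1 can be sketched in a few lines (function names are our own, chosen for illustration):

```python
def add_parity(bits):
    """Append an even-parity bit so the codeword has an even number of 1s."""
    return bits + [sum(bits) % 2]

def has_error(word):
    """A received word with an odd number of 1s must contain an error."""
    return sum(word) % 2 == 1
```

With these helpers, 1101 encodes to 11011, 1001 encodes to 10010, and the received word 10101 is flagged as erroneous, exactly as in the example.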
All error-detection codes (which include all error-detection-and-correction codes) transmit more bits than were in the original data. We can imagine that as the number of parity bits increases, it should be possible to correct more errors. However, as more and more parity check bits are added, the required transmission bandwidth increases as well. Because of the resultant increase in bandwidth, more noise is introduced, and the chance of error increases. Therefore, the goal of error-detection-and-correction coding theory is to choose the extra added data in such a way that it corrects as many errors as possible, while keeping the communications efficiency as high as possible.
Example 13.1.2.2: The parity check code can be used to design a code that can correct an error of one bit. Let the input message have 20 bits: (10010 01001 10110 01101).
Parity check error-detection algorithm:
Input: Suppose we have 20 bits and arrange them in a 4 × 5 array:
1 0 0 1 0
0 1 0 0 1
1 0 1 1 0
0 1 1 0 1
(13.1)
Step 1. Calculate the parity along the rows and columns, and define the last bit in the lower right by the parity of the column/row of parity bits:
1 0 0 1 0 : 0
0 1 0 0 1 : 0
1 0 1 1 0 : 1
0 1 1 0 1 : 1
.. .. .. .. .. : ..
0 0 0 0 0 : 0
(13.2)
Step 2. This larger matrix is sent.
Step 3. Suppose that an error occurs at the third row, fourth column. Then the fourth-column and third-row parity checks will fail. This locates the error and allows us to correct it.
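The row/column parity algorithm above can be sketched as follows (function names ours): a single flipped bit produces exactly one failing row check and one failing column check, whose intersection locates the error.

```python
import numpy as np

def encode_2d_parity(block):
    """Append a parity column and a parity row to a 2D bit array, as in Eq. (13.2)."""
    block = np.asarray(block)
    with_col = np.hstack([block, block.sum(axis=1, keepdims=True) % 2])
    return np.vstack([with_col, with_col.sum(axis=0, keepdims=True) % 2])

def locate_error(received):
    """Return (row, col) of a single flipped bit, or None if all checks pass."""
    received = np.asarray(received)
    bad_rows = np.flatnonzero(received.sum(axis=1) % 2)
    bad_cols = np.flatnonzero(received.sum(axis=0) % 2)
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return int(bad_rows[0]), int(bad_cols[0])
    return None
```

Running this on the 4 × 5 array of Eq. (13.1) and flipping the bit in the third row, fourth column reproduces the behavior described in Step 3.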
Note that this scheme can detect two errors, but cannot correct two errors.

A block code is a set of words that has a well-defined mathematical property or structure, where each word is a sequence of a fixed number of bits. The words belonging to a block code are called codewords. Table 13.1 shows an example of simple block codes with five-bit codewords, where each codeword has odd parity (i.e., an odd number of 1s) or even parity (i.e., an even number of 1s).
A codeword consists of information bits, which carry the information, and parity checks, which carry no information in the sense of that carried by the information bits, but ensure that the codeword has the correct structure required by the block code. Blocks of information bits, referred to as information words, are encoded into codewords by an encoder for the code. The encoder determines the parity bits and appends them to the information word, thus giving a codeword.
A code whose codewords have k information bits and r parity bits has n-bit codewords, where n = k + r. Such a code is referred to as an (n, k) block code, where n and k are, respectively, the block length and information length of the code. The position of the parity bits within a codeword is quite arbitrary. Fig. 13.4 shows a
Table 13.1 Odd-parity and even-parity block codes with five-bit codewords.

(00001) (00000)
(00010) (00011)
(00100) (00101)
(00111) (00110)
(01000) (01001)
(01011) (01010)
(01101) (01100)
(01110) (01111)
(10000) (10001)
(10011) (10010)
(10101) (10100)
(10110) (10111)
(11001) (11000)
(11010) (11011)
(11100) (11101)
(11111) (11110)
Figure 13.4 An n-bit systematic codeword.
codeword whose parity bits are on the right-hand side of the information bits:
n = k + r bits
The rate R of a code is a useful measure of the redundancy within a block code and is defined as the ratio of the number of information bits to the block length, R = k/n. Informally, the rate is the amount of information (about the message) contained in each bit of the codeword. We can see that the code rate is bounded by 0 ≤ R ≤ 1.
For a fixed number of information bits, the code rate R tends to 0 as the number of parity bits r increases. Consider the case R = 1, i.e., n = k. This means that no coding occurs because there are no parity bits. Low code rates reflect high levels of redundancy. Several definitions are provided, as follows.
The Hamming distance d(v1, v2) of two codewords v1 and v2, having the same number n of bits, is defined as the number of positions in which the words v1 and v2 differ, or
$$d(v_1, v_2) = v_1^1 \oplus v_2^1 + v_1^2 \oplus v_2^2 + \cdots + v_1^n \oplus v_2^n, \quad (13.3)$$
where ⊕ denotes modulo-2 addition.

The Hamming weight w(v) of the codeword v is the number of nonzero elements in v. For example, for the codewords
v = (01101101) and u = (10100010) (13.4)
the Hamming weights and distance are
w(v) = w(01101101) = 5, w(u) = w(10100010) = 3,
d(u, v) = d(01101101, 10100010) = 6. (13.5)
The minimum distance d(C) of a code C is the minimum of all Hamming distances between distinct codewords, i.e., d(C) = min_{i≠j} d(v_i, v_j). The minimum distance is found by taking a pair of codewords, determining the distance between them, and then repeating this for all pairs of different codewords. The smallest value obtained is the minimum distance of the code. It is easy to verify that both of the codes given in Example 13.1.2.2 and in Table 13.1 have minimum distance 2.
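Hamming weight and distance, as in Eqs. (13.3)–(13.5), can be computed with two small helpers (names ours); the example codewords of Eq. (13.4) give the values stated in Eq. (13.5):

```python
def hamming_weight(v):
    """Number of nonzero elements of a codeword."""
    return sum(v)

def hamming_distance(u, v):
    """Number of positions in which u and v differ (Eq. (13.3))."""
    return sum(a ^ b for a, b in zip(u, v))
```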
In coding theory, codes whose encoding and decoding operations may be expressed in terms of linear operations are called linear codes. A block code is said to be a linear code if the modulo-2 sum of any two codewords gives another codeword of that code. Hence, if c_i and c_j are codewords of a linear code, then c_k = c_i ⊕ c_j is also a codeword, where ⊕ is the sign of modulo-2 addition.
A linear code C with code length n, m information symbols, and minimum distance d is said to be an [n, m, d] linear code. We will refer to any code C that maps m message bits to n-bit codewords with distance d as an (n, m, d) code. Hence, a linear code of dimension m contains 2^m codewords.
A linear code has the following properties:
• The all-zero word (0 0 0 · · · 0) is always a codeword.
• A linear code can be described by a set of linear equations, usually in the shape of a single matrix, called the parity check matrix. That is, for any [n, k, d] linear code C, there exists an (n − k) × n matrix P such that c ∈ C ⇔ cP^T = 0.
• For any three codewords c_i, c_j, and c_k such that c_k = c_i ⊕ c_j, the distance between the two codewords equals the weight of their sum, i.e., d(c_i, c_j) = w(c_k).
• The minimum distance of the code is d_min = w_min, where w_min is the weight of the nonzero codeword with the smallest weight.
The third property is of particular importance because it enables the minimum distance to be found quite easily. For an arbitrary block code, the minimum distance is found by considering the distances between all codewords. However, with a linear code, we only need to evaluate the weight of every nonzero codeword. The minimum distance of the code is then given by the smallest weight obtained. This is much quicker than considering the distances between all codewords. Because an [n, m, d] linear code encodes a message of length m as a codeword of length n, the redundancy of a linear [n, m, d] code is n − m.
13.1.3 How to create a linear code

Let S be a set of vectors from a vector space, and let (S) be the set of all linear combinations of vectors from S. Then, for any subset S of a linear space, (S) is a linear space that consists of the following words: (1) the zero word, (2) all words in S, and (3) all sums of two or more words in S.
Example 13.1.3.1: Let S = {v1, v2, v3, v4, v5} = {01001, 11010, 11100, 00110, 10101}. Then, we obtain
(S ) = {v0, v1, v2, v3, v4, v5, v1 ⊕ v2, v1 ⊕ v3, v1 ⊕ v4, v1 ⊕ v5, v2 ⊕ v3,
v2 ⊕ v4, v2 ⊕ v5, v3 ⊕ v4, v3 ⊕ v5, v4 ⊕ v5}, (13.6)
or
(S ) = {00000, 01001, 11010, 11100, 00110, 10101, 10011, 10101
01111, 11100, 00110, 11100, 01111, 11010, 01001, 10011}. (13.7)
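The span computation above can be sketched as follows; the closure loop and names are ours. The 16 combinations listed in Eq. (13.7) collapse to only 8 distinct words because v5 = v1 ⊕ v3 and v4 = v2 ⊕ v3:

```python
# Close the set S under bitwise XOR to obtain the span (S).
S = ["01001", "11010", "11100", "00110", "10101"]

span = {"00000"}
changed = True
while changed:
    changed = False
    for v in S:
        for w in list(span):
            u = "".join("1" if a != b else "0" for a, b in zip(v, w))
            if u not in span:
                span.add(u)
                changed = True

print(len(span))  # 8 distinct codewords
```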
The advantages of linear codes are as follows:

• The minimal distance d(C) is easy to compute if C is an [n, m, d] linear code.
• Linear codes provide an easy description of detectable and correctable errors; i.e., to specify a linear [n, m] code, it is enough to list m codewords.
• There are simple encoding/decoding procedures for linear codes.
• Error probabilities and other properties are easy to compute.
• Several families of linear codes are known.
• It is easy to choose one for an application.
Definition: A k × n matrix whose rows form a basis of a linear [n, k] code (subspace) C is said to be a generator matrix of C.
Example 13.1.3.2: From the base (generator) 3 × 4 matrix

    [ 0 0 1 1
      0 1 0 1
      1 0 0 1 ],   (13.8)

we obtain the following code (codewords):

    0 0 0 0
    0 0 1 1
    0 1 0 1
    1 0 0 1
    0 1 1 0
    1 0 1 0
    1 1 0 0
    1 1 1 1.   (13.9)
The 4 × 7 base matrix

    [ 1 1 1 1 1 1 1
      1 0 0 0 1 0 1
      1 1 0 0 0 1 0
      0 1 1 0 0 0 1 ]   (13.10)

generates the following codewords:

    0 0 0 0 0 0 0
    1 1 1 1 1 1 1
    1 0 0 0 1 0 1
    1 1 0 0 0 1 0
    0 1 1 0 0 0 1
    0 1 1 1 0 1 0
    0 0 1 1 1 0 1
    1 0 0 1 1 1 0
    0 1 0 0 1 1 1
    1 1 1 0 1 0 0
    1 0 1 0 0 1 1
    1 0 1 1 1 0 0
    0 0 0 1 0 1 1
    0 1 0 1 1 0 0
    0 0 1 0 1 1 0
    1 1 0 1 0 0 1.   (13.11)
Theorem 13.1.3.1: Let C be a binary (n, k, d) code. Then, k ≤ 2^(n−d+1).

Proof: For a codeword c = (a1, a2, . . . , an), define c' = (ad, ad+1, . . . , an); that is, cut out the first d − 1 places of c. If c1 ≠ c2 are any two codewords of the (n, k, d) code, then they differ in at least d places. Because c'1 and c'2 are obtained from c1 and c2 by cutting only d − 1 entries, c'1 and c'2 differ in at least one place; hence, c'1 ≠ c'2. Therefore, the number k of codewords in C is at most the number of vectors c' obtained in this way. Because there are at most 2^(n−d+1) vectors c', we have k ≤ 2^(n−d+1).
If the distance of a code is large, and if not too many codeword bits are corrupted by the channel (more precisely, if fewer than d/2 bits are flipped), then we can uniquely decode the corrupted codeword by picking the codeword with the smallest Hamming distance from it. Note that for this unique decoding to work, it must be the case that fewer than d/2 errors are caused by the channel.
13.1.4 Hadamard code

The Hadamard code is one of the family of [2^n, 2^(n+1), 2^(n−1)] codes. Recall that such a code is a subset of binary vectors described by the following parameters: n is the code length, d is the minimal distance between distinct codewords, and M is the number of codewords (the capacity of the code).
Theorem 13.1.4.1: (Peterson22) If there is an n × n Hadamard matrix, then abinary code with 2n code vectors of length n and minimum distance n/2 exists.
The [n, 2n, n/2] Hadamard code construction algorithm:

Input: A normalized Hadamard matrix Hn of order n.
Step 1. Generate the matrix C2n:

    C2n = [ Hn
           −Hn ].   (13.12)

Step 2. Generate the set of 2n vectors vi and −vi, i = 1, 2, . . . , n, where vi is a row of the matrix Hn.
Step 3. Generate the codewords, replacing +1 with 0 and −1 with 1.
Output: Codewords, i.e., the set of 2n binary vectors of length n.
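The construction steps above can be sketched as follows, assuming a Sylvester–Hadamard matrix as input; sylvester() and hadamard_code() are our own names:

```python
# Build the [n, 2n, n/2] Hadamard code and check its minimum distance.
import itertools

def sylvester(n):
    """Sylvester-Hadamard matrix of order n (a power of two), entries +/-1."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def hadamard_code(n):
    H = sylvester(n)
    rows = H + [[-x for x in row] for row in H]   # stack H over -H (Step 2)
    # Step 3: map +1 -> 0 and -1 -> 1.
    return [tuple(0 if x == 1 else 1 for x in row) for row in rows]

C = hadamard_code(8)
dists = {sum(a != b for a, b in zip(u, v))
         for u, v in itertools.combinations(C, 2)}
print(len(C), min(dists))  # 16 codewords, minimum distance 4
```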
Example 13.1.4.1: The [8, 16, 4] Hadamard code. This code is obtained from the Sylvester–Hadamard matrix of order 8:

    H8 = [ + + + + + + + +
           + − + − + − + −
           + + − − + + − −
           + − − + + − − +
           + + + + − − − −
           + − + − − + − +
           + + − − − − + +
           + − − + − + + − ]  ⇒  [ H8
                                  −H8 ],   (13.13)
or, the codewords of Hc8 are

    Hc8 = [ 1 1 1 1 1 1 1 1
            1 0 1 0 1 0 1 0
            1 1 0 0 1 1 0 0
            1 0 0 1 1 0 0 1
            1 1 1 1 0 0 0 0
            1 0 1 0 0 1 0 1
            1 1 0 0 0 0 1 1
            1 0 0 1 0 1 1 0 ].   (13.14)
We changed the encoding alphabet from {−1, 1} to {0, 1}. It is also possible tochange the encoding alphabet from {−1, 1} to {1, 0}.
The codewords of C16 are

    1111 1111, 1010 1010, 1100 1100, 1001 1001,
    1111 0000, 1010 0101, 1100 0011, 1001 0110,
    0000 0000, 0101 0101, 0011 0011, 0110 0110,
    0000 1111, 0101 1010, 0011 1100, 0110 1001.
Properties: Let vi be the i'th row of the matrix C2n. It is not difficult to show the following:

• This code has 2n codewords of length n.
• d(vi, −vi) = n and d(vi, vj) = d(−vi, −vj) = n/2 for i ≠ j, i, j = 1, 2, . . . , n, which means that the minimum distance between any distinct codewords is n/2. Hence, the constructed code has minimal distance n/2; it corrects n/4 − 1 errors in an n-bit encoded block and also detects n/4 errors.
• The Hadamard codes are optimal for the Plotkin distance bound (see more detail in the next section).
• The Hadamard codes are self-dual.
• Let e be a vector of 1s and −1s of length n. If the vector e differs from vi
  (a) in at most n/4 − 1 positions, then it differs from vj in at least n/4 + 1 positions, whenever i ≠ j;
  (b) in n/4 positions, then it differs from vj in at least n/4 positions.
• The Hadamard code of length 2^n has an (n + 1) × 2^n rectangular generator matrix with 0, 1 elements. A Hadamard code of length 16 is based on the 5 × 2^4 generator matrix of the form
Figure 13.5 Matrix of the Hadamard code (32, 6, 16) for the NASA space probe Mariner9 (from Ref. 20).
    G16 = [ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
            1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
            1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
            1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0
            1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 ].   (13.15)
Note that the corresponding code encodes blocks of length five to blocks of length 16. In practice, columns 1, 9, 5, 3, and 2 of matrix G16 form a basis for the code. Every codeword from C16 is representable as a unique linear combination of basis vectors. The generator matrix of the (32, 6, 16) Hadamard code (based on a Hadamard matrix of order 16) is a 6 × 32 rectangular matrix. The technical characteristics of this code are: (1) codewords are 32 bits long, and there are 64 of them; (2) the minimum distance is 16; and (3) it can correct seven errors. This code was used on NASA's Mariner 9 space mission (see Fig. 13.5).
The decoding algorithm is also simple:

Input: The received (0, 1) signal vector v of length n (n must be divisible by 4, i.e., n is the order of a Hadamard matrix).
Step 1. Replace each 0 of the received signal v by −1 and each 1 by +1. Denote the resulting vector y.
Step 2. Compute the n-point FHT u = Hy^T of the vector y.
Step 3. Find the coefficient ui of the vector u that is maximal in absolute value.
Step 4. Apply the decision rule as follows:
  (a) If the transform coefficient ui is positive, then take the i'th codeword.
  (b) If the transform coefficient ui is negative, then take the (i + n)'th codeword.
Step 5. Change all −1s to 0s and all +1s to 1s.
Output: The codeword that was sent.
An example of the decoding algorithm:

Input: Assume that a = (00011010) is the received word.
Step 1. Generate the vector v by changing all 0s to −1s and all 1s to +1s:

    a = (00011010) → v = (−1, −1, −1, 1, 1, −1, 1, −1).   (13.16)

Step 2. Calculate the eight-point HT s = H8v^T:

    s = H8v^T = (−2, 2, −2, 2, −2, −6, −2, 2).   (13.17)

Step 3. Find the component with the largest absolute value, i.e., s6 = −6.
Step 4. Apply the decision rule: since s6 is negative, the 8 + 6 = 14'th codeword was sent, i.e., (−1, 1, −1, 1, 1, −1, 1, −1).
Step 5. Change all −1s to 0s and all +1s to 1s.
Output: Codeword that was sent: 01011010.
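The decoding steps above can be sketched as follows for the same received word; sylvester() builds the Sylvester–Hadamard matrix, and the other names are ours:

```python
# FHT-based decoding of a received Hadamard codeword.
def sylvester(n):
    """Sylvester-Hadamard matrix of order n (a power of two), entries +/-1."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def decode(bits):
    n = len(bits)
    H = sylvester(n)
    v = [1 if b == 1 else -1 for b in bits]                # Step 1: 0 -> -1, 1 -> +1
    s = [sum(h * x for h, x in zip(row, v)) for row in H]  # Step 2: s = H v^T
    i = max(range(n), key=lambda k: abs(s[k]))             # Step 3: largest |s_i|
    row = H[i] if s[i] > 0 else [-x for x in H[i]]         # Step 4: sign picks v_i or -v_i
    return [0 if x == -1 else 1 for x in row]              # Step 5: back to {0, 1}

print(decode([0, 0, 0, 1, 1, 0, 1, 0]))  # [0, 1, 0, 1, 1, 0, 1, 0]
```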
Remarks:

• It can be shown that the Hadamard code is the first-order Reed–Muller code in the case q = 2. These codes are among the oldest error-correcting codes; they were invented independently in 1954 by Muller and Reed. Reed–Muller codes are relatively easy to decode, first-order codes are especially efficient, and they were the first large class of codes to correct more than a single error. From time to time, the Reed–Muller code is used in magnetic data storage systems.
• The Sylvester–Hadamard matrix codes are all linear codes.
• It is possible to construct codes based on a normalized Hadamard matrix Hn by replacing in Hn each element +1 by 0 and each −1 by 1 (denote the result Qn). For instance,
  • the [n − 1, n, n/2] code An consisting of the rows of Qn with the first column deleted;
  • the [n − 1, 2n, n/2 − 1] code Bn consisting of the rows of An and their complements;
  • the [n, 2n, n/2] code Cn consisting of the rows of Qn and their complements.
• In general, Hadamard codes are not necessarily linear codes. A Hadamard code can be made linear by forming a code with the generator matrix (In, Hn), where Hn is a binary Hadamard matrix of order n.
Figure 13.6 Graphical representation of the Hadamard code with generator matrix (I4,H4).
13.1.5 Graphical representation of the (7, 3, 4) Hadamard code

Example:40 Fig. 13.6 shows a graphical representation of the (7, 3, 4) 2-Hadamard code with generator matrix (I4, H4), where H4 is a binary Hadamard matrix of order 4:

    [ 1 0 0 0 0 1 1
      0 1 0 0 1 0 1
      0 0 1 0 1 1 0
      0 0 0 1 1 1 1 ].   (13.18)
It can be verified that the minimum distance of this code is at least 3. In thisrepresentation, the left nodes (right nodes) are called the “variable nodes” (“checknodes”). Thus, the code is defined as the set of all binary settings on the variablenodes such that for all check nodes, the sum of the settings of the adjacent variablenodes is zero.
Indeed, the minimum distance is not one; otherwise, there would be a variable node that is not connected to any check node, contradicting the fact that the degree of each variable node is larger than one. Suppose that the minimum distance is two, and assume that the minimum-weight word is (1, 1, 0, . . . , 0). Consider the subgraph induced by the first two variable nodes. All check nodes in this subgraph must have even degree (or else they would not be satisfied). Moreover, there are at least two check nodes of degree greater than zero in this subgraph, since the degrees of the variable nodes are assumed to be greater than or equal to two. Then, the graph formed by the first two variable nodes and these two check nodes is a cycle of length four, contrary to the assumption.
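The minimum-distance claim can also be checked by enumerating all 16 codewords generated by the matrix of Eq. (13.18); the names below are ours:

```python
# Enumerate the code generated by (I4, H4) and find its minimum distance.
from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

words = set()
for c in product([0, 1], repeat=4):
    # Codeword bit j is the GF(2) inner product of c with column j of G.
    words.add(tuple(sum(ci * gi for ci, gi in zip(c, col)) % 2
                    for col in zip(*G)))

print(len(words), min(sum(w) for w in words if any(w)))  # 16 3
```

For a linear code, the minimum distance equals the minimum nonzero weight, so the output confirms that the distance is at least 3.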
13.1.6 Levenshtein constructions
The minimum distance between any pair of codewords in a code cannot be largerthan the average distance between all pairs of different codewords. Using thisobservation, Plotkin found the upper bound for the minimum distance of a linearcode with respect to the Hamming distance. For 0 < d < n, let A(n, d) denote themaximum possible number of codewords in a binary block code of length n andminimum (Hamming) distance d. Note that if d is odd, then C is an (n, k, d) code if
and only if the code C′ obtained by adding a parity-check bit to each codeword inC is an (n+ 1, k, d + 1) code. Therefore, if d is even, then A(n, d) = A(n− 1, d − 1).The challenge here is to understand the behavior of A(n, d) for the case when d iseven.18,23
In 1965, Plotkin18 gave a simple counting argument that leads to an upper bound B(n, d) for A(n, d) when n < 2d. The following also holds:

• If d ≤ n < 2d, then A(n, d) ≤ B(n, d) = 2[d/(2d − n)].
• If n = 2d, then A(n, d) ≤ B(n, d) = 4d.
Levenshtein13 proved that if Hadamard's conjecture is true, then Plotkin's bound is sharp. Let Qn be a binary matrix obtained from a normalized Hadamard matrix of order n by replacement of +1 by 0 and −1 by 1. It is clear that the matrix Qn allows design of the following Hadamard codes:

• the (n − 1, n, n/2) code An consisting of the rows of the matrix Qn without the first column of Qn;
• the (n − 1, 2n, n/2 − 1) code Bn consisting of the codewords of the code An and their complements;
• the (n, 2n, n/2) code Cn consisting of the rows of the matrix Qn and their complements.
In Ref. 13, it was proved that if the suitable Hadamard matrices exist, then the Plotkin bounds have the following form:

• If d is an even number, then

    M(n, d) = 2[d/(2d − n)],  n < 2d,
    M(2d, d) = 4d.   (13.19)

• If d is an odd number, then

    M(n, d) = 2[(d + 1)/(2d + 1 − n)],  d ≤ n < 2d + 1,
    M(2d + 1, d) = 4d + 4.   (13.20)
Now we shall turn to a method of construction of the maximal codes. A square (0, 1) matrix of order m is called a correct matrix13 if the Hamming distance between any two distinct rows equals m/2. It can be shown that a correct matrix of order m exists if and only if a Hadamard matrix of order m exists. We will call k a correct number if a correct matrix of order 4k exists.

Let us introduce the following notation: Am is a correct matrix of order m whose last column consists of zeros; A¹m is the matrix obtained from Am by removing the last (zero) column; and A²m is the matrix obtained from Am by removing the last two columns and all rows that have a zero in the penultimate column. The conditions and formulas of construction of the maximal codes for given n and d are displayed in Table 13.2.
Table 13.2 Conditions and formulas of construction of the maximal codes for the given n and d.

    d     d/(2d − n)   k = [d/(2d − n)]   Correct numbers    Code
    Even  Fraction     —                  k and k + 1        (a/2)A²4k ◦ (b/2)A²4(k+1)
    Odd   Fraction     Even               k/2 and k + 1      aA¹2k ◦ (b/2)A²4(k+1)
    Odd   Fraction     Odd                k and (k + 1)/2    (a/2)A²4k ◦ bA¹2(k+1)
    Even  Integer      —                  k                  (a/2)A²4k
    Odd   Integer      Even               k/2                aA¹2k

In this table, ◦ denotes matrix concatenation, and a and b are defined by the following system:

    ka + (k + 1)b = d,
    (2k − 1)a + (2k + 1)b = n.   (13.21)
Example 13.1.6.1: Construction of a maximal equidistant code with parameters n = 13, d = 8, M = 4.

One can verify that d/(2d − n) = 8/3 is a fractional number and that k = 2. These parameters correspond to the second row of Table 13.2. There are also correct matrices of orders 2k = 4 and 4(k + 1) = 12, obtained from Hadamard matrices of orders 4 and 12. Solving the above linear system, we find that a = 1, b = 2. Hence, the code found can be represented as A¹4 ◦ A²12.
Consider the Hadamard matrix of order 4 whose last column consists of +1s:

    H4 = [ + + + +
           − + − +
           − − + +
           + − − + ].   (13.22)

Hence, according to the definition, we find that

    A4 = [ 0 0 0 0        A¹4 = [ 0 0 0
           1 0 1 0                1 0 1
           1 1 0 0                1 1 0
           0 1 1 0 ],             0 1 1 ].   (13.23)
For the construction of the A²12 matrix, consider the Williamson–Hadamard matrix H12 and the corresponding matrix H+12 of order 12 (obtained by negating the rows of H12 whose last entry is −1, so that the last column of H+12 consists of +1s):

    H12 = [ + + + + − − + − − + − −
            + + + − + − − + − − + −
            + + + − − + − − + − − +
            − + + + + + − + + + − −
            + − + + + + + − + − + −
            + + − + + + + + − − − +
            − + + + − − + + + − + +
            + − + − + − + + + + − +
            + + − − − + + + + + + −
            − + + − + + + − − + + +
            + − + + − + − + − + + +
            + + − + + − − − + + + + ],

    H+12 = [ − − − − + + − + + − + +
             − − − + − + + − + + − +
             + + + − − + − − + − − +
             + − − − − − + − − − + +
             − + − − − − − + − + − +
             + + − + + + + + − − − +
             − + + + − − + + + − + +
             + − + − + − + + + + − +
             − − + + + − − − − − − +
             − + + − + + + − − + + +
             + − + + − + − + − + + +
             + + − + + − − − + + + + ].   (13.24)
Hence, according to the definition, from H+12 we find that

    A12 = [ 1 1 1 1 0 0 1 0 0 1 0 0
            1 1 1 0 1 0 0 1 0 0 1 0
            0 0 0 1 1 0 1 1 0 1 1 0
            0 1 1 1 1 1 0 1 1 1 0 0
            1 0 1 1 1 1 1 0 1 0 1 0
            0 0 1 0 0 0 0 0 1 1 1 0
            1 0 0 0 1 1 0 0 0 1 0 0
            0 1 0 1 0 1 0 0 0 0 1 0
            1 1 0 0 0 1 1 1 1 1 1 0
            1 0 0 1 0 0 0 1 1 0 0 0
            0 1 0 0 1 0 1 0 1 0 0 0
            0 0 1 0 0 1 1 1 0 0 0 0 ],

    A²12 = [ 1 1 1 0 1 0 0 1 0 0
             0 0 0 1 1 0 1 1 0 1
             1 0 1 1 1 1 1 0 1 0
             0 0 1 0 0 0 0 0 1 1
             0 1 0 1 0 1 0 0 0 0
             1 1 0 0 0 1 1 1 1 1 ].   (13.25)
Hence, the codewords of the maximal equidistant code A¹4 ◦ A²12 with the parameters n = 13, d = 8, M = 4 are represented as

    A¹4 ◦ A²12 = [ 0 0 0 1 1 1 0 1 0 0 1 0 0
                   1 0 1 0 0 0 1 1 0 1 1 0 1
                   1 1 0 1 0 1 1 1 1 1 0 1 0
                   0 1 1 0 0 1 0 0 0 0 0 1 1 ].   (13.26)
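The equidistance of the constructed code can be verified directly; the four strings below are the rows of Eq. (13.26), and the names are ours:

```python
# Check that all pairwise Hamming distances of the code (13.26) equal d = 8.
from itertools import combinations

code = ["0001110100100",
        "1010001101101",
        "1101011111010",
        "0110010000011"]

dists = {sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2)}
print(dists)  # {8}: every pair of codewords is at distance exactly 8
```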
Figure 13.7 Block diagram of a typical multiple-access communication system.
13.1.7 Uniquely decodable base codes

Multiple-user communication systems were first studied by Shannon in 1961.21 Multiple-input, multiple-output (MIMO) systems provide a number of advantages over single-antenna-to-single-antenna communication. The advantages of multiple-user communication, exploiting the physical channel between many transmitters and receivers, are currently receiving significant attention.24–27 Fig. 13.7 presents a block diagram of a typical multiple-access communication system in which T statistically independent sources attempt to transmit data to T separate destinations over a common discrete memoryless channel. The T messages emanating from the T sources are encoded independently according to block codes C1, C2, . . . , CT of the same length N.10

The idea behind "uniquely decodable" codes is the following: several users transmit simultaneously, each input symbol is represented by one output symbol, and the receiver observes only the combined (summed) message. How can the original inputs be recovered?
Let Ci, i = 1, 2, . . . , k, be sets of (0, 1) vectors of length n. The set (C1, C2, . . . , Ck) is called a k-user code of length n. In Ref. 10, (C1, C2, . . . , Ck) is called a uniquely decodable code with k users if any vectors Ui, Vi ∈ Ci, i = 1, 2, . . . , k, with (U1, . . . , Uk) ≠ (V1, . . . , Vk) satisfy the condition (the Ci are called components of the code)

    U1 + U2 + · · · + Uk ≠ V1 + V2 + · · · + Vk.   (13.27)
Next, we consider a uniquely decodable base code, in which the individual components contain only two codewords.10 Let (C1, C2, . . . , Ck) be a uniquely decodable base code, i.e., Ci = {Ui, Vi}. The vector di = Ui − Vi is called the difference vector of the component Ci, and the matrix D = (d1, d2, . . . , dk)^T is called the difference matrix of the code (C1, C2, . . . , Ck).

Let Ui = (ui^1, ui^2, . . . , ui^n) and Vi = (vi^1, vi^2, . . . , vi^n), i = 1, 2, . . . , k. For a given difference matrix D, the codewords of the components Ci can be defined as

    ui^j = 0 if di^j = 0 or di^j = −1,   ui^j = 1 if di^j = 1;
    vi^j = 0 if di^j = 0 or di^j = 1,    vi^j = 1 if di^j = −1.   (13.28)
In Ref. 10, it is proved that (C1, C2, . . . , Ck) is a uniquely decodable base code if and only if, for any (0, ±1) vector P, the equality PD = 0 holds only for P = 0. Hence, the problem of constructing a uniquely decodable base code with k users of length n reduces to the problem of constructing a (0, ±1) matrix D of dimension k × n whose rows are linearly independent over {0, +1, −1}.
Let us consider a method of construction of uniquely decodable base codes. The difference matrix of such a code is defined by the following recursion:

    Dt = [ Dt−1       Dt−1
           Dt−1      −Dt−1
           I2^(t−1)   O2^(t−1) ],   (13.29)

where D0 = (1) and Om is the zero matrix of order m. Note that the matrix Dt has dimension (t + 2)2^(t−1) × 2^t, i.e., it is the difference matrix of a uniquely decodable base code with (t + 2)2^(t−1) users of length 2^t. Note that in the formula in Eq. (13.29), instead of D0, one can substitute a Hadamard matrix.
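The recursion of Eq. (13.29) can be sketched as follows to confirm the stated dimensions; build_D() is our own name:

```python
# Build the difference matrix D_t of Eq. (13.29) starting from D_0 = (1).
def build_D(t):
    D = [[1]]                         # D_0
    for s in range(1, t + 1):
        half = len(D[0])              # current width 2^(s-1)
        top = [row + row for row in D]                 # [D  D]
        mid = [row + [-x for x in row] for row in D]   # [D -D]
        eye = [[1 if i == j else 0 for j in range(half)] + [0] * half
               for i in range(half)]                   # [I  O]
        D = top + mid + eye
    return D

D3 = build_D(3)
print(len(D3), len(D3[0]))  # 20 8: (3+2)*2^2 users of length 2^3
```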
Now we shall consider the problem of decoding. Let (C1, C2, . . . , Ck) be a uniquely decodable base code of length n, and let Ci = {Ui, Vi}. Y = V1 + V2 + · · · + Vk is called the base vector of the code. Let Xi ∈ Ci be the message of the i'th user, and let us calculate r = X1 + X2 + · · · + Xk. S = r − Y is called the syndrome of the code corresponding to the vector r. Because

    Xi − Vi = di if Xi = Ui,  and  Xi − Vi = 0 if Xi = Vi,

it follows that

    S = Σ_{i=1}^{k} qi di,   (13.30)

where

    qi = 1 if Xi = Ui,  and  qi = 0 if Xi = Vi.   (13.31)
q = (q1, q2, . . . , qk) is called a determining vector of a code. Thus, S = qD andthe decoding problem consists of defining the vector q. The following theoremholds:
Theorem 13.1.7.1: 13 Let there be a uniquely decodable base code with k usersof length n and Williamson matrices of order m. Then, there is also a uniquelydecodable base code with 2mk users of length 2nk.
Let D1 be the difference matrix of a uniquely decodable base code with k users of length n, and let A, B, C, D be Williamson matrices of order m. One can check that

    X ⊗ D1 + Y ⊗ D1   (13.32)

is a difference matrix of the required code, where

    X = (1/2) [ A + B   C + D        Y = (1/2) [ A − B    C − D
                C + D  −A − B ],                −C + D    A − B ].   (13.33)
We now provide an example of a base code with 30 users of length 18. Suppose we have a difference matrix with five users of length three:

    D1 = [ + + +
           + + −
           + − +
           + 0 −
           + − 0 ].   (13.34)
According to Eq. (13.28), we determine the components of this code to be

    C1 = {(111), (000)},  C2 = {(110), (001)},  C3 = {(101), (010)},
    C4 = {(100), (001)},  C5 = {(100), (010)}.   (13.35)
Now, let A and B = C = D be cyclic Williamson matrices of order 3 with first rows(+++) and (++−), respectively. Using Theorem 13.1.7.1, we obtain the followingdifference matrix:
    D2 = [  D1   D1   D1   D1  −D1  −D1
            D1   D1   D1  −D1   D1  −D1
            D1   D1   D1  −D1  −D1   D1
            D1  −D1  −D1  −D1   D1   D1
           −D1   D1  −D1   D1  −D1   D1
           −D1  −D1   D1   D1   D1  −D1 ],   (13.36)
the components of which are (i = 1, 2, 3, 4, 5):

    C¹i    = {(ui, ui, ui, ui, vi, vi); (vi, vi, vi, vi, ui, ui)},
    C¹5+i  = {(ui, ui, ui, vi, ui, vi); (vi, vi, vi, ui, vi, ui)},
    C¹10+i = {(ui, ui, ui, vi, vi, ui); (vi, vi, vi, ui, ui, vi)},
    C¹15+i = {(ui, vi, vi, vi, ui, ui); (vi, ui, ui, ui, vi, vi)},
    C¹20+i = {(vi, ui, vi, ui, vi, ui); (ui, vi, ui, vi, ui, vi)},
    C¹25+i = {(vi, vi, ui, ui, ui, vi); (ui, ui, vi, vi, vi, ui)}.   (13.37)
Denote C¹i = {Ui; Vi}, i = 1, 2, . . . , 30. The base vector of this code is

    Y = (10, 12, 12, 10, 12, 12, 10, 12, 12, 15, 12, 12, 15, 12, 12, 15, 12, 12).   (13.38)
Let the following vectors be sent: Ui, V5+i, U10+i, U15+i, V20+i, U25+i, i = 1, 2, 3, 4, 5. The total vector of this message is

    r = (20, 12, 12, 10, 12, 12, 20, 12, 12, 15, 12, 12, 15, 12, 12, 15, 12, 12).   (13.39)
The syndrome S = r − Y of the vector r has the form
S = {10, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}. (13.40)
Define the vector q from the equation
S = qD2. (13.41)
Denote S = (S1, S2, . . . , S6) and q = (q1, q2, . . . , q6), where the Si and qi are vectors of length three and five, respectively. Now, solving this system with respect to qiD1, we find that q1D1 = (5, 0, 0), q2D1 = (0, 0, 0), q3D1 = (5, 0, 0), q4D1 = (5, 0, 0), q5D1 = (0, 0, 0), q6D1 = (5, 0, 0). Finally, we find the vector q = (11111 00000 11111 11111 00000 11111). From Eq. (13.31), it follows that the following vectors were sent: Ui, V5+i, U10+i, U15+i, V20+i, U25+i, i = 1, 2, 3, 4, 5.
13.1.8 Shortened code construction and application to data coding and decoding

Let (c1, c2, . . . , ck) be a binary information word, let Pi = (pi,1, pi,2, . . . , pi,k), i = 1, 2, . . . , n, be a set of binary vectors, and let the cardinality of the set P be 2^k − 1. The binary words Pi are called code projectors.4,9 Note that the projectors are the columns of the generating matrix of a linear code.9,28 The codeword u = (u1, u2, . . . , un) corresponding to the information word (c1, c2, . . . , ck) is determined as
ui = c1 pi,1 ⊕ c2 pi,2 ⊕ · · · ⊕ ck pi,k, i = 1, 2, . . . , n, (13.42)
where ⊕ is summation modulo 2.

The decoding process is as follows: the decoder receives the codeword u = (u1, u2, . . . , un) and processes it, writing the results into a table. If ui = 0, then the entry of the table corresponding to address Pi increases by 1; if ui = 1, then it decreases by 1. After these operations, we obtain a vector V of length 2^k. Then, the HT HkV is applied, and the maximal positive coefficient of the transform is determined. The address in the decoder table corresponding to this coefficient is the decimal representation of the initial information word (c1, c2, . . . , ck).
As an example, let us consider the coder of repeated projectors for M = 21, k = 3 with information word (0, 1, 1). As projectors, we take the binary representations of the decimal numbers {1, 2, . . . , 7}. Each projector is repeated three times. According to Eq. (13.42), we obtain (111111000000111111000), which is transmitted through the channel. The decoder forms the vector V = (0, −3, −3, +3, +3, −3, −3, +3). Then, we obtain H3V = (−3, −3, −3, +21, −3, −3, −3, −3)^T. Since the maximum element equals +21 and has index 3, the decoder decides that the transmitted codeword was (011). As will be shown next, the constructed code of length 21 corrects five errors. Indeed, let five errors occur during transmission, so that the received codeword is (110111110000111110100), with errors in positions 3, 7, 8, 18, and 19. In this case, the decoder defines the vector
V = (0, −1, −3, −1, +3, −3, −1, +1). Then, computing the spectral vector H3V = (−5, +3, +3, +11, −5, −5, +3, −5), the decoder again concludes that the information word (011) was transmitted.
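The projector encoder/decoder walked through above can be sketched as follows for k = 3, r = 3; sylvester() builds the order-2^k Sylvester–Hadamard matrix, and the other names are ours:

```python
# Projector code: encode per Eq. (13.42), decode via the Hadamard transform.
def sylvester(n):
    """Sylvester-Hadamard matrix of order n (a power of two), entries +/-1."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

k, r = 3, 3
projectors = [p for p in range(1, 2 ** k) for _ in range(r)]  # 1..7, each thrice

def encode(word):
    c = int("".join(map(str, word)), 2)
    # u_i = inner product of the info word with projector P_i over GF(2).
    return [bin(c & p).count("1") % 2 for p in projectors]

def decode(u):
    V = [0] * 2 ** k
    for p, bit in zip(projectors, u):
        V[p] += 1 if bit == 0 else -1        # +1 for a 0 bit, -1 for a 1 bit
    s = [sum(h * v for h, v in zip(row, V)) for row in sylvester(2 ** k)]
    return max(range(2 ** k), key=lambda i: s[i])   # maximal positive coefficient

recv = [1,1,0,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0,1,0,0]  # codeword with 5 bit errors
print(decode(encode((0, 1, 1))), decode(recv))  # 3 3: both decode to word 011
```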
Let r be the number of repetitions of each projector, and let k be the information word length. In Ref. 13, it is proved that the above-constructed code corrects up to

    t = 2^(k−2) r − 1   (13.43)

errors.

Suppose that there are 2^k − 1 projectors, each repeated r − 1 times. Then, the length of the codewords is M1 = (2^k − 1)(r − 1), and by Eq. (13.43), this code corrects t1 = 2^(k−2)(r − 1) − 1 errors. However, if each of the projectors is repeated r times, the codeword length is M2 = (2^k − 1)r, and that code corrects t2 = 2^(k−2) r − 1 errors. It is of interest to build an optimal code with a minimal length M (M1 < M < M2) that corrects t errors, t1 < t < t2.

Let d = 2^m (m < k) be the number of projectors with reduced repetition. In Ref. 9, it is shown that the shortened projection code of length M = (2^k − 1)r − 2^m corrects t = 2^(k−2) r − 2^(m−2) − 1 errors. Note that m = [log2(2^(k−2) r − t − 1)] + 2.
Now we give an example. Let the information word (011) be transmitted. The repetition is r = 3, and it is necessary to correct four errors. From the previous formula, we obtain m = 2; hence, for d = 2^2 = 4 projectors, the number of repetitions must be reduced by one. As was shown above, if the repetition is reduced by one for the first 2^m projectors (those with the smallest values), the resulting code is optimal. The coder forms the following shortened code: (11110000111111000) of length 17. If no errors occur in the channel, the decoder receiving the codeword will determine the vector V = (0, −2, −2, 2, 2, −3, −3, 3). Furthermore, we obtain H3V = (−3, −3, −3, 17, −1, −1, −1, −5)^T. We see that the maximal coefficient 17 correctly identifies the information word (011).
Now, suppose that four errors occur in the channel, in positions 1, 5, 6, and 15, so that the received codeword is (01111100111111100). In this case, the decoder will determine the vector V = (0, 0, −2, −2, 2, −3, −3, 1). Next, we obtain H3V = (−7, 1, 5, 9, −1, −1, 3, −9)^T, which means that the maximal coefficient 9 still correctly identifies the transmitted information word. Thus, using a code of length 17, four errors can be corrected.
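The shortened construction can be sketched the same way, dropping one repetition of the first 2^m projectors; all names are ours:

```python
# Shortened projector code: k = 3, r = 3, first 2^m = 4 projectors repeated r-1 times.
def sylvester(n):
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

k, r, m = 3, 3, 2
projectors = [p for p in range(1, 2 ** k)
              for _ in range(r - 1 if p <= 2 ** m else r)]  # length 17 codewords

def encode(word):
    c = int("".join(map(str, word)), 2)
    return [bin(c & p).count("1") % 2 for p in projectors]

def decode(u):
    V = [0] * 2 ** k
    for p, bit in zip(projectors, u):
        V[p] += 1 if bit == 0 else -1
    s = [sum(h * v for h, v in zip(row, V)) for row in sylvester(2 ** k)]
    return max(range(2 ** k), key=lambda i: s[i])

recv4 = [0,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0]   # 17-bit word with 4 bit errors
print(len(encode((0, 1, 1))), decode(recv4))  # 17 3
```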
Experiments were performed on grayscale and color images, the results of which are given in Table 13.3. For an eight-bit 256 × 256 image, a total of 255 projectors is required. The first 2^6 = 64 projectors were repeated two times each and the others three times; i.e., the codeword length is 701, and, therefore, the resulting code can correct all combinations of t = 2^6 · 3 − 2^4 − 1 = 175 errors.

Furthermore, t ≥ 175 errors were introduced into the codeword using a pseudo-random number generator. The encoding results are given in Table 13.3, where "Err. num." stands for the number of errors, and a "+" in the "M. filter" column indicates that median filtering was performed after decoding. A similar trend is observed for other types of signals as well.
Table 13.3 Encoding results.

    No.  Err. num.  M. filter  MSE       PSNR
    1    0–175      −          0         Infinity
    2    200        −          0.00000   Infinity
    3    210        −          0.00024   84.25
    4    220        −          0.057     60.53
    5    230        −          8.65      38.76
    6    240        −          35.91     32.57
    7    250        −          143.21    26.57
    8    250        +          67.16     29.86
    9    260        −          476.26    21.33
    10   260        +          75.58     29.34
    11   270        −          1186.22   17.39
    12   270        +          104.48    27.94
    13   280        −          2436.11   14.26
    14   280        +          166.03    25.92
13.2 Space–Time Codes from Hadamard Matrices
13.2.1 The general wireless system model
Consider a mobile communication system where a base station is equipped with n antennas and the mobile unit is equipped with m antennas. Data are encoded by the channel encoder. Then, the encoded data are passed through a serial-to-parallel converter and divided into n streams. Each stream of data is used as input to a pulse shaper, and the output of each shaper is then modulated. At each time slot t, the output of modulator i is a signal ct,i transmitted using antenna i, for i = 1, 2, . . . , n. We assume that the n signals are transmitted simultaneously, each from a different transmitter antenna, and that all of these signals have the same transmission period T. The signal at each receiving antenna is a noisy superposition of the n transmitted signals corrupted by Rayleigh or Rician fading.24–27,29–39 We also assume that the elements of the signal constellation are contracted by a factor of √Es, chosen so that the average energy of the constellation is 1. The signal rt,j received by antenna j at time t is given by

    rt,j = Σ_{i=1}^{n} αi,j ct,i + nt,j,   (13.44)

where the noise nt,j at time t is modeled as independent samples of a zero-mean complex Gaussian random variable with variance (1/2)N0 per dimension. The coefficient αi,j is the path gain from transmitting antenna i to receiving antenna j; it is modeled as independent samples of a zero-mean complex Gaussian random variable with variance 0.5 per dimension. It is assumed that these path gains are constant during a frame and vary from one frame to another (i.e., quasi-static flat fading).
Figure 13.8 Two-branch transmit diversity scheme with two transmitting and one receivingantenna.
Assuming that perfect channel state information is available, the receiver computes the decision metric

\sum_{t=1}^{l} \sum_{j=1}^{m} \left| r_{t,j} - \sum_{i=1}^{n} \alpha_{i,j} c_{t,i} \right|^2   (13.45)
over all codewords c_{t,1}, c_{t,2}, ..., c_{t,n}, t = 1, 2, ..., l, and decides in favor of the codeword that minimizes the sum in Eq. (13.45).
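The decision rule above can be made concrete with a brute-force search over a toy codebook. This is an illustrative sketch, not the text's algorithm: the code, sizes, and the noiseless channel are assumptions made so the minimizer is easy to verify.

```python
import itertools
import numpy as np

# Brute-force maximum-likelihood decoding with the metric of Eq. (13.45)
# for an assumed toy code: all BPSK matrices of shape (l, n).
rng = np.random.default_rng(1)
l, n, m = 2, 2, 1
alpha = (rng.standard_normal((n, m)) +
         1j * rng.standard_normal((n, m))) * np.sqrt(0.5)

true_c = np.array([[1.0, -1.0], [1.0, 1.0]])   # transmitted codeword c[t, i]
r = true_c @ alpha                             # noiseless received signal

def metric(c, r, alpha):
    """Decision metric sum_t sum_j |r[t,j] - sum_i alpha[i,j] c[t,i]|^2."""
    return np.sum(np.abs(r - c @ alpha) ** 2)

codebook = [np.array(w).reshape(l, n)
            for w in itertools.product([-1.0, 1.0], repeat=l * n)]
best = min(codebook, key=lambda c: metric(c, r, alpha))
```

With no noise, the true codeword attains metric zero and is returned by the search.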
After several mathematical manipulations, we see that the problem of obtaining the c_{t,i} that minimize the expression in Eq. (13.45) leads to minimization of the following expression (x^* is the conjugate of x):
\sum_{t=1}^{l} \sum_{j=1}^{m} \left[ \sum_{i=1}^{n} \left( r_{t,j} \alpha_{i,j}^{*} c_{t,i}^{*} + r_{t,j}^{*} \alpha_{i,j} c_{t,i} \right) - \sum_{i,k=1}^{n} \alpha_{i,j} \alpha_{k,j}^{*} c_{t,i} c_{t,k}^{*} \right].   (13.46)
Note that the l × n matrix C = (c_{i,j}) is called the coding matrix. More complete information about wireless systems and space-time codes can be found in Refs. 39-53 and 57.
We examine the two-branch transmit diversity scheme with two transmitting and one receiving antenna shown in Fig. 13.8. This scheme may be defined by the following three functions: (1) the encoding and transmission sequence of information symbols at the transmitter, (2) the combining scheme at the receiver, and (3) the decision rule for maximum-likelihood detection.
(1) The encoding and transmission sequence: At a given symbol period T, two signals are simultaneously transmitted from the two antennas. The signal transmitted from antenna 0 is denoted by x_0 and that from antenna 1 by x_1. During the next symbol period, signal -x_1^* is transmitted from antenna 0, and signal x_0^* is transmitted from antenna 1. Note that the encoding is done in space and time
(space-time coding). The channel at time t may be modeled by a complex multiplicative distortion h_0(t) for transmit antenna 0 and h_1(t) for transmit antenna 1. Assuming that the fading is constant across two consecutive symbols, we can write
h_0(t) = h_0(t + T) = h_0 = \alpha_0 \exp(j\theta_0),
h_1(t) = h_1(t + T) = h_1 = \alpha_1 \exp(j\theta_1),   (13.47)
where T is the symbol duration. The received signals can be represented as follows:
r_0 = r(t) = h_0 x_0 + h_1 x_1 + n_0,
r_1 = r(t + T) = -h_0 x_1^* + h_1 x_0^* + n_1,   (13.48)
where r_0 and r_1 are the received signals at times t and t + T, and n_0 and n_1 are complex random variables representing receiver noise and interference.
(2) The combining scheme: The combiner shown in Fig. 13.8 builds the following two combined signals that are sent to the maximum-likelihood detector:

x_0 = h_0^* r_0 + h_1 r_1^*,
x_1 = h_1^* r_0 - h_0 r_1^*.   (13.49)
(3) The maximum-likelihood decision rule: The combined signals [Eq. (13.49)] are then sent to the maximum-likelihood detector, which, for each of the signals x_0 and x_1, uses the decision rule expressed in
(\alpha_0^2 + \alpha_1^2 - 1) |x_i|^2 + d^2(x_0, x_i) \le (\alpha_0^2 + \alpha_1^2 - 1) |x_k|^2 + d^2(x_0, x_k)   (13.50)
or in the following equation for phase shift keying (PSK) signals (equal energyconstellations):
d2(x0, xi) ≤ d2(x0, xk). (13.51)
The maximal-ratio combiner may then construct the signal x_0, as shown in Fig. 13.8, so that the maximum-likelihood detector may produce x'_0, which is a maximum-likelihood estimate of x_0.
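The three steps above (encoding, combining, detection) can be sketched end to end. This is an illustrative NumPy walkthrough under assumed conditions (QPSK symbols, noiseless channel for clarity); the variable names are not from the text.

```python
import numpy as np

# Sketch of the two-branch transmit diversity scheme of Eqs. (13.48)-(13.49):
# two transmit antennas, one receive antenna, assumed unit-energy QPSK.
rng = np.random.default_rng(2)
constellation = np.array([1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j])
x0, x1 = rng.choice(constellation, size=2)

# Complex channel gains, constant over two consecutive symbol periods.
h0 = (rng.standard_normal() + 1j * rng.standard_normal()) * np.sqrt(0.5)
h1 = (rng.standard_normal() + 1j * rng.standard_normal()) * np.sqrt(0.5)

# Eq. (13.48), noise omitted for clarity: antenna 0 sends x0 then -conj(x1),
# antenna 1 sends x1 then conj(x0).
r0 = h0 * x0 + h1 * x1
r1 = -h0 * np.conj(x1) + h1 * np.conj(x0)

# Eq. (13.49): combining yields a scaled copy of each symbol,
# e.g. h0* r0 + h1 conj(r1) = (|h0|^2 + |h1|^2) x0.
x0_hat = np.conj(h0) * r0 + h1 * np.conj(r1)
x1_hat = np.conj(h1) * r0 - h0 * np.conj(r1)

gain = abs(h0) ** 2 + abs(h1) ** 2
# Nearest-neighbour detection recovers the transmitted symbols.
det0 = constellation[np.argmin(abs(constellation - x0_hat / gain))]
det1 = constellation[np.argmin(abs(constellation - x1_hat / gain))]
```

Expanding the combiner algebraically shows the cross terms in h_0^* h_1 cancel, which is exactly why the detector can decide on x_0 and x_1 independently.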
13.2.2 Orthogonal array and linear processing design
A square parametric n × n matrix A(x_1, x_2, ..., x_k) is called an orthogonal array of order n and type (s_1, s_2, ..., s_k)5-8 if the following are true:

• The elements of the matrix A have the form x_i or -x_i for i = 1, 2, ..., k.
• The number of elements x_i and -x_i in every row (and column) is s_i.
• AA^T = A^T A = \left( \sum_{i=1}^{k} s_i x_i^2 \right) I_n.
The matrices A_2(a, b), A_4(a, b, c, d), and A_8(a, b, ..., h) are called the Yang, Williamson, and Plotkin arrays,14 respectively:

A_2(a, b) = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}, \quad
A_4(a, b, c, d) = \begin{pmatrix} a & b & c & d \\ -b & a & -d & c \\ -c & d & a & -b \\ -d & -c & b & a \end{pmatrix},

A_8(a, b, \ldots, h) = \begin{pmatrix}
a & b & c & d & e & f & g & h \\
-b & a & d & -c & f & -e & -h & g \\
-c & -d & a & b & g & h & -e & -f \\
-d & c & -b & a & h & -g & f & -e \\
-e & -f & -g & -h & a & b & c & d \\
-f & e & -h & g & -b & a & -d & c \\
-g & h & e & -f & -c & d & a & -b \\
-h & -g & f & e & -d & -c & b & a
\end{pmatrix}.   (13.52)
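The defining property AA^T = A^T A = (Σ s_i x_i²) I_n can be checked numerically. The sketch below instantiates the Yang and Williamson arrays of Eq. (13.52) with random real values (the random values and the check itself are illustrative, not from the text).

```python
import numpy as np

# Numerical check that the Yang (A2) and Williamson (A4) arrays satisfy the
# orthogonal-array property A A^T = A^T A = (sum_i s_i x_i^2) I_n, s_i = 1.
rng = np.random.default_rng(3)
a, b, c, d = rng.standard_normal(4)

A2 = np.array([[a,  b],
               [-b, a]])
A4 = np.array([[a,  b,  c,  d],
               [-b, a, -d,  c],
               [-c, d,  a, -b],
               [-d, -c, b,  a]])

for A, s in ((A2, a*a + b*b), (A4, a*a + b*b + c*c + d*d)):
    n = A.shape[0]
    assert np.allclose(A @ A.T, s * np.eye(n))
    assert np.allclose(A.T @ A, s * np.eye(n))
```

The same loop extends directly to the 8 × 8 Plotkin array with s = a² + ... + h².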
In general, an orthogonal array A(x_1, x_2, x_3, x_4) of order n and type (k, k, k, k) is called a Baumert-Hall array of order n = 4k, and an array A(x_1, x_2, ..., x_8) of order n and type (k, k, k, k, k, k, k, k) is called a Plotkin array of order n = 8k.
There are two attractive features of providing transmit diversity via orthogonal designs (ODs), as follows:
(1) There is no loss in bandwidth, in the sense that the orthogonal array providesthe maximum possible transmission rate at full diversity.
(2) There is a simple maximum-likelihood decoding algorithm, which uses onlylinear combining at the receiver. The simplicity of the algorithm comes fromthe orthogonality of the columns of the ODs.
These two properties are preserved even if we allow linear processing at thetransmitter. Hence, we relax the definition of ODs to allow linear processing atthe transmitter. That is, signals transmitted from different antennas will now be alinear combination of constellation symbols.
A linear processing OD in variables x_1, x_2, ..., x_k is an n × n matrix E 15 whose elements are linear combinations of the variables x_1, x_2, ..., x_k and that satisfies

E^T E = \mathrm{diag}\left\{ \sum_{i=1}^{k} s_{1i} x_i^2,\; \sum_{i=1}^{k} s_{2i} x_i^2,\; \ldots,\; \sum_{i=1}^{k} s_{ni} x_i^2 \right\},   (13.53)

where the s_{ji} are positive integers.
Since the maximum-likelihood decoding algorithm relies only on the orthogonality of the columns of a design matrix, a linear processing OD can be generalized as follows. A generalized processing OD G with rate R = k/p is a matrix of dimension p × n with entries 0, x_1, x_2, ..., x_k,7 satisfying

G^T G = \mathrm{diag}\left\{ \sum_{i=1}^{k} p_{1,i} x_i^2,\; \sum_{i=1}^{k} p_{2,i} x_i^2,\; \ldots,\; \sum_{i=1}^{k} p_{n,i} x_i^2 \right\}.   (13.54)
Let A(R, n) be the minimum number p such that there exists a p × n generalized OD with rate at least R. The generalized OD achieving the value A(R, n) is called delay optimal. Evidently, after removing some columns, we can obtain delay-optimal designs with rate 1. For example, from Eq. (13.52) we find that

A_4^3(a, b, c, d) = \begin{pmatrix} a & b & c \\ -b & a & -d \\ -c & d & a \\ -d & -c & b \end{pmatrix}.   (13.55)
13.2.3 Design of space–time codes from the Hadamard matrix
Based on the multiplicative theorem,5,8 it was proved in Ref. 42 that from a Hadamard matrix of order 4n, a generalized linear processing real OD of size 4n × 2 with rate R = 1/2, depending on 2n variables, can be constructed. Below, we give an example.
Consider the Williamson-Hadamard matrix H_{12} and represent it as follows (see also Chapters 2 and 3):
H_{12} = Q_0 \otimes I_3 + Q_1 \otimes U + Q_1 \otimes U^2,   (13.56)
where

Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \quad
Q_1 = \begin{pmatrix} + & - & - & - \\ + & + & + & - \\ + & - & + & + \\ + & + & - & + \end{pmatrix}, \quad
U = \begin{pmatrix} 0 & + & 0 \\ 0 & 0 & + \\ + & 0 & 0 \end{pmatrix}.   (13.57)
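The construction of Eq. (13.56) can be checked directly. The sketch below builds H_12 from the blocks of Eq. (13.57) with NumPy Kronecker products and verifies the Hadamard property; the check itself is illustrative.

```python
import numpy as np

# Build H12 via Eq. (13.56) and verify H12 H12^T = 12 I12.
Q0 = np.array([[1,  1,  1,  1],
               [-1, 1, -1,  1],
               [-1, 1,  1, -1],
               [-1, -1, 1,  1]])
Q1 = np.array([[1, -1, -1, -1],
               [1,  1,  1, -1],
               [1, -1,  1,  1],
               [1,  1, -1,  1]])
U = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])          # cyclic shift, so U^3 = I3

H12 = (np.kron(Q0, np.eye(3, dtype=int)) +
       np.kron(Q1, U) +
       np.kron(Q1, U @ U))

assert np.array_equal(H12 @ H12.T, 12 * np.eye(12, dtype=int))
```

The three Kronecker terms have disjoint supports (I_3, U, and U² place their entries in different positions of each 3 × 3 block), so every entry of H_12 is ±1.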
Represent H_{12} as H_{12} = (+\;+) \otimes A_1 + (+\;-) \otimes A_2, where

A_1 = \begin{pmatrix}
+ & + & 0 & - & 0 & - \\
0 & 0 & + & 0 & + & 0 \\
0 & 0 & 0 & + & 0 & + \\
- & + & + & 0 & + & 0 \\
0 & - & + & + & 0 & - \\
+ & 0 & 0 & 0 & + & 0 \\
0 & + & 0 & 0 & 0 & + \\
+ & 0 & - & + & + & 0 \\
0 & - & 0 & - & + & + \\
+ & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & + & 0 & 0 \\
+ & 0 & + & 0 & - & +
\end{pmatrix}, \quad
A_2 = \begin{pmatrix}
0 & 0 & + & 0 & + & 0 \\
- & - & 0 & + & 0 & + \\
- & + & + & 0 & + & 0 \\
0 & 0 & 0 & - & 0 & - \\
+ & 0 & 0 & 0 & + & 0 \\
0 & + & - & - & 0 & + \\
+ & 0 & - & + & + & 0 \\
0 & - & 0 & 0 & 0 & - \\
+ & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & + & - & - \\
+ & 0 & + & 0 & - & + \\
0 & - & 0 & - & 0 & 0
\end{pmatrix}.   (13.58)
Let x = (x_1, x_2, x_3, x_4, x_5, x_6)^T be a column vector. We can check that G = (A_1 x, A_2 x) is a generalized linear processing real OD with rate 1/2 in six variables
(i.e., in a wireless communication system we have two transmitting antennas):
G = \begin{pmatrix}
x_1 + x_2 - x_4 - x_6 & x_3 + x_5 \\
x_3 + x_5 & -x_1 - x_2 + x_4 + x_6 \\
x_4 + x_6 & -x_1 + x_2 + x_3 + x_5 \\
-x_1 + x_2 + x_3 + x_5 & -x_4 - x_6 \\
-x_2 + x_3 + x_4 - x_6 & x_1 + x_5 \\
x_1 + x_5 & x_2 - x_3 - x_4 + x_6 \\
x_2 + x_6 & x_1 - x_3 + x_4 + x_5 \\
x_1 - x_3 + x_4 + x_5 & -x_2 - x_6 \\
-x_2 - x_4 + x_5 + x_6 & x_1 + x_3 \\
x_1 + x_3 & x_2 + x_4 - x_5 - x_6 \\
x_2 + x_4 & x_1 + x_3 - x_5 + x_6 \\
x_1 + x_3 - x_5 + x_6 & -x_2 - x_4
\end{pmatrix}.   (13.59)
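The OD property of G can be verified numerically: for any real values of x_1, ..., x_6, the Gram matrix G^T G should be a scalar multiple of I_2. The sketch below types in the design of Eq. (13.59) and checks this for random values (the random draw is purely illustrative).

```python
import numpy as np

# Check that the design G of Eq. (13.59) has orthogonal columns of equal
# norm for arbitrary real x1..x6, i.e., G^T G is a multiple of I2.
rng = np.random.default_rng(4)
x1, x2, x3, x4, x5, x6 = rng.standard_normal(6)

G = np.array([
    [x1 + x2 - x4 - x6,       x3 + x5],
    [x3 + x5,                 -x1 - x2 + x4 + x6],
    [x4 + x6,                 -x1 + x2 + x3 + x5],
    [-x1 + x2 + x3 + x5,      -x4 - x6],
    [-x2 + x3 + x4 - x6,      x1 + x5],
    [x1 + x5,                 x2 - x3 - x4 + x6],
    [x2 + x6,                 x1 - x3 + x4 + x5],
    [x1 - x3 + x4 + x5,       -x2 - x6],
    [-x2 - x4 + x5 + x6,      x1 + x3],
    [x1 + x3,                 x2 + x4 - x5 - x6],
    [x2 + x4,                 x1 + x3 - x5 + x6],
    [x1 + x3 - x5 + x6,       -x2 - x4],
])

GtG = G.T @ G
assert np.allclose(GtG, GtG[0, 0] * np.eye(2))
```

Note the pairing in G: each second-column entry is, up to sign, a first-column entry from an adjacent row, which is what makes the column inner product collapse to zero term by term.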
References
1. H. F. Harmuth, Transmission of Information by Orthogonal Functions,Springer-Verlag, Berlin (1969).
2. H. F. Harmuth, Sequency Theory: Foundations and Applications, AcademicPress, New York (1977).
3. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Inc.,Englewood Cliffs, NJ (1989).
4. R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis: With Applications to Communications and Signal/Image Processing, Kluwer, Dordrecht (1997).
5. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes inMath., 1168, Springer-Verlag, Berlin (1985).
6. J. Seberry and M. Yamada, “Hadamard matrices, sequences and blockdesigns,” in Surveys in Contemporary Design Theory, Wiley-InterscienceSeries in Discrete Mathematics, John Wiley & Sons, Hoboken, NJ (1992).
7. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space–time codes fromorthogonal designs,” IEEE Trans. Inf. Theory 45 (5), 1456–1467 (1999).
8. H. Sarukhanyan, “Decomposition of the Hadamard matrices and fast Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, 1298, 575–581, Springer, Berlin (1997).
9. H. Sarukhanyan, A. Anoyan, K. Egiazarian, J. Astola and S. Agaian, Codesgenerated from Hadamard matrices, in Proc. of Int. Workshop on Trends andRecent Achievements in Information Technology, Cluj-Napoca, Romania, May16–17, pp. 7–18 (2002).
10. Sh.-Ch. Chang and E. J. Weldon, “Coding for T-user multiple-accesschannels,” IEEE Trans. Inf. Theory 25 (6), 684–691 (1979).
11. R. Steele, “Introduction to digital cellular radio,” in Mobile Radio Communi-cations, R. Steele and L. Hanzo, Eds., second ed., IEEE, Piscataway, NJ(1999).
12. A. W. Lam and S. Tantaratana, Theory and Applications of Spread-SpectrumSystems, IEEE/EAB Self-Study Course, IEEE, Piscataway, NJ (1994).
13. V.I. Levenshtein, A new lower bound on aperiodic crosscorrelation of binarycodes, Proc. of 4th Int. Symp. Commun. Theory and Appl., ISCTA’1997,Ambleside, U.K., 13–18 July 1997, 147–149 (1997).
14. J. Oppermann and B. C. Vucetic, “Complex spreading sequences with a widerange of correlation properties,” IEEE Trans. Commun. 45, 365–375 (1997).
15. J. Seberry and R. Craigen, “Orthogonal designs,” in Handbook of Combi-natorial Designs, C. J. Colbourn and J. Dinitz, Eds., 400–406 CRC Press,Boca Raton (1996).
16. H. Evangelaras, Ch. Koukouvinos, and J. Seberry, “Applications of Hadamardmatrices,” J. Telecommun. Inf. Technol. 2, 3–10 (2003).
17. J. Carlson, Error-correcting codes: an introduction through problems, Nov. 19,1999, http://www.math.utah.edu/hschool/carlson.
18. M. Plotkin, “Binary codes with given minimum distance,” Proc. Cybernet. 7, 60–67 (1963).
19. R. C. Bose and S. S. Shrikhande, “On the falsity of Euler’s conjecture aboutthe nonexistence of two orthogonal Latin squares of order 4t + 2,” Proc. N.A.S45, 734–737 (1959).
20. Combinatorics in Space: The Mariner 9 Telemetry System, http://www.math.cudenver.edu/∼wcherowi/courses/m6409/mariner9talk.pdf.
21. C.E. Shannon, Two-way communication channels, in Proc. of 4th BerkeleySymp. Math. Statist. and Prob.1, 611–644 (1961).
22. W. W. Peterson and E. J. Weldon Jr., Error-Correcting Codes, second ed., MITPress, Cambridge, MA (1972).
23. J. H. van Lint, Introduction to Coding Theory, Springer-Verlag, Berlin (1982).
24. G. J. Foschini, “Layered space–time architecture for wireless communicationin a fading environment when using multi-element antennas,” Bell Labs Tech.J. 1 (2), 41–59 (1996).
25. I. E. Telatar, “Capacity of multi-antenna Gaussian channels,” Eur. Trans.Telecommun. 10 (6), 585–595 (1999).
26. D.W. Bliss, K.W. Forsythe, A.O. Hero and A.L. Swindlehurst, MIMOenvironmental capacity sensitivity, in Conf. Rec. of 34th Asilomar Conf.on Signals, Systems and Computers 1, Oct. 29–Nov. 1, Pacific Grove, CA,764–768 (2000).
27. D.W. Bliss, K.W. Forsythe and A.F. Yegulalp, MIMO communication capacity using infinite dimension random matrix eigenvalue distributions, in Conf. Rec. 35th Asilomar Conf. on Signals, Systems and Computers, 2, Nov. 4–7, Pacific Grove, CA, 969–974 (2001).
28. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes,North-Holland, Amsterdam (1977).
29. N. Balaban and J. Salz, “Dual diversity combining and equalization in digitalcellular mobile radio,” IEEE Trans. Vehicle Technol. 40, 342–354 (1991).
30. G. J. Foschini Jr, “Layered space–time architecture for wirelesscommunication in a fading environment when using multi-element antennas,”Bell Labs Tech. J. 1 (2), 41–59 (1996).
31. G. J. Foschini Jr. and M. J. Gans, “On limits of wireless communicationin a fading environment when using multiple antennas,” Wireless PersonalCommun. 6 (3), 311–335 (1998).
32. J.C. Guey, M.P. Fitz, M.R. Bell and W.-Y. Kuo, Signal design for transmitterdiversity wireless communication systems over Rayleigh fading channels, inProc. IEEE VTC’96, 136–140 (1996).
33. N. Seshadri and J. H. Winters, “Two signaling schemes for improving theerror performance of frequency-division-duplex (FDD) transmission systemsusing transmitter antenna diversity,” Int. J. Wireless Inf. Networks 1 (1), 49–60(1994).
34. V. Tarokh, N. Seshardi, and A. R. Calderbank, “Space–time codes forhigh data rate wireless communication: Performance analysis and codeconstruction,” IEEE Trans. Inf. Theory 44 (2), 744–756 (1998).
35. V. Tarokh, A. Naguib, N. Seshardi, and A. R. Calderbank, “Space–timecodes for high data rate wireless communications: Performance criteria inthe presence of channel estimation errors, mobility and multiple paths,” IEEETrans. Commun. 47 (2), 199–207 (1999).
36. E. Telatar, Capacity of multi-antenna Gaussian channels, AT&T-BellLaboratories Internal Tech. Memo (Jun. 1995).
37. J. Winters, J. Salz, and R. D. Gitlin, “The impact of antenna diversity on the capacity of wireless communication systems,” IEEE Trans. Commun. 42 (2/3/4), 1740–1751 (1994).
38. A. Wittneben, Base station modulation diversity for digital SIMULCAST, inProc. IEEE VTC, 505–511 (May 1993).
39. A. Wittneben, A new bandwidth efficient transmit antenna modulationdiversity scheme for linear digital modulation, in Proc. IEEE ICC, 1630–1634(1993).
40. K. J. Horadam, Hadamard Matrices and Their Applications, PrincetonUniversity Press, Princeton (2007).
41. M. Bossert, E. Gabidulin and P. Lusina, Space–time codes based on Hadamardmatrices, in Proc. of Int. Symp. on Information Theory 2000, Sorrento, Italy,283 (2000).
42. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Space–time codesfrom Hadamard matrices, URSI 26 Convention on Radio Science, FinnishWireless Communications Workshop, Oct. 23–24, Tampere, Finland (2001).
Chapter 14Randomization of DiscreteOrthogonal Transforms andEncryption
Yue Wu, Joseph P. Noonan, and Sos Agaian∗
Previous chapters have discussed many discrete orthogonal transforms (DOTs), such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), and the discrete Hadamard transform (DHT). These discrete orthogonal transforms are well known for their successful applications in the areas of digital signal processing1-19 and communications.20-29
As demand for electronic privacy and security increases, a DOT system that resists attacks from possible intruders becomes more desirable. A randomized discrete orthogonal transform (RDOT) provides one way to achieve this goal, and it has already been used in secure communications and encryption for various forms of digital data, such as speech,2,30 images,31-33 and videos.34,35
Early efforts on RDOTs have been made in different areas: Cuzick and Laiintroduced a sequence of phase constants in the conventional Fourier series;36
Ferreira studied a special class of permutation matrices that commute with the DFT matrix;37 Dmitriyev and Chernov proposed a discrete M-transform that is orthogonal and based on m-sequences;38 Liu and Liu proposed a randomization method for the discrete fractional Fourier transform (DFRFT) by taking random powers of the eigenvalues of the DFT matrix;39 Pei and Hsue improved on this39 by constructing parameterized eigenvectors as well;40 and Zhou, Panetta, and Agaian developed a parameterized DCT with controllable phase, magnitude, and DCT matrix size.31
However, these efforts (1) focused on randomizing one specific DOT rather than a general form of DOT; (2) proposed to obtain RDOT systems from scratch instead of upgrading existing DOT systems to RDOT systems; (3) may have lacked a large set of parameters (for example, in Ref. 38, one parameter has to be a prime number); (4) may not have generated the exact form of RDOT,39,40 leading to inevitable approximation errors in implementation;41 and (5) contained minimal mention of requirements for cryptanalysis.

∗Yue Wu ([email protected]) and Joseph P. Noonan ([email protected]) are with the Dept. of Electrical and Computer Engineering, Tufts University, Medford, MA 02155 USA. Sos Agaian is with the Dept. of Electrical and Computer Engineering, University of Texas at San Antonio, San Antonio, TX 78249 USA.
This chapter proposes a randomization theorem for DOTs and thus a general method of obtaining RDOTs from DOTs conforming to Eq. (14.1). It can be demonstrated that building the proposed RDOT system is equivalent to augmenting a DOT system with a pair of pre- and post-processing steps. The proposed randomization method is very compact and is easily adopted by any existing user-selected DOT system. Furthermore, the proposed RDOT matrix is of the exact form, for it is directly obtained by a series of matrix multiplications involving the parameter matrix and the original DOT matrix. Hence, it avoids the approximation errors commonly seen in eigenvector-decomposition-related RDOT methods,39,40 while keeping good features or optimizations, such as a fast algorithm, already designed for existing DOTs. Any current DOT system can be upgraded to an RDOT system to fulfill the needs of secure communication and data encryption.
The remainder of this chapter is organized as follows:
• Section 14.1 reviews several well-known DOTs in matrix form and briefly discusses the model of secure communication.
• Section 14.2 proposes the new model of randomizing a general form of the DOT, including theoretical foundations, qualified candidates for the parameter matrix, properties and features of the new transforms, and several examples of new transforms.
• Section 14.3 discusses encryption applications for both 1D and 2D digital data; the confusion and diffusion properties of an encryption system are also explored.
14.1 Preliminaries
This section briefly discusses two topics: the matrix form of a DOT and cryptography background. The first step is to unify all DOTs in a general form so that the DOT randomization theory presented in Section 14.2 can be derived directly from this general form. The second step is to explain the concepts and terminology used in the model so that secure communication and encryption applications based on the RDOT can be presented clearly in Section 14.3.
14.1.1 Matrix forms of DHT, DFT, DCT, and other DOTs
Transforms, especially discrete transforms, play a vital role in our digital world; this chapter concentrates on discrete transforms with an orthogonal basis matrix, which have the general form

Forward transform: y = x M_n,
Inverse transform: x = y \bar{M}_n.   (14.1)
Without loss of generality, let the vector x be the time-domain signal of size 1 × n, the vector y be the corresponding transform-domain signal of size 1 × n, the matrix M_n be the forward transform matrix of size n × n, and the matrix \bar{M}_n be the inverse transform matrix of size n × n.

Equation (14.1) is called the general matrix form of a DOT. Matrix theory states that the transform pair in Eq. (14.1) is valid for any time-domain signal x if and only if the product of the forward transform matrix M_n and the inverse transform matrix \bar{M}_n is the identity matrix I_n:
M_n \bar{M}_n = I_n,   (14.2)
where I_n is the identity matrix of size n, as Eq. (14.3) shows:

I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}_{n \times n}.   (14.3)
In reality, many discrete transforms are of the above type and can be denoted in the general form of Eq. (14.1). It is the transform matrix pair M and \bar{M} that makes a DOT distinct. For example, the DHT transform pair is of the form of Eq. (14.1) directly, with its forward transform matrix H and its inverse transform matrix \bar{H} defined in Eqs. (14.4) and (14.5), respectively, where ⊗ denotes the Kronecker product and H^T denotes the transpose of H [see equivalent definitions in Eq. (1.1)]:
H_n = \begin{cases} H_1 = (1), & n = 1, \\ H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, & n = 2, \\ H_2 \otimes H_{2^{k-1}}, & n = 2^k, \end{cases}   (14.4)

\bar{H}_n = \frac{1}{n} H_n^T.   (14.5)
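The DHT pair of Eqs. (14.4) and (14.5) can be checked directly. The following sketch builds the Sylvester-Hadamard matrix by repeated Kronecker products and verifies that H_n^T / n inverts the forward transform; the signal and size chosen are illustrative.

```python
import numpy as np

# Verify Eqs. (14.4)-(14.5): H_n together with H_n^T / n is a valid pair.
def hadamard(n):
    """Sylvester construction of H_n for n a power of two."""
    H2 = np.array([[1, 1], [1, -1]])
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.kron(H2, H)
    return H

n = 8
H = hadamard(n)
H_inv = H.T / n                        # Eq. (14.5)

x = np.arange(n, dtype=float)          # a 1 x n time-domain signal
y = x @ H                              # forward transform, Eq. (14.1)
assert np.allclose(y @ H_inv, x)       # inverse transform recovers x
assert np.allclose(H @ H_inv, np.eye(n))
```

Since H_n is symmetric, the transpose in Eq. (14.5) is cosmetic here, but it keeps the formula valid for any row/column ordering convention.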
Similarly, the pair of size n × n DFT matrices can be defined as Eqs. (14.6) and (14.7):

F_n = \frac{1}{\sqrt{n}} \begin{pmatrix} w_1^1 & w_1^2 & \cdots & w_1^n \\ w_2^1 & w_2^2 & \cdots & w_2^n \\ \vdots & \vdots & \ddots & \vdots \\ w_n^1 & w_n^2 & \cdots & w_n^n \end{pmatrix},   (14.6)
\bar{F}_n = \frac{1}{\sqrt{n}} \begin{pmatrix} \bar{w}_1^1 & \bar{w}_1^2 & \cdots & \bar{w}_1^n \\ \bar{w}_2^1 & \bar{w}_2^2 & \cdots & \bar{w}_2^n \\ \vdots & \vdots & \ddots & \vdots \\ \bar{w}_n^1 & \bar{w}_n^2 & \cdots & \bar{w}_n^n \end{pmatrix} = F_n^*,   (14.7)

where w_m^k is defined in Eq. (14.8), \bar{w}_m^k is the complex conjugate of w_m^k, and F_n^* is the complex conjugate of F_n. In Eq. (14.8), j denotes the standard imaginary unit, with the property that j^2 = -1:

w_m^k = \exp\left[ -j \frac{2\pi}{n} (m - 1)(k - 1) \right].   (14.8)
Similarly, the pair of size n × n DCT matrices can be defined as Eqs. (14.9) and (14.10), where c_m^k is defined in Eq. (14.11):

C_n = \begin{pmatrix} \sqrt{1/n}\, c_1^1 & \sqrt{2/n}\, c_1^2 & \cdots & \sqrt{2/n}\, c_1^n \\ \sqrt{1/n}\, c_2^1 & \sqrt{2/n}\, c_2^2 & \cdots & \sqrt{2/n}\, c_2^n \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{1/n}\, c_n^1 & \sqrt{2/n}\, c_n^2 & \cdots & \sqrt{2/n}\, c_n^n \end{pmatrix},   (14.9)

\bar{C}_n = C_n^T,   (14.10)

c_m^k = \cos\left[ \pi (2m - 1)(k - 1) / (2n) \right].   (14.11)
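Both basis matrices above can be built and checked in a few lines. This is an illustrative NumPy sketch of Eqs. (14.6) through (14.11); the size n = 8 is an assumption for the example.

```python
import numpy as np

# Build the DFT and DCT basis matrices and check the inverse relations
# F F_bar = I (Eq. 14.7, F_bar = conj(F)) and C C^T = I (Eq. 14.10).
n = 8
m, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')

# w_m^k = exp[-j (2 pi / n)(m - 1)(k - 1)];  F_n = (1/sqrt(n)) [w_m^k]
F = np.exp(-2j * np.pi * (m - 1) * (k - 1) / n) / np.sqrt(n)
F_inv = np.conj(F)                     # Eq. (14.7): elementwise conjugate
assert np.allclose(F @ F_inv, np.eye(n))

# c_m^k = cos[pi (2m - 1)(k - 1) / (2n)], first column scaled by
# sqrt(1/n) and the rest by sqrt(2/n); C is then orthogonal.
c = np.cos(np.pi * (2 * m - 1) * (k - 1) / (2 * n))
scale = np.where(k == 1, np.sqrt(1 / n), np.sqrt(2 / n))
C = scale * c
assert np.allclose(C @ C.T, np.eye(n))
```

Note the DFT inverse is the elementwise conjugate of F (not the conjugate transpose), because F is symmetric in (m - 1)(k - 1).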
Besides the examples shown above, other DOTs can be written in the form of Eq. (14.1), for instance, the discrete Hartley transform1 and the discrete M-transform.38
14.1.2 Cryptography
The fundamental objective of cryptography is to enable two people, usuallyreferred to as Alice and Bob, to communicate over an insecure channel so thatan opponent, Oscar, cannot understand what is being said.42 The information thatAlice wants to send is usually called a plaintext, which can be numerical data, a textmessage, or anything that can be represented by a digital bit stream. Alice encryptsthe plaintext, using a predetermined key K, and obtains an encrypted version ofthe plaintext, which is called ciphertext. Then Alice sends this resulting ciphertextover the insecure channel. Oscar (the eavesdropper), upon seeing the ciphertext,cannot determine the contents of the plaintext. However, Bob (the genuine receiver)knows the encryption key K and thus can decrypt the ciphertext and reconstruct theplaintext sent by Alice. Figure 14.1 illustrates this general cryptography model.
In this model, it seems that only the ciphertext communicated over the insecurechannel is accessible by Oscar, making the above cryptosystem appear very secure.In reality, however, Oscar should be considered a very powerful intruder who
knows everything in the above cryptosystem except the encryption key K. As a result, the security of communication between Alice and Bob depends solely on the encryption key and, consequently, the encryption key source.

Figure 14.1 A general cryptography model.
Based on whether data loss occurs, methods of cryptography can be classified into two groups:43 lossless encryption algorithms and joint compression/encryption algorithms. More specifically, lossless encryption algorithms can be further divided into43 affine transform algorithms,44 chaotic-system-based algorithms,45-47 and frequency-domain algorithms;30,31 joint compression/encryption algorithms can be further divided into43 base-switching algorithms, entropy-coding algorithms,48 and SCAN-language-based algorithms.49-51
Although whether a cryptographic system is secure is a very complex question, this chapter focuses on three fundamental aspects: (1) the key space, namely, how many keys Alice can choose from; (2) the confusion property, whereby different keys generate statistically similar ciphertexts for distinct plaintexts, so that Oscar is confused; and (3) the diffusion property, which states that a minor change in a plaintext is dissipated throughout its ciphertext, so that the new ciphertext differs greatly from the previous one when the encryption key remains the same.
The first concern is simple yet important: if the key space is not large enough, Oscar can try keys one by one and thus recover the plaintext. Conventionally, this method is called a “brute-force attack.”52 Given the computing capacity of the time, a 256-bit key (i.e., a key space of 2^256) is believed to be safe. Many well-known commercial ciphers, encryption algorithms, and standards adopt this key length.
The second and third concerns were proposed in a 1949 paper53 by Claude Shannon. In this paper, the term “confusion” refers to making the relationship between the key and the ciphertext very complex and involved.53 In other words, it is desirable to ensure that ciphertexts generated by different keys have the same statistics, so that the statistics of a ciphertext give no information about the key used.53 The term “diffusion” refers to the property that the redundancy in the statistics of the plaintext is “dissipated” in the statistics of the ciphertext.53
14.2 Randomization of Discrete Orthogonal Transforms

The previous section showed that many DOTs are of the general form of Eq. (14.1). This section concentrates on using Theorem 14.1 to obtain a class of new
transforms that randomize the original transform's basis matrix by introducing two new square matrices P and Q, such that the response y in the transform domain changes dramatically while the new transform pair remains valid for any given input signal x.
14.2.1 The theorem of randomization of discrete orthogonal transforms
Let M and \bar{M} be the forward and inverse DOT square matrices, respectively. Once again, M can be defined as in Eq. (14.4), Eq. (14.6), Eq. (14.9), or another qualified transform matrix, and \bar{M} is the corresponding inverse transform matrix. Then M and \bar{M} together define a pair of DOTs, as shown in Eq. (14.1). Thus, there exists a family of randomized DOTs defined by L and \bar{L}, as presented in Theorem 14.1.
Theorem 14.1: [Randomization Theorem for DOTs (RTDOT)]. If M_n and \bar{M}_n together define a valid pair of transforms, i.e.,

Forward transform: y = x M_n,
Inverse transform: x = y \bar{M}_n,

and the square parameter matrices P_n and Q_n are such that P_n Q_n = I_n, then L_n and \bar{L}_n define a valid pair of transforms,

Forward transform: y = x L_n,
Inverse transform: x = y \bar{L}_n,

where

L_n = P_n M_n Q_n,
\bar{L}_n = P_n \bar{M}_n Q_n.
Proof: We want to show that for any signal vector x, the inverse transform (IT) response z of x's forward transform (FT) response y is equal to x. Consider the following:

z = IT(y) = y \bar{L}_n = y (P_n \bar{M}_n Q_n) = FT(x) \cdot (P_n \bar{M}_n Q_n)
  = (x L_n) \cdot (P_n \bar{M}_n Q_n) = (x P_n M_n Q_n) \cdot (P_n \bar{M}_n Q_n)
  = x (P_n (M_n (Q_n P_n) \bar{M}_n) Q_n) = x (P_n (M_n \bar{M}_n) Q_n)
  = x (P_n Q_n) = x I_n = x.

Therefore, as long as P_n Q_n = I_n is satisfied, L_n and \bar{L}_n together define a pair of DOTs conforming to the forms in Eq. (14.1). It is worth noting that Theorem 14.1 is the core of this chapter. To simplify notation, M_n, \bar{M}_n, L_n, \bar{L}_n, P_n, Q_n, I_n, etc., will hereafter be denoted without the subscript n, which indicates the matrix size.
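The chain of equalities in the proof can be reproduced numerically. The sketch below is a minimal illustration of Theorem 14.1 using the order-8 DHT as the underlying DOT and a random invertible parameter matrix; all choices (n = 8, the random P, the test signal) are assumptions made for the demonstration.

```python
import numpy as np

# Theorem 14.1 in action: L = P M Q and L_bar = P M_bar Q remain a valid
# forward/inverse pair whenever P Q = I.
rng = np.random.default_rng(5)
n = 8

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
M = np.kron(H2, np.kron(H2, H2))       # forward DHT matrix M = H_8
M_bar = M.T / n                        # inverse DHT matrix, Eq. (14.5)

P = rng.standard_normal((n, n))        # invertible with probability 1
Q = np.linalg.inv(P)                   # enforce P Q = I_n

L = P @ M @ Q                          # randomized forward matrix
L_bar = P @ M_bar @ Q                  # randomized inverse matrix

x = rng.standard_normal(n)             # any time-domain signal
y = x @ L                              # randomized forward transform
assert np.allclose(y @ L_bar, x)       # inverse recovers x exactly
```

Because L is obtained by exact matrix multiplication, no eigenvector decomposition or approximation is involved, which mirrors the "exact form" claim made for the proposed RDOT.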
14.2.2 Discussions on the square matrices P and Q
Theorem 14.1 implies that the relationship between the matrices P and Q is, in general, Eq. (14.12); namely, Q is the inverse matrix of P and vice versa:

Q = P^{-1}.   (14.12)
Matrix theory states that as long as a matrix P is invertible, its inverse matrix exists. Therefore, infinitely many matrices can be used for the matrix P. According to Eq. (14.12), Q is determined once P is determined. The remainder of this section therefore focuses only on the matrix P, since Q can be determined correspondingly.
One good candidate for the matrix P is the permutation matrix family P.The permutation matrix is sparse and can be compactly denoted. Two types ofpermutation matrices are introduced here: the unitary permutation matrix and thegeneralized permutation matrix.
Definition 14.1: The unitary permutation matrix.54 A square matrix P is called aunitary permutation matrix (denoted as P ∈ U), if in every column and every rowthere is exactly one nonzero entry, whose value is 1.
Definition 14.2: The generalized permutation matrix.54 A square matrix P iscalled a generalized permutation matrix (denoted as P ∈ G), if in every columnand every row there is exactly one nonzero entry.
If P is a unitary permutation matrix, i.e., P ∈ U, then the n × n matrix P can be denoted by a 1 × n vector.
Example 14.2.1: The row permutation sequence [3, 4, 2, 1] denotes Eq. (14.13):54

P = P_U = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.   (14.13)
Meanwhile, Q is also a permutation matrix55 and is defined as

Q = P^{-1} = P^T.   (14.14)
Correspondingly, the new DOT matrix L = PMQ can be interpreted as a shuffledversion of the original transform matrix M.
If P is a generalized permutation matrix, i.e., P ∈ G, then P and Q can be denoted as Eqs. (14.15) and (14.16), respectively, where D is an invertible diagonal matrix defined in Eq. (14.17) and d_1, d_2, ..., d_n are nonzero constants; D^{-1} is defined in Eq. (14.18); P_U is a unitary permutation matrix; and P can be denoted by two 1 × n vectors:

P = D P_U,   (14.15)
Q = P_U^T D^{-1},   (14.16)

D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix},   (14.17)

D^{-1} = \begin{pmatrix} d_1^{-1} & 0 & \cdots & 0 \\ 0 & d_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^{-1} \end{pmatrix}.   (14.18)
Example 14.2.2: The row permutation sequence [3, 4, 2, 1] and the weight sequence [w_1, w_2, w_3, w_4] define the generalized permutation matrix

P = P_G = \begin{pmatrix} 0 & 0 & 0 & w_4 \\ 0 & w_2 & 0 & 0 \\ w_1 & 0 & 0 & 0 \\ 0 & 0 & w_3 & 0 \end{pmatrix}.   (14.19)
As a result, the new DOT matrix L = PMQ can be interpreted as a weighted andshuffled basis matrix of the original transform basis matrix M. It is worth notingthat the parameter matrix P can be any invertible matrix and that the permutationmatrix family is just one special case of the invertible matrix family.
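The factorization P = D P_U with Q = P_U^T D^{-1} is easy to verify in code. The sketch below uses an arbitrary permutation and weight sequence (both illustrative assumptions) and confirms P Q = I.

```python
import numpy as np

# Generalized permutation matrix per Eqs. (14.15)-(14.18):
# P = D P_U and Q = P_U^T D^{-1}, so P Q = D D^{-1} = I.
perm = [2, 1, 3, 0]                        # illustrative permutation
d = np.array([0.5, -2.0, 3.0, 1.5])        # illustrative nonzero weights

P_U = np.eye(4)[perm]                      # unitary permutation matrix
D = np.diag(d)
P = D @ P_U                                # Eq. (14.15)
Q = P_U.T @ np.diag(1.0 / d)               # Eq. (14.16)

assert np.allclose(P @ Q, np.eye(4))       # the required P Q = I_4
```

Since P_U P_U^T = I, the permutation part cancels first and the diagonal weights cancel second, which is why each row and column needs exactly one nonzero entry.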
14.2.3 Examples of randomized transform matrices
Based on the used matrix P, the new transform basis matrix L may have differentconfigurations. Take the 8 × 8 DHT matrix as an example. The original transformbasis matrix H8 is obtained via Eq. (14.4) as follows:
$$H_8 = \begin{pmatrix}
+1 & +1 & +1 & +1 & +1 & +1 & +1 & +1 \\
+1 & -1 & +1 & -1 & +1 & -1 & +1 & -1 \\
+1 & +1 & -1 & -1 & +1 & +1 & -1 & -1 \\
+1 & -1 & -1 & +1 & +1 & -1 & -1 & +1 \\
+1 & +1 & +1 & +1 & -1 & -1 & -1 & -1 \\
+1 & -1 & +1 & -1 & -1 & +1 & -1 & +1 \\
+1 & +1 & -1 & -1 & -1 & -1 & +1 & +1 \\
+1 & -1 & -1 & +1 & -1 & +1 & +1 & -1
\end{pmatrix}. \quad (14.20)$$
In this example, four P matrices, namely Pa, Pb, Pc, and Pd, are used, and the corresponding new transform matrices are La, Lb, Lc, and Ld, respectively. Figure 14.2 illustrates the P matrices used and the resulting transform matrices L. For Pa = I8, La = I8 H8 I8^{-1}, and thus La = H8, as plot (a) shows. For Pb = PU, a unitary permutation matrix, Lb is a row- and column-wise shuffled version of H8; for Pc = PG, a generalized permutation matrix, Lc is a row- and column-wise shuffled version of H8 with additional weights; and for Pd = R8, an invertible square random matrix, Ld also becomes a random matrix. Obviously, the new transform basis matrices (f), (g), and (h) in Fig. 14.2 are very different from the original transform basis matrix (e).
Randomization of Discrete Orthogonal Transforms and Encryption 457
Figure 14.2 Randomized transform matrices based on H8.
Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms
458 Chapter 14
Table 14.1 Transform matrix statistics. Means and standard deviations were calculated from 100 experiments.

P \ M    I                    F                    C                    H
I        0.0156 ± 0.1240      −0.0751 ± 1.8122     0.0019 ± 0.1250      0.0156 ± 1.0000
PU       0.0156 ± 0.1240      −0.0751 ± 1.8122     0.0019 ± 0.1250      0.0156 ± 1.0000
PG       0.0001 ± 0.0715      −0.0288 ± 1.8333     0.0063 ± 3.8480      −0.0986 ± 31.1434
S        0.0000 ± 0.4119      0.0045 ± 1.8172      0.0476 ± 5.6024      −0.0559 ± 47.1922
R        0.0000 ± 0.5778      0.0025 ± 1.8219      −0.0221 ± 5.7738     −0.2053 ± 43.4050
Therefore, the proposed randomization framework is able to generate distinct new transforms from existing ones. Moreover, it is clear that the randomness of the new transform matrices follows the order (e) < (f) < (g) < (h). It is the matrix P that introduces randomness into the original transform matrix.
In order to evaluate the randomness of the newly generated transform basis matrix, the following measures are used:
$$\mathrm{mean}(M) = u_M = \sum_{i=1}^{n}\sum_{j=1}^{n} M_{i,j}/n^2, \quad (14.21)$$

$$\mathrm{std}(M) = \sigma_M = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} (M_{i,j} - u_M)^2/n^2}, \quad (14.22)$$
where M_{i,j} denotes the element at the intersection of the i'th row and j'th column of the matrix M.
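Equations (14.21) and (14.22) transcribe directly into code (the function names are ours). As a cross-check against Table 14.1, for an order-64 Sylvester-type DHT matrix the mean is exactly 1/64 ≈ 0.0156 and the standard deviation is close to 1, matching the "H" column of the first row.

```python
import math

def matrix_mean(M):                      # Eq. (14.21)
    n = len(M)
    return sum(sum(row) for row in M) / n ** 2

def matrix_std(M):                       # Eq. (14.22)
    n, u = len(M), matrix_mean(M)
    return math.sqrt(sum((v - u) ** 2 for row in M for v in row) / n ** 2)

def sylvester(n):
    """Sylvester-type Hadamard matrix of order n (a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

H64 = sylvester(64)
assert abs(matrix_mean(H64) - 1 / 64) < 1e-12   # 1/64 = 0.015625, cf. 0.0156
assert abs(matrix_std(H64) - 1.0) < 1e-3        # cf. 1.0000 in Table 14.1
```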
Table 14.1 lists the first two statistical moments of the newly generated transform matrix L under various pairs of P and M at size 64. Here, the random full-rank matrix R is uniformly distributed on [−1, 1]; the symmetric matrix S is generated via Eq. (14.23); and the nonzero elements in PG are also uniformly distributed on [−1, 1]:

$$S = (R + R^T)/2. \quad (14.23)$$
The selected M matrices are F (the DFT matrix), C (the DCT matrix), and H (the DHT matrix). For real L, the statistical tests are applied to its elements directly; for complex L, they are applied to its phase elements. In general, a P matrix that is more uniformly distributed on [−1, 1] leads to a higher standard deviation in the L matrix.
It is worth noting that the first two statistical moments measure only population properties; thus, permutations within the matrix are not accounted for. However, permutations within the matrix cause a significant difference in the resulting new transform.
14.2.4 Transform properties and features
In general, the new transform basis matrix L based on the original transform matrix M has all of the following properties:
• Identity matrix: L_n L_n = I_n whenever M_n M_n = I_n, since LL = PMQPMQ = PM²P^{-1}.
• M matrix: L ≡ M if P and the matrix M commute, i.e., PM = MP.
• New basis matrix: L ≠ M if P and the matrix M do not commute, i.e., PM ≠ MP.
• Unitary L matrix: if P and M are both unitary, then L = PMQ = PMP^{-1} is also unitary.
• Symmetric L matrix: if P is a unitary permutation matrix and the matrix M is symmetric, then L = PMQ = PMP^{-1} is also symmetric.
In addition, denote the forward transform defined by matrix M as S(·) and its inverse transform as S^{-1}(·), and, correspondingly, denote the forward transform defined by matrix L as R(·) and its inverse transform as R^{-1}(·):

$$\begin{cases} \text{Original forward transform: } S(x) = xM \\ \text{Original inverse transform: } S^{-1}(y) = yM^{-1}, \end{cases} \quad (14.24)$$

$$\begin{cases} \text{New forward transform: } R(x) = xL \\ \text{New inverse transform: } R^{-1}(y) = yL^{-1}. \end{cases} \quad (14.25)$$

Then the new transform system R(·), R^{-1}(·) can be realized directly by the original transform system S(·), S^{-1}(·), as the following equation shows:

$$\begin{cases} R(x) = xL = x(PMQ) = S(xP)\cdot Q \\ R^{-1}(y) = yL^{-1} = y(PM^{-1}Q) = S^{-1}(yP)\cdot Q. \end{cases} \quad (14.26)$$
Equation (14.26) is very important because it demonstrates that the new transform system is completely compatible with the original transform. Unlike other randomization transforms,39,40 the new randomized transform does not require any eigenvector decompositions and thus does not introduce any approximation errors. Any existing transform system conforming to the model can be used to obtain new transforms without any change.
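The compatibility stated by Eq. (14.26) can be verified numerically. The sketch below uses toy choices of our own (M = H4/2, a unitary Hadamard matrix, and a permutation key P): the new transform R and its inverse are computed purely from the original pair S, S^{-1} plus pre- and post-multiplications by P and Q = P^{-1}.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

M = [[v / 2 for v in row] for row in
     [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]]
Minv = transpose(M)                       # M is unitary, so M^{-1} = M^T
P = [[0, 0, 0, 1], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]]
Q = transpose(P)                          # permutation: P^{-1} = P^T

S = lambda x: matmul(x, M)                     # original forward transform
Sinv = lambda y: matmul(y, Minv)               # original inverse transform
R = lambda x: matmul(S(matmul(x, P)), Q)       # R(x) = S(xP) . Q, Eq. (14.26)
Rinv = lambda y: matmul(Sinv(matmul(y, P)), Q) # R^{-1}(y) = S^{-1}(yP) . Q

x = [[1.0, 2.0, 3.0, 4.0]]                # a 1 x 4 row-vector signal
x_back = Rinv(R(x))                       # forward then inverse new transform
assert all(abs(a - b) < 1e-12 for a, b in zip(x[0], x_back[0]))
```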
14.2.5 Examples of randomized discrete orthogonal transforms
This section uses the DFT basis matrix F at size 64 as an example and compares its new randomized transforms with the conventional DFT, the DRFT,39 and the DFRFT.40 The following transforms are defined according to the P matrix used:

• PDFT: permuted discrete Fourier transform, where the new transform is L = P_U F P_U^T.
• WPDFT: weighted permuted discrete Fourier transform, where the new transform is L = P_G F P_G^{-1}.
Table 14.2 Statistics of transformed signals.

Transform   Phase mean   Phase std.   Phase symmetry   Magnitude symmetry   Eigenvector approximation
DFT         0.0490       1.8703       −1.0000          1.0000               No
DRFT        0.2647       2.0581       0.9997           0.9999               Yes
DFRFT       0.2145       2.5824       0.9984           0.9974               Yes
PDFT        0.0490       1.9972       −0.0558          −0.0517              No
WPDFT       0.0490       1.9401       −0.0038          −0.0284              No
RMDFT       0.0379       1.7425       −0.0125          0.0963               No
• RMDFT: random-matrix-based discrete Fourier transform, where the new transform is L = R F R^{-1}.
The test discrete signal is a rectangular wave defined as

$$x[k] = \begin{cases} 1, & k \in [16, 48] \\ 0, & k \in [1, 15] \cup [49, 64]. \end{cases} \quad (14.27)$$
The resulting signals in the transform domains are shown in Fig. 14.3. It is easy to see that the signals under the proposed transforms tend to have more random-like patterns than the DFT, DRFT, and DFRFT in both the phase and magnitude components. More detailed statistics of each transformed signal are given in Table 14.2. In addition, compared to other methods, our randomization method is advantageous because

• no eigenvector decomposition approximation is required;
• it is completely compatible with the DFT at arbitrary size; and
• it can be directly implemented with fast Fourier transforms.
In Table 14.2, it is worth noting that a uniformly distributed random phase variable on [−π, π] has mean u = 0 and standard deviation σ = π/√3 ≈ 1.8138.
In addition, the symmetry of the phase and magnitude components of each transformed signal was measured. This measure compares the left half and the right half of the transform-domain signal and is defined in Eq. (14.28), where y denotes the transformed signal and corr denotes the correlation operation defined in Eq. (14.29), with E the mathematical expectation:

$$\mathrm{Sym} = \mathrm{corr}[y(33:2),\, y(33:64)], \quad (14.28)$$

$$\mathrm{corr}[A, B] = \frac{E[(A - u_A)(B - u_B)]}{\sigma_A \sigma_B}. \quad (14.29)$$
14.3 Encryption Applications
Given the transform matrix M, a random transform L can be generated easily (depending on the parameter square matrix P) by using Theorem 14.1. From now on, the parameter matrix P is called the key matrix, since different P matrices give different resulting transform matrices L, as Theorem 14.1 shows.
[Figure 14.3 panels: DFT response, DRFT response, DFRFT response, PDFT response, WPDFT response, RMDFT response, and magnitude of RMDFT.]
Figure 14.3 Randomized transform comparisons.
Such a randomized transform can be directly considered as an encryption system, as Fig. 14.4 illustrates. Although the opponent Oscar knows the encryption and decryption algorithms, i.e., Theorem 14.1, implemented as the RDOT (randomized DOT) and IRDOT (inverse randomized DOT) modules in Fig. 14.4, he is not able to determine the exact DOT that Alice used, because that DOT is random and depends only on the key matrix P. Therefore, without knowing the key matrix P, Oscar cannot perfectly restore the plaintext sent by Alice, whereas Bob is always able to reconstruct the plaintext by using the exact key matrix P to generate the paired inverse transform.
It is worth noting that the same idea is applicable to any DOT that conforms to Eq. (14.1). Besides the DHT, DWT, and DCT discussed in Section 14.2, qualified DOTs also include the discrete Hartley transform,1 the discrete sine transform,56 the discrete M-transform,38 and the discrete fractional Fourier transform,40 among others.
Since the plaintext sent by Alice is unknown and could be a word, a message, an image, or something else, it is important to discuss the encryption system according to the plaintext type. Depending on its dimension, digital data can be classified as
Figure 14.4 Secure communication model using Random DOT (RDOT).
• 1D digital data, such as a text string or audio data;
• 2D digital data, such as a digital gray image; or
• high-dimensional (3D and 3D+) digital data, such as digital video.
The remainder of this section discusses the digital data encryption scheme using the randomized DOT for the 1D, 2D, and 3D cases.
14.3.1 1D data encryption
Figure 14.4 gives a general picture of how secure communication between Alice and Bob over an insecure channel is realized. The two core modules, RDOT and IRDOT in Fig. 14.4, can be realized by using Theorem 14.1.
Consider methods for improving the security of an existing communication system using Theorem 14.1. Suppose that, at the beginning of this hypothetical scenario, Alice and Bob communicate over a public channel, which means that the RDOT and IRDOT modules in Fig. 14.4 degrade to a pair of DOTs and IDOTs (inverse discrete orthogonal transforms). The relation in Eq. (14.26) can then be easily adapted and implemented so that the old, insecure communication system becomes the new, secure communication system depicted in Fig. 14.4.
Equation (14.26) shows that the effect of using the new transform matrix is equivalent to first right-multiplying the input 1D signal by the key matrix P and then right-multiplying the transformed signal by P^{-1}. This makes it convenient to upgrade an existing transform system to an encryption system directly.
Figure 14.5 illustrates the relationship between the DOT/IDOT pair and the RDOT/IRDOT pair, where S(·) and S^{-1}(·) denote the original DOT and IDOT transform pair. As a result, the "Encryption" module and the "Decryption" module become RDOT and IRDOT, respectively. Implementing Theorem 14.1 on an existing qualified DOT system is equivalent to adding a preprocessing and a postprocessing step, as Fig. 14.5 shows.
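A toy run of this pipeline (all parameters are our own illustration, with M = H4/2 as the unitary DOT matrix) shows the asymmetry between the two receivers: Bob, holding the key matrix P, restores the plaintext exactly, while Oscar, who applies the plain inverse DOT without P, gets something else.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

M = [[v / 2 for v in row] for row in
     [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]]
Minv = transpose(M)                       # unitary DOT matrix: M^{-1} = M^T
P = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]   # key matrix
Q = transpose(P)                          # P^{-1} = P^T

x = [[3.0, 1.0, 4.0, 1.0]]               # plaintext signal
cipher = matmul(matmul(matmul(x, P), M), Q)        # RDOT: S(xP) . Q

bob = matmul(matmul(matmul(cipher, P), Minv), Q)   # IRDOT with the key
oscar = matmul(cipher, Minv)                       # plain IDOT, no key

assert all(abs(a - b) < 1e-12 for a, b in zip(x[0], bob[0]))   # Bob recovers x
assert any(abs(a - b) > 1e-6 for a, b in zip(x[0], oscar[0]))  # Oscar does not
```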
Three good examples implementing the above encryption system are the PDFT, WPDFT, and RMDFT seen in Fig. 14.3. All three new transform systems use different key matrices. Signals under these three transforms, i.e., y in Fig. 14.5 and the ciphertext in Fig. 14.4, are all asymmetric and almost uniformly distributed on
Figure 14.5 The flowchart of the encryption system based on existent transforms.
the transform domain. As a result, the transform-domain signal provides very little information about the plaintext, the rectangular wave in the time domain defined by Eq. (14.27).
14.3.2 2D data encryption and beyond
Previous discussions focused on cases involving 1D data; however, the same encryption concept extends naturally to 2D data. The extension can be made in two ways. The first approach is simply to use a matrix X instead of a vector x in the 1D system developed in Theorem 14.1; in other words, a 1D transform system can process 2D data (a matrix X) by transforming it row by row. The 1D transform system proposed in Section 14.3.1 satisfies this extension without any changes.
The second approach uses a conventional 2D transform, which transforms a matrix not only along its row vectors but also along its column vectors. Since calculating the 2D discrete transform is equivalent to computing the 1D discrete transform along each dimension of the input matrix, a 2D transform can be defined via its 1D transform by the relation in Eq. (14.30), where M is the transform matrix of the 1D case and X is an n × n matrix in the time domain. As a result, the 2D DFT,57 2D DCT,57 and 2D DHT16 can be defined as Eqs. (14.31)–(14.33) via Eq. (14.6), Eq. (14.9), and Eq. (14.4), respectively:

$$\begin{cases} S_{2D}(X) = MXM \\ S_{2D}^{-1}(Y) = M^{-1}YM^{-1}, \end{cases} \quad (14.30)$$

$$\begin{cases} S_{2D\text{-}DFT}(X) = FXF \\ S_{2D\text{-}DFT}^{-1}(Y) = F^*YF^*, \end{cases} \quad (14.31)$$

$$\begin{cases} S_{2D\text{-}DCT}(X) = CXC \\ S_{2D\text{-}DCT}^{-1}(Y) = C^TYC^T, \end{cases} \quad (14.32)$$

$$\begin{cases} S_{2D\text{-}DHT}(X) = HXH \\ S_{2D\text{-}DHT}^{-1}(Y) = H^TYH^T/n^2. \end{cases} \quad (14.33)$$
It is easy to verify that the flowchart for improving an existing DOT system depicted in Fig. 14.5 automatically accommodates the 2D discrete transforms. Applying the forward 2D DOT yields Eq. (14.34), and applying the inverse 2D DOT yields Eq. (14.35), which shows that the original time-domain signal is perfectly restored. Therefore, the encryption flowchart of Fig. 14.5 for the 1D case can be used directly for the 2D case, i.e., digital image encryption.
$$Y = S_{2D}(XP)\cdot P^{-1} = (M(XP)M)\cdot P^{-1}. \quad (14.34)$$

$$S_{2D}^{-1}(YP)\cdot P^{-1} = (M^{-1}(YP)M^{-1})\cdot P^{-1} = M^{-1}\big((M(XP)M\cdot P^{-1})P\big)M^{-1}\cdot P^{-1} = (M^{-1}M)(XP)\big(M(P^{-1}P)M^{-1}\big)\cdot P^{-1} = XPP^{-1} = X. \quad (14.35)$$
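Equations (14.34) and (14.35) round-trip exactly, which can be checked on a toy 4 × 4 "image" of our own choosing, with M = H4 and a permutation key P:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

H4 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
M = [[float(v) for v in row] for row in H4]
Minv = [[v / 4.0 for v in row] for row in H4]     # H4^{-1} = H4^T/4 = H4/4
P = [[0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
Pinv = transpose(P)                               # permutation: P^{-1} = P^T

Xf = [[52.0, 55.0, 61.0, 66.0],
      [70.0, 61.0, 64.0, 73.0],
      [63.0, 59.0, 55.0, 90.0],
      [67.0, 61.0, 68.0, 104.0]]                  # toy plaintext "image"

# Eq. (14.34): Y = S_2D(XP) P^{-1} = (M (XP) M) P^{-1}
Y = matmul(matmul(matmul(M, matmul(Xf, P)), M), Pinv)
# Eq. (14.35): the inverse restores X
X_restored = matmul(matmul(matmul(Minv, matmul(Y, P)), Minv), Pinv)
assert all(abs(X_restored[i][j] - Xf[i][j]) < 1e-9
           for i in range(4) for j in range(4))
```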
One might ask whether the proposed encryption method for 1D cases can also be used for high-dimensional digital data. The answer is affirmative, because high-dimensional data can always be decomposed into a set of lower-dimensional data.
For example, a digital video clip is normally stored as a sequence of digital images. In other words, a digital video (3D data) can be considered as a set of digital images (2D data). From this point of view, the proposed encryption method in Fig. 14.5 can therefore be applied even to high-dimensional digital data.
14.3.3 Examples of image encryption
Recall the three important properties of a good encryption algorithm mentioned in Section 14.1.2:
(1) a key space sufficiently large that an exhaustive key search cannot complete in a reasonable time, for example, ten years under current computing capacity;
(2) a sufficiently complex confusion property, identified by Shannon, which refers to making the relationship between the key and the ciphertext very complex and involved;53 and
(3) an effective diffusion property, also identified by Shannon, which refers to dissipating the redundancy in the statistics of the plaintext into the statistics of the ciphertext.53
14.3.3.1 Key space analysis
Theoretically, the key space of the proposed encryption method depends entirely on the key matrix P, and this dependence has a two-fold meaning. First, broadly, the number of qualified key matrices P is the key space; from this point of view, there are infinitely (uncountably) many encryption keys. Second, narrowly, the key space depends on the size and type of the key matrix P; without specifying the size and type of P, the key space is not meaningful in practice.
For example, assume that the input digital data is an n × n gray image. If the key matrix is restricted to the unitary permutation matrix set U (see Definition 14.1), then the number of allowed keys is n!. For a 256 × 256 gray image (this size is commonly used for key-space analysis and is much smaller than standard digital photos), the allowed key space is 256! ≈ 2^1684, i.e., about 1684 bits, which is sufficiently large. It is worth noting that current encryption algorithms and ciphers consider a key space of 2^256 large enough to resist brute-force attacks. If P is restricted to the generalized permutation matrix set G (see Definition 14.2), then the number of allowed keys is infinite because the weights can be any nonzero real numbers.
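The key-space count above is easy to check in code (our own arithmetic, not from the book): log2(256!) is about 1684, versus the 256-bit key spaces of current ciphers.

```python
import math

# bit_length() of n! is floor(log2(n!)) + 1, the number of bits needed
# to write the permutation-key count 256! in binary.
bits = math.factorial(256).bit_length()
assert bits == 1684                      # 256! ~ 2^1684

# For comparison, a 2^256 key space is only 256 bits:
assert (2 ** 256).bit_length() - 1 == 256
```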
14.3.3.2 Confusion property
The confusion property is desirable because ciphertexts generated by different keys then have the same statistics, in which case the statistics of a ciphertext give no information about the key used.53 This section argues that even the naïve image encryption system presented in Fig. 14.5 has promising confusion and diffusion properties. The confusion property of the proposed system is difficult to prove, so it is illustrated with various images instead. The diffusion property can be shown by calculating the number-of-pixel change rate (NPCR) of the system, so a proof is given directly.
Figures 14.6–14.8 depict the confusion property of the system from different aspects using various plaintext images and P matrices. It is worth noting that the transforms these figures refer to are the 2D transforms defined in Eqs. (14.31)–(14.33), and the random transform is taken with respect to the general form of Eq. (14.34).
In Fig. 14.6, the gray 256 × 256 "Lena" image is used as the plaintext image. The ciphertexts for each 2D transform under the unitary permutation matrix PU, the generalized permutation matrix PG, and the full-rank random matrix R are shown in the odd rows. In the even rows, histograms are plotted below their corresponding images. From the perspective of visual inspection and histogram analysis, it is clear that the various key matrices P generate similar statistics.
In Fig. 14.7, three 256 × 256 gray images from the USC-SIPI image database demonstrate the effect of using various plaintext images and different types of P matrices. It is not difficult to see that the ciphertext images have coefficient histograms similar to those in Fig. 14.6, although both the P matrices and the plaintext images are different. This indicates that the statistics of the ciphertext images provide very limited information about the key, and thus the proposed system has the confusion property.
Figure 14.6 Image encryption using the proposed random transform—Part I: influences of different types of random matrix P. (See note for Fig. 14.7 on p. 467.)

Figure 14.8 investigates the transform-coefficient histograms of the ciphertext image in the first row of Fig. 14.7. Note that the ciphertext images are sized 256 × 256. Instead of examining the coefficient histogram of the whole ciphertext image, the image was divided into sixteen non-overlapping 64 × 64 blocks, and the coefficient histogram of each block was inspected. These block coefficient histograms are shown in the second column of Fig. 14.8. The third and fourth columns show the mean and standard deviation, respectively, of these block histograms. The block histograms show that different image blocks have histograms similar to the overall histogram, which implies that the ciphertext images present a degree of self-similarity. In other words, an invader would be confused because different parts of the ciphertext image have more or less the same statistics.
14.3.3.3 Diffusion property
Conventionally, the diffusion property can easily be tested by the NPCR,46,58–60 which counts the number of pixels that change in the ciphertext when only one pixel changes in the plaintext. The higher the NPCR percentage, the better the diffusion property.
Mathematically, the NPCR of an encrypted image can be defined as follows:
Figure 14.7 Image encryption using the proposed random transform—Part II: using different input images. Note: (1) The histograms of O are plotted using 256 bins uniformly distributed on [0, 255], the range of a gray image. (2) The histograms of PU are plotted using two bins, i.e., 0 and 1; the number of pixels in each bin is expressed after taking the base-10 logarithm. (3) The histograms of PG and R are plotted using 256 bins uniformly distributed on [−1, 1]; the number of pixels in each bin is expressed after taking the base-10 logarithm. (4) The histograms of the ciphertext images of the DCT and WHT are plotted using 256 bins whose base-10 logarithmic lengths are uniformly distributed on [−log m, log m], where m is the maximum absolute transform-domain coefficient. (5) The histograms of the ciphertext images of the DFT are plotted with respect to magnitude and phase, respectively. The magnitude histogram is plotted using 256 bins whose base-10 logarithmic lengths are uniformly distributed on [0, log m], where m is the maximum magnitude. The phase histogram is plotted using 256 bins uniformly distributed on [−π, π].
Definition 14.3:

$$\mathrm{NPCR} = \sum_{i,j} D_{i,j}/T \times 100\%, \qquad D_{i,j} = \begin{cases} 1, & \text{if } C^1_{i,j} \neq C^2_{i,j} \\ 0, & \text{if } C^1_{i,j} = C^2_{i,j}, \end{cases} \quad (14.36)$$
Figure 14.8 Image encryption using the proposed random transform—Part III: randomness of the transform coefficients.
where C1 and C2 are the ciphertexts before and after changing one pixel in the plaintext, respectively; D has the same size as image C1; and T denotes the total number of pixels in C1.
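Equation (14.36) transcribes directly into a small function (our own transcription), applied here to a toy pair of 2 × 2 "ciphertexts" that differ in three of four entries:

```python
def npcr(C1, C2):
    """Number-of-pixel change rate of Eq. (14.36), in percent."""
    rows, cols = len(C1), len(C1[0])
    T = rows * cols
    changed = sum(1 for i in range(rows) for j in range(cols)
                  if C1[i][j] != C2[i][j])
    return 100.0 * changed / T

A = [[1, 2], [3, 4]]
B = [[1, 9], [8, 7]]      # three of four entries differ
assert npcr(A, B) == 75.0
assert npcr(A, A) == 0.0
```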
The diffusion property is one characteristic of a 2D transform. For a 2D transform system with transform matrix M, if none of the elements of M is zero (note that all DFT, DCT, and DHT matrices are of this type), then even a one-element change in the input matrix X leads to a completely different resulting matrix Y, according to Lemma 14.1 with A = B = M.
Lemma 14.1: Suppose that A and B are both n × n transform matrices with nonzero elements, i.e., A_{i,j} ≠ 0 and B_{i,j} ≠ 0 for every subscript pair (i, j), and that the corresponding 2D transform S(·) is defined by Eq. (14.30). Suppose that x and y are two n × n matrices with

$$y_{i,j} = \begin{cases} z, & \text{if } i = r \text{ and } j = c \\ x_{i,j}, & \text{otherwise}, \end{cases}$$

where r and c are constant integers between 1 and n, and z is a constant with z ≠ x_{r,c}. Then, for every subscript pair (i, j), [S(x)]_{i,j} ≠ [S(y)]_{i,j}.
Proof: For any subscript pair (i, j),

$$[S(x)]_{i,j} = [AxB]_{i,j} = \sum_{k=1}^{n} [Ax]_{i,k}\, B_{k,j} = \sum_{k=1}^{n}\left(\sum_{m=1}^{n} A_{i,m}\, x_{m,k}\right) B_{k,j} = A_{i,r}\, x_{r,c}\, B_{c,j} + \sum_{(m,k)\neq(r,c)} A_{i,m}\, x_{m,k}\, B_{k,j}. \quad (14.37)$$
Similarly,

$$[S(y)]_{i,j} = A_{i,r}\, y_{r,c}\, B_{c,j} + \sum_{(m,k)\neq(r,c)} A_{i,m}\, y_{m,k}\, B_{k,j} = A_{i,r}\, y_{r,c}\, B_{c,j} + \sum_{(m,k)\neq(r,c)} A_{i,m}\, x_{m,k}\, B_{k,j}. \quad (14.38)$$
Then,

$$[S(x)]_{i,j} - [S(y)]_{i,j} = A_{i,r}\,(x_{r,c} - y_{r,c})\, B_{c,j} \neq 0. \quad (14.39)$$
Consequently, as long as the transform matrix M and Z = PMP^{-1} have no zero elements, the encrypted image produced by the proposed system has 100% NPCR, the theoretical maximum. This is stated in the following theorem:
Theorem 14.2: The NPCR of the proposed 2D encryption system described in Fig. 14.5 is 100% as long as none of the elements of Z = PMP^{-1} and M is zero, where P is an invertible matrix and M is the transform matrix of the 2D transform system S(·), as in Eq. (14.34).
Proof: Suppose that x and X are two plaintexts with a one-pixel difference, as in Lemma 14.1, and that y and Y are the corresponding ciphertexts obtained with the proposed encryption system. The ciphertext can be written as

$$y = S(xP)P^{-1} = MxPMP^{-1} = MxZ, \quad (14.40)$$
and, correspondingly,

$$Y = S(XP)P^{-1} = MXZ. \quad (14.41)$$
If both M and Z contain only nonzero elements, then Lemma 14.1 applies directly. Therefore, y_{i,j} ≠ Y_{i,j} for every (i, j), and, equivalently, D_{i,j} = 1 for every (i, j). As a result,

NPCR = 100%.
Remarks 14.1: The nonzero conditions imposed on M and Z are automatically satisfied if P is a unitary permutation matrix PU or a generalized permutation matrix PG.
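Theorem 14.2 can be spot-checked numerically on a toy 4 × 4 setup of our own choosing: with M = H4 and a permutation key P, the matrix Z = PMP^{-1} has no zero entries, so a one-pixel change in the plaintext flips every ciphertext coefficient, i.e., NPCR = 100%.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

M = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
P = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
Z = matmul(matmul(P, M), transpose(P))          # Z = P M P^{-1}
assert all(v != 0 for row in Z for v in row)    # nonzero condition holds

x = [[10, 20, 30, 40], [50, 60, 70, 80],
     [15, 25, 35, 45], [55, 65, 75, 85]]        # toy plaintext
X = [row[:] for row in x]
X[2][1] += 1                                    # one-pixel change

y = matmul(matmul(M, x), Z)                     # Eq. (14.40): y = M x Z
Y = matmul(matmul(M, X), Z)
changed = sum(y[i][j] != Y[i][j] for i in range(4) for j in range(4))
assert changed == 16                            # all 16 entries differ: 100%
```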
References
1. C.-Y. Hsu and J.-L. Wu, "Fast computation of discrete Hartley transform via Walsh–Hadamard transform," Electron. Lett. 23 (9), 466–468 (1987).
2. S. Sridharan, E. Dawson, and B. Goldburg, "Speech encryption in the transform domain," Electron. Lett. 26, 655–657 (1990).
3. A. H. Delaney and Y. Bresler, "A fast and accurate Fourier algorithm for iterative parallel-beam tomography," IEEE Trans. Image Processing 5, 740–753 (1996).
4. T. M. Foltz and B. M. Welsh, "Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms," IEEE Trans. Image Processing 8, 640–651 (1999).
5. I. Djurovic and V. V. Lukin, "Robust DFT with high breakdown point for complex-valued impulse noise environment," IEEE Signal Processing Lett. 13, 25–28 (2006).
6. G. A. Shah and T. S. Rathore, "A new fast Radix-2 decimation-in-frequency algorithm for computing the discrete Hartley transform," in Proc. First Int. Conf. on Computational Intelligence, Communication Systems and Networks (CICSYN '09), 363–368 (2009).
7. J. Bruce, "Discrete Fourier transforms, linear filters, and spectrum weighting," IEEE Trans. Audio and Electroacoustics 16, 495–499 (1968).
8. C. J. Zarowski, M. Yunik, and G. O. Martens, "DFT spectrum filtering," IEEE Trans. Acoustics, Speech and Signal Processing 36, 461–470 (1988).
9. R. Kresch and N. Merhav, "Fast DCT domain filtering using the DCT and the DST," IEEE Trans. Image Processing 8, 821–833 (1999).
10. V. F. Candela, A. Marquina, and S. Serna, "A local spectral inversion of a linearized TV model for denoising and deblurring," IEEE Trans. Image Processing 12, 808–816 (2003).
11. A. Foi, V. Katkovnik, and K. Egiazarian, "Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images," IEEE Trans. Image Processing 16, 1395–1411 (2007).
12. N. Gupta, M. N. S. Swamy, and E. I. Plotkin, "Wavelet domain-based video noise reduction using temporal discrete cosine transform and hierarchically adapted thresholding," IET Image Processing 1, 2–12 (2007).
13. Y. Huang, H. M. Dreizen, and N. P. Galatsanos, "Prioritized DCT for compression and progressive transmission of images," IEEE Trans. Image Processing 1, 477–487 (1992).
14. R. Neelamani, R. de Queiroz, F. Zhigang, S. Dash, and R. G. Baraniuk, "JPEG compression history estimation for color images," IEEE Trans. Image Processing 15, 1365–1378 (2006).
15. K. Yamatani and N. Saito, "Improvement of DCT-based compression algorithms using Poisson's equation," IEEE Trans. Image Processing 15, 3672–3689 (2006).
16. W. K. Pratt, J. Kane, and H. C. Andrews, "Hadamard transform image coding," Proc. IEEE 57, 58–68 (1969).
17. O. K. Ersoy and A. Nouira, "Image coding with the discrete cosine-III transform," IEEE J. Selected Areas in Communications 10, 884–891 (1992).
18. D. Sun and W.-K. Cham, "Postprocessing of low bit-rate block DCT coded images based on a fields of experts prior," IEEE Trans. Image Processing 16, 2743–2751 (2007).
19. T. Suzuki and M. Ikehara, "Integer DCT based on direct-lifting of DCT-IDCT for lossless-to-lossy image coding," IEEE Trans. Image Processing 19, 2958–2965 (2010).
20. S. Weinstein and P. Ebert, "Data transmission by frequency-division multiplexing using the discrete Fourier transform," IEEE Trans. Communication Technology 19, 628–634 (1971).
21. G. Bertocci, B. Schoenherr, and D. Messerschmitt, "An approach to the implementation of a discrete cosine transform," IEEE Trans. Communications 30, 635–641 (1982).
22. E. Feig, "Practical aspects of DFT-based frequency division multiplexing for data transmission," IEEE Trans. Communications 38, 929–932 (1990).
23. S. Toledo, "On the communication complexity of the discrete Fourier transform," IEEE Signal Processing Lett. 3, 171–172 (1996).
24. S. Hara, A. Wannasarnmaytha, Y. Tsuchida, and N. Morinaga, "A novel FSK demodulation method using short-time DFT analysis for LEO satellite communication systems," IEEE Trans. Vehicular Technology 46, 625–633 (1997).
25. L. Hyesook and E. E. Swartzlander, Jr., "Multidimensional systolic arrays for the implementation of discrete Fourier transforms," IEEE Trans. Signal Processing 47, 1359–1370 (1999).
26. H. Bogucka, "Application of the joint discrete Hadamard-inverse Fourier transform in a MC-CDMA wireless communication system—performance and complexity studies," IEEE Trans. Wireless Communications 3, 2013–2018 (2004).
27. S. M. Phoong, C. Yubing, and C. Chun-Yang, "DFT-modulated filterbank transceivers for multipath fading channels," IEEE Trans. Signal Processing 53, 182–192 (2005).
28. T. Ran, D. Bing, Z. Wei-Qiang, and W. Yue, "Sampling and sampling rate conversion of band limited signals in the fractional Fourier transform domain," IEEE Trans. Signal Processing 56, 158–171 (2008).
29. A. N. Akansu and H. Agirman-Tosun, "Generalized discrete Fourier transform with nonlinear phase," IEEE Trans. Signal Processing 58, 4547–4556 (2010).
30. S. Sridharan, E. Dawson, and B. Goldburg, "Fast Fourier transform based speech encryption system," IEE Proceedings I (Communications, Speech and Vision) 138, 215–223 (1991).
31. Z. Yicong, K. Panetta, and S. Agaian, "Image encryption using discrete parametric cosine transform," in Conf. Record of the Forty-Third Asilomar Conf. on Signals, Systems and Computers, 395–399 (2009).
32. J. Lang, R. Tao, and Y. Wang, "Image encryption based on the multiple-parameter discrete fractional Fourier transform and chaos function," Optics Communications 283, 2092–2096 (2010).
33. Z. Liu, L. Xu, T. Liu, H. Chen, P. Li, C. Lin, and S. Liu, "Color image encryption by using Arnold transform and color-blend operation in discrete cosine transform domains," Optics Communications 284 (1), 123–128 (2010).
34. B. Bhargava, C. Shi, and S.-Y. Wang, "MPEG video encryption algorithms," Multimedia Tools and Applications 24, 57–79 (2004).
35. S. Yang and S. Sun, "A video encryption method based on chaotic maps in DCT domain," Progress in Natural Science 18, 1299–1304 (2008).
36. J. Cuzick and T. L. Lai, "On random Fourier series," Trans. Amer. Math. Soc. 261, 53–80 (1980).
37. P. J. S. G. Ferreira, "A group of permutations that commute with the discrete Fourier transform," IEEE Trans. Signal Processing 42, 444–445 (1994).
38. A. Dmitryev and V. Chernov, "Two-dimensional discrete orthogonal transforms with the 'noise-like' basis functions," in Proc. Int. Conf. GraphiCon 2000, 36–41 (2000).
39. Z. Liu and S. Liu, “Randomization of the Fourier transform,” Optics Letters 32, 478–480 (2007).
40. P. Soo-Chang and H. Wen-Liang, “Random discrete fractional Fourier transform,” IEEE Signal Processing Letters 16, 1015–1018 (2009).
41. M. T. Hanna, N. P. A. Seif, and W. A. E. M. Ahmed, “Hermite–Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices,” IEEE Transactions on Circuits and Systems I: Regular Papers 51, 2245–2254 (2004).
42. D. Stinson, Cryptography: Theory and Practice, CRC Press (2006).
43. M. Yang, N. Bourbakis, and L. Shujun, “Data-image-video encryption,” IEEE Potentials 23, 28–34 (2004).
44. T. Chuang and J. Lin, “A new multiresolution approach to still image encryption,” Pattern Recognition and Image Analysis 9, 431–436 (1999).
45. Y. Wu, J. P. Noonan, and S. Agaian, “Binary data encryption using the Sudoku block,” in Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC 2010) (2010).
46. G. Chen, Y. Mao, and C. K. Chui, “A symmetric image encryption scheme based on 3D chaotic cat maps,” Chaos, Solitons & Fractals 21, 749–761 (2004).
47. L. Zhang, X. Liao, and X. Wang, “An image encryption approach based on chaotic maps,” Chaos, Solitons & Fractals 24, 759–765 (2005).
48. W. Xiaolin and P. W. Moo, “Joint image/video compression and encryption via high-order conditional entropy coding of wavelet coefficients,” in Proc. IEEE International Conference on Multimedia Computing and Systems 2, 908–912 (1999).
49. S. S. Maniccam and N. G. Bourbakis, “Image and video encryption using SCAN patterns,” Pattern Recognition 37, 725–737 (2004).
50. N. Bourbakis and A. Dollas, “SCAN-based compression-encryption-hiding for video on demand,” IEEE Multimedia 10, 79–87 (2003).
51. S. Maniccam and N. Bourbakis, “Lossless image compression and encryption using SCAN,” Pattern Recognition 34, 1229–1245 (2001).
52. R. Sutton, Secure Communications: Applications and Management, John Wiley & Sons, New York (2002).
53. C. E. Shannon, “Communication theory of secrecy systems,” Bell System Technical Journal 28, 656–715 (1949).
54. L. Smith, Linear Algebra, 3rd ed., Springer-Verlag, New York (1998).
55. D. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear Systems Theory, Princeton University Press, Princeton (2005).
56. A. Gupta and K. R. Rao, “An efficient FFT algorithm based on the discrete sine transform,” IEEE Transactions on Signal Processing 39, 486–490 (1991).
57. R. González and R. Woods, Digital Image Processing, Prentice Hall, New York (2008).
58. C. K. Huang and H. H. Nien, “Multi chaotic systems based pixel shuffle for image encryption,” Optics Communications 282, 2123–2127 (2009).
59. H. S. Kwok and W. K. S. Tang, “A fast image encryption system based on chaotic maps with finite precision representation,” Chaos, Solitons & Fractals 32, 1518–1529 (2007).
60. Y. Mao, G. Chen, and S. Lian, “A novel fast image encryption scheme based on 3D chaotic Baker maps,” International Journal of Bifurcation and Chaos 14 (10), 3613–3624 (2003).
Prof. Sos S. Agaian (Fellow SPIE, Fellow AAAS, Foreign Member, National Academy of Sciences of the Republic of Armenia) received the M.S. degree (summa cum laude) in mathematics and mechanics from Yerevan State University, Yerevan, Armenia, the Ph.D. degree in mathematics and physics from the Steklov Institute of Mathematics, Academy of Sciences of the USSR, and the Doctor of Engineering Sciences degree from the Academy of Sciences of the USSR, Moscow, Russia. He is currently the Peter T. Flawn Distinguished Professor in the College of Engineering at The University of Texas at San Antonio. He has authored more than 450 scientific papers and 4 books, and holds 13 patents. He is an associate editor of several journals. His current research interests include signal/image processing and systems, information security, mobile and medical imaging, and secure communication.
Prof. Hakob Sarukhanyan received the M.Sc. degree in mathematics from Yerevan State University, Armenia, in 1973, and the Ph.D. degree in technical sciences and the Doctor Sci. degree in mathematical sciences from the National Academy of Sciences of Armenia (NAS RA) in 1982 and 1999, respectively. He was a Junior and then Senior Researcher at the Institute for Informatics and Automation Problems of NAS RA from 1973 to 1993, where he is currently head of the Digital Signal and Image Processing Laboratory. He was also a visiting professor at the Tampere International Center for Signal Processing, Finland, from 1999 to 2007. His research interests include signal/image processing, wireless communications, combinatorial theory, spectral techniques, and object recognition. He has authored more than 90 scientific papers.
Prof. Karen Egiazarian received the M.Sc. degree in mathematics from Yerevan State University, Armenia, in 1981, the Ph.D. degree in physics and mathematics from Moscow State University, Moscow, Russia, in 1986, and the D.Tech. degree from Tampere University of Technology, Finland, in 1994. He has been a Senior Researcher with the Department of Digital Signal Processing, Institute of Information Problems and Automation, National Academy of Sciences of Armenia. Since 1996, he has been an Assistant Professor with the DSP/TUT, where he is currently a Professor leading the Transforms and Spectral Methods group. His research interests are in the areas of applied mathematics, signal/image processing, and digital logic.
Prof. Jaakko Astola (Fellow SPIE, Fellow IEEE) received the B.Sc., M.Sc., Licentiate, and Ph.D. degrees in mathematics (specializing in error-correcting codes) from Turku University, Finland, in 1972, 1973, 1975, and 1978, respectively. Since 1993 he has been Professor of Signal Processing and Director of the Tampere International Center for Signal Processing, leading a group of about 60 scientists. He was nominated Academy Professor by the Academy of Finland (2001–2006). His research interests include signal processing, coding theory, spectral techniques, and statistics.