
M-TRANSFORM ALGORITHM

A new orthogonal transform, the Malakooti transform (M-transform), analogous to the Hadamard transform, has been developed to represent time series signals with a set of coefficients called the M coefficients. This set of coefficients contains useful information about the spectral characteristics of the underlying time series and can be used for data transmission and compression. Many time series signals are highly redundant; speech, image, and other periodic signals fall into this category. The M-transform representation enables one to represent the desired signal with fewer coefficients, resulting in a saving of transmission bandwidth and memory.

This transform, like the Hadamard transform, has a complete orthonormal set and has an important role in signal and image processing applications. It has been shown [1] that the time series signals obtained from a two-dimensional shape can be represented with a few coefficients for pattern recognition and shape classification. Similarly, speech signals are represented by a set of coefficients for spectral estimation and word recognition. In all these cases, the right singular vectors of the correlation matrix are used as an orthogonal basis for the solution space. For this reason and many others, unitary transforms or an orthonormal basis, in particular a complete orthonormal basis, should receive more attention than other transforms which have no unitary property.

Complete Orthonormal Set

A set of linearly independent vectors $v_1, v_2, \ldots, v_n$ is said to be orthonormal if it is self-reciprocal, i.e., if the vectors are all mutually orthogonal and have unit norm:

$$v_i^* v_j = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}$$   (EQ-1)

If time series signals $X$ and $Y$ are represented by a linear combination of a set of orthonormal vectors,

$$X = \sum_{i=1}^{n} \alpha_i v_i$$   (EQ-2)

and

$$Y = \sum_{i=1}^{n} \beta_i v_i,$$   (EQ-3)

then their inner product $\langle X, Y \rangle$ is easy to find. The inner product of X and Y is obtained as

$$\langle X, Y \rangle = \left\langle \sum_{i=1}^{n} \alpha_i v_i , \sum_{j=1}^{n} \beta_j v_j \right\rangle$$   (EQ-4)
$$= \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \beta_j^* \, v_i^* v_j = \sum_{i=1}^{n} \alpha_i \beta_i^* = \langle \alpha, \beta \rangle.$$
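
A minimal NumPy sketch of this property (the basis, the signal length, and the variable names are illustrative assumptions, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# An arbitrary complete orthonormal basis: the columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# Two signals and their expansion coefficients alpha, beta (EQ-2), (EQ-3).
X = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
alpha = Q.conj().T @ X      # alpha_i = v_i^* X
beta = Q.conj().T @ Y       # beta_i  = v_i^* Y

# (EQ-4): the signal-domain inner product equals the coefficient-domain inner product.
print(np.allclose(np.vdot(Y, X), np.vdot(beta, alpha)))   # True
```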

An orthonormal set is said to be complete if any additional non-zero orthonormal vector is superfluous. If a signal is approximated by a linear combination of the first m vectors of a complete orthonormal set of dimension n, then the norm of the error can be reduced by choosing m sufficiently large. In the next section, a method for generating the complete orthonormal sets of vectors, the M-transform vectors, with the eigenanalysis of the spanned space is presented.

Generation of M-transform Matrix

Assume that the order-1 M-transform matrix, $M_0$, is equal to one,

$$M_0 = 1,$$   (EQ-5)

and the order-2 M-transform matrix, $M_1$, is formed according to

$$M_1 = \begin{bmatrix} aM_0 & abM_0 \\ -abM_0 & aM_0 \end{bmatrix} = \begin{bmatrix} a & ab \\ -ab & a \end{bmatrix},$$   (EQ-6)

where a and b are constant parameters.

The matrix $M_1$ is a 2 x 2 anti-symmetric unitary matrix,

$$M_1^T M_1 = M_1 M_1^T = cI,$$   (EQ-7)

where the matrix I is a 2 x 2 identity matrix and the constant parameter c is equal to the determinant of $M_1$. Thus,

$$c = a^2(1 + b^2)$$   (EQ-8)

and the inverse of $M_1$ is given as

$$M_1^{-1} = \frac{M_1^T}{c}.$$   (EQ-9)

Similarly, the order-3 M-transform matrix, $M_2$, can be obtained according to

$$M_2 = \begin{bmatrix} aM_1 & abM_1 \\ -abM_1 & aM_1 \end{bmatrix}.$$   (EQ-10)

The matrix $M_2$ is a 4 x 4 anti-symmetric unitary matrix,

$$M_2^T M_2 = M_2 M_2^T = c^2 I,$$   (EQ-11)

where the matrix I is a 4 x 4 identity matrix, c is given in (EQ-8), and the inverse of $M_2$ is calculated according to

$$M_2^{-1} = \frac{M_2^T}{c^2}.$$   (EQ-12)

Without loss of generality, the $2^k$ x $2^k$ M-transform matrix, $M_k$, can be obtained from

$$M_k = \begin{bmatrix} aM_{k-1} & abM_{k-1} \\ -abM_{k-1} & aM_{k-1} \end{bmatrix},$$   (EQ-13)

and the inverse of $M_k$ is given according to

$$M_k^{-1} = \frac{M_k^T}{c^k}.$$   (EQ-14)

Using the Kronecker product notation

$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{n1}B & a_{n2}B & \cdots & a_{nn}B \end{bmatrix},$$   (EQ-15)

the M-transform matrices can be written according to

$$M_1 = M_1 \otimes M_0 = \begin{bmatrix} aM_0 & abM_0 \\ -abM_0 & aM_0 \end{bmatrix},$$   (EQ-16)

and

$$M_2 = M_1 \otimes M_1$$   (EQ-17)
$$= M_1 \otimes (M_1 \otimes M_0) = (M_1 \otimes M_1) \otimes M_0 = M_1^{(2)} \otimes M_0 = M_1^{(1)} \otimes M_1,$$

where $M_1^{(2)}$ is the Kronecker power 2 of $M_1$ and the symbol $\otimes$ denotes the Kronecker product. Similarly,

$$M_3 = M_1 \otimes M_2$$   (EQ-18)
$$= M_1 \otimes M_1^{(2)} \otimes M_0 = M_1^{(3)} \otimes M_0 = M_1^{(2)} \otimes M_1,$$

and, in general,

$$M_k = M_1 \otimes M_{k-1}$$   (EQ-19)
$$= M_1 \otimes M_1^{(k-1)} \otimes M_0 = M_1^{(k)} \otimes M_0 = M_1^{(k-1)} \otimes M_1.$$
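
The equivalence between the block recursion of (EQ-13) and the Kronecker-power form of (EQ-19) can be checked directly (a self-contained illustrative sketch):

```python
import numpy as np
from functools import reduce

a, b, k = 1.0, 2.0, 3
M1 = np.array([[ a,      a * b],
               [-a * b,  a    ]])

# (EQ-19): M_k is the k-th Kronecker power of M_1 (the factor M_0 = 1 is absorbed).
M_k_kron = reduce(np.kron, [M1] * k)

# The same matrix from the block recursion of (EQ-13).
M_k_block = np.array([[1.0]])
for _ in range(k):
    M_k_block = np.block([[ a * M_k_block,      a * b * M_k_block],
                          [-a * b * M_k_block,  a * M_k_block    ]])

print(np.allclose(M_k_kron, M_k_block))   # True
```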

It is shown below, in (EQ-64)-(EQ-67), that the eigenvalues $\lambda_i$ of a 4 x 4 matrix D,

$$D = A \otimes B,$$   (EQ-20)

can be calculated from the product of the eigenvalues of B, $\mu_i$, and the eigenvalues of A, $\gamma_i$, according to

$$\lambda_1 = \gamma_1 \mu_1,$$   (EQ-21)
$$\lambda_2 = \gamma_1 \mu_2,$$   (EQ-22)
$$\lambda_3 = \gamma_2 \mu_1,$$   (EQ-23)
$$\lambda_4 = \gamma_2 \mu_2.$$   (EQ-24)

Thus, the eigenvalues of the M-transform matrices can be obtained from the recursive algorithm proposed in the following section.
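
The Kronecker eigenvalue property of (EQ-21)-(EQ-24) is easy to confirm numerically (the test matrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

gamma = np.linalg.eigvals(A)   # eigenvalues of A
mu = np.linalg.eigvals(B)      # eigenvalues of B

# The eigenvalues of A kron B are all pairwise products gamma_i * mu_j.
products = np.sort_complex(np.round(np.outer(gamma, mu).ravel(), 8))
direct = np.sort_complex(np.round(np.linalg.eigvals(np.kron(A, B)), 8))
print(np.allclose(products, direct))   # True
```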

Eigenvalues-Eigenvectors of M-transform Matrices

Assume that the constant parameters a and b are given as a = 1 and b = 2. Thus,

$$M_1 = \begin{bmatrix} 1 & 2 \\ -2 & 1 \end{bmatrix},$$   (EQ-25)

and

$$M_2 = \begin{bmatrix} 1 & 2 & 2 & 4 \\ -2 & 1 & -4 & 2 \\ -2 & -4 & 1 & 2 \\ 4 & -2 & -2 & 1 \end{bmatrix}.$$   (EQ-26)

The eigenvalues, $\lambda_i^{(1)}$, and eigenvectors, $X_i^{(1)}$, of $M_1$ are

$$\lambda_1^{(1)} = 1 + j2,$$   (EQ-27)
$$\lambda_2^{(1)} = 1 - j2,$$   (EQ-28)
$$X_1^{(1)} = \begin{bmatrix} j0.7071 \\ -0.7071 \end{bmatrix},$$   (EQ-29)
$$X_2^{(1)} = \begin{bmatrix} 0.7071 \\ -j0.7071 \end{bmatrix},$$   (EQ-30)

where the eigenvalues of $M_1$ are complex conjugates of each other. Using the Kronecker product relationship between $M_1$ and $M_2$, Equation (EQ-20), the eigenvalues of $M_2$, $\lambda_i^{(2)}$, are calculated according to

$$\lambda_1^{(2)} = \lambda_1^{(1)} \lambda_1^{(1)} = (1 + j2)(1 + j2) = -3 + j4,$$   (EQ-31)
$$\lambda_2^{(2)} = \lambda_2^{(1)} \lambda_1^{(1)} = (1 - j2)(1 + j2) = 5,$$   (EQ-32)
$$\lambda_3^{(2)} = \lambda_1^{(1)} \lambda_2^{(1)} = (1 + j2)(1 - j2) = 5 = \lambda_2^{(2)*},$$   (EQ-33)
$$\lambda_4^{(2)} = \lambda_2^{(1)} \lambda_2^{(1)} = (1 - j2)(1 - j2) = -3 - j4 = \lambda_1^{(2)*}.$$   (EQ-34)

The matrix $M_2$ has two pairs of complex conjugate eigenvalues. Using the complex conjugate property, half of the eigenvalues of $M_2$ can be obtained without any calculation.
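
For the a = 1, b = 2 example this can be confirmed numerically (a short illustrative check, not part of the original text):

```python
import numpy as np

M1 = np.array([[ 1,  2],
               [-2,  1]])
M2 = np.kron(M1, M1)          # (EQ-17), identical to (EQ-26)

eigs = np.sort_complex(np.round(np.linalg.eigvals(M2), 8))
expected = np.sort_complex(np.array([-3 + 4j, 5, 5, -3 - 4j]))
print(np.allclose(eigs, expected))   # True, matching (EQ-31)-(EQ-34)
```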

In general, the eigenvalues of the $2^L$ x $2^L$ transform, $M_L$, are calculated recursively from the proposed algorithm as follows.

1. Calculate the eigenvalues of $M_1$:

$$\lambda_1^{(1)} = a + jab,$$   (EQ-35)
$$\lambda_2^{(1)} = a - jab.$$   (EQ-36)

2. For k = 2 to L do

$$N = 2^k$$   (EQ-37)

for i = 1 to N/2 do

$$\lambda_i^{(k)} = \lambda_i^{(k-1)} \lambda_1^{(1)}$$   (EQ-38)
$$\lambda_{N-i+1}^{(k)} = \lambda_i^{(k)*}$$   (EQ-39)

end do

end do
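
A compact implementation of this recursion, checked against a direct eigenvalue computation (the function name and the comparison code are illustrative assumptions):

```python
import numpy as np

def m_eigenvalues(L, a=1.0, b=2.0):
    """Eigenvalues of M_L from the recursion (EQ-35)-(EQ-39)."""
    lam = [a + 1j * a * b, a - 1j * a * b]        # (EQ-35), (EQ-36)
    for k in range(2, L + 1):
        N = 2**k                                  # (EQ-37)
        new = [0j] * N
        for i in range(N // 2):
            new[i] = lam[i] * (a + 1j * a * b)    # (EQ-38)
            new[N - 1 - i] = np.conj(new[i])      # (EQ-39)
        lam = new
    return np.array(lam)

a, b, L = 1.0, 2.0, 3
M1 = np.array([[a, a * b], [-a * b, a]])
M_L = np.array([[1.0]])
for _ in range(L):
    M_L = np.kron(M1, M_L)                        # (EQ-19)

recursive = np.sort_complex(np.round(m_eigenvalues(L, a, b), 8))
direct = np.sort_complex(np.round(np.linalg.eigvals(M_L), 8))
print(np.allclose(recursive, direct))             # True
```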

The eigenvectors of the L-th order M-transform are obtained from a new procedure based on the eigenvectors of the lower order M-transform. The proposed eigenvector algorithm calculates half of the eigenvectors of the $M_L$ matrix from a simple procedure. This method, which requires few operations, is far less costly than a direct method when the dimension of $M_L$ is high. To show the effectiveness of the proposed eigenvector algorithm, the eigenvectors of the $M_2$ matrix are calculated using the eigenvalue-eigenvector characterization of the $M_1$ matrix.

The characteristic equation of the $M_1$ matrix is given as

$$f(\lambda) = \lambda^2 - 2a\lambda + a^2(1 + b^2)$$   (EQ-40)

or

$$\lambda^2 = 2a\lambda - a^2(1 + b^2).$$   (EQ-41)

Using the Cayley-Hamilton theorem gives

$$M_1^2 = 2aM_1 - a^2(1 + b^2)I = \mathrm{tr}[M_1]M_1 - \det[M_1]I.$$   (EQ-42)
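
A one-line numerical check of (EQ-42) (illustrative only):

```python
import numpy as np

a, b = 1.0, 2.0
M1 = np.array([[a, a * b], [-a * b, a]])

# Cayley-Hamilton: M1 satisfies its own characteristic equation (EQ-40)/(EQ-42).
print(np.allclose(M1 @ M1, 2 * a * M1 - a**2 * (1 + b**2) * np.eye(2)))   # True
```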

Assume that $\lambda$ is an eigenvalue of $M_2$ corresponding to the eigenvector X, partitioned as

$$X = \begin{bmatrix} X_u \\ X_l \end{bmatrix}.$$   (EQ-43)

Thus, the eigenvalues and eigenvectors of the $M_2$ matrix are related by the following relationships:

$$M_2 X = \lambda X,$$   (EQ-44)
$$(M_2 - \lambda I)X = 0.$$   (EQ-45)

Substituting for $M_2$ and X into (EQ-45) gives

$$\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} aM_1 - \lambda I & abM_1 \\ -abM_1 & aM_1 - \lambda I \end{bmatrix} \begin{bmatrix} X_u \\ X_l \end{bmatrix},$$   (EQ-46)

or

$$(aM_1 - \lambda I)X_u + abM_1 X_l = 0$$   (EQ-47)

and

$$-abM_1 X_u + (aM_1 - \lambda I)X_l = 0.$$   (EQ-48)

Since ab ≠ 0, $X_l$ can be obtained from (EQ-47) as

$$X_l = -\frac{1}{ab} M_1^{-1}(aM_1 - \lambda I)X_u.$$   (EQ-49)

Substituting for $X_l$ into (EQ-48) gives

$$-abM_1 X_u - (aM_1 - \lambda I)\frac{1}{ab}M_1^{-1}(aM_1 - \lambda I)X_u = 0$$   (EQ-50)

or

$$\frac{1}{ab}\left[2a\lambda M_1 - a^2(1 + b^2)M_1^2 - \lambda^2 I\right]M_1^{-1}X_u = 0.$$   (EQ-51)

Similarly, $X_u$ can be obtained from (EQ-48) as

$$X_u = \frac{1}{ab}M_1^{-1}(aM_1 - \lambda I)X_l.$$   (EQ-52)

Substituting for $X_u$ into Equation (EQ-47) gives

$$(aM_1 - \lambda I)\left[\frac{1}{ab}M_1^{-1}(aM_1 - \lambda I)X_l\right] + abM_1 X_l = 0$$   (EQ-53)

or

$$\frac{1}{ab}\left[a^2(1 + b^2)M_1^2 - 2a\lambda M_1 + \lambda^2 I\right]M_1^{-1}X_l = 0.$$   (EQ-54)

Substituting for $M_1^2$ from (EQ-42) into (EQ-54) gives

$$2a\left[a^2(1 + b^2) - \lambda\right]\left[M_1 - \frac{1}{2a}\left(a^2(1 + b^2) + \lambda\right)I\right]M_1^{-1}X_l = 0.$$   (EQ-55)

Similarly, substituting for $M_1^2$ from (EQ-42) into (EQ-51) gives

$$2a\left[a^2(1 + b^2) - \lambda\right]\left[M_1 - \frac{1}{2a}\left(a^2(1 + b^2) + \lambda\right)I\right]M_1^{-1}X_u = 0.$$   (EQ-56)

Two eigenvalues of $M_2$ are calculated from (EQ-55) and (EQ-56) according to

$$\lambda_2^{(2)} = a^2(1 + b^2) = \det[M_1]$$   (EQ-57)

and

$$\lambda_3^{(2)} = a^2(1 + b^2) = \det[M_1],$$   (EQ-58)

or

$$\lambda_2^{(2)} = \lambda_3^{(2)} = a^2(1 + b^2) = (a + jab)(a - jab),$$   (EQ-59)

where (a + jab) and (a - jab) are the eigenvalues of the $M_1$ matrix. Thus,

$$\lambda_2^{(2)} = \lambda_3^{(2)}$$   (EQ-60)
$$= \lambda_1^{(1)}\lambda_2^{(1)} = \det[M_1].$$   (EQ-61)

The remaining two eigenvalues of $M_2$ are calculated from the following relationships,

$$M_1 - \frac{1}{2a}\left(a^2(1 + b^2) + \lambda\right)I = 0$$   (EQ-62)

and

$$2aM_1 - a^2(1 + b^2)I - \lambda I = 0.$$   (EQ-63)

Equation (EQ-63) indicates that the remaining eigenvalues of $M_2$ are related to the eigenvalues of $M_1$ according to

$$\lambda_1^{(2)} = 2a\lambda_1^{(1)} - a^2(1 + b^2)$$   (EQ-64)
$$= \mathrm{tr}[M_1]\lambda_1^{(1)} - \det[M_1] = 2a\lambda_1^{(1)} - \lambda_1^{(1)}\lambda_2^{(1)} = \lambda_1^{(1)}\left[2a - \lambda_2^{(1)}\right].$$

Substituting for $\lambda_2^{(1)}$ from (EQ-36) into (EQ-64) gives

$$\lambda_1^{(2)} = \lambda_1^{(1)}\left[2a - (a - jab)\right] = \lambda_1^{(1)}(a + jab) = \lambda_1^{(1)}\lambda_1^{(1)}.$$   (EQ-65)

Similarly,

$$\lambda_4^{(2)} = 2a\lambda_2^{(1)} - a^2(1 + b^2)$$   (EQ-66)
$$= \mathrm{tr}[M_1]\lambda_2^{(1)} - \det[M_1] = 2a\lambda_2^{(1)} - \lambda_1^{(1)}\lambda_2^{(1)} = \lambda_2^{(1)}\left[2a - \lambda_1^{(1)}\right] = \lambda_2^{(1)}\left[2a - (a + jab)\right] = \lambda_2^{(1)}(a - jab).$$

Thus,

$$\lambda_4^{(2)} = \lambda_2^{(1)}\lambda_2^{(1)}.$$   (EQ-67)

Assume that $X_4^{(2)}$ is an eigenvector of $M_2$, where

$$X_4^{(2)} = \begin{bmatrix} \frac{1}{ab}\left(aM_1 - \lambda_4^{(2)}I\right)X_2^{(1)} \\ M_1 X_2^{(1)} \end{bmatrix}.$$   (EQ-68)

๐‘‹2(1)

is the eigenvector of ๐‘€1 , and ๐œ†4(2)

, and ๐‘‹4(2)

are the eigenvalues and eigenvector of ๐‘€2 , respectively. Using

Equation (EQ-45), the eigenvalues- eigenvector of ๐‘€2 can be written according to

[๐‘€2 โˆ’ ๐œ†4(2)

๐ผ]๐‘‹4(2)

= 0

= [(๐‘Ž๐‘€1 โˆ’ ๐œ†4

(2)๐ผ)

1

๐‘Ž๐‘(๐‘Ž๐‘€1 โˆ’ ๐œ†4

(2)๐ผ)๐‘‹2

(1)+ ๐‘Ž๐‘๐‘€1

2๐‘‹2(1)

โˆ’๐‘Ž๐‘๐‘€1 (1

๐‘Ž๐‘) (๐‘Ž๐‘€1 โˆ’ ๐œ†4

(2)๐ผ)๐‘‹2

(1)+ (๐‘Ž๐‘€1 โˆ’ ๐œ†4

(2)๐ผ)๐‘€1๐‘‹2

(1)]

= [1

๐‘Ž๐‘(๐‘Ž2(1 + ๐‘2)๐‘€1

2 โˆ’ 2๐‘Ž๐œ†4(2)

๐‘€1 + ๐œ†42(2)

๐ผ)๐‘‹2(1)

0] (EQ-69)

Substituting for $M_1^2$ from (EQ-42) into (EQ-69) gives

$$\left[M_2 - \lambda_4^{(2)}I\right]X_4^{(2)} = \begin{bmatrix} \frac{1}{ab}\left(a^2(1 + b^2) - \lambda_4^{(2)}\right)\left[2aM_1 - \left(a^2(1 + b^2) + \lambda_4^{(2)}\right)I\right]X_2^{(1)} \\ 0 \end{bmatrix}$$
$$= \begin{bmatrix} \frac{1}{ab}\left(\det[M_1] - \lambda_4^{(2)}\right)\left[\mathrm{tr}[M_1]M_1 - \left(\det[M_1] + \lambda_4^{(2)}\right)I\right]X_2^{(1)} \\ 0 \end{bmatrix}.$$   (EQ-70)

Substituting for $\lambda_4^{(2)}$ from (EQ-67) into (EQ-70) gives

$$\left[M_2 - \lambda_4^{(2)}I\right]X_4^{(2)} = \begin{bmatrix} \frac{1}{ab}\left[2\det[M_1] - \lambda_2^{(1)}\mathrm{tr}[M_1]\right]\mathrm{tr}[M_1]\left[M_1 - \lambda_2^{(1)}I\right]X_2^{(1)} \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$   (EQ-71)

where the upper block vanishes because $X_2^{(1)}$ is an eigenvector of $M_1$ corresponding to $\lambda_2^{(1)}$. Thus, $X_4^{(2)}$ is an eigenvector of $M_2$ corresponding to $\lambda_4^{(2)}$.

Similarly, $X_1^{(2)}$ is an eigenvector of $M_2$ corresponding to $\lambda_1^{(2)}$, where

$$X_1^{(2)} = \begin{bmatrix} \frac{1}{ab}\left(aM_1 - \lambda_1^{(2)}I\right)X_1^{(1)} \\ M_1 X_1^{(1)} \end{bmatrix}.$$   (EQ-72)

Since $M_1$ is nonsingular, $X_1^{(2)}$ and $X_4^{(2)}$ are linearly independent. The other two eigenvectors of $M_2$, $X_2^{(2)}$ and $X_3^{(2)}$, are selected so that the columns of

$$T = \left[X_1^{(2)}, X_2^{(2)}, X_3^{(2)}, X_4^{(2)}\right]$$   (EQ-73)

are linearly independent, and

$$\Lambda = T^{-1}M_2 T$$   (EQ-74)

is a diagonal matrix.
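
A numerical sketch of the eigenvector construction of (EQ-68) for the a = 1, b = 2 example (illustrative; it only checks that the constructed vector is an eigenvector of M2):

```python
import numpy as np

a, b = 1.0, 2.0
M1 = np.array([[a, a * b], [-a * b, a]])
M2 = np.kron(M1, M1)
I = np.eye(2)

# Eigenvalue lambda_2^(1) = a - jab and its eigenvector (EQ-30), (EQ-36).
lam2_1 = a - 1j * a * b
X2_1 = np.array([0.7071, -0.7071j])

# lambda_4^(2) = (lambda_2^(1))^2 (EQ-67) and the eigenvector built as in (EQ-68).
lam4_2 = lam2_1**2
X4_2 = np.concatenate([(1 / (a * b)) * (a * M1 - lam4_2 * I) @ X2_1,
                       M1 @ X2_1])

print(np.allclose(M2 @ X4_2, lam4_2 * X4_2))   # True: X4_2 is an eigenvector of M2
```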

This analysis clearly shows that half of the eigenvectors of $M_L$ can be obtained from a straightforward procedure, and the other half can be selected so that $T = \left[X_1^{(k)}, X_2^{(k)}, \ldots, X_n^{(k)}\right]$ is a linearly independent set. The proposed M-transform, whose eigenvalues are calculated from a simple recursive algorithm and half of whose eigenvectors are calculated from a few simple operations, can be used as an orthogonal basis in many signal and image processing applications. Moreover, the number of distinct eigenvalues of $M_L$ is L + 1, as opposed to an L-th order Hadamard transform, $H_L$, which has only two distinct eigenvalues [2]. The eigenvalues of the $M_L$ transform can be used as feature parameters if the elements of the $M_L$ matrix are the autocorrelation lags of the observation and the constants a and b are selected properly.

References:

[1] Mohammad V. Malakooti and Keith Teague, "CARMA model method of two-dimensional shape classification: An eigensystem approach vs. the LP norm," ICASSP, Vol. 12, 1987.

[2] Clark R. Givens, "Some observations on eigenvectors of Hadamard matrices of order 2n," Linear Algebra and its Applications, Vol. 56, Jan. 1984, pp. 245-250.

