
Automatic classification of combined analog and digital modulation schemes using feedforward neural network

Jide Julius Popoola School of Electrical and Information Engineering

University of the Witwatersrand Johannesburg, South Africa

[email protected]

Rex van Olst School of Electrical and Information Engineering

University of the Witwatersrand Johannesburg, South Africa

[email protected]

Abstract—This paper presents an artificial neural network based automatic modulation classifier that can classify both analog and digital modulation schemes. Four of the best-known analog modulation schemes and five corresponding digital modulation schemes were considered. A three-step approach to developing an automatic modulation classifier is presented. The first step is the extraction of the statistical feature keys used as inputs to the classifier; these are extracted from the instantaneous amplitude, instantaneous frequency and instantaneous phase of signals simulated in MATLAB. The second step is the development of the classifier itself, based on a backpropagation neural network algorithm. The third step is the evaluation of the classifier's performance against a related study from the research literature. Results show that the developed classifier is accurate and sensitive in classifying the nine modulation schemes considered, with an average success rate above 99.0%.

Keywords- artificial neural network (ANN); ANN classification; network training; automatic modulation classification

I. INTRODUCTION

In modern decision making processes, whether in engineering, business, medicine or other fields, tools that allow decision makers to assign an object to an appropriate class or target are essential. One such tool, which has demonstrated promising potential, is the artificial neural network (ANN). ANNs are mathematical models that attempt to parallel and simulate the functionality and decision making of the human brain. As a mathematical model of human cognition through biological neurons, an ANN is regarded as an information processing system that has certain performance features in common with the human neural system. These features include the ability to store knowledge and make it available whenever necessary. ANNs also have a propensity to identify patterns even in the presence of noise [1], as well as an aptitude for taking past experience into consideration when making inferences and judgments about new situations. They derive these strengths from their massively parallel structure.

Generally, ANNs are classified according to either their architecture or their learning algorithm. By architecture, ANNs can be classified as either feedforward networks or recurrent networks. In a feedforward network, the neurons are grouped into layers, and signals flow unidirectionally from the input layer to the output layer: neurons in one layer connect to those in the next, but there are no cycles or loops. In a recurrent network, on the other hand, there exist both inter-layer and intra-layer connections between neurons.
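This layered, loop-free flow can be illustrated with a minimal sketch (Python/NumPy for illustration only; the paper's own implementation is in MATLAB, and the layer sizes and weights here are arbitrary):

```python
import numpy as np

def feedforward(x, weights, biases):
    """Propagate an input through the layers in one direction only:
    each layer's output feeds the next layer; there are no cycles."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)
    return a

rng = np.random.default_rng(0)
sizes = [5, 7, 9]  # arbitrary example: input -> hidden -> output
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

y = feedforward(rng.standard_normal(5), weights, biases)
print(y.shape)  # (9,)
```

A recurrent network would, by contrast, feed some layer outputs back into earlier layers, which this loop cannot express.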

When classified by training algorithm, ANNs fall into three classes: supervised learning, unsupervised learning and reinforcement learning. In supervised learning, the network requires a known output in order to adjust its weights appropriately. In unsupervised learning, no known output is required; the network learns, adapts and builds its response to the inputs on its own. In reinforcement learning, the network employs a critic to evaluate the fitness of its output for a given input.

Network learning or training is the adaptation process by which the network learns the relationship between the inputs and the targets. It is a repetitive, incremental process guided by an optimization algorithm [2]. Generally, multilayer feedforward neural networks can be trained as non-linear classifiers using the generalized backpropagation algorithm (BPA). The BPA is a supervised learning algorithm with a defined sum-square error function that aims at reducing the overall system error to a minimum. ANN performance can be improved if a suitable error-function minimization algorithm is chosen; in this study, we used the mean square error (MSE) as the performance index to be minimized.

In addition, a scaled conjugate gradient (SCG) algorithm was applied for network training. It was chosen because it avoids the time-consuming line search that other conjugate gradient methods perform at each learning iteration to determine an appropriate step size, which raises the computational complexity. SCG instead combines the model-trust region approach used in the Levenberg-Marquardt (LM) algorithm with the conjugate gradient approach [3,4] to scale the step size.

The authors thank all the sponsors of the University of the Witwatersrand's Centre for Telecommunications Access and Services (CeTAS) for their financial support. The authors also express their appreciation to the Independent Communications Authority of South Africa (ICASA) for its financial support. The principal author also acknowledges the financial assistance received from the University of the Witwatersrand's Postgraduate Merit Award (PMA).

IEEE Africon 2011 - The Falls Resort and Conference Centre, Livingstone, Zambia, 13 - 15 September 2011

978-1-61284-993-5/11/$26.00 ©2011 IEEE

Interested readers are referred to [4] for more information on the SCG algorithm.

A. Applications of ANNs

ANNs have been applied in many fields, such as medicine, banking, agriculture, science and engineering, to solve real-world problems. In engineering, for instance, ANNs have been applied to classification, pattern recognition and nonlinear problems that are difficult to solve using conventional mathematical methods. In communications and signal processing, ANNs have been used to develop automatic modulation classification (AMC).

An AMC is a system that automatically identifies the modulation scheme of a radio signal without a priori knowledge of the signal's parameters [5,6]. It is an intermediate step between signal interception and demodulation, used for characterizing signals in various military and civilian communication applications such as spectrum management, surveillance, electronic warfare, military threat analysis and interference identification. There has been considerable interest in AMC over the last decade [7-10], with extensive and diverse unclassified literature devoted to the topic. Several methods for identifying modulation schemes and signal parameters focus on extracting signal characteristics under different conditions. These variations in both the methods applied and the conditions assumed make it difficult to compare the performance of automatic modulation classifiers from disparate sources.

B. AMC Approaches

In general, there are two main approaches to developing an AMC. The first and most commonly used is the pattern recognition approach, in which signals are characterized by a vector of features drawn from both the frequency and time domains [11-13]. The second is the decision-theoretic approach, which uses hypothesis testing arguments to formulate the recognition problem and obtain the classification rule [7,14]. Apart from these two approaches, there are five methods usually used in developing algorithms for automatic modulation classification [15]: (i) spectral processing, (ii) instantaneous amplitude, phase and frequency parameters, (iii) instantaneous amplitude, phase and frequency histograms, (iv) a combination of the previous three methods and (v) a universal demodulator. In this paper, the first approach was employed, and the second method was used to develop the classifier.

As a result of the increasing use of digital modulation in technologies such as wireless communications, recent research has focused on classifying digital modulation schemes rather than analog ones. However, because analog modulation schemes are still in use in most developing countries, some research has focused on analog modulation classification. In a blind radio environment, it is impossible to know in advance whether a received radio signal is analog or digitally modulated. The desire for a universal AMC that can operate in such an environment underlies the development of the combined analog and digital AMC algorithm, based on an artificial neural network, presented in this paper.

The multilayer perceptron (MLP), or feedforward neural network, was used in developing our classifier for combined analog and digitally modulated signals. Four of the best-known analog modulation schemes – amplitude modulation (AM), double sideband (DSB) modulation, single sideband (SSB) modulation and frequency modulation (FM) – and five corresponding digital modulation schemes – two-symbol amplitude shift keying (2ASK), four-symbol amplitude shift keying (4ASK), two-symbol frequency shift keying (2FSK), two-symbol phase shift keying (2PSK) and four-symbol phase shift keying (4PSK) – were considered. The backpropagation algorithm was applied for the classification of the modulated signals. Details of the study methodology are presented in Section II. The classification results are discussed in Section III, and conclusions are presented in Section IV.

II. STUDY METHODOLOGY

The study methodology involves three steps. The first step is the extraction of the statistical feature keys used as inputs to the developed classifier. The second step is the development of the automatic modulation classifier, while the third step is the performance evaluation of the developed classifier. Details of the first and second steps are presented in the following subsections; the third step is presented in Section III.

A. Feature Extraction Keys

Feature extraction is an integral part of any recognition system [16]. Its purpose is to describe a pattern by means of a minimum number of features or attributes that are effective in discriminating among the pattern classes (i.e. modulation schemes in this study). The statistical feature keys employed for this classification algorithm are derived from the instantaneous amplitude, instantaneous phase and instantaneous frequency of the simulated modulated signals. These feature keys had earlier been used in [18] and are defined as follows:

(i) The maximum value of the spectral power density of the normalized-centred instantaneous amplitude, $\gamma_{max}$, given by

$$\gamma_{max} = \frac{\max \left| \mathrm{DFT}\left( a_{cn}(i) \right) \right|^{2}}{N} \qquad (1)$$

where DFT is the discrete Fourier transform of the radio frequency signal, $N$ is the number of samples per segment, and $a_{cn}(i)$ is the value of the normalized-centred instantaneous amplitude at time instant $t = i/f_s$ $(i = 1, 2, \ldots, N)$, with $f_s$ the sampling frequency. The normalized-centred instantaneous amplitude $a_{cn}(i)$ is defined as

$$a_{cn}(i) = a_n(i) - 1, \qquad a_n(i) = \frac{a(i)}{m_a} \qquad (2)$$


where $m_a$ is the mean value of the samples, defined as

$$m_a = \frac{1}{N} \sum_{i=1}^{N} a(i) \qquad (3)$$
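As a concrete illustration of equations (1)-(3), the key can be computed from precomputed instantaneous-amplitude samples in a few lines (an illustrative Python/NumPy sketch; the study itself used MATLAB, and the example envelopes below are synthetic):

```python
import numpy as np

def gamma_max(inst_amp):
    """gamma_max of eqs. (1)-(3): peak spectral power density of the
    normalized-centred instantaneous amplitude a_cn(i)."""
    a = np.asarray(inst_amp, dtype=float)
    N = len(a)
    m_a = a.mean()                    # eq. (3): mean of the amplitude samples
    a_cn = a / m_a - 1.0              # eq. (2): normalize by m_a, then centre
    return np.max(np.abs(np.fft.fft(a_cn)) ** 2) / N   # eq. (1)

# Synthetic example: an AM-like envelope fluctuates, so its centred
# amplitude has strong spectral content; a constant (FM-like) envelope does not.
t = np.arange(1024) / 1024.0
am_envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 8 * t)
fm_envelope = np.ones(1024)
print(gamma_max(am_envelope) > gamma_max(fm_envelope))  # True
```

This behaviour is what makes $\gamma_{max}$ useful for separating amplitude-keyed from frequency- and phase-keyed schemes.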

(ii) The standard deviation of the absolute value of the centred non-linear component of the instantaneous phase, $\sigma_{ap}$, defined mathematically as

$$\sigma_{ap} = \sqrt{ \frac{1}{C} \left( \sum_{a_n(i) > a_t} \phi_{NL}^{2}(i) \right) - \left( \frac{1}{C} \sum_{a_n(i) > a_t} \left| \phi_{NL}(i) \right| \right)^{2} } \qquad (4)$$

where $\phi_{NL}(i)$ is the value of the centred non-linear component of the instantaneous phase at time instant $t = i/f_s$, $C$ is the number of samples in $\{\phi_{NL}(i)\}$ for which $a_n(i) > a_t$, and $a_t$ is the threshold for $\{a(i)\}$ below which the estimation of the instantaneous phase becomes highly noise sensitive.

(iii) The standard deviation of the direct value (not the absolute value) of the centred non-linear component of the instantaneous phase, $\sigma_{dp}$, defined mathematically as

$$\sigma_{dp} = \sqrt{ \frac{1}{C} \left( \sum_{a_n(i) > a_t} \phi_{NL}^{2}(i) \right) - \left( \frac{1}{C} \sum_{a_n(i) > a_t} \phi_{NL}(i) \right)^{2} } \qquad (5)$$

(iv) The standard deviation of the absolute value of the normalized-centred instantaneous amplitude, $\sigma_{aa}$, defined as

$$\sigma_{aa} = \sqrt{ \frac{1}{N} \left( \sum_{i=1}^{N} a_{cn}^{2}(i) \right) - \left( \frac{1}{N} \sum_{i=1}^{N} \left| a_{cn}(i) \right| \right)^{2} } \qquad (6)$$
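The three standard-deviation keys of equations (4)-(6) differ only in whether the absolute value is taken and in the amplitude threshold applied; this can be sketched as follows (illustrative Python/NumPy with hypothetical inputs; the paper's implementation is in MATLAB):

```python
import numpy as np

def sigma_aa(inst_amp):
    """Eq. (6): std of the absolute normalized-centred instantaneous amplitude."""
    a = np.asarray(inst_amp, dtype=float)
    a_cn = a / a.mean() - 1.0
    return np.sqrt(np.mean(a_cn ** 2) - np.mean(np.abs(a_cn)) ** 2)

def sigma_ap(phi_nl, a_n, a_t):
    """Eq. (4): std of the absolute centred non-linear phase, using only
    samples whose normalized amplitude a_n(i) exceeds the threshold a_t."""
    phi = np.asarray(phi_nl)[np.asarray(a_n) > a_t]
    return np.sqrt(np.mean(phi ** 2) - np.mean(np.abs(phi)) ** 2)

def sigma_dp(phi_nl, a_n, a_t):
    """Eq. (5): as eq. (4) but on the direct (signed) phase values."""
    phi = np.asarray(phi_nl)[np.asarray(a_n) > a_t]
    return np.sqrt(np.mean(phi ** 2) - np.mean(phi) ** 2)

# A zero-mean phase that flips sign (PSK-like) separates the two keys:
# the signed spread is large while the absolute values are constant.
phi = np.where(np.arange(1000) % 2 == 0, 0.5, -0.5)
a_n = np.ones(1000)
print(sigma_dp(phi, a_n, 0.1))  # 0.5
print(sigma_ap(phi, a_n, 0.1))  # 0.0
```

The contrast between $\sigma_{dp}$ and $\sigma_{ap}$ in the example is exactly why both keys are carried: schemes whose phase only changes sign are visible in the direct deviation but invisible in the absolute one.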

(v) The spectrum symmetry around the carrier frequency, represented as $P$. This feature is based on the spectral powers of the lower and upper sidebands of the modulated signal and is defined as

$$P = \frac{P_L - P_U}{P_L + P_U} \qquad (7)$$

where the lower-sideband power is $P_L = \sum_{i=1}^{f_{cn}} \left| X_c(i) \right|^{2}$ and the upper-sideband power is $P_U = \sum_{i=1}^{f_{cn}} \left| X_c(i + f_{cn} + 1) \right|^{2}$. Here $X_c(i)$ is the Fourier transform of the intercepted signal, $(f_{cn} + 1)$ is the sample number corresponding to the carrier frequency $f_c$, and $f_{cn}$ is defined as

$$f_{cn} = \frac{f_c N}{f_s} - 1 \qquad (8)$$
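Equations (7)-(8) can be sketched as follows (illustrative Python/NumPy; the mapping of the paper's 1-based DFT indices onto 0-based arrays is our assumption, as is the synthetic test signal):

```python
import numpy as np

def spectrum_symmetry(signal, fc, fs):
    """Eq. (7): P = (P_L - P_U) / (P_L + P_U), with f_cn from eq. (8).
    The paper's 1-based DFT index i is mapped to 0-based bin i-1."""
    x = np.asarray(signal, dtype=float)
    N = len(x)
    Xc = np.fft.fft(x)
    fcn = int(round(fc * N / fs)) - 1                   # eq. (8)
    PL = np.sum(np.abs(Xc[0:fcn]) ** 2)                 # lower-sideband power
    PU = np.sum(np.abs(Xc[fcn + 1:2 * fcn + 1]) ** 2)   # upper-sideband power
    return (PL - PU) / (PL + PU)

# Synthetic check: a tone below the carrier loads the lower sideband (P -> +1),
# a tone above it loads the upper sideband (P -> -1).
fs, N, fc = 1000.0, 1000, 250.0
t = np.arange(N) / fs
print(spectrum_symmetry(np.cos(2 * np.pi * 100 * t), fc, fs) > 0.9)   # True
print(spectrum_symmetry(np.cos(2 * np.pi * 400 * t), fc, fs) < -0.9)  # True
```

A symmetric double-sideband signal would give $P \approx 0$, which is what lets this key isolate SSB from the other schemes.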

Based on equations (1)-(8), the feature extraction keys obtained for the combined analog and digital modulated signals are shown in Fig. 1.

Fig. 1. Plots of (a) $\gamma_{max}$, (b) $\sigma_{dp}$, (c) $\sigma_{ap}$, (d) $\sigma_{aa}$ and (e) $P$ against SNR for the combined analog and digital modulated signals


Fig. 2. The combined AMC architecture

B. The Proposed AMC Architecture

In this study, a multilayer automatic modulation classifier was developed. Its architecture is shown in Fig. 2, with the statistical feature extraction keys discussed above as the inputs. The ANN architecture used in this paper is feedforward and consists of one input layer, one hidden (intermediate) layer of computational nodes or neurons, and one output layer of computational neurons. All the neurons are fully connected, as shown in Fig. 2. Neurons in the input layer do not perform computations; they only distribute the input features to the computing neurons in the hidden layer. The neurons in the hidden layer, in turn, perform computations on the inputs from the input layer and pass their results to the neurons in the output layer.

During the training or learning process, input vectors and corresponding target vectors are used to train the network until it can classify the signals appropriately. Whenever the results of the output neurons differ from the expected or target values, errors are propagated backwards from the output layer to the hidden layer. The BPA thus involves two paths: the forward path and the backward path.

The forward path involves creating a feedforward network by initializing the weights and training the network. During this path, the weights are held fixed while the inputs are propagated through the network of Fig. 2 layer by layer. The phase ends with the computation of the error signal $e_i$ using the relationship

$$e_i = t_i - y_i \qquad (9)$$

where $t_i$ is the target or desired response to the $i$th input and $y_i$ is the actual output produced by the network in response to the $i$th input.

The backward path involves a network update: the connection weights are modified so as to reduce the total error in the network output. The error signal $e_i$ generated during the forward path is propagated backwards through the network of Fig. 2, causing an adjustment of the network weights that minimizes the error signal in a statistical sense via the mean square error $E$:

$$E = \frac{1}{N} \sum_{i=1}^{N} \left( t_i - y_i \right)^{2} \qquad (10)$$

where $N$ is the total number of inputs.
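The two paths of equations (9)-(10) can be demonstrated end-to-end with a toy gradient-descent loop (an illustrative Python/NumPy sketch on random synthetic data; the study's actual classifier was trained with NETLAB's scaled conjugate gradient in MATLAB, not this plain gradient rule):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 40 samples, 5 features, 3 one-hot targets
X = rng.standard_normal((40, 5))
T = np.eye(3)[rng.integers(0, 3, size=40)]

W1 = 0.5 * rng.standard_normal((5, 7))   # input -> hidden (tanh)
W2 = 0.5 * rng.standard_normal((7, 3))   # hidden -> output (logistic)

def mse(T, Y):
    return np.mean((T - Y) ** 2)         # eq. (10)

errors = []
for _ in range(300):
    # Forward path: weights fixed, inputs propagated layer by layer
    H = np.tanh(X @ W1)
    Y = 1.0 / (1.0 + np.exp(-(H @ W2)))
    errors.append(mse(T, Y))
    # Backward path: error e_i = t_i - y_i (eq. (9)) propagated back
    E = T - Y
    dY = E * Y * (1.0 - Y)               # through the logistic output layer
    dH = (dY @ W2.T) * (1.0 - H ** 2)    # through the tanh hidden layer
    W2 += 0.2 * H.T @ dY / len(X)
    W1 += 0.2 * X.T @ dH / len(X)

print(errors[-1] < errors[0])  # True: the MSE falls as training proceeds
```

SCG replaces the fixed learning rate above with an adaptively scaled step, which is what gives it the convergence advantage discussed earlier.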

TABLE I
SPECIFICATIONS FOR THE DEVELOPED AMC CLASSIFIER

 1. Type of neural network architecture   Feedforward
 2. No. of neurons in input layer         5
 3. No. of neurons in hidden layer        7
 4. No. of neurons in output layer        9
 5. Coefficient of weight-decay           0.01
 6. Activation function in hidden layer   Tanh
 7. Activation function in output layer   Logistic
 8. Maximum number of epochs              100
 9. Performance function                  MSE
10. Learning algorithm                    SCG

C. The Proposed AMC Training Procedure

A total of 4500 data samples, each with five feature inputs and nine target outputs, were used. The ANN and the training algorithm were implemented with the NETLAB algorithms for pattern recognition [17] in a MATLAB environment. The specifications of the AMC employed for the classification of the combined analog and digital modulation schemes are shown in Table I. The procedure followed to train the developed AMC is as follows:

(i) Generated data, consisting of input vectors and target vectors, were imported into the MATLAB environment.

(ii) The loaded data was normalized and randomly sorted.

(iii) The loaded data was partitioned into training, validation and testing data sets. 50% of the total data was used for network training; the training data set was used to update the weights of the network, and training continued until the MSE used as the performance function was minimal. 25% of the total data was used to validate that the network was able to generalize and to stop training before the network overfitted. The remaining 25% was used as a completely independent test set to assess the network's generalization.

(iv) The ANN was created. A feedforward network with non-linear activation functions, tan-sigmoid (tanh) in the hidden layer and logistic (log-sigmoid) in the output layer, was used (i) to introduce non-linearity into the network, since without non-linearity the network would be no more powerful than a plain perceptron, and (ii) to mitigate the slow convergence of the BPA. Five neurons were used at the input layer, corresponding to the number of input features; 7 neurons were used at the hidden layer; and 9 neurons were used at the output layer, corresponding to the number of targets.

(v) The developed AMC was finally evaluated to determine its performance. This was done by comparing our results with a related study in the literature.
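Steps (i)-(iii) above can be sketched as follows (illustrative Python/NumPy with randomly generated stand-in data; the authors' actual pipeline used NETLAB under MATLAB, and the feature values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step (i): stand-in for the imported data: 4500 samples,
# five feature-key inputs and nine one-hot target outputs
features = rng.standard_normal((4500, 5))
targets = np.eye(9)[rng.integers(0, 9, size=4500)]

# Step (ii): normalize each feature column, then randomly sort
features = (features - features.mean(axis=0)) / features.std(axis=0)
order = rng.permutation(len(features))
features, targets = features[order], targets[order]

# Step (iii): partition 50% / 25% / 25% into train / validation / test
n = len(features)
x_train, t_train = features[: n // 2], targets[: n // 2]
x_val, t_val = features[n // 2 : 3 * n // 4], targets[n // 2 : 3 * n // 4]
x_test, t_test = features[3 * n // 4 :], targets[3 * n // 4 :]

print(len(x_train), len(x_val), len(x_test))  # 2250 1125 1125
```

Shuffling before the split matters: without it, the partitions could each contain only a subset of the modulation schemes.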

III. THE DEVELOPED AMC RESULTS

In this section, we present the results obtained when evaluating the developed algorithm, using figures and tables in the analysis. We compared these results with those obtained in a study using the same feature extraction keys. In order to present and discuss the results clearly, as well as to evaluate the developed AMC's performance, this section is



divided into two subsections. In the first subsection, we present the results of our developed AMC and discuss them. In the second subsection, we present the performance evaluation of the proposed algorithm by comparing our results with those from a related study in the surveyed literature.

A. Developed AMC Results and Discussion

The developed classifier perfectly categorises the nine modulation schemes considered, starting with the network architecture with 7 neurons in the hidden layer. The classification rate increases as the number of neurons in the hidden layer increases. Figs. 3 to 6 show the classification rate of the classifier for four different values of signal-to-noise ratio (SNR) with 7 and 13 neurons in the hidden layer. The results show that the classification rate increases both with increasing SNR and with an increasing number of neurons in the hidden layer.

The full set of results presented in Figs. 3 to 6 shows that the developed classifier's performance is consistent. This is desirable because it indicates that, whatever the SNR value used, the results are not biased towards any particular SNR value or modulation scheme. This is established by the calculated average detection rates of 99.03%, 99.55%, 99.82% and 99.91% at SNR values of 0 dB, 5 dB, 10 dB and 20 dB respectively, with 7 neurons in the hidden layer. Further analysis shows that the combined analog and digital AMC as developed achieves an average recognition rate above 99.0%. The overall performance was measured in terms of the mean square error, which is of the order of 10^-15 after 100 epochs.

B. Developed AMC Performance

In this subsection, we provide the comparative performance evaluation of the combined analog-digital AMC developed in this study. Our results were compared with similar results obtained using the same statistical feature extraction keys. In choosing the reference study, the following additional characteristics were required: (i) the capability of classifying almost the same set of combined analog and digital modulation schemes; (ii) the absence of any prior knowledge of the signal parameters or features; (iii) the use of the same channel, additive white Gaussian noise (AWGN), in the signal simulation; and (iv) equal values of SNR. The reference work was published in 1996. It was chosen not only because it meets all the conditions stated above but also because there is no recent work in the surveyed literature dealing with combined analog and digital AMC that satisfies our comparative conditions. The only notable difference between the present work and the reference work in [18] is the network architecture used: while only one hidden layer is employed in our study, the reference work used two hidden layers in sequence with 15 neurons each. Therefore, for the comparative study, 15 neurons were used in the single hidden layer of our network architecture, and the results were compared with the 15-15 neuron, two-hidden-layer architecture of the reference work.

Fig. 3. Detection rate at 0 dB SNR using 7 and 13 neurons in the hidden layer

Fig. 4. Detection rate at 5 dB SNR using 7 and 13 neurons in the hidden layer

Fig. 5. Detection rate at 10 dB SNR using 7 and 13 neurons in the hidden layer

Fig. 6. Detection rate at 20 dB SNR using 7 and 13 neurons in the hidden layer

As stated earlier, tables were used in the performance evaluation analysis. Table II shows the comparison of the results obtained by our developed AMC and by the reference work in [18], giving the classification rates at an SNR of 15 dB and a further comparison at an SNR of 20 dB. For both SNR values, the reference work in [18] outperformed the present work only in the correct detection of the DSB modulation scheme; the present work outperformed the reference work in the eight other modulation schemes considered. From the results of the performance evaluation, as well as the overall results of this study, it is evident that the present work achieves results comparable to the reference work. The outcome also

Data underlying Figs. 3-6 (classification rate, %, per modulation scheme, for 7 and 13 neurons in the hidden layer):

               2ASK   4ASK   2FSK   BPSK   QPSK   AM     DSB    SSB    FM
 0 dB,  7 N:   98.49  97.78  99.56  99.84  99.27  97.71  99.37  99.95  99.68
 0 dB, 13 N:   99.46  99.14  99.91  99.93  99.81  98.99  99.91  99.92  99.96
 5 dB,  7 N:   99.32  99.18  99.90  99.85  99.30  99.44  99.45  99.75  99.77
 5 dB, 13 N:   99.55  99.96  99.92  99.97  99.96  99.94  99.92  99.94  99.97
10 dB,  7 N:   99.50  99.85  99.92  99.87  99.89  99.76  99.81  99.87  99.89
10 dB, 13 N:   99.84  99.98  99.95  99.98  99.97  99.97  99.93  99.97  99.99
20 dB,  7 N:   99.74  99.93  99.95  99.90  99.96  99.97  99.88  99.92  99.98
20 dB, 13 N:   99.98  99.99  99.99  99.99  99.98  99.98  99.96  99.99  99.99


shows that the present work can compare favorably with other algorithms covered in the literature.

TABLE II
COMPARISON OF PRESENT WORK CLASSIFICATION RATE WITH STUDY [18] AT 15 dB AND 20 dB SNR VALUES
(Present work: Popoola and van Olst; reference work: Azzouz and Nandi (1996) [18])

Comparison between the present work and [18] shows that the present work outperforms [18] in the detection of eight of the nine modulation schemes considered. These disparities in the performance of the two classifiers might result from the difference in training algorithms used, despite the similarity of the input data sets, because different training algorithms have different effects on ANN performance. In this study, backpropagation (BP) with a scaled conjugate gradient training algorithm was used because of its good performance and convergence capability. In [18], a BP training algorithm with momentum was used; this also has good convergence capability, but it is not as effective as BP with SCG because its performance degrades as the number of network nodes increases. This shows that the correct choice of training algorithm does affect the performance of an artificial neural network.

IV. CONCLUSIONS

In this paper, we proposed a combined analog and digital automatic modulation classification algorithm capable of recognizing nine analog and digitally modulated signals without a priori knowledge or information about their features. We investigated the performance of the proposed classifier by comparing its results with related results in the surveyed literature. The comparative results show that the present work compares favorably with the reference work. Similarly, the overall results of the study indicate that the developed classifier successfully classified the nine modulation schemes considered with an average success rate above 99.0%. The study has demonstrated that an automatic modulation classifier developed using an ANN can perform well, with a high probability of correct classification even at low signal-to-noise ratios.

REFERENCES

[1] M.A. Razi and K. Athappilly, "A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models," Expert Systems with Applications, vol. 29, no. 1, pp. 65-74, July 2005.

[2] D. Shanthi, G. Sahoo and N. Saravanan, “Comparison of neural network training algorithms for the prediction of the patient’s post-operative recovery area,” Journal of Convergence Information Technology, vol. 4, no. 1, pp. 24-32, March 2009.

[3] A.B. Sankar, D. Kumar, and K. Seethalakshmi, "Neural network based respiratory signal classification using various feed-forward back propagation training algorithms," European Journal of Scientific Research, vol. 49, no. 3, pp. 468-483, 2011.

[4] M.F. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, no. 4, pp. 525-533, 1993.

[5] D. Grimaldi, S. Rapuano, and L. De Vito, “An automatic digital modulation classifier for measurement on telecommunication networks,” IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 5, pp. 1711-1720, October 2007.

[6] O.A. Dobre, A. Abdi, Y. Bar-Ness, and W. Su, "Selection combining for modulation recognition in fading channels," in Proceedings of IEEE Military Communications Conference 2005 (MILCOM 2005), vol. 4, pp. 2499-2505, October 17-20, 2005.

[7] W. Wei and J.M. Mendel, "Maximum-likelihood classification for digital amplitude-phase modulations," IEEE Transactions on Communications, vol. 48, no. 2, pp. 189-193, February 2000.

[8] Z.Z. Yu, Y.Q. Shi, and W. Su, "M-ary frequency shift keying signal classification based on discrete Fourier transform," in Proceedings of IEEE Military Communications Conference 2003 (MILCOM 2003), vol. 2, pp. 1167-1172, October 13-16, 2003.

[9] L. Hong and K.C. Ho, “Classification of BPSK and QPSK signals with unknown signal level using the Bayes technique,” in Proceedings of International Symposium on Circuits and Systems 2003 (ISCAS 2003), pp. IV.1-IV.4, May 25-28, 2003.

[10] J. Lopatka and M. Pedzisz, “Automatic modulation classification using statistical moments and fuzzy classifier,” in Proceedings of International Conference on Signal Processing 2000 (ICSP 2000), vol. 3, pp. 1500-1505, August 21-25, 2000.

[11] H. Guldemir and A. Sengur, "Online modulation recognition of analog communication signals using neural network," Expert Systems with Applications, vol. 33, no. 1, pp. 206-214, July 2007.

[12] A.K. Nandi and E.E. Azzouz, “Modulation recognition using artificial neural networks,” Signal Processing, vol. 56, no. 2, pp. 165-175, January 1997.

[13] F. Jondral, "Automatic classification of high frequency signals," Signal Processing, vol. 9, pp. 177-190, October 1985.

[14] P. Panagiotou and A. Polydoros, "Likelihood ratio tests for modulation classification," in Proceedings of IEEE 21st Century Military Communications Conference 2000 (MILCOM 2000), vol. 2, pp. 670-674, October 22-25, 2000.

[15] E.E. Azzouz and A.K. Nandi, “Automatic identification of digital modulation types,” Signal Processing, vol. 47, no. 1, pp. 55-69, November 1995.

[16] S.B. Patil and N.V. Subbareddy, “Neural network based system for script identification in Indian documents,” Sadhana, vol. 27, part 1, pp. 83-97, February 2002.

[17] I.T. Nabney, NETLAB: Algorithms for Pattern Recognition. London: Springer, 2004, chapter 5, pp. 149-189.

[18] E.E. Azzouz and A.K. Nandi, Automatic Modulation Recognition of Communication Signals. Boston: Kluwer Academic Publishers, 1996, chapter 5, pp. 132-176.

Simulated    Correct modulation scheme detection rate (%)
modulation   -----------------------------------------------
scheme       SNR = 15 dB             SNR = 20 dB
             Present     Reference   Present     Reference
             work        work        work        work

2ASK         99.99       96.80       99.93       97.00
4ASK         99.98       86.50       99.99       85.80
2FSK         99.92       99.00       99.98       99.00
BPSK         99.75       99.50       99.97       99.50
QPSK         99.98       96.80       100.00      97.50
AM           99.95       88.50       99.98       87.60
DSB          99.98       100.00      99.98       100.00
SSB          99.97       97.40       99.98       97.60
FM           99.91       90.10       99.95       93.40
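The average success rate quoted in the text can be checked directly from the present-work columns of Table II; the short snippet below performs that arithmetic (values transcribed from the table).

```python
# Present-work detection rates transcribed from Table II (percent),
# in scheme order: 2ASK, 4ASK, 2FSK, BPSK, QPSK, AM, DSB, SSB, FM.
present_15 = [99.99, 99.98, 99.92, 99.75, 99.98, 99.95, 99.98, 99.97, 99.91]
present_20 = [99.93, 99.99, 99.98, 99.97, 100.0, 99.98, 99.98, 99.98, 99.95]

avg_15 = sum(present_15) / len(present_15)
avg_20 = sum(present_20) / len(present_20)
print(f"average success rate: {avg_15:.2f}% at 15 dB, {avg_20:.2f}% at 20 dB")
```

Both averages exceed 99.0%, consistent with the claim in the conclusions.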

IEEE Africon 2011 - The Falls Resort and Conference Centre, Livingstone, Zambia, 13 - 15 September 2011

978-1-61284-993-5/11/$26.00 ©2011 IEEE
