This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1

Quantum Neural Network-Based EEG Filtering for a Brain–Computer Interface

Vaibhav Gandhi, Girijesh Prasad, Senior Member, IEEE, Damien Coyle, Senior Member, IEEE, Laxmidhar Behera, Senior Member, IEEE, and Thomas Martin McGinnity, Senior Member, IEEE

Abstract— A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture, referred to as the recurrent quantum neural network (RQNN), can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of a signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered, and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain–computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner–outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain–computer interface performance compared to using only the raw EEG or Savitzky–Golay filtered EEG across multiple sessions.

Index Terms— Brain–computer interface (BCI), electroencephalogram (EEG), recurrent quantum neural network (RQNN).

I. INTRODUCTION

BRAIN–COMPUTER interface (BCI) technology is a means of communication that allows individuals with severe movement disability to communicate with external assistive devices using the electroencephalogram (EEG) or other brain signals. In motor imagery (MI)-based BCIs, the subject performs a mental imagination of specific movements. This MI is translated into a control signal by classifying the specific EEG pattern that is characteristic of the subject's imagined task, e.g., movement of hands and/or foot. These raw EEG signals have a very low signal-to-noise ratio (SNR) because of interference from the electrical power line,

Manuscript received August 2, 2012; revised April 16, 2013 and July 13, 2013; accepted July 14, 2013. This work was supported by the U.K.–India Education and Research Initiative under Grant "Innovations in Intelligent Assistive Robotics."

V. Gandhi is with the School of Science and Technology, Middlesex University, London NW4 4BT, U.K. (e-mail: [email protected]).

G. Prasad, D. Coyle, and T. M. McGinnity are with the Intelligent Systems Research Centre, University of Ulster, Derry BT52 1SA, U.K. (e-mail: [email protected]; [email protected]; [email protected]).

L. Behera is with the Department of Electrical Engineering, Indian Institute of Technology, Kanpur 208016, India (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2013.2274436

motion artifacts, and electromyogram (EMG)/electrooculogram interference. Preprocessing is carried out to remove such unwanted components embedded within the EEG signal, and good preprocessing increases signal quality, resulting in better feature separability and classification performance. Very recently, integrated with the feature extraction stage, novel spatial filtering algorithms based on Kullback–Leibler [1] common spatial patterns (CSP) [2] and Bayesian learning have been investigated to account for very low SNR EEG [3], [4]. The KLCSP-based approach is investigated on several EEG data sets in [3] and showed significant performance improvement compared to CSP and stationary CSP. Similarly, [4] reports an extensive study of a Bayesian learning-based spatial filtering approach and its application using publicly available EEG data. Neural networks and self-organizing fuzzy neural networks have also been applied to increase signal separability in motor imagery BCIs [5]–[7]. This paper focuses on EEG signal preprocessing utilizing the concepts of quantum mechanics (QM) and neural network theory in a framework referred to as the recurrent quantum neural network (RQNN).

EEG signals can be considered a realization of a random or stochastic process [8]. When an accurate description of the system is unavailable, a stochastic filter can be designed on the basis of probabilistic measures. Bucy in [9] states that every solution to a stochastic filtering problem involves the computation of a time-varying probability density function (pdf) on the state–space of the observed system. The architecture of the RQNN model is based on the principles of QM, with the Schrodinger wave equation (SWE) [10] playing a major part. This approach enables the online estimation of a time-varying pdf that allows estimating and removing the noise from the raw EEG signal.

In quantum terminology, the state is represented by ψ (a vector in the Hilbert space H) and referred to as a wave function or a probability amplitude function. The time evolution of this state vector ψ follows the SWE and is represented as

$$ i\hbar \frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t). \qquad (1) $$

Here $\hat{H}$ is the Hamiltonian or the energy operator and is given as $i\hbar(\partial/\partial t)$, where $h/2\pi$ (i.e., $\hbar$) is the reduced Planck's constant¹ [11]. Here ψ is the wave function

¹The Planck constant is an atomic-scale constant that denotes the size of the quanta in quantum mechanics. Atomic units are a scale of measurement in which the units of energy and time are defined so that the value of the reduced Planck constant is exactly one.

2162-237X © 2013 IEEE


Fig. 1. Conceptual framework of the RQNN model: a neuronal lattice whose unified response, a pdf or wave packet, is predicted by a quantum process.

associated with the quantum object at the space–time point (x, t).

Fig. 1 shows the basic architecture of the RQNN model, in which each neuron mediates a spatio-temporal field with a unified quantum activation function, in the form of a Gaussian, that aggregates pdf information from the observed noisy input signal. Thus the solution of the SWE (which is complex-valued and whose modulus square is the pdf that localizes the position of the quantum object in the vector space) gives us the activation function. From a mathematical point of view, the time-dependent single-dimension nonlinear SWE is a partial differential equation describing the dynamics of a wave packet (the modulus-square of this wave is the pdf) in the presence of a potential field (or function), which is the force field in which the particles defined by the wave function are forced to move [12]. Thus the RQNN model is based on the novel concept that a quantum object mediates the collective response of a neural lattice (a spatial structure of an array of neurons where each neuron is a simple computational unit, as shown in Fig. 1 and explained in detail in Section II) [13], [14].

This model has been investigated here as a filtering mechanism in the preprocessing of the EEG signal for a synchronous MI-based BCI to improve signal quality and separability. A similar technique was reported in [15] and [16] for EEG signal filtering, where the error signal was used to stimulate the neurons within the network and the weights of the network were updated using the well-known Hebbian learning rule. Similar techniques have also been applied for robot control [17], eye tracking [13], and stock market prediction [18] applications. Neurons within the proposed RQNN are stimulated directly by the raw input signal. In addition, the learning rule for the weight update process also utilizes a delearning scheme.

Several important modifications have been made with reference to [19]. First, subject-specific RQNN model parameters are selected using a two-step inner–outer fivefold cross-validation and a particle swarm optimization (PSO) [20], [21] technique; second, the input EEG signal is scaled, which reduces the range of movement of the wave packet as well as the number of spatial neurons. As discussed in Section IX, this model is demonstrated to produce a stable filtered EEG that results in a statistically significant enhancement in the performance of the BCI system, which is applicable across multiple sessions and is also better than some of the existing filtering techniques in the field, including the Savitzky–Golay (SG) and Kalman filters.

The remainder of this paper is organized into nine sections. Section II describes the theoretical concepts of the RQNN model. Section III describes the RQNN signal filtering approach. Sections IV and V discuss the data sets and the methodology for EEG filtering with the RQNN model, respectively. Section VI details the feature extraction (FE) and classification methodology utilized in this paper. The parameter selection approach for the subject-specific RQNN model is discussed in Section VII. Section VIII discusses the Savitzky–Golay filtering methodology utilized for comparative analysis. The results are presented and discussed in Section IX. Section X concludes this paper.

II. CONCEPTUAL RQNN FRAMEWORK

QM theory is extremely successful in describing the processes we see in nature [22]. Dawes in [23] and [24] proposed a novel model, a parametric avalanche stochastic filter, using the concept of a time-varying pdf proposed by Bucy in [9]. This model was improved by Behera et al. [13], [14], [25] using maximum likelihood estimation (MLE) instead of an inverse filter in the feedback loop. Further, Ivancevic in [18] provided an analytical analysis of the nonlinear Schrodinger equation and used the closed-form solution for the concerned application. Because the RQNN approach does not make any assumption about the nature and shape of the noise that is embedded in the signal to be filtered, this approach is most suitable for those signals where the characteristics of the embedded noise are not known. EEG signals are one such type of signal, and hence the work on EEG signal filtering presented here is strongly inspired by these works.

A conceptual framework of the RQNN model is shown in Fig. 1. It is basically a 1-D array of neurons whose receptive fields are initially excited by the input signal reaching each neuron through the synaptic connections. The neural lattice responds to the stimulus by actuating a feedback signal back to the input. The time evolution of this average behavior is described by the SWE [10]

$$ i\hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi(x,t) + V(x,t)\,\psi(x,t) \qquad (2) $$

where ψ(x, t) represents the quantum state, ∇² is the Laplacian operator, and V(x, t) is the potential energy.

The neuronal lattice sets up the spatial potential energy V(x). A quantum process described by the quantum state ψ, which mediates the collective response of the neuronal lattice, evolves in this spatial potential V(x) according to (2). As V(x) sets up the evolution path of the wave function, any desired response can be obtained by properly modulating the potential energy.

The use of such an RQNN filter for stochastic filtering is discussed in [13], [14], and [25]. Although this filter is able to reduce noise, its stability is highly sensitive to the model parameters; in case of imperfect tuning, the system may fail to track the signal and its output may saturate to absurd values. In the architecture used in this paper (Fig. 2), the spatial neurons are excited by the input signal y(t). The difference between the output of the spatial neuronal network and the pdf


Fig. 2. Signal estimation using the RQNN model: N spatial neurons with weights W(x, t) feed the quantum activation function (SWE); the pdf ρ(x, t) = |ψ(x, t)|² is fed back recurrently, and the ML estimate gives the filtered output ŷ(t).

feedback |ψ(x, t)|² is weighted by the weight vector W(x) to get the potential energy V(x). The model can thus be seen as a Gaussian mixture model estimator of the potential energy with fixed centers and variances, in which only the weights are variable. These weights can be trained using any learning rule.

The parameters of the RQNN model have been selected using a two-step inner–outer fivefold cross-validation technique for filtering the EEG data sets, and using the PSO technique for the simple signals used to validate the method. There are several parameters to tune, and hence applying any optimization technique without knowledge of the multidimensional search space for filtering EEG can be time-consuming. In [19], the parameters were heuristically selected and kept the same for all the subjects. This leads to underfiltering or overfiltering for a few subjects without making the system unstable; for optimal performance, however, the EEG signal preprocessing should preferably be carried out with a subject-specific choice of parameters.

III. RQNN SIGNAL FILTERING

This section describes the RQNN architecture (see Fig. 2). In the RQNN, we make the assumption that the average behavior of the neural lattice that estimates the signal is a time-varying pdf, which is mediated by a quantum object placed in the potential field V(x) and modulated by the input signal so as to transfer the information about the pdf. We use the SWE to recurrently track this pdf because it is a well-known fact that the square of the modulus of the ψ function, the solution of the wave equation (2), is also a pdf. The potential energy is calculated as

$$ V(x) = \zeta\, W(x,t)\, \phi(x,t) \qquad (3) $$

where

$$ \phi(x,t) = e^{-\frac{(y(t)-x)^2}{2\sigma^2}} - |\psi(x,t)|^2 \qquad (4) $$

where y(t) is the input signal and the synapses are represented by the time-varying synaptic weights W(x, t). The variable ζ represents the scaling factor to actuate the spatial potential energy V(x, t), and σ is the width of the neurons in the lattice (taken here as unity). This potential energy modulates the nonlinear SWE described by (1). The filtered estimate is calculated using MLE as

$$ \hat{y}(t) = E\bigl[\,|\psi(x,t)|^2\,\bigr] = \int x\, |\psi(x,t)|^2\, dx \qquad (5) $$

where x represents the different possible values which may be taken up by the random process y. The variable x can be interpreted as the discrete version of the quantum space, with the resolution within this discrete space referred to as δx (taken as 0.1 in this paper). Thus all the possible values of x determine the number of spatial neurons N for the RQNN model.

On the basis of the MLE, the weights are updated and a new potential V(x, t) is established for the next time evolution. It is expected that the synaptic weights W(x, t) evolve in such a manner as to drive the ψ function to carry the exact information of the pdf of the filtered signal ŷ(t). To achieve this goal, the weights are updated using the following learning rule:

$$ \frac{\partial W(x,t)}{\partial t} = -\beta_d\, W(x,t) + \beta\, \phi(x,t)\,\bigl(1 + v(t)^2\bigr) \qquad (6) $$

where β is the learning rate and βd is the delearning rate. Delearning is used to forget previous information, as the input signal is not stationary but rather quasistationary in nature. The second right-hand-side term in the above equation may be purely positive, and so in the absence of the delearning term the value of the synaptic weights W may keep growing indefinitely. Delearning thus prevents an unbounded increase in the values of the synaptic weights W and does not let the system become unstable. The variable v(t) in the second term is the difference between the noisy input signal and the estimated filtered signal, thereby representing the embedded noise as

$$ v(t) = y(t) - \hat{y}(t). \qquad (7) $$

If the statistical mean of the noise is zero, then this error-correcting signal v(t) has less impact on the weights, and it is the actual signal content in the input y(t) that influences the movement of the wave packet along the desired direction, which helps achieve the goal of signal filtering.
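Taken together, (3)–(7) define a simple per-sample update. The following NumPy sketch illustrates the bookkeeping for one sample; the names are ours, ψ is assumed to have already been evolved by the SWE (Section III-A), and the weight ODE (6) is integrated with a plain Euler step, an assumption not stated in the paper.

```python
import numpy as np

def rqnn_step(psi, W, y_t, x, dx, beta, beta_d, dt, sigma=1.0, zeta=1.0):
    """One RQNN bookkeeping step for a single input sample y_t.

    psi : complex wave function over the lattice (already evolved by the SWE)
    W   : synaptic weights over the lattice
    Returns updated weights, the potential V(x), and the filtered estimate.
    """
    pdf = np.abs(psi) ** 2                                    # |psi|^2 is the tracked pdf
    phi = np.exp(-(y_t - x) ** 2 / (2 * sigma ** 2)) - pdf    # eq. (4)
    V = zeta * W * phi                                        # eq. (3)
    y_hat = np.sum(x * pdf) * dx                              # eq. (5), MLE estimate
    v = y_t - y_hat                                           # eq. (7), residual noise
    dW = -beta_d * W + beta * phi * (1.0 + v ** 2)            # eq. (6)
    W = W + dt * dW                                           # Euler step for the weight ODE
    return W, V, y_hat
```

If ψ is a wave packet concentrated near the true signal value, the estimate ŷ(t) returned by (5) lands on that value, and v(t) stays small.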

A. Numerical Implementation

The space variable x is defined uniformly spaced as $x_n = n\delta x$, $n = -(N/2), \ldots, +(N/2)$, and the time is spaced as $t_k = k\delta t$, $k = 1, \ldots, T$. The potential function is approximated as $V(x_n, t_k) = V_n^k$. This potential function excites the nonlinear SWE to obtain the quantum wave function $\psi_n^k$. Various methods, both explicit and implicit, have been developed for solving the nonlinear SWE numerically on a finite-dimensional subspace [26]. The first approach uses the Crank–Nicholson method [27], an implicit scheme for solving the nonlinear SWE that requires a quasi-tridiagonal system of equations to be solved at each step [28]. This scheme, although accurate, requires solving the inverse of a huge N × N matrix, which is time-consuming. Hence the implementation was carried out using the explicit scheme

$$ i\,\frac{\psi_n^{k+1} - \psi_n^k}{\delta t} = -\frac{\psi_{n+1}^k - 2\psi_n^k + \psi_{n-1}^k}{2m\,\delta x^2} + V_n^k\, \psi_n^k. \qquad (8) $$

This method is linearly stable for $\delta t/(\delta x)^2 \leq 1/4$, with a truncation error of the order of $O(\delta t^2) + O(\delta x^2)$. Another point to note is that we need to maintain the normalized character of the pdf envelope $|\psi|^2$ by normalizing at every step, i.e., $\sum_{n=1}^{N} |\psi_n^k|^2\, \delta x = 1$ for all k.
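The explicit step (8) with the per-step renormalization can be sketched as below. This is a minimal illustration in units with ℏ absorbed (the text works with the reduced Planck constant set to one); the zero boundary values for the Laplacian are our assumption, not stated in the paper.

```python
import numpy as np

def swe_explicit_step(psi, V, dx, dt, m=1.0):
    """One explicit finite-difference step of the SWE, eq. (8), followed by
    renormalization so that sum |psi|^2 dx = 1, as the text requires."""
    lap = np.zeros_like(psi)
    # second central difference (psi_{n+1} - 2 psi_n + psi_{n-1}) / dx^2
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx ** 2
    # rearranging eq. (8) for psi^{k+1}
    psi_new = psi + 1j * dt * (lap / (2 * m) - V * psi)
    norm = np.sqrt(np.sum(np.abs(psi_new) ** 2) * dx)
    return psi_new / norm
```

The step size must respect the stability bound quoted above; e.g., with δx = 0.1 a choice such as δt = 0.002 gives δt/δx² = 0.2.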


Fig. 3. Training scheme of the paradigm with smiley feedback [22].

IV. DATA SETS

The EEG data used in this analysis is data set 2b provided in BCI competition IV [29], with each subject contributing a single session referred to as *03T for the training phase and two sessions referred to as *04E and *05E for the evaluation phase. The data set was obtained using a cue-based paradigm which consists of two classes, namely MI of the left hand (class 1) and the right hand (class 2). Three EEG channels (C3, Cz, and C4) were recorded in bipolar mode with a sampling frequency of 250 Hz and were bandpass-filtered between 0.5 Hz and 100 Hz, with a notch filter at 50 Hz enabled. However, for this investigation, only the two channels C3 and C4 are utilized. As shown in Fig. 3, the trial paradigm started at 0 s with a gray smiley centered on the screen. At 2 s, a short warning beep (1 kHz, 70 ms) was given. The cue was presented from 3 to 7.5 s, and the subjects were accordingly required to perform the specific imagination. At 7.5 s, the screen went blank and a random interval between 1.0 and 2.0 s was added to the trial so as to avoid user adaptation. More details of this EEG signal recording methodology are available in [29].

V. EEG FILTERING WITH RQNN

Fig. 4 shows the position of the RQNN model within the BCI system. The raw EEG signal is fed one sample at a time, and an enhanced signal is obtained as a result of the filtering process. The raw EEG is first scaled into the range 0–2 before it is fed to the RQNN model. During the off-line classifier training process, all the trials from a particular channel of EEG are available. Therefore, the complete EEG is scaled using the maximum amplitude value from that specific channel. During the online process, the EEG signal is approximately scaled into the range 0–2 using the maximum amplitude value obtained from the off-line training data of that specific channel. The net effect is that the input signal during the online process is also maintained approximately in the region 0–2, and this enables the tracking of each sample using a reduced range of movement of the wave packet. In addition, the number of spatial neurons along the x-axis has also been reduced from an earlier value of 401 to 61 in the present case.² The primary assumption in doing this is that the unknown nonstationary and evolving EEG signal during the evaluation stage will stay within the bound of the range of 61 spatial neurons, which can cover the

²If the range of the neuronal lattice is −2 to +2, then with a spacing of 0.1 between neurons, the total number of neurons covering the range will be −2, −1.9, −1.8, . . . , −0.1, 0, +0.1, . . . , 1.9, 2, i.e., 41. However, to incorporate the behavior of the signal during the unknown evaluation stage, the range has been extended up to +3 using 61 neurons.

Fig. 4. RQNN model framework for EEG signal enhancement: channels C3 and C4 are each filtered by an RQNN before feature extraction (band power/Hjorth) and classification.

input signal range up to three. If the scaling of the input signal were not implemented, the number of neurons required to cover the input signal range would be larger, thereby leading to an increased computational expense. This is an important modification with respect to [19], and the scaling of the EEG is now dictated by the training data set. During the off-line training process, the complete set of scaled EEG signals (here the signals from channels C3 and C4 discussed in Section VI) is fed through the two RQNNs, respectively (see Fig. 4), and a filtered estimate of the signal is obtained for the samples from both these channels.
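The paper does not spell out the exact 0–2 mapping; one plausible sketch, dividing by the training-set channel maximum (so training samples fall in roughly −1 to +1) and shifting by one, is the following. The helper names are ours.

```python
import numpy as np

def fit_scaler(train_eeg):
    """Per-channel maximum absolute amplitude from the off-line training data.
    train_eeg: array of shape (n_channels, n_samples)."""
    return np.max(np.abs(train_eeg), axis=-1, keepdims=True)

def scale_to_0_2(eeg, channel_max):
    """Map EEG approximately into the 0-2 range used as RQNN input.
    Online data reuses the training-set maximum, so it is only
    *approximately* bounded, as the text notes."""
    return eeg / channel_max + 1.0
```
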

VI. FEATURE EXTRACTION AND CLASSIFICATION

The next task is to obtain the features from this RQNN-enhanced EEG signal, which in the present case are the Hjorth [30] and band power features. These combined features are then fed as an input to train the off-line classifier, which in this case is a linear discriminant analysis (LDA) classifier. Once the off-line analysis is complete and the classifier is trained, the parameters and weight vector are stored for use with the classifier to identify the unlabeled EEG data during the online analysis. It should be clarified here that, to capture the dynamic property of the continuous EEG signal, the weight update process of the RQNN filter is continuous (to enhance the EEG signal) during both the off-line and online stages, while the classifier parameters are tuned off-line and then kept fixed for the online classification process.

Various FE approaches such as RQNN-generated features, band power, Hjorth, power spectral density (PSD), bispectrum (BSP), and time–frequency (t–f) features have been utilized by various research groups [15], [16], [31]–[35] to produce a good practical BCI system. Most of the BCI research in signal processing is focused on the frequency domain. The band power FE method is based on calculating the squared amplitude of the signal over a small window. This approach typically includes two frequency bands for the purpose of FE: the μ band (8–13 Hz) and the β band (14–24 Hz), although the range of these frequency bands may vary from one subject to the other. The μ and β bands are important as they are more reactive during a cued motor imagery [8], [36]. There is a much larger difference in band power changes [event-related desynchronization (ERD) and event-related synchronization (ERS)] within these bands, and this helps differentiate between hand versus foot MI or right versus left hand MI. In addition, it is also possible to convey relevant information about the EEG epochs with the trio of conventional time-domain descriptive statistics known as the Hjorth parameters, namely

Page 5: Quantum Neural Network-Based EEG Filtering for a Brain–Computer Interface

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

GANDHI et al.: QUANTUM NEURAL NETWORK-BASED EEG FILTERING 5

TABLE I

FIXED RQNN PARAMETERS BEFORE INITIALIZING THE VARIABLE PARAMETER SEARCH

TABLE II

VARIABLE PARAMETERS TO BE SELECTED WITHIN THE SEARCH SPACE

activity, mobility, and complexity [37]. The computational cost of calculating the Hjorth parameters is considered low, as this approach is based on variance [31]. However, the Hjorth parameters, especially complexity, are sensitive to noise because their computation is based on numerical differences and their variances [38]. This prompted the authors to evaluate the RQNN preprocessing technique by utilizing a combination of Hjorth and band power features.
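The two feature families above can be sketched as follows. The band power routine uses a one-shot FFT mask for brevity, whereas an online BCI would use the sliding-window squared amplitude the text describes; the Hjorth definitions (activity = variance, mobility and complexity from variances of successive differences) are standard [37].

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean squared amplitude of x restricted to the [lo, hi] Hz band,
    computed here via an FFT mask (illustrative; a sliding window is
    what an online system would use)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.mean(np.abs(np.fft.irfft(X, n=len(x))) ** 2)

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D epoch."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

For a pure sinusoid the complexity is close to one, which is why noise (which inflates the variance of the differences) shows up so strongly in this parameter.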

VII. RQNN PARAMETER SELECTION

This section discusses the possible ways of selecting the RQNN parameters to suit an individual subject. Four parameters in the RQNN model have been kept fixed and are explained in Table I. These were obtained heuristically, but after suitable trial and experimentation over a small set of EEG data. The variable parameters are selected from the search space explained in Table II through the two-step inner–outer fivefold cross-validation method shown in Fig. 5. The first step is to vary the RQNN parameters within the search space shown in Table II and measure the overall performance of the classifier through an inner–outer cross-validation technique with a limited number of trials, using the Hjorth and band power features over the standard frequency bands of 8–13 Hz and 14–24 Hz. In this first step, the training data set of EEG is separated into five outer folds. Of these, the raw EEG is filtered using the RQNN on four folds using a specific set of parameters over the event-related MI period 3–7 s. Once the RQNN-enhanced signal is obtained, FE is performed. This feature set is then further divided into five inner folds. A normal fivefold cross-validation (CV) is performed on these inner folds to obtain the performance quantifiers [classification accuracy (CA) (i.e., the percentage of correct classifications) and kappa³ value] for the specific parameter combination with the fixed frequency band. This complete step is repeated with all the different combinations of parameters within the search space mentioned in Table II. The five best RQNN parameter sets are chosen from this step as per the highest kappa value. Thus the output from the first step gives the five best RQNN parameter sets from each outer fold that have the potential to efficiently filter the raw EEG. The second step is to find the best subject-specific frequency band in accordance with the five best outer-fold RQNN parameter sets. Therefore, in this step, the raw EEG is filtered using the five best RQNN parameter sets, features are again extracted, and a normal fivefold CV is performed over the complete set of EEG training data. This stage thus gives the one best RQNN parameter and frequency band combination and the optimum time-point⁴ for performing the classification, as per the highest kappa value for each subject. Once these steps are complete, the classifier is chosen at the best time-point so that it can be applied to the unknown evaluation data sets.
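The first step of the selection can be sketched schematically as below. This is our skeleton of the procedure, not the authors' code: `filt`, `feat`, and `score` stand in for the RQNN filter, the feature extractor, and the inner-CV scorer (kappa or accuracy), and the fold bookkeeping is simplified.

```python
import numpy as np

def step_one_search(trials, labels, param_grid, filt, feat, score,
                    n_outer=5, n_inner=5, n_best=5):
    """For each outer fold, rate every candidate RQNN parameter set by
    inner cross-validated score on filtered-then-extracted features, and
    return the n_best parameter sets per outer fold (Section VII, step 1)."""
    outer = np.array_split(np.arange(len(trials)), n_outer)
    best_per_fold = []
    for fold in outer:
        scored = []
        for params in param_grid:
            # RQNN-filter the raw trials of this fold with `params`, then extract features
            feats = [feat(filt(trials[i], params)) for i in fold]
            inner = np.array_split(np.arange(len(fold)), n_inner)
            s = np.mean([score([feats[j] for j in idx],
                               [labels[fold[j]] for j in idx]) for idx in inner])
            scored.append((s, params))
        scored.sort(key=lambda t: -t[0])                 # highest score first
        best_per_fold.append([p for _, p in scored[:n_best]])
    return best_per_fold
```

Step 2 then re-filters the full training set with each surviving parameter set while also sweeping the frequency band, keeping the single best combination and time-point.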

Another common approach to handle the parameter tuning/selection issue is to utilize optimization techniques such as PSO or a genetic algorithm (GA). However, the RQNN model has several parameters that should be varied in agreement with the frequency bands at the FE stage for EEG classification to suit an individual subject. Applying any optimization technique within a large multidimensional search space would be time-consuming. Therefore, PSO has been applied to select the

³Kappa is a measure of agreement between two estimators; since it considers chance agreement, it is regarded as a more robust measure in comparison to accuracy [58].

⁴The optimum time-point is an estimate of a point in time within the trial duration of 8 s that produces features with maximum separation, allowing for classification with the lowest error.

Page 6: Quantum Neural Network-Based EEG Filtering for a Brain–Computer Interface

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

6 IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

Fig. 5. Flowchart for the two-step inner–outer fivefold CV parameter selection (RQNN/frequency band).

RQNN parameters for filtering the simple example signals, while the two-step parameter selection approach has been applied for filtering EEG.
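A minimal global-best PSO of the kind referenced in [20], [21], as used here to pick (β, ζ) for the simple test signals with m fixed, can be sketched as follows; the inertia and acceleration constants are common textbook defaults, not values taken from the paper.

```python
import numpy as np

def pso(cost, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization).
    `bounds` is a (low, high) pair of arrays; returns the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest
```

For parameter selection, `cost` would be the filtering error (e.g., RMSE against the known clean test signal) as a function of the candidate (β, ζ).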

VIII. SAVITZKY–GOLAY FILTER

The performance of the RQNN has been compared with the unfiltered EEG as well as with the well-established SG technique [39]. The SG technique has been utilized as a noise removal approach (in that sense it is similar to the RQNN) in biological signals such as the ECG [40] and the EEG [41], [42]. SG filtering can smooth the signal without destroying its original properties. Hence, the SG approach has been utilized here for comparison with the RQNN model. The RQNN block shown in the EEG framework of Fig. 4 is simply replaced with an SG block.
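SG smoothing fits a low-order polynomial to each sliding window by least squares and keeps the fitted value at the window centre, which is why polynomial trends survive while noise is attenuated. A plain-NumPy sketch is below (`scipy.signal.savgol_filter` is the ready-made equivalent); the window length and polynomial order are illustrative choices, not the settings used in the paper.

```python
import numpy as np

def savitzky_golay(y, window, order):
    """Least-squares polynomial smoothing over a sliding window.
    `window` must be odd and larger than `order`."""
    half = window // 2
    # Vandermonde design matrix over the window offsets -half..+half
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    # row 0 of the pseudo-inverse evaluates the fitted polynomial at offset 0
    h = np.linalg.pinv(A)[0]
    ypad = np.pad(y, half, mode="edge")
    # correlate with the impulse response h (convolve with the reversed kernel)
    return np.convolve(ypad, h[::-1], mode="valid")
```

A useful sanity check of the "without destroying the original properties" claim: any input that is itself a polynomial of degree at most `order` passes through the interior of the filter unchanged.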

IX. RESULTS AND DISCUSSION

A. Simple Example Signals

To validate the RQNN technique before filtering the complex EEG signals, we apply it to filter simple example signals in the form of dc, staircase dc, and sinusoidal signals that have been embedded with a known amount of noise. The dc signal of amplitude 2 is embedded with 0 dB noise (i.e., the SNR is 1), the

Fig. 6. DC, staircase dc, and sine signal filtering with the RQNN (noisy input, noiseless reference, and RQNN-filtered output in each case).

staircase dc with amplitude varying from 0 to 2 is embedded with 20 dB noise, and the sinusoidal signal of amplitude 3 is embedded with 6 dB noise. The parameters of the RQNN model to filter the input dc signal are β = 0.002, m = 0.5, ζ = 775.05, and N = 400, while each sample is iterated once so as to stabilize the SWE (Table I). The parameters β and ζ were obtained using the PSO technique [20], [21] by fixing the parameter m at 0.5. The parameters to filter the sinusoidal signal were obtained as β = 5.25, m = 0.25, ζ = 1.75, and N = 140, and each sample was iterated 60 times before the next sample was fed. The delearning parameter βd has been kept at unity throughout. Fig. 6 shows the filtering of these signals using the RQNN approach. A video showing the movement of the wave packet for dc filtering is available at [43]. The root-mean-square error in filtering the dc signal of amplitude 2 with the proposed RQNN as well as with the Kalman filter [44] is shown in Table III (partially reproduced from [14]) and demonstrates that the RQNN performs better. It can thus be firmly stated from the plots and figures that the RQNN is able to effectively capture the statistical behavior of the input signal and appropriately track the true signal even when fed with a highly noisy input.
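The noise levels quoted above, and the RMSE comparison of Table III, can be reproduced by fixing the noise power from the desired SNR. A sketch (helper names are ours; 0 dB means the noise power equals the signal power):

```python
import numpy as np

def add_noise_snr(signal, snr_db, rng):
    """Embed white Gaussian noise at the given SNR in dB."""
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))   # noise power from SNR
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

def rmse(est, truth):
    """Root-mean-square filtering error, as reported in Table III."""
    return np.sqrt(np.mean((est - truth) ** 2))
```

For the dc test above, a signal of amplitude 2 at 0 dB receives noise of power 4, so the unfiltered RMSE is about 2; any useful filter must come in well below that.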

It is worth highlighting here that the statistical behavior of the noise and signal in terms of the pdf is assumed a priori in the case of the Kalman filter and its variants. However, the proposed RQNN


TABLE III

PERFORMANCE COMPARISON FOR dc SIGNAL OF AMPLITUDE 2

Fig. 7. Representative plot of the RQNN-filtered and raw EEG.

Fig. 8. Snapshots of the wave packets and the MLE that generate the representative plot of the RQNN-filtered EEG shown in Fig. 7.

directly estimates this probability density function without making any such assumption. Thus the proposed model can enhance the EEG signal much better, as the noise pdf is naturally non-Gaussian.

B. EEG-Based BCI

1) Signal Wave Packets: Fig. 8 displays the tracking of the EEG signal in the form of snapshots of wave packets. The movement of the wave packet along the x-axis is shown at time instants t = 5.0 s, t = 5.2 s, t = 5.6 s, and t = 6.0 s. MLE from the wave packet gives the filtered EEG as shown in Fig. 7. This figure displays a representative plot of the raw EEG and the RQNN-enhanced EEG for a time interval between 5 and 6 s. The effect of filtering can be ascertained through ERD/ERS in the frequency domain as well as through an overall performance enhancement of the classifier outcome.
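The MLE step can be illustrated on a discretized wave packet: the squared magnitude of ψ over the spatial grid is normalized to a pdf, and the filtered value is read off at its peak. The Gaussian packet, grid range, and peak location below are illustrative assumptions, not a trained RQNN state.

```python
import numpy as np

# Spatial grid over which the wave packet evolves (illustrative range).
x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]

# Illustrative Gaussian wave packet centred near the current signal value.
psi = np.exp(-(x - 0.7) ** 2 / (2 * 0.1 ** 2)).astype(complex)

# |psi|^2 normalised to integrate to one -> pdf of the signal estimate.
pdf = np.abs(psi) ** 2
pdf /= np.sum(pdf) * dx

# Maximum-likelihood estimate: the grid point where the pdf peaks.
y_hat = x[np.argmax(pdf)]
print(y_hat)
```

Repeating this at every sampling instant, as the packet evolves under the SWE, yields the filtered trace of Fig. 7.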

2) ERD/ERS: Fig. 9 shows a representative ERS obtained with the RQNN-filtered EEG signal and the raw EEG signal for subject four (evaluation set 5E). The

Fig. 9. ERS for RQNN-filtered and raw EEG (subject B0405E); beta band, channel C4; ERD/ERS vs. time (s) for left and right MI with the RQNN and raw signals.

TABLE IV

SUBJECT-SPECIFIC PARAMETERS FROM INNER–OUTER FIVEFOLD CV

Fig. 10. Classification accuracy plot (subject B0405E); accuracy (%) vs. time (s) for the RAW, SG, and RQNN approaches.

ERD/ERS were obtained for all channels by averaging the band power change at each time point across the time interval 4000–6000 ms (standard activity period) with respect to the reference period from 500 to 1500 ms, for all the subjects. The improvement in ERD/ERS with the RQNN-filtered signals for both evaluation data sets is statistically significant (p < 0.04) and enhances the overall BCI performance.
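This band power change follows the standard ERD/ERS definition: relative change of band power with respect to the mean power in the reference period. A minimal sketch on a synthetic band power trace (the sampling rate and toy power values are assumptions) is:

```python
import numpy as np

fs = 250                       # assumed sampling rate (Hz)
t = np.arange(0, 8, 1 / fs)

# Toy beta band power trace: baseline 1.0 with a post-cue power rise (ERS).
band_power = np.ones_like(t)
band_power[(t >= 4.0) & (t < 6.0)] += 0.2

# Reference period 0.5-1.5 s, activity period 4-6 s (as in the text).
ref = band_power[(t >= 0.5) & (t < 1.5)].mean()

# ERD/ERS as relative band power change w.r.t. the reference period:
# negative values indicate ERD, positive values indicate ERS.
erd_ers = (band_power - ref) / ref

activity = erd_ers[(t >= 4.0) & (t < 6.0)].mean()
print(activity)
```

In practice the trace would be the trial-averaged, band-filtered, squared EEG rather than this synthetic step.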

3) Performance Enhancement (CA/Kappa): The list of subject-specific parameters for the RQNN model obtained using the inner–outer fivefold CV (Section VII) is shown in Table IV. Fig. 10 displays the CA plot using the LDA


TABLE V

PEAK CA WITH DIFFERENT MODELS

TABLE VI

MAXIMUM OF KAPPA WITH DIFFERENT MODELS

classifier with the Hjorth and band power features using the raw EEG, the RQNN-filtered EEG, and the SG-filtered EEG signals for the evaluation data set for subject B0405E. Tables V and VI display the peak CA and the maximum of the kappa values, respectively, for the training and the evaluation data sets for all nine subjects. The average improvement with the RQNN technique across all nine subjects is more than 4% in CA (p < 0.0217)5 and 0.08 in kappa (p < 0.0216) when compared with the raw approach using the same combined Hjorth and band power feature setup and subject-specific frequency band (step 2 in Fig. 5). The average improvement with the RQNN technique is >7% in CA (p < 0.0001) and 0.14 in kappa (p < 0.0001) when compared with the SG-filtered approach using the same combined feature setup (and the subject-specific frequency band obtained from step 2 in Fig. 5). These results also show a clear improvement of >9% in average CA (p < 0.0007) and >0.1 in average kappa (p < 0.0006) when compared with the BCI design with PSD features extracted from raw EEG investigated in [35] on the same data set and training/evaluation setup. The RQNN shows improvements of 4% in average CA (p < 0.044) and >0.07 in average kappa (p < 0.044) when compared with the BCI design with BSP features extracted from raw EEG investigated in [35] on the same data set and training/evaluation setup. Table VII displays the average maximum of kappa as well as the maximum of kappa computed from all nine subjects

5Two-way analysis of variance (ANOVA2) test is performed with the results of the training and the evaluation stages for the RQNN-filtered and the raw EEG approach.

at the evaluation stage to compare the performance of the different methods. From the results displayed in Table VII, specifically observing the performance of subject B03, there is a large difference between the maximum of kappa values obtained with BSP (0.29)/PSD (0.27) and those obtained with the raw (0.84) and the RQNN (0.89) approaches. This may be because the BSP and PSD techniques are frequency-based, while the raw and the RQNN techniques in this paper use a combination of frequency-based (band power) and temporal-based (Hjorth) features. To substantiate this, we implemented the inner–outer fivefold cross-validation using only the band power features for both the raw and the RQNN approaches. The resulting average evaluation-stage performance in terms of CA (and maximum of kappa) for subject B03 was 61.9% (0.25) and 58.12% (0.16) with the RQNN and the raw approaches, respectively. Thus, it may be stated that RQNN filtering enhances the performance of the BCI when compared to the raw EEG, but the increase in performance over BSP and PSD may also be attributed to the use of a combination of frequency and temporal features. It can therefore be concluded from these results that the RQNN improves the average performance of the BCI system for almost all the subjects during both the training and the evaluation stages when compared to the unfiltered EEG, the SG-filtered EEG, and even the PSD and BSP feature-based approaches. The same data sets were also processed and classified by several renowned researchers as competitors on the BCI Competition IV 2b data set [45], which is also discussed in [35]. The performance of the RQNN (Table VII) is also significantly better than that obtained by the winners of the BCI competition [45].6 The competition winner used the filter bank CSP technique for FE along with a Naive Bayes Parzen window classifier. The runner-up group used a common spatial subspace decomposition technique for FE followed by an LDA classifier.
The third group used CSP followed by log-variance techniques for FE and the better (at the training stage) of LDA and SVM classifiers. The fourth group used a wavelet technique followed by an LDA classifier, and the fifth used spectral features before a neural network classifier. The sixth group estimated 75 band power features with a recursive feature elimination technique and a Bayesian LDA classifier [35]. Some of the competitors of Competition IV used only session 3 for training, while some combined the three training sessions (combining 1, 3, or 1, 2, or 1, 2, 3) differently for different subjects, and evaluated on sessions 4 and 5 [46]–[51]. In this paper, only session 3 is used for training, while sessions 4 and 5 are used for evaluation. The results thus show that, without prior knowledge of the type of noise characteristics present in the EEG, the RQNN can be utilized to enhance EEG signal separability and that the quantum approach-based filtering method can be used as a signal preprocessing method for BCI.
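The kappa values used throughout this comparison are Cohen's kappa, i.e., classifier agreement corrected for chance. A minimal computation from predicted and true labels (the two-class label arrays below are illustrative) is:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement beyond chance between two label sequences."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    n = y_true.size
    # Build the confusion matrix (rows: true class, columns: predicted).
    cm = np.zeros((classes.size, classes.size))
    for a, b in zip(y_true, y_pred):
        cm[np.searchsorted(classes, a), np.searchsorted(classes, b)] += 1
    p_obs = np.trace(cm) / n                    # observed agreement
    p_exp = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2  # chance agreement
    return float((p_obs - p_exp) / (1 - p_exp))

# Illustrative two-class (left/right MI) labels: 6 of 8 trials correct.
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 0]
print(cohens_kappa(y_true, y_pred))  # -> 0.5
```

For a balanced two-class task, chance agreement is 0.5, so a kappa of 0 corresponds to 50% CA and a kappa of 1 to 100% CA.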

4) Online Real-Time Implementation: The proposed RQNN methodology has also been utilized in online EEG filtering for a real-time MI-based robot control task using an intelligent adaptive user interface, as shown in the videos at [52]. A very

6The average maximum of kappa (across nine subjects) obtained by the first six competitors is 0.6, 0.58, 0.46, 0.43, 0.37, and 0.25, respectively.


TABLE VII

EVALUATION STAGE (*4E, *5E) PERFORMANCE COMPARISON$

TABLE VIII

RQNN PERFORMANCE ON BCI COMPETITION IV 2A DATA SET

important feature of the RQNN filtering methodology is that a single incoming sample (particle) is viewed as a wave packet which evolves as per the potential field (or function) under the influence of the SWE (video at [43]).

5) Investigation on the BCI Competition IV 2a Data Set: The RQNN methodology has also been investigated on the BCI Competition IV 2a data set [53], as displayed in Table VIII. This data set consists of one training set and one evaluation set for nine subjects, with 22 channels and four different MI tasks, namely the imagination of movement of the left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). However, the RQNN approach has been carried out, as before, using only two channels, namely C3 and C4, and only for a two-class classification (left hand versus right hand). Therefore, the data were separated into two classes: EEG with the left hand and the right hand mental imagination tasks. The same two-step procedure was applied (Fig. 5) for parameter selection. The average performance enhancement obtained is >2% in CA (p < 0.0027) and 0.04 in maximum of kappa (p < 0.0031) when compared with the raw EEG. More details about the subject-specific parameters for this data set are available in [54].
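Reducing the 22-channel, four-class 2a data to the two-channel, two-class setting described above amounts to label and channel selection on the epoched data. The array layout (trials × channels × samples), the channel indices assumed for C3/C4, and the label coding below are illustrative assumptions, not the data set's exact storage format.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative epoched data: 40 trials x 22 channels x 500 samples;
# labels 1-4 = left hand, right hand, both feet, tongue (2a convention).
epochs = rng.standard_normal((40, 22, 500))
labels = rng.integers(1, 5, size=40)

C3_IDX, C4_IDX = 7, 11   # assumed positions of C3/C4 in the montage

# Keep only left-hand (1) vs. right-hand (2) trials and the two channels.
keep = np.isin(labels, (1, 2))
epochs_2c = epochs[keep][:, [C3_IDX, C4_IDX], :]
labels_2c = labels[keep]

print(epochs_2c.shape, np.unique(labels_2c))
```

The reduced arrays then feed the same RQNN filtering, FE, and classification chain used for the 2b data.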

X. CONCLUSION

The RQNN was evaluated with case studies of simple signals, and the results show that the RQNN is significantly better than the Kalman filter when filtering a dc signal corrupted with three different noise levels. The learning architecture

and the associated unsupervised learning algorithm of the RQNN have been modified to take into account the complex nature of the EEG signal. The basic approach is to ensure that the statistical behavior of the input signal is properly transferred to the wave packet associated with the response of the quantum dynamics of the network. At every computational sampling instant, the EEG signal is encoded as a wave packet which can be interpreted as the pdf of the signal at that instant. The subject-specific RQNN parameters have been obtained using a two-step inner–outer fivefold cross-validation, which results in an enhanced EEG signal that is used further in the FE and classification processes. The CA and kappa values obtained from the RQNN-enhanced EEG signal show a significant improvement during both the training and the evaluation stages across multiple sessions. This performance enhancement through the RQNN model is superior to that obtained using the raw EEG, the Savitzky–Golay filtered EEG, or even the raw EEG with PSD- or BSP-based features. Future work will involve developing automated computational techniques such as GA or PSO for selecting the subject-specific RQNN model parameters. Improving other stages of the signal processing framework, as highlighted in [55], will also increase the online performance of the BCI for applications in stroke rehabilitation [56] and games [57], among others.

A noteworthy feature of the proposed scheme is that, without introducing any complexity at the FE or classification stages, the performance of the BCI can be significantly improved simply by enhancing the EEG signal at the preprocessing stage.

ACKNOWLEDGMENT

The authors would like to thank InvestNI and the Northern Ireland Integrated Development Fund under the Centre of Excellence in Intelligent Systems Project.

REFERENCES

[1] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. New York, NY, USA: Academic, 1990.

[2] S. Lemm, B. Blankertz, G. Curio, and K. R. Muller, "Spatio-spectral filters for improving the classification of single trial EEG," IEEE Trans. Biomed. Eng., vol. 52, no. 9, pp. 1541–1548, Sep. 2005.

[3] M. Arvaneh, C. Guan, K. K. Ang, and C. Quek, "Optimizing spatial filters by minimizing within-class dissimilarities in electroencephalogram-based brain–computer interface," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, pp. 610–619, Apr. 2013.

[4] H. Zhang, H. Yang, and C. Guan, "Bayesian learning for spatial filtering in an EEG-based brain–computer interface," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 7, pp. 1049–1060, Jul. 2013.

[5] D. Coyle, G. Prasad, and T. M. McGinnity, "Faster self-organizing fuzzy neural network training and a hyperparameter analysis for a brain–computer interface," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1458–1471, Dec. 2009.

[6] D. Coyle, G. Prasad, and T. M. McGinnity, "A time-series prediction approach for feature extraction in a brain–computer interface," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 13, no. 4, pp. 461–467, Dec. 2005.

[7] D. Coyle, "Neural network based auto association and time-series prediction for biosignal processing in brain–computer interfaces," IEEE Comput. Intell. Mag., vol. 4, no. 4, pp. 47–59, Nov. 2009.

[8] G. Pfurtscheller and F. H. Lopes da Silva, "Event-related desynchronization," in Handbook of Electroencephalography and Clinical Neurophysiology, vol. 6. Amsterdam, The Netherlands: Elsevier, 1999.


[9] R. S. Bucy, "Linear and nonlinear filtering," Proc. IEEE, vol. 58, no. 6, pp. 854–864, Jun. 1970.

[10] R. Shankar, Principles of Quantum Mechanics. New York, NY, USA: Plenum, 1994.

[11] M. Planck, Zur Theorie des Gesetzes der Energieverteilung im Normalspectrum. Munich, Germany: Barth, 1900.

[12] R. P. Feynman, "Quantum mechanical computers," Found. Phys., vol. 16, no. 6, pp. 507–531, Jun. 1986.

[13] L. Behera, I. Kar, and A. C. Elitzur, "A recurrent quantum neural network model to describe eye tracking of moving targets," Found. Phys. Lett., vol. 18, no. 4, pp. 357–370, Aug. 2005.

[14] L. Behera and I. Kar, "Quantum stochastic filtering," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2005, pp. 2161–2167.

[15] V. Gandhi, V. Arora, G. Prasad, D. Coyle, and T. M. McGinnity, "A novel EEG signal enhancement approach using a recurrent quantum neural network for a brain–computer interface," in Proc. 3rd Eur. Conf. Tech. Assist. Rehabil., Mar. 2011, pp. 1–8.

[16] V. Gandhi, V. Arora, L. Behera, G. Prasad, D. Coyle, and T. M. McGinnity, "A recurrent quantum neural network model enhances the EEG signal for an improved brain–computer interface," in Proc. Assist. Living, Inst. Eng. Technol. Conf., Apr. 2011, pp. 1–6.

[17] L. Behera, S. Bharat, S. Gaurav, and A. Manish, "A recurrent network model with neurons activated by Schroedinger wave equation and its application to stochastic filtering," in Proc. 9th Int. Conf. High-Perform. Comput., Workshop Soft Comput., Dec. 2002, pp. 1–8.

[18] V. G. Ivancevic, "Adaptive-wave alternative for the Black-Scholes option pricing model," Cognit. Comput., vol. 2, no. 1, pp. 17–30, Jan. 2010.

[19] V. Gandhi, V. Arora, L. Behera, G. Prasad, D. Coyle, and T. McGinnity, "EEG denoising with a recurrent quantum neural network for a brain–computer interface," in Proc. Int. Joint Conf. Neural Netw., Jul./Aug. 2011, pp. 1583–1590.

[20] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Netw., Nov./Dec. 1995, pp. 1942–1948.

[21] J. Kennedy, "The particle swarm: Social adaptation of knowledge," in Proc. IEEE Int. ICEC, Apr. 1997, pp. 303–308.

[22] J. Acacio de Barros and P. Suppes, "Quantum mechanics, interference, and the brain," J. Math. Psychol., vol. 53, no. 5, pp. 306–313, 2009.

[23] R. L. Dawes, "Quantum neurodynamics: Neural stochastic filtering with the Schroedinger equation," in Proc. Int. Joint Conf. Neural Netw., Jun. 1992, pp. 133–140.

[24] K. H. Pribram, Rethinking Neural Networks: Quantum Fields and Biological Data. Mahwah, NJ, USA: Lawrence Erlbaum Assoc., 1993.

[25] L. Behera, I. Kar, and A. C. Elitzur, "Recurrent quantum neural network and its applications," in Proc. Emerging Phys. Consciousness, 2006, pp. 327–350.

[26] T. R. Taha and M. I. Ablowitz, "Analytical and numerical aspects of certain nonlinear evolution equations. II. Numerical, nonlinear Schrödinger equation," J. Comput. Phys., vol. 55, no. 2, pp. 203–230, 1984.

[27] J. Crank and P. Nicolson, "A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type," in Proc. Math. Cambridge Phil. Soc., Jan. 1947, pp. 50–67.

[28] J. Scheffel, "Does nature solve differential equations?" R. Inst. Technol., Stockholm, Sweden, Tech. Rep. TRITA-ALF-2002-02, May 2002.

[29] (2009). BCI Competition IV [Online]. Available: http://www.bbci.de/competition/iv/desc_2b.pdf

[30] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalogr. Clin. Neurophysiol., vol. 29, no. 3, pp. 306–310, 1970.

[31] M. Vourkas, S. Micheloyannis, and G. Papadourakis, "Use of ANN and Hjorth parameters in mental-task discrimination," in Proc. 1st Int. Conf. Adv. Med. Signal Inf. Process., Sep. 2000, pp. 327–332.

[32] C. Vidaurre, A. Schlogl, R. Cabeza, R. Scherer, and G. Pfurtscheller, "Study of on-line adaptive discriminant analysis for EEG-based brain–computer interfaces," IEEE Trans. Biomed. Eng., vol. 54, no. 3, pp. 550–556, Mar. 2007.

[33] P. Herman, G. Prasad, T. M. McGinnity, and D. Coyle, "Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 16, no. 4, pp. 317–326, Aug. 2008.

[34] D. Coyle, G. Prasad, and T. M. McGinnity, "A time-frequency approach to feature extraction for a brain–computer interface with a comparative analysis of performance measures," EURASIP J. Appl. Signal Process., vol. 19, pp. 3141–3151, Feb. 2005.

[35] S. Shahid and G. Prasad, "Bispectrum-based feature extraction technique for devising a practical brain–computer interface," J. Neural Eng., vol. 8, no. 2, pp. 025014-1–025014-12, Mar. 2011.

[36] G. Pfurtscheller, R. Scherer, G. Müller-Putz, and F. H. Lopes da Silva, "Short-lived brain state after cued motor imagery in naive subjects," Eur. J. Neurosci., vol. 28, no. 7, pp. 1419–1426, Oct. 2008.

[37] I. Bankman and I. Gath, "Feature extraction and clustering of EEG during anaesthesia," Med. Biol. Eng. Comput., vol. 25, no. 4, pp. 474–477, 1987.

[38] R. M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach. Piscataway, NJ, USA: IEEE Press, 2002.

[39] A. Savitzky and M. J. E. Golay, "Smoothing and differentiation of data by simplified least squares procedures," Anal. Chem., vol. 36, no. 8, pp. 1627–1639, Jul. 1964.

[40] S. Hargittai, "Savitzky–Golay least-squares polynomial filters in ECG signal processing," in Proc. 32nd Annu. Sci. Comput. Cardiol., Sep. 2005, pp. 763–766.

[41] H. Hassanpour, "A time-frequency approach for noise reduction," Digital Signal Process., vol. 18, no. 5, pp. 728–738, 2008.

[42] A. Zehtabian and B. Zehtabian, "A novel noise reduction method based on subspace division," J. Comput. Eng., vol. 1, no. 1, pp. 55–61, 2009.

[43] V. Gandhi. (2013, Jul. 12). Evolving of the Wave Packet [Online]. Available: http://isrc.ulster.ac.uk/images/stories/Staff/BCI/Members/VGandhi/Video_PhysicalRobotControl/wavepacket_evolves_according_to_swe.mp4

[44] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME J. Basic Eng., vol. 82, pp. 35–45, Mar. 1960.

[45] B. Blankertz. (2008). BCI Competition IV [Online]. Available: http://www.bbci.de/competition/iv/

[46] Z. Chin, K. Ang, C. Wang, C. Guan, H. Zhang, K. Phua, B. Hamadicharef, and K. Tee. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/ZhengYangChin_desc.pdf

[47] H. Gan, L. Guangquan, and Z. Xiangyang. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/HuangGan_desc.pdf

[48] D. Coyle, A. Satti, and T. M. McGinnity. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/DamienCoyle_desc.pdf

[49] S. Lodder. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/ShaunLodder_desc.txt

[50] J. Saa. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/JaimeFernandoDelgadoSaa_desc.txt

[51] Y. Ping, L. Xu, and D. Yao. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/YangPing_desc.txt

[52] V. Gandhi. (2013, Jul. 12). Robot Control Through Motor Imagery [Online]. Available: http://isrc.ulster.ac.uk/Staff/VGandhi/VideoRobotControlThroughMI

[53] C. Brunner, R. Leeb, G. R. Müller-Putz, A. Schlögl, and G. Pfurtscheller. (2009). BCI Competition 2008–Graz Data Set A [Online]. Available: http://www.bbci.de/competition/iv/desc_2a.pdf

[54] V. Gandhi, "Quantum neural network based EEG filtering and adaptive brain-robot interfaces," Ph.D. dissertation, Intell. Syst. Res. Centre, Univ. Ulster, Belfast, U.K., 2012.

[55] D. J. Krusienski, M. Grosse-Wentrup, F. Galán, D. Coyle, K. J. Miller, E. Forney, and C. W. Anderson, "Critical issues in state-of-the-art brain–computer interface signal processing," J. Neural Eng., vol. 8, no. 2, pp. 025002-1–025002-8, Apr. 2011.

[56] G. Prasad, P. Herman, D. Coyle, S. McDonough, and J. Crosbie, "Applying a brain–computer interface to support motor imagery practice in people with stroke for upper limb recovery: A feasibility study," J. Neuroeng. Rehabil., vol. 7, no. 60, pp. 1–17, 2010.

[57] D. Marshall, D. Coyle, S. Wilson, and M. Callaghan, "Games, gameplay, and BCI: The state of the art," IEEE Trans. Comput. Intell. AI Games, vol. 5, no. 2, pp. 82–99, Jun. 2013.

[58] A. Schlögl, J. Kronegg, J. E. Huggins, and S. G. Mason, "Evaluation criteria for BCI research," in Towards Brain-Computer Interfacing, G. Dornhege, J. Millán, T. Hinterberger, and D. McFarland, Eds. Cambridge, MA, USA: MIT Press, 2007.


Vaibhav Gandhi received the B.Eng. degree in instrumentation and control engineering from Bhavnagar University, Gujarat, India, in 2000, the M.Eng. degree in electrical engineering from the M.S. University of Baroda, Baroda, India, in 2002, and the Ph.D. degree in computing and engineering from the University of Ulster, Londonderry, U.K., in 2012. He was a recipient of the U.K.-India Education & Research Initiative scholarship for his Ph.D. research in the area of brain-computer interface for assistive robotics, carried out at the Intelligent Systems Research Centre, University of Ulster, and partly at IIT Kanpur, Kanpur, India. His Ph.D. research was focused on quantum mechanics motivated EEG signal processing and an intelligent adaptive user-centric human-computer interface design for real-time control of a mobile robot for BCI users. His post-doctoral research involved work on shadow-hand multi-fingered mobile robot control using EMG/muscle signals, with contributions also in the 3-D printing aspects of a robotic hand.

He joined the Department of Design Engineering & Mathematics, School of Science & Technology, Middlesex University, London, U.K., in 2013, where he is currently a Lecturer in robotics, embedded systems, and real-time systems. His current research interests include brain-computer interfaces, biomedical signal processing, quantum neural networks, computational intelligence, computational neuroscience, user-centric graphical user interfaces, and assistive robotics.

Girijesh Prasad (M’98–SM’07) received the B.Tech. degree in electrical engineering from NIT (formerly REC), Calicut, India, in 1987, the M.Tech. degree in computer science and technology from IIT (formerly UOR), Roorkee, India, in 1992, and the Ph.D. degree from Queen’s University, Belfast, U.K., in 1997.

He has been an Academic Staff Member with the University of Ulster, Derry, U.K., since 1999, and is currently a Professor of intelligent systems. He is an executive member of the Intelligent Systems Research Centre, Magee Campus, where he leads the Brain-Computer Interface and Assistive Technology Team. He has published over 150 research papers in international journals, books, and conference proceedings. His current research interests include self-organizing hybrid intelligent systems, statistical signal processing, adaptive predictive modelling and control with applications in complex industrial and biological systems including brain modelling, brain-computer interfaces and neuro-rehabilitation, assistive robotic systems, biometrics, and energy systems.

Prof. Prasad is a Chartered Engineer and a fellow of the IET. He is a founding member of the IEEE SMC TCs on Brain-Machine Interface Systems and Evolving Intelligent Systems.

Damien Coyle (SM’12) received a first class degree in computing and electronic engineering in 2002 and a doctorate in intelligent systems engineering in 2006 from the University of Ulster, Londonderry, U.K. Since 2006, he has been a Lecturer/Senior Lecturer with the School of Computing and Intelligent Systems and a member of the Intelligent Systems Research Centre, University of Ulster, where he is a founding member of the brain-computer interface and computational neuroscience research teams. His current research interests include brain-computer interfaces, computational intelligence, computational neuroscience, neuroimaging, and biomedical signal processing, and he has co-authored several journal articles and book chapters. He is the 2008 recipient of the IEEE Computational Intelligence Society’s Outstanding Doctoral Dissertation Award and the 2011 recipient of the International Neural Network Society’s Young Investigator of the Year Award. He received the University of Ulster’s Distinguished Research Fellowship Award in 2011 and a Royal Academy of Engineering/The Leverhulme Trust Senior Research Fellowship in 2013. He is an active volunteer in the IEEE Computational Intelligence Society.

Laxmidhar Behera (S’92–M’03–SM’03) received the B.Sc. in engineering and M.Sc. in engineering degrees from NIT Rourkela, Rourkela, India, in 1988 and 1990, respectively, and the Ph.D. degree from IIT Delhi, Delhi, India. He was an Assistant Professor at BITS Pilani, India, from 1995 to 1999 and pursued postdoctoral studies at the German National Research Center for Information Technology (GMD), Sankt Augustin, Germany, from 2000 to 2001. He is currently a Professor with the Department of Electrical Engineering, IIT Kanpur, Kanpur, India. He joined the Intelligent Systems Research Centre, University of Ulster, Londonderry, U.K., as a Reader on sabbatical from IIT Kanpur from 2007 to 2009. He was a Visiting Researcher/Professor at FHG, Germany, and ETH Zurich, Switzerland. He has published more than 170 papers in refereed journals and conference proceedings. His current research interests include intelligent control, robotics, information processing, quantum neural networks, and cognitive modeling.

Thomas Martin McGinnity (SM’09) received the First Class (Hons.) degree in physics and the Ph.D. degree from the University of Durham, Durham, U.K., in 1975 and 1979, respectively.

He is a Professor of intelligent systems engineering with the Faculty of Computing and Engineering, University of Ulster, Derry, Northern Ireland. He is currently the Director of the Intelligent Systems Research Centre, which encompasses the research activities of over 100 researchers. He was an Associate Dean of the Faculty and the Director of the university’s technology transfer company, Innovation Ulster, and of a spin-off company, Flex Language Services. He is the author or co-author of over 300 research papers and has attracted over £24 million in research funding to the university.

Prof. McGinnity is a fellow of the IET, a Senior Member of the IEEE, and a Chartered Engineer.

