
C. Stephanidis (Ed.): Universal Access in HCI, Part II, HCII 2011, LNCS 6766, pp. 545–554, 2011. © Springer-Verlag Berlin Heidelberg 2011

Social Environments, Mixed Communication and Goal-Oriented Control Application Using a Brain-Computer Interface

Günter Edlinger and Christoph Guger

g.tec medical engineering GmbH and Guger Technologies OG, Herbersteinstrasse 60 8020 Graz, Austria

[email protected]

Abstract. For this study, a P300 BCI speller application framework served as the base for exploring the operation of three different applications. Subjects (i) exchanged messages on the Twitter network (Twitter Inc.), (ii) socialized with other residents in Second Life (Linden Lab) and (iii) controlled a virtual smart home. Although the complexity of the applications varied greatly, all three yielded similar results which are interesting for the general application of BCI for communication and control: (a) icons can be used together with characters in the interface masks and (b) more crucially, the BCI system does not need to be trained on each individual symbol, which allows icons to be used for many different tasks without prior time-consuming and tedious training for each individual icon. Hence this type of BCI system is better suited for goal-oriented control than other currently available BCI systems.

Keywords: Brain-Computer Interface, P300 evoked potential, EEG, BCI.

1 Introduction

Since the early 1990s the BCI research field has grown steadily, driven by relatively high-performance, low-cost computing power as well as EEG instrumentation capable of real-time, closed-loop data processing. However, a first systematic discussion of possible brain-computer communication based on EEG can already be found in J. Vidal [1] in the early 1970s; Farwell and Donchin described the use of evoked potentials for communication in another pioneering work [2]. Since then, the performance and usability of BCI systems have advanced dramatically. Only about ten years ago, one of the pioneering laboratories in BCI research in Europe published the first BCI that could provide communication for disabled users in their homes. However, the system was only validated with two users, required months of training, and was still slow and inaccurate [3]. Thereafter, research laboratories in the USA and Europe were among the first to describe BCIs that could provide real benefit to handicapped people without extensive training [4-6]. Still, BCI users needed several weeks of training to operate the system with acceptable accuracy [7]. Recently, the training time of BCI systems has dropped to only minutes, and some BCI systems do not need any training at all [8;9]. However, BCIs require the user to engage in some conscious, intentional activity to convey information. Work has shown that immersive feedback, which may include virtual reality, can reduce training time and improve accuracy [10;11]. The confluence of ICT techniques (Brain/Neuronal-Computer Interfaces, affective computing, virtual reality, ambient intelligence) and neuropsychology allows their integration into an advanced platform that will improve people's quality of life, not only by providing means for communication but also by performing advanced, user-oriented analysis of deficits and by providing individual training scenarios.

A popular BCI approach is based on the P300 evoked potential. It is elicited when an unlikely event occurs randomly between events with high probability. In the EEG signal the P300 appears as a positive wave about 300 ms after stimulus onset. Its main usage in BCIs is for spelling devices, but it can also be used for control tasks, for example in gaming [12] or navigation (e.g. to move a computer mouse [13]). Krusienski et al. [14] evaluated different classification techniques for the P300 speller, wherein the stepwise linear discriminant analysis (SWLDA) and Fisher's linear discriminant analysis provided the best overall performance and implementation characteristics. A recent study [15], performed on 100 subjects, revealed an average accuracy level of 91.1%, with a spelling time of 28.8 s for one single character. Each character was selected out of a matrix of 36 characters.
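The accuracy and timing figures just quoted can be converted into an information transfer rate with the standard Wolpaw formula, which the text does not spell out; the short Python check below is therefore only an illustrative calculation under that assumption, using the 36-symbol matrix, 91.1% accuracy and 28.8 s per character mentioned above.

```python
# Illustrative ITR check using the standard Wolpaw formula (not given in the
# text); N, P and T are taken from the figures quoted above.
from math import log2

N = 36      # selectable symbols in the matrix
P = 0.911   # average selection accuracy
T = 28.8    # seconds needed for one selection

bits_per_selection = log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1))
itr_bits_per_min = bits_per_selection * 60.0 / T
print(f"{bits_per_selection:.2f} bits/selection, {itr_bits_per_min:.1f} bits/min")
# roughly 4.3 bits per selection, i.e. about 9 bits/min
```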

This paper discusses case-study applications for human-computer interaction scenarios, all based on the P300 evoked potential: virtual smart home control, and speller-like interfaces to operate Twitter and to interact virtually with other participants via Second Life.

2 Methods and P300 Base System

2.1 P300 Base System

A P300 spelling device can be based on a 6 x 6 matrix of different characters displayed on a computer screen. The row/column speller flashes a whole row or a whole column of characters at once in a random order, as shown in Fig. 1A and 1B. The single character speller flashes only one single character at an instant in time. The underlying phenomenon of a P300 speller is the P300 component of the EEG, which is seen when an attended and relatively uncommon event occurs. The subject must concentrate on the specific letter he/she wants to write. When that character flashes, the P300 is induced and the maximum EEG amplitude is typically reached 300 ms after the flash onset. The P300 response is more pronounced in the single character speller than in the row/column speller and is therefore easier to detect [16].

For BCI system training, EEG data are acquired from the subject while the subject focuses on the appearance of specific letters in the copy spelling mode. In this mode, an arbitrary word like LUCAS is presented on the monitor. First, the subject counts whenever the L flashes. Each row, column, or character flashes for e.g. 100 ms per flash. Then the subject counts the U until it flashes 15 times, and so on. The EEG data are then evaluated with respect to the flashing events within a specific interval length, segmented and sent to an LDA to separate the target characters from all non-targets. This yields a subject-specific weight vector WV for the real-time experiments. An interesting aspect of this approach is that the LDA is trained on only 5 characters representing 5 classes, and not on all 36 characters.
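As a rough illustration of this training step, the sketch below cuts one fixed-length epoch per flash and fits a two-class (target vs. non-target) LDA. The actual system runs as a Simulink model with the g.tec toolchain; the sampling rate, epoch length and all variable names here are assumptions.

```python
# Minimal sketch of P300 training on copy-spelling data (assumed names/values).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 256                 # assumed sampling rate in Hz
win = int(0.8 * fs)      # analysis window after each flash onset (0-800 ms)

def extract_epochs(eeg, flash_onsets):
    """eeg: (samples, channels) array; one flattened epoch per flash onset."""
    return np.stack([eeg[s:s + win].ravel() for s in flash_onsets])

def train_p300_lda(eeg, flash_onsets, is_target):
    """is_target is True for flashes that contain the attended letter."""
    X = extract_epochs(eeg, flash_onsets)
    y = np.asarray(is_target, dtype=int)          # two classes only: target / non-target
    lda = LinearDiscriminantAnalysis().fit(X, y)  # yields the subject-specific weights (WV)
    return lda
```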



Fig. 1. A and B display the screen layout of the 36-character speller. Either all characters of one row or column are highlighted at the same time in the row/column speller, or only one single character is highlighted for a certain time in the single character speller. C displays the electrode layout according to [17]. A total of eight electrode positions, distributed mostly over occipital and parietal regions, are used. Red circles indicate the used electrode positions Fz, Cz, P3, Pz, P4, PO7, Oz, PO8. The yellow ring indicates the ground electrode mounted on the forehead at Fpz and the blue ring indicates the reference electrode attached to the right ear lobe. D displays a versatile system setup with the portable wireless EEG device g.MOBIlab+, an active electrode system g.GAMMAsys and the P300 speller application intendiX (courtesy of g.tec medical engineering GmbH, Austria).

Furthermore, a WV averaged across several subjects might be utilized. However, the accuracy of the spelling system also increases with the number of training characters. After the WV has been set up, the real-time experiments can be conducted. The device driver 'g.USBamp' again reads the EEG data from the amplifier. The data are then band-pass filtered ('Filter') to remove drifts and artifacts and down-sampled to 64 Hz ('Downsample 4:1'). The 'RowCol Character Speller' block generates the flashing sequence and the trigger signals for each flashing event and sends the 'ID' to the 'Signal Processing' block. The 'Signal Processing' block creates a buffer for each character. After all characters have flashed, the EEG data are used as input for the LDA and the system decides which letter was most likely attended by the subject. This character is then displayed on the computer screen. Hence such a P300 base concept yields very reliable results with high information transfer rates [14;16;18].
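For orientation, the online decision step described above could be approximated in Python as follows; the filter band, sampling rates and the lda and char_epochs variables are assumptions (the real implementation consists of the Simulink blocks named in the text).

```python
# Sketch of the online chain: band-pass filter, 4:1 downsampling to 64 Hz,
# per-character buffering and one LDA score per symbol (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess(eeg, fs=256.0):
    b, a = butter(4, [0.5, 30.0], btype="bandpass", fs=fs)  # remove drifts and artifacts
    filtered = filtfilt(b, a, eeg, axis=0)
    return decimate(filtered, 4, axis=0)                    # 256 Hz -> 64 Hz

def decide_character(char_epochs, lda):
    """char_epochs: dict mapping each symbol to its list of feature vectors."""
    scores = {c: np.mean(lda.decision_function(np.stack(ep)))
              for c, ep in char_epochs.items()}
    return max(scores, key=scores.get)   # symbol with the strongest P300 evidence
```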

In Guger et al. [16] it was demonstrated that more than 70% of the sample population could use such a spelling setup with an accuracy of 100%. Based on these findings, the experiments described in this manuscript are based on a variant of the Simulink model shown in Fig. 2.


Fig. 2. Real-time Simulink model for P300 speller experiment

3 P300 Applications

3.1 Virtual Smart Home Study

An appropriate interface for controlling a virtual smart home environment was designed. In contrast to the spelling application, the subjects stood in front of a control computer and were instructed to avoid unnecessary movements. Next to the subjects a 3D power wall was installed for projecting the virtual smart home. Brain signals were measured from 8 scalp locations, similar to the speller study. In order to allow some mobility of the subjects, EEG data were acquired with the wireless g.MOBIlab+ (biosignal amplification unit, g.tec medical engineering GmbH, Austria). In the experiment it should be possible for a subject to switch the lights on and off, to open and close the doors and windows, to control the TV set, to use the phone, to play music, to operate a video camera at the entrance, to walk around in the house and to move him/herself to a specific place in the smart home. Hence the Simulink model from Fig. 2 was modified: appropriate Simulink blocks for the wireless g.MOBIlab+, interface masks for smart home control allowing the use of icons and symbols in addition to letters, and a UDP communication block were inserted for controlling the virtual environment by BCI commands. One example of the main interface mask is given in Fig. 3A. A total of 12 subjects participated in the case study. The first task was to train the system on 42 selected icons. In the operation mode the classification result was then sent via a network connection to the control unit of a virtual 3D representation of a smart home (developed by Chris Groenegress and Mel Slater, ICREA-Universitat de Barcelona, Spain). A total of 7 different control masks were operated by the subjects. All subjects needed between 3 and 10 flashes (mean 5.2) per character to reach an accuracy of 95% for the single character speller. This resulted in a maximum information transfer rate of 84 bits/min for the single character speller (further details of the setup and results can be found in [19]).
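The network link from the classifier to the virtual smart home could look roughly like the sketch below; the address, port and plain-text message format are invented for the example, as the text only states that a UDP communication block forwards the classification result to the control unit.

```python
# Hypothetical UDP command sender for the virtual smart home (made-up address
# and message format; only the use of UDP is taken from the text).
import socket

SMART_HOME_ADDR = ("192.168.0.10", 5000)   # assumed address of the control unit

def send_bci_command(icon_id: str) -> None:
    """Send the classified icon, e.g. 'LIGHT_ON', to the virtual environment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(icon_id.encode("ascii"), SMART_HOME_ADDR)

# send_bci_command("LIGHT_ON")   # one call per BCI decision
```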



Fig. 3. Panel A displays the main interface mask consisting of 41 different icons arranged in a rectangular grid. Panel B displays a 3D view of the living room including some of the devices that can be controlled via the BCI, such as the TV set, room light or telephone. Panel C represents the control mask used to be 'beamed' to 21 different locations in the apartment; here 21 characters represent all user-selectable positions. In panel D the living room can be found in the top left corner, the top right corner represents the kitchen, the bottom left corner represents the sleeping room, and in the bottom right corner the bathroom as well as the entrance door of the apartment are located.

In order to measure the performance and accuracy of the control, the subjects had to perform specific tasks, e.g. opening the front door, moving to specific places in the apartment, or manipulating the light source or the room temperature. Interestingly, the performance varied greatly between the interface masks. Fig. 4 gives an overview of the accuracies for the different interface masks. For some masks subjects performed consistently worse than for others. Only about 30% of the decisions were correct for the Goto mask.

3.2 P300 Twitter and Second Life Control

Twitter (Twitter Inc.) is a social network that enables the user to send and read messages. The messages are limited to 140 characters and are displayed on the author's profile page. Messages can be sent via the Twitter website or via smart phones or SMS (Short Message Service). Twitter also provides an application programming interface to send and receive messages. Second Life is a free 3D online virtual world developed by the American company Linden Lab. It was launched on June 23, 2003, and already 5 years later the platform had 15 million registered accounts, with on average 60,000 users online at the same time. Only the free client software "Second Life Viewer" and an account are necessary to participate.

One of the main activities in Second Life is socializing with other so-called residents, where every resident represents a person in the real world (see Fig. 5B). Furthermore, it is possible to perform different activities such as holding business meetings, taking pictures or making movies, attending courses, etc.


Fig. 4. Control accuracy results for the 12 subjects and all interface masks. A green (light gray) coded cell represents a control accuracy of 100%; all other accuracies are indicated in red (dark gray). The numbers presented in the cells represent the actual correct decisions. The topmost row indicates the total number of decisions per mask. The rightmost column indicates the total accuracy of the individual subject over all tasks. The last row indicates the accuracy of one specific interface mask over all subjects.

Communication takes place via text chats, voice chats and gestures. Hence handicapped people could also participate in Second Life like any other user if an appropriate interface were available. Therefore a P300 controller was interfaced to Second Life using an SL controller implemented as C++ Simulink S-functions.

In order to send tweets and participate in Second Life, appropriate interface masks were designed and the P300 base system from Fig. 2 was modified accordingly. The upper panel in Fig. 5A shows a UML diagram of the actions required to use, for example, the Twitter service. The standard P300 spelling matrix based on a 6 x 6 character matrix was therefore enhanced to provide the necessary commands: the first two lines now contain the symbols representing the corresponding Twitter services and the remaining characters are used for spelling. The matrix now contains a total of 54 characters. Initial training of the system was done for 10 characters. Then another user asked questions via Twitter and the BCI user had to answer different questions every other day. In total, the BCI user used the interface on 9 different days and selected between 6 and 36 characters each day. It is interesting to compare the beginning of the study with the end. The first session lasted 11:09 min and the user spelled 13 characters, but made 3 mistakes. The user was instructed to correct any mistake, and this yielded an average selection time of 51 seconds per character. In comparison, in the last session the user spelled 27 characters in 6:38 min with only 1 mistake and an average selection time of 15 seconds per character. Also the number of flashes per character was reduced from 8 to only 3 flashes to increase the speed.
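The per-character selection times quoted for the first and last session follow directly from the session durations and character counts reported in Table 1; the small check below simply re-derives them.

```python
# Re-derive the average selection time per character from the Table 1 figures.
def seconds_per_character(duration_mmss: str, n_chars: int) -> float:
    minutes, seconds = duration_mmss.split(":")
    return (int(minutes) * 60 + int(seconds)) / n_chars

print(round(seconds_per_character("11:09", 13)))  # first session: ~51 s per character
print(round(seconds_per_character("06:38", 27)))  # last session:  ~15 s per character
```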

For the control of Second Life, and similarly to the virtual smart home control, three different interface masks were developed. The lower panel in Fig. 5B displays a screenshot of a Second Life scene and the main mask, which offers 31 different classes to select from.



Fig. 5. Upper panel A: UML diagram of the Twitter service and the P300 Twitter interface mask for control. Lower panel B: Screenshot of a Second Life situation and the Second Life interface main mask to walk forward/backward, turn left/right, slide left/right, climb, teleport home, show map, turn around, activate/deactivate running mode, start/stop flying, decline, activate/deactivate mouselook view, enter search mask, take snapshot, start chat, quit and stand by.

Other masks for control, such as 'chatting' (55 classes) and 'searching' (40 classes), were developed as well. Each icon represents an actual command associated with it. If a certain icon is selected, Second Life is notified to execute the corresponding action, which is actually done using keyboard events. The Second Life control and its performance have been tested in first setups, and preliminary results indicate a performance very similar to the virtual smart home scenario.
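A minimal sketch of this icon-to-keystroke translation is given below; the icon names and key bindings are purely illustrative, since the original controller is implemented as C++ Simulink S-functions that inject the real keyboard events themselves.

```python
# Illustrative mapping from classified icons to Second Life keystrokes
# (bindings are assumptions; actual key injection is platform-specific and omitted).
ICON_TO_KEY = {
    "WALK_FORWARD": "Up",
    "WALK_BACKWARD": "Down",
    "TURN_LEFT": "Left",
    "TURN_RIGHT": "Right",
    "START_STOP_FLYING": "Home",
}

def icon_to_keystroke(icon_id: str) -> str:
    """Return the keystroke associated with a classified icon."""
    if icon_id not in ICON_TO_KEY:
        raise ValueError(f"no keyboard binding defined for icon {icon_id!r}")
    return ICON_TO_KEY[icon_id]
```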

4 Conclusion and Outlook

BCI-enabled control and communication is a new skill a subject has to learn. In an initial adaptation phase the BCI system is trained on the specific subject's brain activity. In addition, the subjects have to get used to and adapt to the BCI system as well. The time needed for a subject to adapt to the system is far shorter in exogenous BCIs such as P300 approaches. Such BCI systems yield higher accuracies in a higher number of subjects and therefore give more reliable results for control purposes. However, subjects have to look at flashing or flickering light sources or pay attention to tactile stimulations. Hence external stimulations might interfere with daily life situations and may distract subjects from other ongoing activities.


Table 1. Questions and text input with the TWITTER BCI system.

Tweets | No. Char | Duration [mm:ss] | Errors | No. Flashes | Time per character [s]

Friend: Which kind of Brain-Computer Interface do you use?
BCI: P300 GTEC BCI | 13 | 11:09 | 3 | 8 | 51

Friend: Are you using the g.GAMMAsys?
BCI: Exactly! | 7 | 06:18 | 1 | 8 | 54

Friend: Active or passive electrodes? For explanation: the active system avoids or reduces artifacts and signal noise.
BCI: Active electrodes | 17 | 06:10 | 0 | 5 | 22

Friend: The mounting of the active system is very comfortable. You do not need to prepare the skin first, do you?
BCI: you are absolutely right | 24 | 08:55 | 1 | 5 | 22

Friend: How many electrodes are needed to run the BCI?
BCI: For P300 we usually use 8 electrodes | 36 | 14:21 | 2 | 5 | 24

Friend: What amplifier are you using for the Brain-Computer Interface?
BCI: g.MOBIlab+ | 10 | 04:42 | 1 | 5 | 28

Friend: How long does it take to code the software for the BCI for TWITTER?
BCI: 3 Weeks | 7 | 03:13 | 1 | 4 | 28

Friend: How many characters are you able to write within a minute?
BCI: 3 TO 4 | 6 | 03:15 | 0 | 5 | 33

Friend: Did you get faster in writing during this period?
BCI: Yes, from 2 to 4 characters | 27 | 06:38 | 1 | 3 | 15

One of the most consistent observations in the BCI literature is the fact that a certain percentage of the population cannot operate a specific type of BCI, for various reasons. Inter-subject as well as intra-subject variability often leads to so-called BCI illiteracy [10;15;20;21]. Across the different BCI approaches, around 20%-25% of subjects are unable to control one type of BCI in a satisfactory way. Therefore, the usage of 'hybrid' BCIs has been introduced, combining the output of a somatosensory rhythm BCI with P300 or steady-state visually evoked potential based BCIs [22;23] and enabling subjects to choose between these different approaches for optimal BCI control. A study by Hong et al. [24] recently compared an N200 and a P300 speller (tested on the same subjects) and found similar accuracy levels for both. This gives evidence that a closer look at the N200 component could be promising, at least for some subjects. Hence BCI illiteracy could be overcome, or at least minimized, by investigating subject-specific preferences more thoroughly. Designing the interface masks more appropriately can also improve the usability and success rate of a BCI. Hence the quite poor control results of all subjects for the specific Goto control mask in the virtual environment can be explained by the layout and contrasts used in the mask rather than by the number of icons. The group of Kansaku reported an improvement of P300 BCI operation using an appropriate color set for the flashing letters or icons [25]. The different applications discussed in this manuscript yielded consistent results: (i) characters and icons/symbols can be used in a similar way to set up and operate different interfaces of quite different complexity; (ii) an average classifier can be utilized for many subjects right from the beginning to operate a BCI, minimizing system calibration and training time; and (iii) the experience from the virtual environment can be utilized in real world applications.

Acknowledgments. Funded partly by the EC grant contract FP7/2007-2013 under the BrainAble project, FP7-ICT-2009-247935 (Brain-Neural Computer Interaction for Evaluation and Testing of Physical Therapies in Stroke Rehabilitation of Gait Disorders) and contract IST-2006-27731 (PRESENCCIA).

References

1. Vidal, J.J.: Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 2, 157–180 (1973)

2. Farwell, L.A., Donchin, E.: Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol. 70, 510–523 (1988)

3. Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., Kotchoubey, B., Kubler, A., Perelmouter, J., Taub, E., Flor, H.: A spelling device for the paralysed. Nature 398, 297–298 (1999)

4. Pfurtscheller, G., Guger, C., Muller, G., Krausz, G., Neuper, C.: Brain oscillations control hand orthosis in a tetraplegic. Neurosci. Lett. 292(3), 211–214 (2000)

5. Vaughan, T.M., Wolpaw, J.R., Donchin, E.: EEG-based communication: prospects and problems. IEEE Trans. Rehabil. Eng. 4, 425–430 (1996)

6. Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M.: Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113(6), 767–791 (2002)

7. Neuper, C., Scherer, R., Wriessnegger, S., Pfurtscheller, G.: Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface. Clin. Neurophysiol. 120, 239–247 (2009)

8. Blankertz, B., Losch, F., Krauledat, M., Dornhege, G., Curio, G., Muller, K.R.: The Berlin Brain-Computer Interface: Accurate Performance From First-Session in BCI-Naive Subjects. IEEE Transactions on Biomedical Engineering 55(10), 2452–2462 (2008)

9. Haihong, Z., Cuntai, G., Chuanchu, W.: Asynchronous P300-Based Brain-Computer Interfaces: A Computational Approach With Statistical Models. IEEE Transactions on Biomedical Engineering 55(6), 1754–1763 (2008)

10. Leeb, R., Lee, F., Keinrath, C., Scherer, R., Bischof, H., Pfurtscheller, G.: Brain-Computer Communication: Motivation, Aim, and Impact of Exploring a Virtual Apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering 15(4), 473–482 (2007)

11. Ron-Angevin, R., Diaz-Estrella, A.: Brain-computer interface: Changes in performance using virtual reality techniques. Neurosci. Lett. 449, 123–127 (2009)

12. Finke, A., Lenhardt, A., Ritter, H.: The MindGame: A P300-based brain-computer interface game. Neural Networks 118, 1329–1333 (2009)

13. Citi, L., Poli, R., Cinel, C., Sepulveda, F.: P300-Based BCI Mouse With Genetically-Optimized Analogue Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 16(1), 51–61 (2008)

14. Krusienski, D.J., Sellers, E.W., McFarland, D.J., Vaughan, T.M., Wolpaw, J.R.: Toward enhanced P300 speller performance. J. Neurosci. Methods 167, 15–21 (2008)

15. Guger, C., Edlinger, G., Harkam, W., Niedermayer, I., Pfurtscheller, G.: How many people are able to operate an EEG-based brain-computer interface (BCI)? IEEE Trans. Neural Syst. Rehabil. Eng. 11, 145–147 (2003)


16. Guger, C., Daban, S., Sellers, E., Holzner, C., Krausz, G., Carabalona, R., Gramatica, F., Edlinger, G.: How many people are able to control a P300-based brain-computer interface (BCI)? Neurosci. Lett. 462, 94–98 (2009)

17. Sellers, E.W., Krusienski, D.J., McFarland, D.J., Vaughan, T.M., Wolpaw, J.R.: A P300 event-related potential brain-computer interface (BCI): the effects of matrix size and inter stimulus interval on performance. Biol. Psychol. 73, 242–252 (2006)

18. Thulasidas, M., Guan, C., Wu, J.: Robust classification of EEG signal for brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 14, 24–29 (2006)

19. Edlinger, G., Holzner, C., Groenegress, C., Guger, C., Slater, M.: Goal-Oriented Control with Brain-Computer Interface. HCI 16, 732–740 (2009)

20. Muller-Putz, G.R., Pfurtscheller, G.: Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans. Biomed. Eng. 55, 361–364 (2008)

21. Allison, B., Luth, T., Valbuena, D., Teymourian, A., Volosyak, I., Graser, A.: BCI Demographics: How Many (and What Kinds of) People Can Use an SSVEP BCI? IEEE Transactions on Neural Systems and Rehabilitation Engineering 18(2), 107–116 (2010)

22. Pfurtscheller, G., Allison, B.Z., Brunner, C., Bauernfeind, G., Solis-Escalante, T., Scherer, R., Zander, T.O., Mueller-Putz, G., Neuper, C., Birbaumer, N.: The Hybrid BCI. Front Neurosci 4(42), 42 (2010)

23. Edlinger, G., Holzner, C., Guger, C., Groenegress, C., Slater, M.: Brain-computer interfaces for goal orientated control of a virtual smart home environment. In: 4th International IEEE/EMBS Conference on Neural Engineering, NER 2009, pp. 463–465 (2009)

24. Hong, B., Guo, F., Liu, T., Gao, X., Gao, S.: N200-speller using motion-onset visual response. Clin. Neurophysiol. 120, 1658–1666 (2009)

25. Komatsu, T., Hata, N., Nakajima, Y., Kansaku, K.: A non-training EEG-based BMI system for environmental control. Neurosci. Res. 61(suppl. 1), S251 (2008)
