BRAIN-COMPUTER INTERFACE (BCI)


INTRODUCTION:-

For generations, humans have fantasized about the ability to communicate and interact with machines through thought alone, or to create devices that can peer into a person's mind and thoughts. These ideas have captured the imagination of humankind in the form of ancient myths and modern science fiction stories. However, it is only recently that advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. This ability is made possible by sensors that can monitor some of the physical processes occurring within the brain that correspond with certain forms of thought.

Primarily driven by growing societal recognition of the needs of people with physical disabilities, researchers have used these technologies to build brain-computer interfaces (BCIs): communication systems that do not depend on the brain's normal output pathways of peripheral nerves and muscles. The direct brain-computer interface is a developing field that adds this new dimension of functionality to human-computer interaction (HCI). BCI has created a novel communication channel, especially for users who are unable to generate the muscular movements needed to operate typical HCI devices.

WHAT IS BCI:-

A Brain-Computer Interface (BCI), often called a Mind-Machine Interface (MMI) and sometimes a Direct Neural Interface (DNI), Synthetic Telepathy Interface (STI), or Brain-Machine Interface (BMI), is a direct communication pathway between the brain and an external device. In other words, a BCI is a collaboration between a brain and a device that enables signals from the brain to direct some external activity, such as control of a cursor or a prosthetic limb. The interface provides a direct communication pathway between the brain and the object to be controlled. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. In the case of cursor control, for example, the signal is transmitted directly from the brain to the mechanism directing the cursor, rather than taking the normal route through the body's neuromuscular system from the brain to the finger on a mouse.

The potential of BCI systems for helping handicapped people is obvious. In these systems, users explicitly manipulate their brain activity, instead of using motor movements, to produce signals that can be used to control computers or communication devices. The reason a BCI works at all lies in the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel, or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.

Although the paths the signals take are insulated by myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean, and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone's brain whenever the camera saw red, allowing a blind person to "see" without eyes.

WORKING PRINCIPLE OF BCI:-

There are several computer interfaces designed for disabled people. Most of these systems, however, require some form of reliable muscular control, such as of the neck, head, eyes, or other facial muscles. It is important to note that although a BCI requires only neural activity, it uses neural activity generated voluntarily by the user. Interfaces based on involuntary neural activity, such as that generated during an epileptic seizure, use many of the same components and principles as BCI but are not included in this field. BCI systems are therefore especially useful for severely disabled, or locked-in, individuals with no reliable muscular control through which to interact with their surroundings.

HISTORY OF BCI:-

The history of brain-computer interfaces starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924, Berger was the first to record human brain activity by means of EEG, and he was able to identify oscillatory activity in the brain by analyzing EEG traces. Following Berger's 1929 publication on the device that later came to be known as the electroencephalogram, which could record electrical potentials generated by brain activity, there was speculation that devices could perhaps be controlled using these signals.

Hans Berger and Berger's patient

For a long time, however, this remained speculation. As reviewed by Wolpaw and colleagues, roughly 40 years later, in the 1970s, researchers were able to develop primitive control systems based on electrical activity recorded from the head. The Pentagon's Defense Advanced Research Projects Agency (DARPA), the same agency involved in developing the first versions of the Internet, funded research focused on developing bionic devices that would aid soldiers. Early research, conducted by George Lawrence and coworkers, focused on developing techniques to improve the performance of soldiers in tasks with high mental loads. His research produced considerable insight into methods of autoregulation and cognitive biofeedback, but did not produce any usable devices.

DARPA logo

DARPA then expanded its focus toward the more general field of biocybernetics. The goal was to explore the possibility of controlling devices through the real-time computerized processing of any biological signal. Jacques Vidal of UCLA's Brain-Computer Interface Laboratory provided evidence that single-trial visual evoked potentials could be used as a communication channel effective enough to control a cursor through a two-dimensional maze.

Work by Vidal and other groups proved that signals from brain activity could be used to effectively communicate a user's intent. It also created a clear-cut separation between systems using EEG activity and those using EMG (electromyogram) activity generated by scalp or facial muscle movements. Later work expanded BCI systems to use neural activity recorded not only by EEG but also by other imaging techniques.

At the European Research and Innovation Exhibition in Paris in June 2006, American scientist Peter Brunner composed a message simply by concentrating on a display. Brunner wore a close-fitting (but completely external) cap fitted with a number of electrodes. Electroencephalographic (EEG) activity from Brunner's brain was picked up by the cap's electrodes, and the information was used, along with software, to identify specific letters or characters for the message.

The BCI Brunner demonstrated is based on a method called the Wadsworth system. Like other EEG-based BCI technologies, the Wadsworth system uses adaptive algorithms and pattern-matching techniques to facilitate communication. Both user and software are expected to adapt and learn, making the process more efficient with practice.

Peter Brunner

During the presentation, a message was displayed from an American neurobiologist who uses the system to continue working despite suffering from amyotrophic lateral sclerosis (Lou Gehrig's disease). Although the scientist can no longer move even his eyes, he was able to send the following e-mail message: "I am a neuroscientist wHo (sic) couldn't work without BCI. I am writing this with my EEG courtesy of the Wadsworth Center Brain-Computer Interface Research Program."

BASIC COMPONENTS OF BCI:-

To understand the requirements of basic research in BCI, it is important to put it in the context of the entire BCI system. The work of Mason and Birch (2003), which is adapted in this section, presented a general functional model for BCI systems upon which a universal vocabulary could be developed and different BCI systems could be compared in a unified framework. The goal of a BCI system is to allow the user to interact with the device. This interaction is enabled through a variety of intermediary functional components, control signals, and feedback loops, as detailed in the following figure. Intermediary functional components perform specific functions in converting intent into action. By definition, this means that the user and the device are also integral parts of a BCI system. Interaction is also made possible through feedback loops that serve to inform each component in the system of the state of one or more components.

Functional components and feedback loops in a brain-computer interface system

The user's brain activity is measured by the electrodes and then amplified and signal-conditioned by the amplifier. The feature extractor transforms the raw signals into relevant feature vectors, which are classified into logical controls by the feature translator. The control interface converts the logical controls into semantic controls that are passed on to the device controller. The device controller changes the semantic controls into physical device-specific commands that are executed by the device. The BCI system, therefore, can convert the user's intent into device action.
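
To make this chain concrete, here is a minimal sketch in Python that runs the same stages on synthetic data. The function names, the alpha-band feature, and the decision threshold are illustrative assumptions, not components of any published BCI system.

import numpy as np

def amplify(raw, gain=1000.0):
    # Amplifier: mean-centre each channel and apply a fixed gain.
    return gain * (raw - raw.mean(axis=-1, keepdims=True))

def extract_features(signal, fs=256):
    # Feature extractor: average alpha-band (8-12 Hz) power per channel.
    spectrum = np.abs(np.fft.rfft(signal, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(signal.shape[-1], d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[..., band].mean(axis=-1)      # one feature per channel

def translate(features, threshold=1e7):
    # Feature translator: map the feature vector to a logical control
    # (the threshold is arbitrary for this toy example).
    return "SELECT" if features.mean() > threshold else "IDLE"

def control_interface(logical_control):
    # Control interface: map logical controls to semantic controls.
    return {"SELECT": "move_cursor_right", "IDLE": "hold_position"}[logical_control]

def device_controller(semantic_control):
    # Device controller: turn the semantic control into a device-specific command.
    return "cursor." + semantic_control + "()"

rng = np.random.default_rng(0)
raw_eeg = rng.normal(size=(8, 256))               # 8 channels, 1 s of synthetic "EEG"
features = extract_features(amplify(raw_eeg))
command = device_controller(control_interface(translate(features)))
print(command)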

A notable experiment was conducted by Nicolelis and Chapin (2002), in which monkeys controlled a robot arm in real time through electrical discharges recorded by microwires placed beside individual motor neurons. Various motor-control parameters, including the direction of hand movement, gripping force, hand velocity, acceleration, and three-dimensional position, were derived from the parallel streams of neuronal activity by mathematical models. In this system, monkeys learn to produce complex hand movements in response to arbitrary sensory cues. The monkeys could exploit visual feedback to judge for themselves how well the robot could mimic their hand movements. Refer to the following figure for a detailed description.

Experimental design used to test a closed-loop control brain-machine interface for motor control in macaque monkeys.

Chronically implanted microwire arrays are used to sample the extracellular activity of populations of neurons in several cortical motor regions. Linear and nonlinear real-time models are used to extract various motor-control signals from the raw brain activity, and the outputs of these models are used to control the movements of a robot arm. For instance, while one model might provide a velocity signal to move the robot arm, another model, running in parallel, might extract a force signal that can be used to allow a robot gripper to hold an object during an arm movement. Artificial visual and tactile feedback signals are used to inform the animal about the performance of a robot arm controlled by brain-derived signals. Visual feedback is provided by using a moving cursor on a video screen to inform the animal about the position of the robot arm in space. Artificial tactile and proprioceptive feedback is delivered by a series of small vibromechanical elements attached to the animal's arm. This haptic display is used to inform the animal about the performance of the robot arm gripper (whether the gripper has encountered an object in space, or whether the gripper is applying enough force to hold a particular object). ANN, artificial neural network; LAN, local area network.
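
As a rough illustration of the kind of linear decoding model mentioned above, the following sketch fits a least-squares mapping from the binned firing rates of a simulated neuronal population to a two-dimensional hand-velocity signal. The data, dimensions, and model are synthetic assumptions made for illustration; this is not the actual model used by Nicolelis and Chapin.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_neurons = 2000, 50                    # 50 simultaneously recorded units
true_weights = rng.normal(size=(n_neurons, 2))     # hidden mapping used to simulate data

# Synthetic binned firing rates and the 2-D hand velocity they "encode".
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit a linear decoder by least squares on a training block,
# then predict velocity from new activity bins.
train, test = slice(0, 1500), slice(1500, None)
weights, *_ = np.linalg.lstsq(rates[train], velocity[train], rcond=None)
predicted = rates[test] @ weights

corr = np.corrcoef(predicted[:, 0], velocity[test][:, 0])[0, 1]
print("decoded vs. simulated x-velocity correlation:", round(corr, 2))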

TECHNIQUES:-

INVASIVE TECHNIQUE:-

One invasive approach uses the Utah electrode array, whose individual electrodes are tapered to a tip, with diameters of less than 90 µm at their base; they penetrate only 1-2 mm into the brain. Invasive techniques cause a significant amount of discomfort and risk to the patient, so researchers use them in human subjects only if they provide a considerable improvement in functionality over available noninvasive methods. A majority of the initial research, therefore, is conducted in animals, especially monkeys and rats; this line of work is also called the brain-machine interface (BMI). Research in these animals has led to the rapid development of microelectronics that enable the recording of electrophysiological activity from a small group of neurons or even a single neuron. Present technology allows reliable simultaneous sampling of 50-200 neurons, distributed across multiple cortical areas of small primates, for a period of a few years.

Recording brain activity using invasive signal acquisition methods

Cone-shaped glass electrodes are implanted through the skull and directly into specific areas of the brain. The electrode is filled with a neurotrophic factor that causes neurites to grow into the cone and contact one of the gold wires, which transmit the electrical activity to a receiver unit outside the head, where it is amplified and passed to the BCI controller via a transmitter.

The advantage of these types of invasive techniques is the high spatial and temporal resolution that can be achieved, since recordings can be made from individual neurons at very high sampling rates. Intracranially recorded signals carry more information and allow quicker responses, which might reduce the training and attention required (Sanchez et al., 2004). Several issues, however, have to be considered (Lauer et al., 2000). First, long-term stability of the signal over days and years is hard to achieve: the user should be able to generate the control signal reliably without the need for frequent retuning. Second is the issue of cortical plasticity following a spinal cord injury. It has been hypothesized that the motor cortex undergoes reorganization after a spinal cord injury, but the degree is unknown (Brouwer and Hopkins-Rosseel, 1997). Finally, if a neuroprosthesis that requires a stimulus to the disabled limb is used, this stimulus would also produce a significant artifact on the scalp that might interfere with the signal of interest.

NON-INVASIVE TECHNIQUE:-

There are many methods of measuring brain activity through noninvasive means. Noninvasive techniques reduce risk for users since they do not require surgery or permanent attachment to the device. Techniques such as computerized tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) have all been used to measure brain activity noninvasively.

Many EEG-based BCI systems use an electrode placement strategy suggested by the International 10/20 system, as detailed in the following figure. For better spatial resolution, it is also common to use a variant of the 10/20 system that fills in the spaces between the electrodes of the 10/20 system with additional electrodes.

Placement of electrodes for noninvasive signal acquisition using an electroencephalogram (EEG)

This standardized arrangement of electrodes over the scalp is known as the International 10/20 system and ensures ample coverage of all parts of the head. The exact position of each electrode is at the intersection of lines calculated from measurements between standard skull landmarks. The letters of each electrode label identify the underlying brain region (Fp, prefrontal region; F, frontal lobe; T, temporal lobe; C, central region; P, parietal lobe; O, occipital lobe). The number or the second letter identifies the hemispherical location: Z (denoting "zero") refers to an electrode placed along the cerebrum's midline, even numbers represent the right hemisphere, and odd numbers represent the left hemisphere, with the numbers ascending with increasing distance from the midline.
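
The small helper below applies this naming convention to a few electrode labels. It is purely illustrative and not part of any EEG library; the region and hemisphere mapping simply follows the description above.

LOBES = {"FP": "prefrontal", "F": "frontal", "T": "temporal",
         "C": "central", "P": "parietal", "O": "occipital"}

def describe_electrode(label):
    # Split the label into its letter part (region) and its suffix (Z or a number).
    name = label.upper()
    letters = name.rstrip("Z0123456789")
    suffix = name[len(letters):]
    region = LOBES.get(letters, "unknown region")
    if suffix == "Z":
        side = "midline"
    elif suffix.isdigit() and int(suffix) % 2 == 0:
        side = "right hemisphere"
    elif suffix.isdigit():
        side = "left hemisphere"
    else:
        side = "unspecified"
    return label + ": " + region + ", " + side

for name in ["Fp1", "Fz", "C3", "P4", "Oz", "T6"]:
    print(describe_electrode(name))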

NOTE: Signals captured through invasive or noninvasive methods contain a lot of noise. The first step in feature extraction and translation is to remove this noise. This is followed by the selection of relevant features through feature extraction techniques that focus on maximizing the signal-to-noise ratio. Finally, feature translation techniques are used to classify the relevant features into one of the possible states. This is illustrated in the following figure:

Processing steps required to convert user's intent, encoded in the raw signal, into device action
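
A compact sketch of these three steps on synthetic data might look as follows: band-pass filtering to suppress noise, log band-power feature extraction, and translation of the features into one of two states with a linear classifier. The SciPy and scikit-learn calls are standard; the signal parameters, the 8-30 Hz band, and the two-class setup are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs, n_trials, n_channels, n_samples = 256, 200, 8, 512
rng = np.random.default_rng(2)

# Synthetic trials: class 1 carries extra 10 Hz power on the first four channels.
labels = rng.integers(0, 2, n_trials)
t = np.arange(n_samples) / fs
trials = rng.normal(size=(n_trials, n_channels, n_samples))
trials[labels == 1, :4] += 0.8 * np.sin(2 * np.pi * 10 * t)

# Step 1: noise removal -- band-pass filter (8-30 Hz) to suppress drift and high-frequency noise.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trials, axis=-1)

# Step 2: feature extraction -- log band power per channel.
features = np.log(np.var(filtered, axis=-1))

# Step 3: feature translation -- classify each trial into one of two states.
clf = LinearDiscriminantAnalysis().fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))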

BCI APPLICATION:-

BCI applications can be broadly classified into two main categories: non-medical applications and medical applications.

NON-MEDICAL APPLICATIONS:

BCIs have a wide range of applications that can benefit users who do not necessarily have a medical indication to use one. This section provides an overview of seven high-level application areas and a few examples within each area.

1. DEVICE CONTROL:

One of the driving forces behind the development of BCIs was the desire to give users who lack full control of their limbs access to devices and communication systems. Under these circumstances, users can already benefit from a device even if it has limited speed, accuracy, and efficiency. For healthy users who have full muscular control, a BCI currently cannot act as a competitive source of control signals because of its limited bandwidth and accuracy. Also, the generation of control signals and their extraction from the brain signals may introduce latency of several hundred milliseconds or more into the control loop, which makes smooth closed-loop control impossible. The introduction of a form of shared control between the device and the user, in which the user gives more high-level, open-loop commands, may overcome the latency issue. However, for limited application scenarios, healthy users could also benefit from either additional control channels or hands-free control. Examples include drivers, divers, and astronauts who need to keep their hands on the steering wheel, to swim, or to operate equipment. Brain-based control paradigms are being developed for these applications in addition to, for instance, voice control. This field has a strong body of research on single-task situations and patient users, but lacks data on control in multi-task environments and for healthy users. Also, the added value over other hands-free input modalities, such as voice or eye movement control, must still be shown.

2. USER STATE MODELLING:

Future generations of user-system interfaces will need to be able to understand and anticipate the user's state and intentions. For instance, automobiles will react to driver drowsiness, and virtual humans could convince users to follow their diet. These future implementations of so-called user-system symbiosis or affective computing [4] require systems to gather and interpret information on mental states such as emotions, attention, workload, fatigue, stress, and mistakes. Another application field is the use of BCIs as a research tool in neuroscientific research: compared to slower, lab-based functional imaging methods, a BCI can monitor the acting brain in real time and in the real world, and thus help to understand the role of functional networks during behavioural tasks. Brain-based indices of these user states extend physiological measures like heart rate variability and skin conductance, and are thus complementary to, rather than in competition with, existing technology. There is a limited but fast-growing body of knowledge for these applications, and a very high societal impact and economic viability. High-end applications, e.g. for air traffic controllers and professional drivers (such as driver drowsiness detectors), are being seriously investigated and may have good spinoffs for the general public. Better user interfaces can make an important contribution to societal challenges such as providing access to electronic systems and services for all (including the aging population and people with cognitive challenges), a healthy lifestyle, and (traffic) safety.
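
As an illustration of what such a brain-based user-state index might look like, the sketch below estimates band powers from an EEG segment and combines them into a single engagement-type score using a beta/(alpha+theta) ratio, one simple index discussed in the literature. The data are synthetic, and the band limits and scoring are illustrative assumptions rather than a validated workload monitor.

import numpy as np

def band_power(segment, fs, low, high):
    # Mean power of `segment` (channels x samples) in the [low, high] Hz band.
    spectrum = np.abs(np.fft.rfft(segment, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(segment.shape[-1], d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[:, mask].mean()

def engagement_index(segment, fs=256):
    # beta / (alpha + theta): higher values are read as higher engagement/workload.
    theta = band_power(segment, fs, 4, 8)
    alpha = band_power(segment, fs, 8, 13)
    beta = band_power(segment, fs, 13, 30)
    return beta / (alpha + theta)

rng = np.random.default_rng(3)
eeg_segment = rng.normal(size=(8, 512))    # 8 channels, 2 s at 256 Hz (synthetic)
print("engagement index:", round(engagement_index(eeg_segment), 3))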

3. EVALUATION:

Evaluation applications can be used in an online fashion (by constant monitoring) or an offline fashion. Neuromarketing and neuroergonomics are two evaluation examples. Neuroergonomics has a clear link to human-computer interaction: it evaluates how well a technology matches human capabilities and limitations. An illustrative example is recent research on cell phone use during driving: brain-imaging results show that even hands-free or voice-activated use of a mobile phone while driving is as dangerous as driving under the influence of alcohol. The body of evidence in this area is mainly based on fundamental neuroscientific studies and would benefit from a transition to more applied studies. The societal impact is rated low; it is merely a tool that adds to current evaluation tools but makes no direct contribution to solving societal issues. The economic viability is potentially high: design and evaluation are relevant for many services and products (e.g., electronics, food, buildings). These methods should, however, prove their added value over, for instance, questionnaires.

4. TRAINING:

Most aspects of training are related to the brain and its plasticity. Measuring this plasticity and the associated changes in the brain can help to improve training methods in general and an individual's training schedule in particular. With respect to the latter, indicators such as learning state and rate of progress (novice/expert) are useful for automated training systems and virtual instructors. Currently, this application area is in a conceptual phase with limited experimental evidence. However, permanent education (also known as lifelong learning) and the need for efficient and effective automated tutoring systems have good societal as well as economic impact, especially for societies with a knowledge-based economy, an aging population, and/or the need for a flexible workforce. Applications may be found both in professional environments and in home use, making the price sensitivity difficult to rate.

5. GAMING AND ENTERTAINMENT:

The entertainment industry is often a front runner in introducing new concepts and paradigms, among others in human-computer interaction for consumers (recent examples are 3D movies at home and gesture-based game controllers). Over the past few years, new games have been developed exclusively for use with an EEG headset by companies such as NeuroSky, Emotiv, Uncle Milton, MindGames, and Mattel. Additionally, there is a general conviction that the experience of existing game and entertainment systems can be enriched, intensified, and/or extended by using BCIs, for example by tailoring games to the affective state of the user (immersion, flow, frustration, surprise, etc.). For instance, several game designers have linked mental states to the popular game World of Warcraft® so that the appearance of an avatar is no longer controlled through the keyboard but instead reflects the mental state of the gamer. The first mass application of non-medical BCIs may actually be in the field of gaming and entertainment, where the first stand-alone examples came to the market in 2009 and a broadening to console games may follow soon. We expect that successful applications will be based on state monitoring and not on the user directly controlling the game (although this can add a new fun factor to existing games). Although BCIs for gaming were suggested a decade ago, the research basis is still small, though growing rapidly in a mainly application-driven rather than theory-driven manner. The societal impact is judged low, but the economic viability is very high (according to a 2008 survey by GameStrata, the leading online community for gamers, an average North American gamer spends more than $30,500 on games and gaming hardware over their peak gaming years between ages 18 and 48). The price sensitivity is very high, with important requirements with regard to ease of use, among others.

6. COGNITIVE IMPROVEMENT:

Some argue that improvement of cognitive performance is an everyday reality, for instance through the use of coffee or tea. However, the debate about cognitive enhancement has gained importance, among other reasons through the increased use of prescription drugs like Modafinil and Ritalin without a medical indication by both students and professional workers. BCIs are also considered a means to improve the cognitive functioning of healthy users. Neurofeedback training (brain activity alteration through operant conditioning), for instance to improve attention, working memory, and executive functions, is relatively common among healthy users. Another potential application area is the optimized presentation of learning content. Although there is currently a lack of good experimental data on its effects, the effect size is probably small and limited to specific cognitive tasks. Generally, there may be a thin line between medical and non-medical use of neurofeedback.

7. SAFETY & SECURITY:

EEG alone, or combined EEG and eye movement data, from expert observers could support the detection of deviant behaviour and suspicious objects. In an envisioned scenario, one or more observers watch CCTV recordings or baggage scans to detect deviant (suspicious or criminal) behaviour or objects. EEG and eye movements might help to identify potential targets that would otherwise not be noticed consciously. Also, images may be inspected much faster than normal (the RSVP paradigm). It is already known that eye fixations as well as event-related potentials in the EEG reflect what is (unconsciously) perceived as being relevant, and that brain signals can indicate relevant pictures in a rapidly presented stream of up to 50 images per second. Other applications in this area include the use of EEG in lie detection and person identification, but both are under fierce debate. The body of research shows that this application area can be considered a niche, and the field is still in a concept development phase. Successful applications could contribute to societal issues, for instance regarding the safety of main transport hubs. The niche character makes the economic viability limited, and the horizon to application is 5-10 years away.

MEDICAL APPLICATIONS:

Over the past 30 years, many BCI systems have been developed that target different applications. The main goal of all of them, however, is to translate the intent of the user into an action without using the peripheral nerves and muscles. Because of the number of methods for feature extraction and translation and the increasing target audience, it became difficult to devise a universal scoring method with which to compare different systems. There have, however, been a few successful attempts to create a general framework for BCI design.

Keirn and Aunon developed a BCI system for patients with severe physical disabilities that enabled them to spell specific code words. They placed electrodes over the whole surface of the scalp, enabling the system to detect differences in lateralized spectral power levels. Farwell and Donchin developed a BCI system for typing words by letting the user select letters and words from a display. The system flashes letters randomly in a 6-by-6 matrix on the screen while the user concentrates on the next letter they want to type; the P300 signal obtained from several electrodes on the scalp is used to control the BCI system.
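
The sketch below illustrates the row/column selection logic behind such a P300 speller, using simulated P300-like scores in place of real EEG. The 6-by-6 matrix and the intersection of the strongest row and column follow the description above; the scoring model, the number of flash rounds, and the character layout are illustrative assumptions.

import numpy as np

# 6x6 character matrix of the kind used in a P300 speller.
MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])

rng = np.random.default_rng(4)
target_row, target_col = 2, 3                 # the user is attending to "P"

row_scores = np.zeros(6)
col_scores = np.zeros(6)
for _ in range(15):                           # 15 rounds of random row/column flashes
    for row in rng.permutation(6):
        # Flashing the attended row yields a larger (noisy) P300-like score.
        row_scores[row] += rng.normal(loc=1.0 if row == target_row else 0.0)
    for col in rng.permutation(6):
        col_scores[col] += rng.normal(loc=1.0 if col == target_col else 0.0)

# The speller selects the character at the intersection of the best row and column.
selected = MATRIX[row_scores.argmax(), col_scores.argmax()]
print("selected character:", selected)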

Rebsamen et al. developed a brain-controlled wheelchair (the Aware Chair), adopting the widely used P300 component method. It was claimed in this work that users can navigate a typical office or hospital with a BCI-controlled wheelchair safely, although slowly.

Aware Chair Control Interface

DRAWBACKS & INNOVATORS:

Although we already understand the basic principles behind BCIs, they don't work perfectly. There are several reasons for this:

1. The brain is incredibly complex. To say that all thoughts or actions are the result of simple electric signals in the brain is a gross understatement. There are about 100 billion neurons in a human brain, and each neuron is constantly sending and receiving signals through a complex web of connections. There are chemical processes involved as well, which EEGs can't pick up on.

2. The signal is weak and prone to interference. EEGs measure tiny voltage potentials. Something as simple as the blinking of the subject's eyelids can generate much stronger signals. Refinements in EEGs and implants will probably overcome this problem to some extent in the future, but for now, reading brain signals is like listening to a bad phone connection: there's lots of static.

3. The equipment is less than portable. It's far better than it used to be (early systems were hardwired to massive mainframe computers), but some BCIs still require a wired connection to the equipment, and those that are wireless require the subject to carry a computer that can weigh around 10 pounds. Like all technology, this will surely become lighter and more wireless in the future.

BCI Innovators

A few companies are pioneers in the field of BCI. Most of them are still in the research stage, though a few products are offered commercially.

Neural Signals is developing technology to restore speech to disabled people. An implant in an area of the brain associated with speech (Broca's area) would transmit signals to a computer and then to a speaker. With training, the subject could learn to think each of the 39 phonemes in the English language and reconstruct speech through the computer and speaker.

NASA has researched a similar system, although it reads electric signals from the nerves in the mouth and throat area rather than directly from the brain. The researchers succeeded in performing a Web search by mentally "typing" the term "NASA" into Google.

Cyberkinetics Neurotechnology Systems is marketing the BrainGate, a neural interface system that allows disabled people to control a wheelchair, robotic prosthesis, or computer cursor.

Japanese researchers have developed a preliminary BCI that allows users to control their avatar in the online world Second Life.

CONCLUSION:

A brain-computer interface is a communication and control channel that does not depend on the brain's normal output pathways of peripheral nerves and muscles. At present, the main impetus for BCI research and development is the expectation that BCI technology will be valuable for those whose severe neuromuscular disabilities prevent them from using conventional augmentative communication methods. These individuals include many with advanced amyotrophic lateral sclerosis (ALS), brainstem stroke, and severe cerebral palsy.

Successful BCI operation requires that the user acquire and maintain a new skill, one that consists not of muscle control but of control of EEG or single-unit activity. BCI inputs include slow cortical potentials, P300 evoked potentials, rhythms from sensorimotor cortex, and single-unit activity from motor cortex. Recording methodologies seek to maximize the signal-to-noise ratio. Noise consists of EMG, EOG, and other activity from sources outside the brain, as well as brain activity different from the specific rhythms or evoked potentials that constitute the BCI input. A variety of temporal and spatial filters can reduce such noise and thereby increase the signal-to-noise ratio. BCI translation algorithms include linear equations, neural networks, and numerous other classification techniques.
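
As a minimal example of the spatial filtering mentioned here, the sketch below applies a common average reference (one simple spatial filter among many) to synthetic multichannel data: subtracting the instantaneous mean across channels attenuates activity that is common to the whole montage, such as widespread artifacts. The channel count and noise model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)
n_channels, n_samples = 16, 1024
local_signal = rng.normal(scale=1.0, size=(n_channels, n_samples))
common_artifact = rng.normal(scale=5.0, size=(1, n_samples))   # shared by every channel
eeg = local_signal + common_artifact

# Common average reference: subtract the instantaneous mean across channels.
car_filtered = eeg - eeg.mean(axis=0, keepdims=True)

print("mean channel variance before CAR:", round(float(eeg.var(axis=1).mean()), 2))
print("mean channel variance after  CAR:", round(float(car_filtered.var(axis=1).mean()), 2))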

The most difficult aspect of designing and implementing these translation algorithms is the need for continuing adaptation to the characteristics of the input provided by the user. BCI development depends on close interdisciplinary cooperation between neuroscientists, engineers, psychologists, computer scientists, and rehabilitation specialists. It would benefit from general acceptance and application of objective methods for evaluating translation algorithms, user training protocols, and other key aspects of BCI operation. Evaluations in terms of information transfer rate and in terms of usefulness in specific applications are both important. Appropriate user populations must be identified, and BCI applications must be configured to meet their most important needs. The assessment of needs should focus on the actual desires of individual users rather than on preconceived notions about what these users ought to want.

Similarly, the evaluation of specific applications ultimately rests on the extent to which people actually use them in their daily lives. Continuation and acceleration of recent progress in BCI research and development requires an increased focus on the production of peer-reviewed research articles in high-quality journals. Research would also benefit from the identification and widespread use of appropriate venues for presentations (e.g., the Society for Neuroscience Annual Meeting), and from an appropriately conservative response to media attention. For the near future, research funding will depend primarily on public agencies and private foundations that fund research directed at the needs of those with severe motor disabilities. With further increases in speed, accuracy, and range of applications, BCI technology could become applicable to larger populations and could thereby engage the interest and resources of private industry.

REFERENCES

1. Niels Birbaumer and P. Hunter Peckham, "Brain-Computer Interface Technology: A Review of the First International Meeting", IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, June 2000.

2. Anirudh Vallabhaneni and Tao Wang, "Brain Computer Interface", University of Illinois, Chicago, 2005.

3. Haider Hussein Alwaiti and Ishak Aris, "Brain Computer Interface Design and Applications: Challenges and Future", World Applied Sciences Journal, vol. 11, 2010.

4. Jan B. F. van Erp and Fabien Lotte, "Brain-Computer Interfaces for Non-Medical Applications: How to Move Forward", Computer (IEEE Computer Society), vol. 45, April 2012.

5. http://computer.howstuffworks.com/brain-computer-interface.htm

6. en.wikipedia.org/wiki/Brain–computer_interface

7. www.braincomputerinterface.com/

SHRI RAM COLLEGE OF ENGINEERING & MANAGEMENT, BANMORE (M.P.)

SEMINAR FILE

ON

BRAIN-COMPUTER INTERFACE (BCI)

Submitted to: Mr. Abhay Khedkar (HOD, ECE Dept.)

Submitted by: Devendra Singh Tomar (0904EC111038)

CONTENTS

INTRODUCTION

WHAT IS BCI

HISTORY OF BCI

BASIC COMPONENTS OF BCI

TECHNIQUES

INVASIVE TECHNIQUE

NON-INVASIVE TECHNIQUE

BCI APPLICATION

NON-MEDICAL APPLICATIONS

o DEVICE CONTROL

o USER STATE MODELLING

o EVALUATION

o TRAINING

o GAMING AND ENTERTAINMENT

o COGNITIVE IMPROVEMENT

o SAFETY & SECURITY

MEDICAL APPLICATIONS

DRAWBACKS & INNOVATORS

CONCLUSION

