
Scanning-based interaction techniques for motor impaired users

Stavroula Ntoa1, George Margetis1, Margherita Antona1, Constantine Stephanidis1,2

1Foundation for Research and Technology – Hellas (FORTH)

Institute of Computer Science, N. Plastira 100, Vassilika Vouton,

GR-700 13 Heraklion, Crete, Greece {stant, gmarget, antona, cs}@ics.forth.gr

2University of Crete, Department of Computer Science

ABSTRACT

Scanning is an interaction method addressing users with severe motor impairments, which provides sequential access to the elements of a graphical user interface and enables users to interact with the interface through as little as a single binary switch, by activating the switch when the desired interaction element receives the scanning focus. This chapter explains the scanning technique and reports on related approaches across three contexts of use: personal computers, mobile devices, and environmental control for smart homes and ambient intelligence environments. In the context of AmI environments, a recent research approach combining head tracking and scanning techniques is discussed as a case study.

1 INTRODUCTION

The fundamental human right of access to information has become even more important in the context of the Information Society. The risk of creating a two-tier society of haves and have-nots, in which only a part of the population has access to technology, is comfortable using it, and can fully enjoy its benefits (Bangemann, 1994), was recognized almost two decades ago; nevertheless, it is more timely now than ever. Recent technological evolution has made the personal computer just one constituent in the pursuit of an Information Society for all, while new challenges arise due to the popularity of mobile devices and the emergence of ubiquitous computing and ambient intelligence environments.

Users with severe motor impairments face the risk of being excluded from accessing information, services and technology in this technologically-dominated era. On the other hand, it is now possible to exploit technological advancements and consolidated experiences towards providing accessible services, in order to not only offer equal access to information and services, but also to facilitate everyday living. This chapter focuses on scanning, a specific solution addressing the needs of users with severe physical disabilities, and aims to provide a review of existing approaches and a discussion of recent advancements in the field.

The chapter is organized as follows: Sections 2 and 3 present the scanning technique and how it provides access to graphical user interfaces. Section 4 discusses scanning systems and applications for personal computers, while Sections 5 and 6 refer to more recent advancements, namely scanning-based accessibility of mobile devices and scanning applications for environmental control. Finally, Section 7 summarizes the topics presented in this chapter and discusses current challenges.

2 THE SCANNING TECHNIQUE

Scanning is an interaction method addressing the needs of users with severe hand motor impairments. The main concept behind this technique is to eliminate the need for interacting with a computer application through traditional input devices, such as a mouse or a keyboard. Instead, users are able to interact with computing devices with the use of switches. In order to make the interactive objects composing a graphical user interface accessible through switches, scanning software is required, which goes through the interactive interface elements and activates the element indicated by the user through pressing a switch. In most scanning software, interactive elements are sequentially focused and highlighted (e.g., by a coloured marker). Furthermore, to eliminate the need for using a keyboard to type in text, an onscreen keyboard is usually provided.
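To make the basic mechanism concrete, the following minimal sketch (in Python) cycles the focus through a list of interface elements and returns the element that has the focus when the switch fires. The element names, the polling approach, and the scan delay are illustrative assumptions, not details of any system discussed in this chapter.

```python
import time

# Hypothetical interface elements; a real system would enumerate the GUI.
ELEMENTS = ["File menu", "Edit menu", "Toolbar", "Document area"]

def highlight(element):
    """Visually mark the focused element, e.g., with a coloured border."""
    print(f"[focus] {element}")

def switch_pressed():
    """Poll the binary switch; a real system would read a switch interface."""
    return False  # placeholder

def automatic_scan(elements, scan_delay=1.5):
    """Cycle the focus through the elements; the element focused when the
    switch is activated is returned as the user's selection."""
    while True:
        for element in elements:
            highlight(element)
            deadline = time.time() + scan_delay
            while time.time() < deadline:
                if switch_pressed():
                    return element
                time.sleep(0.01)
```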

There are several types of scanning techniques, mainly varying in their approach for accessing the individual interactive elements. The most popular scanning techniques include:

- block scanning (Applied Human Factors, 2012; Ntoa, Savidis, & Stephanidis, 2004; Stephanidis et al., 1998), in which items are grouped into blocks, aiming to minimize user input and enhance interaction speed. A well-known block scanning technique is row/column scanning, in which items are grouped into rows: once the user selects a specific row, the items within that row are scanned (see the sketch after this list). Row/column scanning is widely used in on-screen keyboards. Other variants include row-group-column, group-row-column, column-row, column-group-item and quadrant scanning. In quadrant scanning, or three-dimensional scanning (Felzer & Rinderknecht, 2009), the two-dimensional grid of scanning elements is divided into smaller sub-groups (e.g., the four quadrants of an on-screen keyboard) and every scan cycle starts by cyclically highlighting the groups.

- two-directional scanning (RJ Cooper & Associates, 2012a), in which the user selects an element by specifying its coordinates on the screen, which is scanned first vertically, through a line that moves from the top of the screen towards its bottom, and then horizontally, through a pointer that moves along the selected horizontal line.

- eight-directional scanning (Biswas & Langdon, 2011), which is used by several mouse emulation applications. In this method, the mouse pointer can be moved in one of eight directions, according to the user's preference. To achieve this, the pointer icon changes at specific time intervals to indicate one of the eight directions. The user selects the desired direction by pressing a switch, and the pointer then starts moving in that direction. Once the pointer reaches the screen location that the user wishes to select, it can be stopped by a switch or key press.

- hierarchical scanning (Ntoa, Margetis, & Stephanidis, 2009), in which access to windows and window elements is provided according to their place in the window's hierarchical structure. Elements are usually divided into groups and subgroups according to their hierarchy (e.g., a toolbar acts as a container of the individual buttons it includes, a list box as a container of the included list items, etc.).

- cluster scanning (Biswas & Robinson, 2008), in which elements on the screen are divided into clusters of targets, based on their locations.
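The row/column variant can be illustrated with the sketch below, a minimal example that is not drawn from any of the cited systems: rows are scanned first, and once a row is chosen, the items within it are scanned. The `highlight` stub and the `select` helper are assumptions; `select()` is taken to block for one scan step and report whether the switch was pressed during it.

```python
def highlight(item):
    """Visually mark the focused row or item (illustrative stub)."""
    print(f"[focus] {item}")

def row_column_scan(grid, select):
    """One-switch row/column scanning over a 2D grid of items.

    `select()` is an assumed helper that waits one scan step and returns
    True if the switch was pressed during that step, False otherwise.
    """
    while True:
        for row in grid:
            highlight(row)                 # phase 1: scan whole rows
            if select():
                for item in row:
                    highlight(item)        # phase 2: scan the chosen row's items
                    if select():
                        return item        # the focused item is activated
                break                      # row finished without a selection: restart
```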

Finally, another type of scanning reported in the literature is adaptive scanning. An adaptive one-switch row-column scanning technique was studied by Simpson and Koester (1999), in which the system's scan delay could be adapted at runtime, based on measurements of user performance. To study the effectiveness and efficiency of the proposed method, two experiments involving text entry tasks were performed with eight able-bodied participants. The experiments indicated that the presence of automatic adaptation neither hindered nor enhanced the participants' performance. A subsequent study (Simpson, Koester & LoPresti, 2006) with fourteen participants (six with severe physical disabilities and eight able-bodied or able to activate a switch with their hand) also verified that the participants' performance was at least as good with the automatically suggested scanning period as with a self-selected scan period. Finally, another study (Lesher, Higginbotham, & Moulton, 2000) proposed a method for the automatic, real-time adjustment of scanning delays, based on quantitative measures of scanning performance, such as the frequency of selection errors, the frequency of missed selections, and the portion of the delay utilized for selections.
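The general idea of runtime scan-delay adaptation can be sketched as follows; the specific rule and thresholds are assumptions for illustration, not the algorithms published by Simpson and Koester (1999) or Lesher et al. (2000).

```python
def adapt_scan_delay(scan_delay, reaction_times, error_rate,
                     target_utilization=0.75, min_delay=0.4):
    """Illustrative adaptation rule: slow down when selection errors are
    frequent, speed up while the user reacts well within the current delay."""
    if error_rate > 0.1:
        return scan_delay * 1.15                     # frequent errors: slow down
    mean_reaction = sum(reaction_times) / len(reaction_times)
    utilization = mean_reaction / scan_delay         # portion of the delay used
    if utilization < target_utilization:
        return max(min_delay, scan_delay * 0.9)      # fast reactions: speed up
    return scan_delay

# Example: a user reacting quickly and accurately gets a shorter delay.
print(adapt_scan_delay(1.5, [0.6, 0.7, 0.5], error_rate=0.02))  # -> 1.35
```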

3 INTERACTING THROUGH SCANNING

Scanning interaction is possible with the use of switches, which are simple, usually pressure-activated, devices. In order to support the needs of users with various disabilities, switches come in a wide variety, ranging from simple button switches to head, foot or breath-controlled switches. Table 1 provides an overview of the various switch types that are commercially available; however, it does not exhaustively list all the existing switches and their brands.

Table 1. Switch types

- Button: Mechanical, pressure-activated switch, which can be wired or wireless, connected to the user's computer through a receiver.
- Wobble / Joystick: A wobble switch is activated when its spring is pushed in any direction. A joystick switch allows activating four switches by moving the joystick in four directions; a fifth switch can be activated by pressing down on the joystick shaft.
- Grasp: A mechanical switch, which can be held in the palm of the hand and activated with a squeeze or pinch.
- Lever: Lightweight switch with a pivoting lid that activates a highly sensitive micro-switch mounted inside. It is operated by very light touch.
- Ribbon: Activated by bending in either direction; it can be operated by head movement or in any tight access area (such as between the upper or lower arm and the trunk, between the knees, or under the chin).
- Leaf: A mechanical switch activated with light pressure on one side of the leaf. It can be effectively used as a head switch.
- Thumb: Can be held in the palm of the hand and activated by pressing the button with the thumb.
- Finger: A wearable switch for persons with minimal movement, which requires little pressure to activate.
- String: Activated by a slight pull of the string; it mainly addresses users with limited finger and hand mobility, as well as those with minimal strength.
- Foot: Specially designed to be activated by the feet, it eliminates the need for hand interaction altogether. Foot switches usually support programming multiple tasks onto a single click.
- Pressure: Air-filled actuators create the air pressure switch, which is activated by pressure from the user's hand, head, or foot.
- Pneumatic: Allows users to control switch-activated devices, including their computers, with their breath.
- Infrared: A momentary-contact optical switch that works by detecting a beam of reflected pulsed infrared light. It can be controlled with an eye blink, eyebrow movement, finger movement, head movement, or facial muscle movement.
- Tilt: Operated by being tilted forwards or backwards; it can be attached to the head, arms, or legs.
- Chin: A mechanical switch mounted on a necklace, which can also be used for mounting more than one switch.
- Bite: An alternative to the breath-controlled switch, addressing users with breathing difficulties. It is activated by biting.
- Sound / Voice operated: Addresses users who are unable to use any form of mechanical switch, but who have speech or the ability to make sounds.
- Proximity Sensor: A highly sensitive electronic sensor switch, activated by a physical touch or by skin proximity within 10 mm.
- Freehand: With over 30 contact points, it allows switch activation through a large variety of finger movements. It also supports replication of any computer keyboard function with the touch of a finger.

Since most switches cannot be plugged directly into the computer, a computer-switch interface is required to connect them. Switch interfaces are thus devices that stand between the switch and the computer and translate switch input into computer commands. There are three types of switch interfaces:

• Devices combining the switch and the interface into one piece of equipment, thus allowing the connection of a single switch.

• Devices which allow the connection of multiple switches.

• Devices which offer options for emulating mouse and keyboard functions with the use of one or more switches. For example, commonly emulated functions include the keyboard arrow keys, special keyboard keys (such as space, enter, tab, backspace, etc.), numbers, click, right-click, and double-click. A sketch of such a mapping follows this list.
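A switch interface of the third kind can be thought of as a configurable mapping from switch events to emulated commands. The sketch below is a hypothetical configuration; the event names, actions, and helpers are illustrative assumptions, not the API of any particular device.

```python
# Hypothetical mapping from switch events to emulated commands.
SWITCH_ACTIONS = {
    "switch_1": "click",
    "switch_2": "right_click",
    "switch_3": "key:TAB",
    ("switch_1", "long_press"): "double_click",
}

def emulate(action):
    """Stand-in for injecting the command into the operating system."""
    print(f"[emulated] {action}")

def handle_switch_event(event):
    """Translate a raw switch event into an emulated mouse or keyboard command."""
    action = SWITCH_ACTIONS.get(event)
    if action is not None:
        emulate(action)

handle_switch_event("switch_3")   # -> [emulated] key:TAB
```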

4 SCANNING SYSTEMS AND APPLICATIONS FOR PERSONAL COMPUTERS

A variety of scanning systems have been developed in the context of commercial or research efforts, each applying one or more scanning techniques, supporting access through a variety of switches (both in number and in type), and providing access to various platforms. Initially, efforts focused on motor impaired users' interaction with a personal computer; however, the prevalence of mobile devices has since shifted this balance. This section focuses on efforts towards scanning-based accessibility of personal computers.

In summary, approaches in this domain can be categorized as follows: accessibility and scanning features of operating systems addressing the needs of severely motor impaired users, mouse emulation and other scanning tools offering access to the entire windowing environment, and individual applications providing scanning-based interaction (e.g., games). This classification is mirrored in the organization of this chapter, which first presents applications with embedded scanning (section 4.1), then briefly discusses the concept of scanning object libraries (section 4.2) as a preamble to more generic solutions, and finally presents approaches for scanning as an independent tool (section 4.3).

4.1 Applications with embedded scanning

Applications with embedded scanning are standalone applications developed to support scanning from the outset, and thus be accessible to people with motor impairments. Such applications include text entry through specific editors or on-screen keyboards, web browsers, switch-training games, educational games, entertainment games, as well as augmentative and alternative communication (AAC) systems. This section reports on commercial products and notable research applications for each category.

4.1.1 Text entry in scanning systems

One of the most cumbersome and tedious tasks for severely motor impaired users is text input. Given the importance of the task, as well as its challenging nature, a lot of work has been devoted to creating scanning-enabled keyboards, ranging from enhancing the standard QWERTY keyboard with scanning to investigating alternative layouts and techniques for making text entry in scanning systems more efficient.

4.1.1.1 QWERTY on-screen keyboards

Scanning-enabled QWERTY keyboards are available for most operating systems, and in some cases they are embedded in the OS itself. For example, the on-screen keyboard in Microsoft Windows (Microsoft, 2012b) provides automatic scanning options, allowing users to interact by pressing a keyboard shortcut, using a switch input device, or using a device that simulates a mouse click. Furthermore, it supports customization to user needs, allowing selection of the input device and of the input keyboard key (e.g., space bar, enter, F-keys), as well as customization of the scanning speed. WiViK (Bloorview Kids Rehab, 2009) is another Windows keyboard supporting a large variety of scanning techniques,

such as automatic, inverse and directed scanning, and scanning styles, such as row-column, row-group-item, column-row, column-group-item, quadrant and item scanning. Furthermore, it includes word prediction and abbreviation expansion facilities, as well as text-to-speech for the typed words. Another Microsoft Windows compatible on-screen keyboard is ClickNType, which supports automatic scanning and can be set to use the left mouse button or a keypress as a selection switch (Danger, 2006). It employs the block scanning method, with the keyboard initially divided into six blocks. Once the user selects the block that contains the key he/she wishes to press, two-dimensional scanning is deployed for selecting the specific key within the group.

On-screen keyboards with scanning for other operating systems include Envoy and GOK. Envoy is an on-screen keyboard for Mac OS X (Madentec Limited, 2006), with automatic scanning for single switch access and step scanning for users with two switches. Furthermore, it supports single switch step scanning, in which users can move the highlighter from one element to another by pressing the switch, while selection is made by pausing for a predefined period of time. Users can also make several other adjustments, such as where to resume scanning from (at the beginning, from the last entry, or back up one level), the delay between switch hits, the delay to select while step scanning, or the delay on the first scan. GOK (GNOME Onscreen Keyboard), on the other hand, is a dynamic on-screen keyboard for UNIX and UNIX-like operating systems, featuring direct selection, dwell selection, automatic scanning and inverse scanning access methods, and also includes word completion (Haneman & Bolter, 2007). Furthermore, GOK can redisplay components of the user interfaces of running applications directly within GOK as keyboards, thus providing efficient access to elements of the user interface.

Another on-screen keyboard, which employs row-column scanning and is operated through a brain-computer interface, was proposed by Gnanayutham and Cockton (2004). The keyboard features a standard QWERTY layout, enhanced with six control and two configuration keys. The control keys are backspace, caps lock, new line, read (which reads aloud what the user has written in the display window), clear, and exit, while the configuration buttons can be used to exit the application or change settings according to the users' needs.

Finally, a word processor for English and Greek supporting switch-based interaction through scanning is GRAFIS (Antona & Stephanidis, 2000). GRAFIS features the typical word processing functionality, through a simple interface, accessible also through conventional input devices. It is interesting that text input for users employing binary switches is supported through two alternative virtual keyboards, namely: (a) QWERTY, and (b) letter-frequency based (i.e., keys are arranged based on the frequency of letters and digraphs in each of the supported languages). Rate enhancement is also achieved by means of a word prediction function, which performs context-based prediction of the possible next words in a text, or of the continuation of the word currently being typed.

4.1.1.2 Alternative text entry methods

Employing the standard QWERTY layout for an on-screen keyboard has the advantage of preserving familiarity for users; however, a significant disadvantage is the inefficient interaction it leads to. As a result, several techniques have been proposed to address this issue, including the use of different keyboard layouts and support for next-character suggestion or word prediction. Lesher, Moulton and Higginbotham (1998) carried out a series of experiments to establish the relative performance of eighteen different scanning

configurations, examining various static character rearrangements, dynamic matrix rearrangements, character prediction methods, and word prediction methods. A combination of an optimized configuration with character prediction turned out to be the most efficient approach. Another study (Levine & Goodenough-Trepagnier, 1990) examined three basic text entry methods: arranging the 28 characters on 28 keys (i.e., unambiguous direct selection); encoding the 28 characters onto fewer than 28 keys and arranging these keys to minimize the average time required to generate a character; and assigning the 28 characters to fewer than 28 keys, arranging these keys to minimize the average time per key selection. In summary, the findings of this work indicate that ambiguous keyboards have strong potential to offer advantages.

Several approaches have adopted the findings of such studies, suggesting various on-screen keyboard layouts along with prediction mechanisms. For example, an alternative scanning keyboard supporting letter and word prediction was proposed by Jones (1998). The keyboard employs a non-QWERTY layout; however, it can be easily customized to support any type of keyboard (e.g., QWERTY, alphabetic, numeric, etc.), since it uses a separate file for the keyboard layout. Three additional areas are included besides the keyboard: a display showing the letters typed so far, a row with candidate letters based on the current text input, and a row with predicted words, also based on the current text input. The keyboard supports row-column scanning, and each row has been augmented with a go-back cell, which directs scanning back to the beginning of the row, allowing users to easily recover in case they have missed a target.

An alternative chorded keyboard, based on numeric input, was proposed by Lin, Chen, Yeh, Tzeng, and Yeh (2006). In more detail, the keyboard is organized in nine areas on a 3x3 grid, each of which features nine options, also arranged on a 3x3 grid. The user operates the keyboard by first indicating the number of the desired area and then the number of the option within that area. Furthermore, the keyboard features nine different layouts: international alphabetic, scaffolding, internet, two types of symbol layout, transparency layout, high contrast layout and two types of Chinese input layouts. Group-row-column scanning is employed to allow interaction with a single switch, with each of the nine areas scanned sequentially at first. When the user selects the desired area, row-column scanning is deployed, first scanning each of the three rows of the grid sequentially.

The AUK keyboard (Mourouzis, Boutsakis, Ntoa, Antona, & Stephanidis, 2007) is an on-screen keyboard featuring scanning which, drawing on the layouts and practices of mobile phones, implements a 12-key soft keyboard aiming to offer movement minimization and high learnability thanks to familiarity. In more detail, the AUK is similar to a multi-tier 3x3 menu system, where each 3x3 grid has eight virtual character keys and a menu key for entering or exiting alternative menus. AUK features six basic tiers (menus): letters (encoded in eight keys) and special characters, such as space, back and shift; numerals; special characters; brackets; formatting options; and numeric operators. Finally, the fifth tier has an empty cell to allow space for adding new menus as necessary, thus rendering the structure extensible.

The use of an ambiguous keyboard layout, including more than one letter on each key, as in mobile phones for example, was proposed by Miró-Borrás and Bernabeu-Soler (2009). In their keyboard, the typical scan matrix is replaced by a smaller one with only three cells, with the characters arranged in alphabetical order. As a result, the number of scan cycles is

minimized and faster interaction is achieved. Two disambiguation processes are proposed in order to identify the intended characters: word-level and character-level disambiguation. In the word disambiguation algorithm, the user indicates completion of the letter entering process by holding the switch pressed at the last letter. The system then presents, one after another, a list of all the matching words, displaying the most probable words first. When the desired word is displayed, the user releases the switch to select it. In the letter disambiguation process, the same process of suggesting the most probable options is carried out after each letter is typed. Speeds of 15.9 and 10.3 wpm were estimated by the authors, using a scan period of 0.5 seconds.
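The word-level disambiguation step can be illustrated with a small dictionary lookup: each word is indexed by the sequence of ambiguous keys that produces it, and candidates are returned ordered by frequency. The three-key grouping and the toy lexicon below are assumptions for illustration, not the authors' actual layout or language model.

```python
from collections import defaultdict

# Assumed three-key grouping of the alphabet in alphabetical order.
KEYS = {0: "abcdefghi", 1: "jklmnopqr", 2: "stuvwxyz"}
LETTER_TO_KEY = {c: k for k, letters in KEYS.items() for c in letters}

def build_index(lexicon):
    """Index words by their ambiguous key sequence, most frequent first."""
    index = defaultdict(list)
    for word, freq in sorted(lexicon.items(), key=lambda kv: -kv[1]):
        index[tuple(LETTER_TO_KEY[c] for c in word)].append(word)
    return index

def disambiguate(key_sequence, index):
    """Return the candidate words for a typed key sequence, most probable first."""
    return index.get(tuple(key_sequence), [])

lexicon = {"cat": 50, "act": 30, "bat": 20}    # toy word frequencies
index = build_index(lexicon)
print(disambiguate([0, 0, 2], index))           # -> ['cat', 'act', 'bat']
```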

Another ambiguous keyboard is the HandiGlyph on-screen keyboard, aiming to provide text entry on mobile devices for people with severe motor impairments (Belatar & Poirier, 2008). In HandiGlyph, all the letters are encoded in three keys, according to the primitives composing them in their space-time organization. The HandiGlyph interface comprises the three ambiguous primitive keys, a command key and two display areas, while scanning rotates past the three primitive keys and the command key. The user activates his/her switch to indicate selection of a key. After each selection, the list of potential words appears in the display areas: the first display area shows the disambiguation list, containing the words matching exactly the sequence of typed primitives, while the second display area shows the completion list, containing longer words whose beginning corresponds to the typed primitive sequence. Users can move the scanner to one of the displays and select the target word from there. The command key allows users to: carry out a command, such as delete, space or new line; type words or abbreviations that do not exist in the main dictionary; display all the punctuation marks in the display area; or enter a number. Furthermore, HandiGlyph supports scanning delay adaptation according to the data collected during the interaction with the user.

Following a similar concept, MacKenzie and Felzer (2010) introduced the SAK keyboard, a scanning ambiguous keyboard supporting text entry using a single key, button, or switch for input. The SAK keyboard includes two regions: a letter selection region (with letters arranged on a small number of keys) and a word selection region. Scanning begins in the letter selection region, proceeding from left to right. When the desired key is highlighted, the user presses the switch to indicate selection. After the selection, scanning resumes from the next key. While the user selects keys from the letter selection region, the word selection region is populated with candidate words drawn from the system's dictionary. The user can move scanning to the word selection region by selecting the last key in the letter selection region, the SPACE key. Being an ambiguous keyboard, the SAK keyboard requires a built-in dictionary to disambiguate key presses. Furthermore, it supports four interaction methods: (i) OLPS – one letter per scan – in which users select one letter per scan sequence; (ii) MLPS – multiple letters per scan – in which users can select multiple letters per scan sequence depending on the word; (iii) DLPK – double letter per key – in which users may make double selections in a single scan step interval if two letters are on the same key; and (iv) OW – optimized word – in which users can make an early selection if the desired word appears in the candidate list before all the letters are entered.

A text entry application implementing the SAK keyboard design is Qanti (Felzer, MacKenzie, Beckerle, & Rinderknecht, 2010). Qanti divides the screen into four areas: the letter selection area, the output area showing the text entered so far, an information area displaying the sixteen most frequent candidate words in alphabetical order and a large word scanning

selection board, displaying the sixteen words laid out in a diagonal-oriented order, so as to speed up the selection process. Qanti supports a dictionary as well as out-of-dictionary words. Evaluation of the application indicated text entry rates ranging from 2.5 to 6.5 wpm, making it a competitive scanning text entry application.

An alternative text entry method, based on the concept of ambiguous keyboards, has been proposed by Felzer, Strah and Nordmann (2008), in which users repeatedly select among multiple options – each representing a subset of characters which constantly gets smaller – with the help of intentional muscle contractions. At first, the user is presented with a total of five options: four character subsets and one special menu option. Scanning is initially set to automatic; however, users can change it to manual. Options are cyclically highlighted for a dwell period. As time elapses, the highlighted box is overlaid with a diminishing selection marker and can be preselected by the user issuing an intentional muscle contraction. The selection is finalized if confirmed with another contraction within an additional dwell period. As a result, the user can select an option through a double contraction, while a single contraction extends the highlighting of a specific option.

A specific-purpose scanning keyboard has been proposed by Norte and Lobo (2007), aiming to assist people with motor disabilities to program with the Logo programming language. The keyboard includes six vocabulary groups, namely Graphics, Screen Management and Text Editing, Words/Lists and Disk Access, Object and Control/Logic, Math and Assigning, and Input/Output, Time, Sound, Variables. Furthermore, the user can configure several options of the keyboard, such as: scanning velocity, number of repeat scanning cycles, scanning sound, scanning color, and keyboard size. The Logo keyboard employs row-column scanning techniques.

An alternative means of interacting with a virtual scanning keyboard is eye gaze. However, there are only a few efforts aiming to combine these two modalities, since when eye movements are possible there is no need to restrict interaction to sequentially accessing interactive elements, as is the case with scanning. Nevertheless, some systems do employ the combination of eye gaze and scanning, which, as reported in Majaranta and Räihä (2002), addresses users who have difficulties in fixating and cannot hold their gaze still for the duration needed to focus. One such system is VisionKey (Kahn, Heynen, & Snuggs, 1999), which features an alphabetic layout arranged in a 4x4 grid, in groups of four characters. In the standard version, to select a character in the top right position of a block, the user must first gaze at the top right corner of the keychart and then at the required character. In the scanning version, users need only carry out coarse eye movements (up, down, left, right) to indicate a selected direction, after which sequential scanning of the options is initiated.

Due to the wide variation in scanning input methods, a problem faced by designers is choosing a suitable scanning method for a virtual keyboard interface (Bhattacharya, Samanta, & Basu, 2008). In this context, several research efforts have focused on proposing models for scanning keyboards. For example, Damper (1984) proposes a rate prediction model for scanning input, while Bhattacharya et al. (2008) propose two models for the automatic evaluation of virtual scanning keyboards, aiming to support designers' decision making. Abascal, Gardeazabal, and Garay (2004) have studied the influence on the character input rate of diverse parameters related to the matrix that contains the selection set, such as shape, size, number of dimensions and layout of the selectable items. The findings of their research

indicate that in virtual scanning keyboards items should be placed according to their frequency of use, and that the shape of the matrix, the specific keyboard layout, and the grouping of items should all be taken into account; specific matrix and layout suggestions based on the research findings are also provided. Another study aiming to assist designers of virtual scanning keyboards (Bhattacharya, Basu, & Samanta, 2008) proposed predictive models of users' error behavior, based on user studies with six disabled virtual scanning keyboard users. The studies revealed two main error categories, namely timing errors, which occur when users fail to activate the switch while the desired interaction element is highlighted, and selection errors, which occur when users select a wrong element. A recent study (Simpson et al., 2011) proposed another model for one-switch row-column scanning, predicting performance in the presence of errors and taking error correction methods into account as well. The results of this study indicate, in summary, that a frequency-arranged layout is preferable, that additional scanning options such as stop and reverse scanning should be avoided, and that error rates should be kept as low as possible.
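A minimal version of such a model, in the spirit of the rate prediction work cited above and under the simplifying assumptions of error-free one-switch row-column scanning and a toy frequency table, computes the expected selection time per character as the frequency-weighted sum of row and column scan steps:

```python
def expected_selection_time(layout, frequencies, scan_delay):
    """Expected time per character for error-free row-column scanning.

    Reaching row i costs (i + 1) scan steps and item j within it a further
    (j + 1) steps, so each character's cost is weighted by its frequency.
    """
    total = 0.0
    for i, row in enumerate(layout):
        for j, char in enumerate(row):
            steps = (i + 1) + (j + 1)
            total += frequencies.get(char, 0.0) * steps * scan_delay
    return total

# Toy frequency-arranged layout and letter probabilities (illustrative only).
layout = [list("etaoi"), list("nshrd")]
freqs = {"e": 0.4, "t": 0.2, "a": 0.15, "o": 0.1, "i": 0.05, "n": 0.05, "s": 0.05}
print(f"{expected_selection_time(layout, freqs, scan_delay=1.0):.2f} s/char")
```

Such a model makes the effect of a frequency-arranged layout directly measurable: moving frequent letters towards the top-left lowers the expected selection time.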

In conclusion, the task of entering text is quite laborious for users with severe motor disabilities. To this end, several on-screen scanning keyboards have been proposed, aiming to facilitate text entry. Initial efforts focused on providing the standard QWERTY layout enhanced with scanning facilities, while later approaches studied how to further improve the still sluggish interaction. Such approaches included alternative keyboard layouts, word prediction facilities, as well as predictive models of users' errors and of the text entry rate. Another variation among the proposed solutions, which may also affect user performance, is the input method, which can range from switch interaction to eye-gaze interaction, muscle contraction, or brain-computer interfaces.

4.1.2 Scanning for web applications

The Web has become a medium used by people daily for information and entertainment, for everyday activities such as shopping and communicating with friends and family, as well as for a variety of other activities such as education and employment. It is therefore essential that the Web be accessible, in order to provide equal access and equal opportunity to people with disabilities (Henry, 2005). The importance of web accessibility has been recognized by the research community since the late 1990s, resulting in several efforts, the most important of which are presented in this section.

The AVANTI web browser is the front-end of the AVANTI information system (Stephanidis et al., 1998), which features integrated support for various "special" input and output devices, along with a number of appropriate interaction techniques that facilitate the interaction of disabled end-users with the system, specifically users with light or severe motor disabilities and blind users. However, besides supporting the scanning technique through single and double switch interaction, the most distinctive characteristic of the AVANTI UI is its capability to dynamically tailor itself to the abilities, skills, requirements and preferences of the users, to the different contexts of use, as well as to the changing characteristics of users as they interact with the system, employing adaptability and adaptivity techniques. Furthermore, the design of the AVANTI UI followed the Unified User Interface Design methodology (Stephanidis & Savidis, 2003); as a result, only a single unified user interface was designed and developed, comprising alternative interaction components appropriate for different target user categories.

An accessible web browser suitable for users with disabilities is MultiWeb (Owens & Keller, 2000), which allows users to select from six different interface implementations, according to the input device of their preference, namely: default interface with mouse and keyboard control, switch device interface, touch screen interface, mouse-keyboard interface, keyboard interface, and menu interface. The switch device interface features scanning capabilities, allowing users to select through a switch one of the interface elements, which are highlighted one at a time. The switch interface, like most of the others, features a button interface rather than the standard windows menu design, thus providing a larger target area and facilitating interaction. Furthermore, the MultiWeb browser features an on-screen keyboard with scanning facilities. Finally, it is important to note that MultiWeb was designed following a user-oriented participative research approach, involving users with disabilities in the design phase.

ARGO (Ntoa & Stephanidis, 2007) is a web browser supporting visual and non-visual interaction in order to address the needs of blind users and users with vision problems, as well as users with mobility impairments of the upper limbs, by operating in three different modes: non-visual, visual with scanning, and visual without assistive technologies. The system was created as a public kiosk for web access and therefore comprises all the required hardware and software. Severely motor-impaired users can activate the visual scanning mode by pressing one of the three available switches. ARGO employs hierarchical scanning with block-scanning techniques. Scanning is manually controlled by users with two of the switches, while the third switch is employed to change the scanning direction, from top-to-bottom and left-to-right to bottom-to-top and right-to-left, and vice versa. ARGO features an embedded QWERTY on-screen keyboard for text input, which adopts the same interaction techniques. The ARGO browser also provides all the essential browser functionality, such as an address bar, back, forward and refresh buttons, a search facility, settings for customizing the assistive technology features, help, and an embedded evaluation questionnaire. Lastly, a sidebar with a list of the current web page links is available to the user for quick in-page navigation.

An alternative web navigation system is KeySurf (Spalteholz, 2012), a keyboard-driven browser which aims to make text search navigation more efficient and intuitive by estimating which elements are more likely to be selected by the user, and then allowing those elements to be selected with fewer keystrokes. In more detail, KeySurf allows users to browse the Web with a keyboard or equivalent text input device by typing where they want to go. Various techniques are applied to decrease the keystroke cost of selections, such as selecting visible elements first, matching the first characters of labels, and prioritizing visually prominent elements, while the user's browsing history is used to calculate a measure of page and element interest in order to make interesting elements easier to select. KeySurf can be controlled through switches and scanning when used with an on-screen scanning keyboard. Furthermore, in order to enhance scanning interaction, KeySurf supports encoding the web page elements and assigning codewords directly to each element.

A modified web browser, as well as a proxy that modifies HTML, are proposed by Mankoff et al. (2002) as a means to automatically make adjustments and provide web access for people with severe motor impairments who use low-bandwidth input devices and can therefore produce only one or two signals when communicating with a computer. The proposed browser, besides the main web content, also comprises three parts: browser functionality, active web page elements, and a preview screen. It should be mentioned that the authors clarify

that the browser supports wrapping and not scanning, although scanning support was mentioned as future work. The difference between these two techniques, as explained in their work, lies in the fact that scanning interfaces move the focus of control in a grid sequentially and automatically from item to item, while in wrapping it is the user who controls all motion.

A different approach for users of low-bandwidth input (e.g., a single switch) is proposed by Spalteholz, Li and Livingston (2007). Their system for efficient navigation on the World Wide Web, designed as an extension to the open source Mozilla Firefox web browser, allows users to locate elements in a web page by typing their starting letters. In more detail, once a web page is loaded, the system constructs a textual label for each selectable element on the page. To select an element, the user employs a text entry interface to type the starting letters of the element he wishes to interact with. After each letter, the selectable elements on the page are searched and highlighted. Finally, when only a single element matches the entered query, the user is prompted to navigate to the selected element. In this approach, scanning is necessary only for text input (e.g., a row-column scanning keyboard) for users with severe motor impairments using switch input.

Finally, another browser add-on is FireScanner (Ntoa, Margetis & Stephanidis, 2009), which aims to seamlessly integrate scanning techniques into the Firefox web browser, removing the need for specialized software and devices or specific operating systems. When FireScanner is activated, all the interactive HTML elements of the displayed web page are sequentially scanned from top to bottom and from left to right. FireScanner employs automatic block scanning techniques, based on the web page DOM. In more detail, once a web page is loaded, the hierarchical structure of the HTML elements composing the page is acquired as a Document Object Model (DOM) structure. Then a filtering and tree reconstruction process takes place, resulting in the creation of the scanning objects tree of the web page, which allows users to navigate from one element to another effectively through scanning. If the user's interaction with the page results in loading a new web page in the browser, the processing is repeated and a new scanning tree is constructed; however, such processing takes place transparently, without imposing any delays on web page loading.
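A minimal sketch of this kind of DOM filtering is given below; the node representation, the set of interactive tags, and the collapsing rule are assumptions for illustration, not FireScanner's actual implementation.

```python
INTERACTIVE_TAGS = {"a", "button", "input", "select", "textarea"}

def build_scanning_tree(node):
    """Keep interactive elements and only the containers needed to reach them."""
    children = []
    for child in node.get("children", []):
        subtree = build_scanning_tree(child)
        if subtree is not None:
            children.append(subtree)
    if node["tag"] in INTERACTIVE_TAGS or len(children) > 1:
        return {"tag": node["tag"], "children": children}
    if len(children) == 1:
        return children[0]     # collapse containers with a single useful branch
    return None                # prune subtrees with nothing to scan

page = {"tag": "body", "children": [
    {"tag": "div", "children": [{"tag": "a", "children": []},
                                {"tag": "button", "children": []}]},
    {"tag": "p", "children": []},
]}
print(build_scanning_tree(page))   # the <p> is pruned, <body> collapses to the <div> block
```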

In summary, the work presented in this section regarding scanning approaches to web accessibility has mainly focused on two directions: (i) accessible web browsers with embedded scanning, and (ii) browser add-ons, which were introduced more recently. The main challenge in developing tools for making a web page accessible lies in the fact that anyone can be a web content author, e.g., by creating a personal web page or a blog; there is therefore a plethora of web pages, ranging from professional to personal, from well-designed to poorly designed, and from accessible to inaccessible. As a result, an accessibility technology developer cannot be sure of the content that his/her technology will encounter, or whether it conforms to web design and accessibility guidelines. Such technologies can therefore only guarantee that all or part of a web page will become accessible; it is not possible to guarantee that pages will be accessed in the most optimal and usable way.

4.1.3 Educational, Entertainment and Training Games

Games nowadays have evolved from an entertainment medium to a tool for education, training and also a means for social inclusion and rehabilitation for players with disabilities. Their important role has also been realized by the accessibility community and a considerable

corpus of studies, tools, and research has been devoted to discussing and addressing accessibility issues of games (Bierre et al., 2005; Yuan, Folmer, & Harris, 2011; Westin, Bierre, Grammenos, & Hinn, 2011). According to Bierre et al. (2005), the most common problems for users with mobility impairments are that quick responses, precise timing, and the ability to position a cursor accurately are required, while it is not possible to alter the game speed. Similarly, Yuan et al. (2011) identify that motor impaired players find it difficult to position a game object precisely or activate input devices simultaneously, especially when these inputs need to be provided within a certain amount of time.

Towards providing a methodology for creating accessible games, Grammenos, Savidis, and Stephanidis (2009) introduce the concept of universally accessible games, which supports the creation of games that are proactively designed to be concurrently accessible to people with a wide range of diverse requirements and/or disabilities. One such universally accessible web chess game is UA-Chess (Grammenos, Savidis, & Stephanidis, 2005), which supports automatic and manual hierarchical scanning. The game supports users with severe motor impairments, as well as users with low vision, blind users, and those with mild memory or cognitive impairments. Furthermore, it features a two-player mode, offering in parallel alternative input and output modalities and interaction methods.

On the other hand, Folmer, Liu and Ellis (2011) studied the navigation behavior of able-bodied users in a 3D virtual world and, based on these results, proposed a new scanning system for navigating a 3D avatar in a virtual world using a single switch. This technique is called hold-and-release: rather than making a discrete selection when the switch is activated (e.g., move forward or move left), the scanning control method holds the input until the user releases it (see the sketch below). For specifying composite directions (e.g., forward and right), two approaches can be implemented: (i) extending the set of inputs with symbols that represent mixed inputs, or (ii) applying multistep selection. Evaluation through simulation indicated that hold-and-release performs better than other scanning techniques in the given context, that multistep selection was more efficient, and that extending the set of inputs with additional symbols yielded no approximation errors.
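The following sketch conveys the hold-and-release idea under stated assumptions: the direction names, the polling helper, and the timing values are illustrative, not details of Folmer et al.'s system.

```python
import time

DIRECTIONS = ["forward", "back", "left", "right"]

def move_avatar(direction):
    """Stand-in for one step of avatar movement in the virtual world."""
    print(f"moving {direction}")

def hold_and_release(pressed, scan_delay=1.0):
    """Scan the direction commands; the focused command stays active for as
    long as the switch is held, instead of triggering a one-off selection.
    `pressed()` is an assumed helper reporting the current switch state."""
    while True:
        for direction in DIRECTIONS:
            print(f"[focus] {direction}")
            deadline = time.time() + scan_delay
            while time.time() < deadline:
                if pressed():
                    while pressed():          # hold: keep applying the command
                        move_avatar(direction)
                        time.sleep(0.05)
                    break                     # release: resume scanning
                time.sleep(0.01)
```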

Another approach proposed a Sudoku game accessible either by voice or by a single switch (Norte & Lobo, 2008). Speech input allows users to control the game by saying numbers, while switch access is provided through scanning. The game employs group scanning techniques and uses a scanning-enabled numpad for entering numbers. Furthermore, settings allow adjustment of the scanning velocity, the number of repeated scanning cycles, the scanning sound, the scanning color, and the input device (mouse, switch, space key, or speech recognition). A usability evaluation of the game indicated that the scanning process introduces longer delays; however, it is required for providing access to users with speech difficulties who cannot provide accurate input through the speech recognition system. Finally, an important aspect observed during the usability tests with the scanning system was the value of the scanning sound.

Further to the aforementioned approaches, several switch games are available today, commercially or for free, and there are dedicated blogs and websites presenting and reviewing them (e.g., SpecialEffect's accessible GameBase, 2012; OneSwitch.org.uk Blog, 2012). Switch-training games aim to teach children and scanning beginners the notion of scanning and its variations, such as single switch scanning, two-switch step scanning, or automatic scanning. They are mostly based on cause and effect activities, such as

activating the switch to see a frog jump or to listen to a sound. Another category of simple switch games aims to teach fundamental skills, such as timing or turn taking. Finally, entertainment and educational games cover a large range of game genres, such as puzzle, sports, adventure, racing, flight simulator, or music games.

Closing this section, it is important to recognize that although game accessibility is a rather new field of research, a lot of effort has been devoted towards it, yielding not only interesting research approaches but also accessible games available for the end users themselves. Overall, the topic of game accessibility is an active research field and many communities have already been established, aiming to promote the concept, to guide designers and developers, and most importantly to provide accessible games for users with disabilities.

4.1.4 Learning environments

A means of allowing users with disabilities to actively participate in society is through providing access to employment. Towards this end, vocational training and continuous learning can assist these users in acquiring new skills, while scanning-enabled learning environments can provide equal access to information and resources. Although this field is very important, few approaches are reported in the literature. A possible reason could be that more efforts have focused on providing access to the overall computer environment or to the most common computer applications, resulting in a plethora of systems in those fields.

In the context of vocational training, Savidis, Grammenos and Stephanidis (2006) report on the design and development of a canteen manager application, which aimed on the one hand at training people with hand-motor and cognitive disabilities in the cashier management of a typical "canteen", and on the other hand at being used as the real-life application system, where users would simply be supervised either by a person present in the field or indirectly through a camera. During the design of the application, an important concern was the organization of options in menus and submenus, as well as the selection of appropriate representative icons for the canteen products and product categories. The application features hierarchical scanning, manually controlled by users through three activation switches. Furthermore, the canteen manager application included an on-screen scanning-enabled keyboard with a simplified layout.

Another virtual learning environment for students with special needs, including physically disabled students who access the provided interface through switches, is described in Maguire et al. (2006). The virtual learning environment included a number of learning programs, aiming to teach cause and effect, number, matching and sorting skills, and life skills, as well as slideshow authoring and presentation. Evaluation with students indicated that, in summary, the learning environment and its applications benefited switch users with severe impairments in understanding cause and effect, and encouraged them to start using computers.

However, current trends in learning and educational environments have shifted towards online learning platforms, with the most recent evolution being massive open online courses (Carr, 2012). This trend highlights once again the power of the internet; however, it poses essential accessibility challenges for online course facilitators, ranging from the course content itself and the way it is delivered to the accessibility of the online platform employed.

4.1.5 AAC systems

Augmentative and alternative communication (AAC) systems are used to assist the communication of individuals with severe speech or communication impairments (Glennen, 1997b). Since these persons often face severe motor impairments as well, scanning is a technique that is extensively discussed in the context of such systems (Glennen, 1997a; Beukelman & Mirenda, 2013) and usually applied in commercial AAC systems. A detailed discussion of AAC systems is beyond the scope of this chapter, but a high-level presentation of AAC technologies with scanning support is provided for completeness, mostly focusing on scanning issues.

In summary, commercially available AAC products support a range of functionalities, such as: predefined communication pages (Dynavox, 2011; Tobii, 2009; Zyteq, 2012), tools for creating customized pages (Tobii, 2009), an on-screen keyboard (Tobii, 2009; Zyteq, 2012), or environmental control options (Tobii, 2009; Zyteq, 2012). Communication is achieved through symbols (Tobii, 2009; Zyteq, 2012), or text and speech output (Dynavox, 2011; Prentke Romich Company, 2012; Tobii, 2009; Zyteq, 2012). Furthermore, scanning options include single and dual switch usage (Dynavox, 2011; Prentke Romich Company, 2012; Tobii, 2009; Zyteq, 2012), auditory prompts (Dynavox, 2011; Tobii, 2009; Zyteq, 2012), automatic scanning (Dynavox, 2011; Zyteq, 2012), and inverse (Zyteq, 2012) and manual (Dynavox, 2011; Zyteq, 2012) scanning.

On the other hand, recent research efforts have proposed solutions for improving user interaction with scanning-enabled AAC systems. One such approach is SIBYLLE, which enables users to enter text into any computer application, as well as to compose messages to be read out through speech synthesis (Wandmacher, Antoine, Poirier, & Départe, 2008). The system uses linear scanning, employing the following optimizations to speed up communication: (i) a frequency-ordered dynamic keyboard is provided, and (ii) a word prediction mechanism dynamically calculates the most appropriate words for a given context, adapting predictions according to the user's language and the topic of communication. Furthermore, studies with the system indicated that users were confused by the change of focus of the scanning highlighting frame and had difficulties in temporally preparing to activate the switch. As a result, a timing line runs vertically through the highlighter frame, thus providing an indication of the time remaining until the frame shifts position.

However, the main problem with most AAC systems is that the communication process tends to be exceedingly slow, since the system must scan through the available choices one at a time until the desired option is reached (Ghedira, Pino, & Bourhis, 2009). As a result, Ghedira et al. propose and evaluate an adaptive scanning method, in which the scanning time – initially defined empirically – is automatically adjusted according to the user's interaction. The algorithm for adapting the scanning time interval involves modeling the user's reaction to a visual stimulus when activating an on-off sensor, based on the Model Human Processor.

At a glance, AAC systems and the related research have mostly focused on addressing the communication needs of the target users. Scanning is a technique used in such systems; however, knowledge and experience from scanning-related research is applied with few innovative features regarding the specific interaction technique.

4.2 Scanning Object Libraries

Applications with embedded scanning have the advantage of being instantly accessible to motor impaired users; however, they suffer from the drawback that they only partially address users' interaction requirements. A user would thus need more than one application to carry out a variety of everyday tasks (e.g., web browsing, entertainment software, educational software, document authoring software, etc.). Users with motor impairments would therefore have to employ various applications with embedded scanning techniques, possibly facing interoperability issues, and would often have to update to the latest version of each such application. Furthermore, a major drawback of such approaches is their increased cost of development and maintenance.

An early effort towards providing more generic solutions and avoiding the development of specialized applications with embedded scanning was to create an augmented library of windows objects supporting scanning (Savidis, Vernardos, & Stephanidis, 1997). As a result, developers could create applications with scanning by using the augmented library components. However, applications and services developed with these techniques soon became obsolete when the next generation of the Microsoft Windows operating system was introduced.

Another toolkit, proposed by Steriadis and Constantinou (2003) for creating interfaces for quadriplegic people, contains only one dedicated interactive object class called a wifsid (Widget For Single-switch Input Devices). Wifsids are customized widgets which accept only single-switch input and feature three main functions: highlighting the object during the scan process, de-highlighting the object, and handling a received input. A wifsid provides application developers with four scanning modes: sequential scanning of all the items in a set, row-column scanning, block scanning, and diagonal scanning, in which the matrix of objects is initially divided into two triangular matrices along the main diagonal. In diagonal scanning as supported by the toolkit, when the user selects the triangle containing the element he/she wishes to interact with, row-column scanning is deployed for the selected triangle’s rows.
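A minimal sketch of the wifsid concept is given below, mirroring only the three functions described above (highlight, de-highlight, handle input) together with a simple automatic scan over a widget set. The class and function names, the console rendering, and the polling-based switch callback are hypothetical; the original toolkit targets a different platform.

```python
# Sketch of a wifsid-style single-switch widget and an automatic scanner.
from abc import ABC, abstractmethod
import time

class Wifsid(ABC):
    @abstractmethod
    def highlight(self) -> None: ...
    @abstractmethod
    def dehighlight(self) -> None: ...
    @abstractmethod
    def handle_input(self) -> None: ...

class Button(Wifsid):
    def __init__(self, label, action):
        self.label, self.action = label, action
    def highlight(self):
        print(f"[{self.label}] highlighted")
    def dehighlight(self):
        print(f"[{self.label}] normal")
    def handle_input(self):
        self.action()

def automatic_scan(widgets, switch_pressed, period_s=1.0):
    """Automatic scanning: each widget holds the focus for `period_s`
    seconds; a switch activation during that window selects it."""
    i = 0
    while True:
        widgets[i].highlight()
        deadline = time.monotonic() + period_s
        while time.monotonic() < deadline:
            if switch_pressed():          # non-blocking poll of the switch
                widgets[i].handle_input()
                widgets[i].dehighlight()
                return
            time.sleep(0.01)
        widgets[i].dehighlight()
        i = (i + 1) % len(widgets)
```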

Scanning object libraries, however, are not targeted to end users themselves; rather, they target developers, aiming to facilitate the development process and the reuse of scanning solutions. Furthermore, such approaches have proven to have limited viability and to require a continuous investment of resources, since different or updated libraries are needed for different operating systems and for their various versions.

4.3 Scanning as an independent tool

With the aim to alleviate the difficulties introduced by single applications with embedded scanning and to provide a more generic solution for the target users themselves, scanning tools enable users to operate the overall graphical environment of the operating system and to interact with any application. The most important benefit of scanning tools is that users do not need specialized software for each different activity they wish to carry out (e.g., read email, browse the web, compose a document), and they do not need to learn different scanning methods and interaction patterns. As a result, users can become efficient in using these tools sooner, thereby decreasing the required interaction time and the errors they may make. There are several efforts towards this direction, including commercial and research approaches.

CrossScanner (RJ Cooper & Associates, 2012a) is a software application providing access to all the applications of a windowing environment (Windows, Mac) through one or two switches, allowing the user to select an interface element by identifying its vertical and horizontal coordinates. As soon as the application is activated, a line starts scanning the screen vertically. By pressing the switch, the user stops the vertical line scanning and thereby selects the y-coordinate of the element he wishes to interact with. Then, a hand cursor starts scanning the specific screen line horizontally, in order to allow the user to select the x-coordinate of the element. Once both coordinates have been defined, the element is activated. The software also provides options for double-clicking, dragging and text input.
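The following sketch illustrates this two-directional selection process, assuming a non-blocking, polling-based switch callback; the sweep speed, screen dimensions, and function names are hypothetical, and injecting the final click via an OS input API is left out.

```python
# Sketch of two-directional (crosshair) scanning: one switch press freezes
# the vertical sweep (y), a second freezes the horizontal sweep (x).
import time

def sweep(limit: int, step: int, switch_pressed, delay_s: float = 0.05) -> int:
    """Move a scan line from 0 towards `limit` until the switch is pressed."""
    pos = 0
    while not switch_pressed():
        time.sleep(delay_s)
        pos = (pos + step) % limit
    return pos

def cross_scan(screen_w, screen_h, switch_pressed):
    y = sweep(screen_h, 4, switch_pressed)   # horizontal line moving down
    x = sweep(screen_w, 4, switch_pressed)   # pointer moving along that line
    return x, y   # coordinates at which to inject the click
```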

ScanBuddy (Applied Human Factors, 2012) is a mouse emulation scanning software for Microsoft Windows, which uses a divide-and-conquer approach in order to allow users to quickly identify the general region where the mouse activity is to be performed, and then employs two-directional scanning to help users identify the specific target they wish to interact with. The software allows users to simulate click and double click of the left mouse button, click and double click of the right mouse button, or drag. In addition, users can perform other mouse operations, such as control-click and scrolling.

SwitchXS (Origin Instruments, 2012) is a mouse and keyboard emulation software for Mac OS X, providing access to all the applications running on the specific operating system, by allowing users to control the mouse pointer and perform any mouse action (e.g., click, double click, shift click, etc.). In order to achieve this, the software embeds a number of predefined scanning panels that the user can choose from to move the mouse pointer, position the cursor, or click and type into all applications. To further enhance users’ performance and allow them to customize the software according to their needs, SwitchXS also provides a panel editor for users to create their own scan panels.

Autonomia (Steriadis & Constantinou, 2002) is a scanning-enabled system allowing severely motor-impaired users to (i) direct the movement of the mouse cursor towards eight directions and enable a number of common mouse functions, such as click, wheel, and drag-and-drop, (ii) provide text input through a virtual keyboard, and (iii) start other software applications or set electrical and electronic appliances on or off. In order to carry out a mouse action (e.g., click a screen target), the user has to move the mouse cursor, by selecting one of the eight directional arrow buttons displayed in the cursor control window, and then to identify the type of desired mouse action (e.g., click, double-click, drag & drop), by selecting one of the twelve possible action options available in the cursor control window. The virtual keyboard is a QWERTY keyboard organized into four key groups. Finally, the console screen allowing users to start other applications or control appliances can be customized with up to 255 buttons, grouped into pages.
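A minimal sketch of the eight-directional cursor control underlying systems such as Autonomia is shown below. The step size, polling-based switch callback, and the omission of the action-selection step are simplifying assumptions.

```python
# Sketch of eight-directional scanning: once a direction button has been
# selected via scanning, the pointer moves in that direction until the
# switch is pressed again.
import time

DIRECTIONS = {
    "up": (0, -1), "up-right": (1, -1), "right": (1, 0),
    "down-right": (1, 1), "down": (0, 1), "down-left": (-1, 1),
    "left": (-1, 0), "up-left": (-1, -1),
}

def move_pointer(start, direction, switch_pressed, step=5, delay_s=0.05):
    """Move from `start` along `direction` until the next switch press."""
    x, y = start
    dx, dy = DIRECTIONS[direction]
    while not switch_pressed():           # second press stops the movement
        x, y = x + dx * step, y + dy * step
        time.sleep(delay_s)               # controls the pointer speed
    return x, y
```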

FastScanner (Ntoa, Savidis, & Stephanidis, 2004) is a tool which provides switch access to Microsoft Windows applications, without requiring any modification of them, by employing scanning techniques with dynamic retrieval of the applications’ hierarchical structure. The tool provides sequential access to all the interactive elements of an application, while the currently active element is indicated by a coloured border. The user may interact with the indicated element by pressing an appropriate switch. FastScanner provides single-switch access and is available in two modes: manual scanning, where the user has to explicitly indicate when the scanning dialogue should move to the next interactive element; and automatic scanning, where scanning automatically proceeds to the next interactive element when a specific time interval elapses without a user action. Furthermore, FastScanner supports two modes of function: standard, which addresses less experienced users, and quick scanning, which addresses more experienced users and accelerates interaction (Ntoa et al., 2009). In order to further accelerate interaction, group scanning is also supported, using container objects (such as windows, group boxes, title bars, tables, frames) as a navigation enhancement and allowing users to directly skip the scanning of large groups of objects.
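The following sketch illustrates hierarchical (group) scanning of this kind. The widget tree here is a hypothetical stand-in for the hierarchy that such tools retrieve dynamically from the operating system's accessibility API, and the `choose` callback abstracts one scanning pass over a group's members.

```python
# Sketch of hierarchical (group) scanning: container nodes are scanned
# first, letting the user skip whole subtrees of interactive elements.

class Node:
    def __init__(self, name, children=(), activate=None):
        self.name, self.children, self.activate = name, list(children), activate

def hierarchical_scan(node, choose):
    """`choose(items)` stands in for one scanning pass over `items`,
    returning the item selected via the switch (or None to go back up)."""
    while True:
        if not node.children:                 # leaf: an interactive element
            if node.activate:
                node.activate()
            return
        picked = choose(node.children)        # scan only this group's members
        if picked is None:
            return                            # skip the rest of this group
        node = picked

# Hypothetical hierarchy; real tools build it from the running application.
ui = Node("window", [
    Node("toolbar", [Node("open", activate=lambda: print("open")),
                     Node("save", activate=lambda: print("save"))]),
    Node("document area", [Node("text", activate=lambda: print("edit"))]),
])
```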

The cluster scanning system (Biswas & Robinson, 2008) collects all possible targets (e.g., icons, buttons, combo-boxes, etc.) by enumerating window processes, and then iteratively divides the screen into several clusters of targets based on their locations. Clusters are sequentially highlighted, and once the user selects a relatively small cluster containing the element he wishes to interact with, eight-directional scanning is activated. In the context of this work, a performance evaluation of cluster scanning in comparison with block and eight-directional scanning was carried out through a simulator. The results of this comparison show that cluster and block scanning systems outperform eight-directional scanning.
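A minimal sketch of location-based cluster scanning is given below. K-means clustering is used here only for brevity; the published system employs its own iterative clustering scheme, and the `choose` callback again stands in for one scanning pass, returning the group on which the switch was activated.

```python
# Sketch of cluster scanning: targets are grouped by screen location,
# clusters are scanned first, and scanning recurses into the chosen cluster.
import random

def kmeans(points, k, iters=20):
    """Toy k-means over (x, y) target positions."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                            + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

def cluster_scan(targets, choose, small=3):
    """Recursively narrow down to one target; `choose` is one scan pass
    that returns the highlighted group selected by the user."""
    while len(targets) > small:
        groups = [g for g in kmeans(targets, min(4, len(targets))) if g]
        targets = choose(groups)
    return choose([[t] for t in targets])[0]
```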

A recent research effort (Biswas & Langdon, 2011) combined eye tracking with scanning techniques, in order to speed up interaction for users with severe motor impairments. In more detail, the proposed system moves the mouse pointer and places it approximately at the point of the screen where the user is looking. Users can then activate the eight-directional scanning system, by pressing a keyboard key, in order to identify the specific element they wish to interact with and activate it. The purpose of this work was to combine two techniques widely used by users with severe motor impairments, aiming to alleviate the individual interaction difficulties introduced by each one, namely the slow interaction of scanning and the strenuous interaction of eye tracking. A user experiment with eight able-bodied users indicated that, although the system achieves interaction speed similar to that of eye tracking alone, users rated it as easier and less strenuous to use.

In a nutshell, most commercial efforts have focused on mouse emulation, supporting the basic mouse functions and thus allowing users to move the mouse pointer in several directions and to select the mouse action they wish to carry out. Such interaction, although it provides complete access to the operating system, may slow down users’ interaction and introduce errors in case the pointer moves beyond the desired location. To address such problems, other efforts have suggested alternative target selection approaches, for example according to the targets’ location on the screen and/or their place in the objects’ hierarchical structure. In order to be complete and also support text entry, most scanning tools additionally provide an on-screen keyboard.

5 SCANNING ACCESSIBILITY FOR MOBILE DEVICES

Mobile devices are becoming an indispensable everyday tool, allowing their users to carry out a variety of tasks, ranging from phone calls to surfing the web, viewing documents, managing appointments, reading emails, connecting with friends, or playing games. In short, mobile devices are used as portable mini computers, empowering their users to perform most of the activities they would carry out with their desktop computers. However, an important barrier for users with severe motor disabilities is the touch interaction modality employed by these devices. Given the recent emergence of mobile devices, one might expect that only limited efforts would have been reported towards their accessibility in general, and through scanning with the use of switches in particular. Nevertheless, some commercial solutions are already available, evidencing the influence of mobile devices as well as their prominent role in everyday activities.

The first concern regarding mobile phones’ scanning accessibility is how to connect a switch to the device. To this end, several switch interfaces have been developed, which communicate with the device via Bluetooth. Solutions have been developed for both the Android (Komodo Open Lab, 2012a; Unique Perspectives Ltd., 2012) and iOS (Komodo Open Lab, 2012b; Pretorian Technologies Ltd., 2012) platforms, while the scanning features that are supported include:

- automatic scanning (Komodo Open Lab, 2012a; Komodo Open Lab, 2012b; Pretorian Technologies Ltd., 2012),
- manual scanning (Komodo Open Lab, 2012b; Pretorian Technologies Ltd., 2012),
- inverse scanning (Komodo Open Lab, 2012a),
- adjustment of the scanning speed (Komodo Open Lab, 2012a; Komodo Open Lab, 2012b; Pretorian Technologies Ltd., 2012),
- navigation on-screen keyboard (Komodo Open Lab, 2012a), and
- typing on-screen keyboard (Komodo Open Lab, 2012a; Pretorian Technologies Ltd., 2012).

Regarding the number of switches, all the interfaces support access to mobile devices with one, two, or multiple switches. A minor shortcoming of the switch interfaces is that they lead to increased consumption of the device’s battery. A technology that is expected to be available in the near future, specifically addressing iPad users, is Connect (Ablenet Inc., 2012), featuring switch access with scanning capabilities as well as an integrated battery. Furthermore, another device supporting scanning access to the iPad is the iPad VO Controller (RJ Cooper & Associates, 2012b), which does not allow connection with switches; rather, it features six buttons for scanning navigation (select, back, next, home, type/move, activate/deactivate keyboard).

A further concern is the accessibility of the applications provided for the mobile device. In general, scanning accessibility of iOS-based applications is based on the VoiceOver technology (Apple, 2012), and therefore all applications which are VoiceOver compatible are also accessible through scanning. Scanning-accessible applications are fewer for the Android platform; however, given the rapid development of similar applications for the iOS platform, it is expected that more will become available in the near future. The types of applications range from simple educational games to entertainment games and augmentative and alternative communication applications. Applications can be found on the Google Play Store, as well as on iTunes, while indicative lists of scanning-accessible applications are available from Komodo Open Lab (2012c; 2012d) and Farall and Dunn (2012).

In conclusion, the accessibility of mobile devices for switch users is a recently explored topic, which employs techniques already used for computer accessibility, namely access through switches and switch interfaces, based on the scanning technique. The challenges confronted are more or less similar to those in the case of personal computers, and mainly lie in the fact that two levels of control are required: control at the level of the operating system, and application control. The latter issue falls to developers, who must embed accessibility features in their applications, at the extra cost of additional resources but with the gain of an increased target audience. As a result, the discussion on universal design (Stephanidis & Savidis, 2001) and on proactive approaches is re-opened and applicable in this domain as well. A novel challenge reported in the case of mobile devices is that of power consumption; however, some efforts have already been targeted at addressing it.

6 SCANNING APPLICATIONS FOR ENVIRONMENTAL CONTROL

An important concern of persons with severe motor impairments is controlling their immediate environment, so as to facilitate everyday needs, such as operating communication devices, electronic devices, and environment components (e.g., doors, windows, etc.). Environmental control systems are not a new technology; in fact, they are a rather mature technology with several commercial products. However, such systems continuously evolve along with new technological advancements, while they have recently received renewed research interest in the context of ambient intelligence environments and ambient assisted living.

6.1 Scanning-enabled environmental control systems

According to a review carried out in the late 1990s (Wellings & Unsworth, 1997), environmental control systems date back to the 1950s. At the time of the review, environmental control systems had the potential to operate communication aids and wheelchairs as well as household equipment. Systems were reported to incorporate a control unit, which activated peripheral devices and was controlled by the user with a switch through scanning. Furthermore, it was found that many such systems incorporated a remote control unit similar to that used with television sets. Given the high impact of such technologies on physically disabled persons’ independent living, there are several commercial products and research efforts for environmental control, which are presented in this section.

An early approach to environmental control by motor-disabled people is AUTONOMY (Flachberger, Panek, & Zagler, 1994), which can be used for communication as well as for environmental control. The system can be set up by a caregiver in order to match the needs of the user in the best possible way, supporting a variety of input methods (switches, joystick, keyboard, mouse, touchscreen, speech) and output modalities (visual through an LCD or CRT screen, speech, or sound). Yamamoto and Ide (1996) describe a system which controls indoor home electronic devices, a personal emergency alarm, and the keyboard emulator of the Microsoft Windows operating system. The system can be used with a single switch, or with two or more (up to 10) switches, and supports both automatic and manual scanning techniques. A few years later, Han, Jiang, Scucces, Robidoux and Sun (2000) introduced PowerScan, a single-switch environmental control system for persons with disabilities. PowerScan users can control the electronic devices within their surroundings through a remote control that interacts with every electronic device in the environment, by sending a radio frequency signal to the desired electrical or electronic devices, which are connected to X-10 home automation modules. The remote control can be operated via a single switch by having its functions sequentially scanned. The user initially has to select one among five operation options, namely TV, VCR, X-10, sleep timer, and delay of the scanning period. Then, for the selected option, a number of choices are sequentially scanned.

Since then, several commercial products supporting environmental control for persons with severe motor disabilities have become available. Technological advances lead to continuous updates with new and more sophisticated products, supporting the widest possible range of devices and appliances. Currently available environmental control units may control audio and video sources (Tash Inc., 2000a; Tash Inc., 2000b), as well as additional domotic devices such as lights, lamps, power sockets, alarms, intercoms, doors, windows and curtains (Possum Controls Limited, 1999; Possum Controls Limited, 2009). Some units also support bed control and/or nurse calling (Saje Technology, 2010; Angel ECU, 2012; Break Boundaries, 2010). Additional characteristics of such units include the capability to be programmed (Abilia, 2011a; Abilia, 2011b; Saje Technology, 2010), so as to support the highest possible level of user customization, functionality as a telephone (Saje Technology, 2010; Break Boundaries, 2010) and intercom system (Possum Controls Limited, 2009), or embedded speech output (Possum Controls Limited, 2009) allowing the user to select and play a number of recorded messages. Scanning techniques involved in environmental control systems allow single and dual switch access, scanning speed adjustment (Tash Inc., 2000a; Tash Inc., 2000b; Saje Technology, 2010), as well as automatic and manual scanning. Finally, a different approach is Evoassist (RSLSteeper, 2012), which turns a mobile device (iPhone, iPod Touch or iPad) into a universal home environmental controller.

Recent research on environmental control systems for disabled people (Tao, Zhang, & Wang, 2008) suggests that the design of such a system must take many issues into account, such as identifying the easiest way for the user to interact with the system, determining the most important environmental control functions that the system should provide, and addressing safety, setup and support considerations. Based on their research findings, Tao et al. (2008) propose a simple environmental control system supporting three input methods: pneumatic switch, big button switch, and touch panel. Scanning interaction with the system is structured in three levels: (i) selection of one of the available control units; (ii) selection of available equipment from the selected control unit; and (iii) selection of one of the available commands.
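This three-level structure can be sketched as follows, with a hypothetical device tree; the `choose` callback again stands in for one scanning pass with switch selection.

```python
# Sketch of three-level environmental control scanning:
# control unit -> equipment -> command. The device tree is hypothetical.

ENVIRONMENT = {
    "living room unit": {
        "TV": ["power", "volume up", "volume down", "channel up"],
        "lamp": ["on", "off"],
    },
    "bedroom unit": {
        "bed": ["raise head", "lower head"],
        "curtains": ["open", "close"],
    },
}

def scan_menu(options, choose):
    """One scanning pass: options are highlighted in turn and `choose`
    returns the one on which the switch was activated."""
    return choose(list(options))

def select_command(choose):
    unit = scan_menu(ENVIRONMENT, choose)                      # level 1
    equipment = scan_menu(ENVIRONMENT[unit], choose)           # level 2
    command = scan_menu(ENVIRONMENT[unit][equipment], choose)  # level 3
    return unit, equipment, command
```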

Another recent effort towards enabling severely motor-disabled users to control their immediate environment is reported by Felzer, Nordmann, and Rinderknecht (2009), who created a scanning-based computer application enabling its user to control the immediate environment, e.g., by making a phone call, toggling the lights, or sending particular IR remote signals. Interaction with the application is accomplished through automatic scanning, while scanning control is achieved not through switches, but by monitoring the activity of a single dedicated muscle. The application features four modules, namely: (i) a module providing telephone functionality, such as making phone calls, answering incoming calls, managing phone numbers and call history, as well as composing text messages; (ii) a universal remote control module, with up to twenty-four buttons associated with appropriate IR signals; (iii) a switch-board module, allowing users to turn on or off devices (e.g., lamps, a fan, a heater, etc.) connected to switchable power outlets; and (iv) a synthetic speech module, enabling speech impaired users to speak through the computer, by typing words or using predefined phrases.

Summarizing, there are several commercial and research efforts towards enabling physically disabled users to control their environment. Although the high impact of environmental control systems on fostering physically disabled persons’ independent living has been recognized since the 1950s, there is continuing interest in such systems, so as to follow the rapid technological evolution and provide systems that serve the target users’ needs in the best possible way, allowing them to control a wide variety of devices and carry out everyday tasks.

6.2 Environmental control and accessibility of Ambient Intelligence environments

As a result of the increasing demand for ubiquitous and continuous access to information and services, information technologies are expected to evolve toward a new computing paradigm known as ambient intelligence (Emiliani & Stephanidis, 2005). Ambient Intelligence (AmI) presents a vision of a not too distant future where “intelligent” environments react in an attentive, adaptive and (pro)active way to the presence and activities of humans and objects, in order to provide appropriate services to the inhabitants of these environments (Stephanidis, Antona, & Grammenos, 2007). This will have profound consequences on the type, content, and functionality of emerging products and services, as well as on the way people will interact with them, bringing about multiple new requirements for the development of information technologies, along with both opportunities and challenges for elderly people and people with disabilities (Emiliani & Stephanidis, 2005). To this end, universal access and design for all have a key role in the development of AmI environments: in the context of AmI, design for all acts as a catalyst toward embedding accessibility and usability into the new technological environment through generic solutions, and therefore has the potential to make the difference between ultimate success and adoption, or rejection, of interactive technologies by users (Stephanidis, 2009).

An effort towards fostering design for all in AmI environments is proposed by Kartakis and Stephanidis (2010), who introduce two tools, named AmIDesigner and AmIPlayer, which have been specifically developed to reduce development effort and ‘inject’ accessibility into AmI applications from the early design stages, through the automatic generation of accessible Graphical User Interfaces in AmI environments. AmIDesigner is a graphical design environment, whereas AmIPlayer is a supporting tool for GUI generation. The combination of these two tools is intended to significantly reduce the complexity of developing GUIs in AmI environments through a design-and-play approach, while at the same time offering built-in accessibility of the generated user interfaces, by integrating non-visual feedback and a scanning mechanism. By using AmIDesigner for the development of an application, a scanning mechanism is directly embedded in the produced user interfaces, and the designer only has to set the order in which interface widgets are to be scanned. Furthermore, scanning can also be associated with non-visual feedback in a multimodal fashion, thus making switch-based interaction accessible to blind users.

An application that was built using the aforementioned tools is CAMILE (Grammenos, Kartakis, Adami & Stephanidis, 2008), which aims at intuitively controlling multiple sources of light in AmI environments. For example, the application allows users to control the color of a LED light or the intensity level of a neon light. CAMILE was designed so that it can be used by anyone: the young, the elderly, people with visual disabilities, and people with hand-motor disabilities alike. The system supports scanning techniques for motor-impaired users, as well as touch-screen operation for sighted users with no motor impairments, and remote-controlled operation in combination with speech for visually impaired users or for tele-operation by sighted users. The system employs hierarchical scanning techniques, controlled manually through three switches or automatically through one switch. Furthermore, CAMILE’s scanning hierarchy and sequence have been designed so that frequently performed actions (e.g., turning all lights on/off) reside at the top level and similar items are semantically grouped (e.g., all neon lights, all LED lights, dimming accelerator buttons). The system was evaluated with ten participants, who rated its usability highly and, through the think-aloud protocol that was employed, provided qualitative feedback regarding the system’s usefulness, effectiveness, learnability and likability.

In summary, there are few efforts implementing switch and scanning access in ambient intelligence environments. A potential reason for this could be that the field of ambient intelligence is still new and that research in this domain has focused on the design of new interaction modalities, mainly emphasizing natural interaction. As a result, accessibility in general, and in particular accessibility for users with severe motor impairments, as well as universal access issues in ambient intelligence environments, remain open topics. However, the design of new interaction modalities and flexible/adaptive multimodal user interfaces in the context of AmI environments is expected to contribute to improving accessibility for users with physical disabilities in such environments (Carbonell, 2006). Furthermore, scanning is not likely to become an obsolete technique, since for certain user categories it is the simplest and most usable way of interaction. Nevertheless, scanning can be combined with other interaction modalities in AmI environments in order to achieve more efficient interaction.

The next section describes such an approach for multimodal interaction, addressing the needs of physically disabled persons in ambient intelligence environments, which is currently being developed for the smart home environment of the FORTH-ICS Ambient Intelligence Research Facility (Stephanidis, 2006), in the context of the FORTH-ICS internal RTD Programme 'Ambient Intelligence'i.

6.2.1 Case Study: Head scanner for domotic control in AmI environments


The basic objective of the head scanner system is to provide persons with severe physical disabilities (due to bone injuries, ALS, multiple sclerosis, cerebral palsy, spinal cord injuries, muscular dystrophy, amputations, etc.) with intuitive interaction with their environment. Figure 1 illustrates an indicative setup for users with paralysis who have lost the capacity to move any body part below the neck. In more detail, the head scanner system comprises: (i) a motion sensing input device, such as Microsoft Kinect (Microsoft, 2012a) or Asus Xtion Pro (Asus, 2011), which is placed in front of the user’s head; (ii) head pose tracking software; (iii) a scanning-based application that provides remote control of the environment’s devices; and (iv) a switch device. The scanning application runs on a tablet mounted next to the user.

Figure 1 Head scanner system setup

Head scanner system users can gain full control of the surrounding environment’s devices using only their head, for selecting the desired device for interaction, together with a binary input assistive technology, such as sip-and-puff or binary switches. The head pose tracking software of the system is based on the work described in Padeleris, Zabulis and Argyros (2012), which provides high accuracy and tolerance to occlusions in human head pose estimation based on images acquired by a depth camera. In order for the system to be aware of which device in the immediate environment the user is looking at, it uses a 3D model simulation of the environment, which includes spatial information and the dimensions of the surrounding devices. The head pose tracking software provides as input to the system, in real time, a vector perpendicular (normal) to the user’s face, considered in the 3D model’s space. The intersection of this vector with a device signifies to the system that this particular device is currently being looked at by the user.
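A minimal sketch of this device look-up step is given below, assuming that each device in the 3D model is approximated by an axis-aligned bounding box and that the tracker delivers the head position and the face-normal vector; the device names and coordinates are hypothetical.

```python
# Sketch of mapping the head-pose vector onto the environment's 3D model:
# the face-normal ray is tested against each device's bounding box.

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned box?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if not lo <= o <= hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near, t_far = max(t_near, min(t1, t2)), min(t_far, max(t1, t2))
        if t_near > t_far:
            return False
    return True

DEVICES = {  # hypothetical positions in the room's coordinate system
    "tv": ((2.0, 0.5, 3.0), (3.0, 1.5, 3.2)),
    "blinds": ((-1.0, 0.0, 3.0), (0.0, 2.0, 3.1)),
}

def looked_at_device(head_pos, face_normal):
    for name, (lo, hi) in DEVICES.items():
        if ray_hits_box(head_pos, face_normal, lo, hi):
            return name
    return None
```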

In order to estimate the selection of a device by the user, the system follows the dwell click approach that is regularly used as a method for triggering an active object within head tracking systems (Jacob, 1991). In more detail, every time the user looks at a particular device continuously (“dwelling”) for longer than a specified time, the system assumes that this device is selected for interaction and pauses the head tracking processing. It then deploys the control panel UI of the selected device on the tablet. Subsequently, the user is able to navigate and interact with the activated UI through scanning, using binary input devices. The scanning process provides a sequential scan of the interactive and informative UI elements (e.g., buttons, labels), highlighting the currently active interface element and also providing auditory cues, in order to allow users to interact with the tablet without necessarily looking at it. Additionally, it supports grouping of related interactive elements (e.g., the volume up and down buttons of a TV), speeding up the interaction and enabling users to skip unwanted groups of elements (see Figure 2a). Finally, the design of the objects’ hierarchy has taken into account the frequency of use of specific actions, placing them topmost in order to further enhance users’ interaction (see Figure 2b, where the most common actions for controlling blinds, i.e., totally folding and totally unfolding them, have been placed high in the objects hierarchy, without being grouped).
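The dwell-based selection logic can be sketched as follows; the threshold value and the per-frame update interface are assumptions for illustration.

```python
# Sketch of dwell-based selection: a device counts as selected only after
# it has been looked at continuously for `dwell_s` seconds.
import time

class DwellDetector:
    def __init__(self, dwell_s: float = 1.5):
        self.dwell_s = dwell_s   # illustrative threshold
        self.current = None
        self.since = None

    def update(self, device):
        """Call on every head-tracking frame with the device currently
        looked at (or None); returns a device once its dwell time elapses."""
        now = time.monotonic()
        if device != self.current:
            self.current, self.since = device, now
            return None
        if device is not None and now - self.since >= self.dwell_s:
            self.since = now      # re-arm so selection fires only once
            return device
        return None
```

In the real system the head tracking is paused once a selection fires, so the detector is not re-triggered while the device's control panel UI is active.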

Figure 2 Head scanner UI examples for (a) TV control (b) blinds control

At any time, the user is able to disengage the selected device and reactivate the head tracking process by pressing the corresponding lock button, which is available in all the devices’ control UIs; the system then starts monitoring the user’s head again. For example, in Figure 2a the interaction is locked on the television set, and therefore the user can freely move his head and look around the room without issuing false commands to the system. In Figure 2b, the lock icon has just been pressed through scanning, indicating the user’s intention to exit the blinds control UI and reactivate the head tracking process in order to select another device to control. The diagram of Figure 3 illustrates the overall interaction process of a user with the system. The head scanner system is able to support any domotic device that provides remote control over a network, such as, for example, a television, lights, doors, blinds, or beds.

Figure 3 Head Scanner interaction diagram

7 CONCLUSIONS AND CURRENT CHALLENGES

This chapter has provided an overview of the scanning technique, which aims to enable the interaction of severely physically disabled persons with computational devices. In short, scanning allows users to interact with a graphical user interface, be it a single application or an entire operating system, through as little as a single binary switch. Switches are simple on/off devices which can be activated in a number of ways, e.g., by hand, head, foot, finger, or breath. As a result, scanning abolishes the need for direct selection of interface elements and establishes a simpler interaction pattern.

Towards a systematic review of scanning-based efforts, their presentation was structured around the context of use, namely personal computers, mobile devices, and environmental control in smart and AmI environments. The topic of scanning accessibility for personal computers is the most long-standing, and as a result it required an important portion of the discussion. Approaches in this field have been studied according to their intended use: text entry, web browsing, gaming, learning, communicating, or accessing the entire operating system. The objective of this chapter was to provide a review of existing approaches in each of the aforementioned topics, highlighting already mature solutions, introducing new advancements and discussing the emerging challenges.

Overall, an important asset of scanning is that in certain cases it is the only possible way of interaction. On the other hand, a considerable barrier is the slow interaction it imposes. As a result, a currently active research topic concerns improving the efficiency of scanning-based interaction. Another concern, faced not only in the case of scanning but in the design of accessible systems in general, is the adoption of a user-centered design process involving end users from the design phase through to the evaluation of the system. To this end, several efforts have focused on creating models of interaction aiming to predict the performance and efficiency of a scanning system’s users, restricting as much as possible the resources required from end users.

Concluding, a trend that has become apparent from this literature review is that the focus has now shifted from traditional interaction with the personal computer to interaction with mobile devices and, more recently, with AmI environments. This evolution dictates the need for consolidating previous research results and incorporating achievements from past efforts into the emerging interaction environments. On the other hand, the new interaction paradigms and the technological possibilities they offer can assist in further advancing scanning techniques and establishing them as an efficient and usable interaction modality, which is a novel issue that needs to be addressed.

8 REFERENCES

Abascal, J., Gardeazabal, L., & Garay, N. (2004). Optimisation of the selection set features for scanning text input. Computers Helping People with Special Needs, 626-626.

Abilia (2011a). 425700 Control Prog English Manual. Retrieved from: http://www.abilia.org.uk/userfiles%5C255786%5CManual_Control_Prog.zip

Abilia (2011b). Control Omni English Manual. Retrieved from: http://www.abilia.org.uk/userfiles%5C255786%5CControl_Omni_User_Guide_DK_Ver_A.zip

Ablenet Inc. (2012). Connect Enabling the iPad for everyone. Retrieved from: http://www.ablenetinc.com/Assistive-Technology/iPad-iPhone-and-iPod-Accessories-Apps/Connect

Angel ECU (2012). Angel FX Simple Overview. Retrieved from: http://www.angelecu.com/simple-overview.html

Apple (2012). VoiceOver in Depth. Retrieved from: http://www.apple.com/accessibility/voiceover/

Applied Human Factors (2012). ScanBuddy. Retrieved 26 October 2012 from: http://newsite.ahf-net.com/scanbuddy/

Asus (2011). Xtion PRO LIVE Quick Start Guide. Retrieved from: http://www.asus.com/Multimedia/Motion_Sensor/Xtion_PRO/#download

Bangemann, M. (1994). Recommendations to the European Council: Europe and the global information society. Brussels: European Commission.

Belatar, M., & Poirier, F. (2008). Text entry for mobile devices and users with severe motor impairments: handiglyph, a primitive shapes based onscreen keyboard. In Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility (pp. 209-216). ACM.

Beukelman, D.R., Mirenda, P., (2013). Augmentative and Alternative Communication: Supporting Children and Adults with Complex Communication Needs, fourth edition. Paul H Brookes Pub Co, pp. 73-100

Bhattacharya, S., Basu, A., & Samanta, D. (2008). Computational modeling of user errors for the design of virtual scanning keyboards. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 16(4), 400-409.

Bhattacharya, S., Samanta, D., & Basu, A. (2008). Performance models for automatic evaluation of virtual scanning keyboards. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 16(5), 510-519.

Bierre, K., Chetwynd, J., Ellis, B., Hinn, D. M., Ludi, S., & Westin, T. (2005). Game not over: Accessibility issues in video games. In Proc. of the 3rd International Conference on Universal Access in Human-Computer Interaction (pp. 22-27).

Biswas, P., Langdon, P. (2011). A new input system for disabled users involving eye gaze tracker and scanning interface. Journal of Assistive Technologies, 5 (2), 58 – 66.

Biswas, P., Robinson, P. (2008). A new screen scanning system based on clustering screen objects. Journal of Assistive Technologies, 2 (3), 24–31.

Bloorview Kids Rehab (2009). User Guide WiViK On-screen Keyboard: Version 3 / Microsoft® Windows®. Retrieved from: http://www.wivik.com/Downloads/WiViK32UserGuide.pdf

BreakBoundaries (2010). REACH: Technology Helping to Break Physical Boundaries. Retrieved from: http://www.breakboundaries.com/REACHbrochure.pdf

Carbonell, N. (2006). Ambient multimodality: towards advancing computer accessibility and assisted living. Universal Access in the Information Society, 5(1), 96-104.

Carr, N. (2012). The Crisis in Higher Education. MIT Technology Review. Retrieved from: http://www.technologyreview.com/featuredstory/429376/the-crisis-in-higher-education/

Damper, R. I. (1984). Text composition by the physically disabled: A rate prediction model for scanning input. Applied ergonomics, 15(4), 289-296.

Danger, C. (2006). Click N Type. Retrieved from: http://www.bltt.org/software/clickntype/index.htm

Dynavox (2011). Dynavox Maestro. Retrieved from: http://www.dynavoxtech.com/download.ashx?FileId=1865&DocId=38dd1c47-7700-41f4-baa9-79a530c574cd

Emiliani, P. L., & Stephanidis, C. (2005). Universal access to ambient intelligence environments: opportunities and challenges for people with disabilities. IBM Systems Journal, 44(3), 605-619.

Farall, J., and Dunn, A. (2012). Switch Accessible Apps for iPad/iPhone. Retrieved from: http://www.janefarrall.com/html/resources/Switch%20Accessible%20Apps%20for%20iPad.pdf

Felzer, T., MacKenzie, I., Beckerle, P., & Rinderknecht, S. (2010). Qanti: a software tool for quick ambiguous non-standard text input. Computers Helping People with Special Needs, 128-135.

Felzer, T., Nordmann, R., & Rinderknecht, S. (2009). Scanning-based human-computer interaction using intentional muscle contractions. Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, 509-518.

Felzer, T., & Rinderknecht, S. (2009). 3dScan: An environment control system supporting persons with severe motor impairments. In Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility (pp. 213-214). ACM.

Felzer, T., Strah, B., & Nordmann, R. (2008). Automatic and self-paced scanning for alternative text entry. In Proceedings of the IASTED International Conference on Telehealth/Assistive Technologies (pp. 1-6). ACTA Press.

Flachberger, C., Panek, P., & Zagler, W. (1994). AUTONOMY—a flexible and easy-to-use assistive system to support the independence of handicapped and elderly persons. Computers for Handicapped Persons, 65-75.

Folmer, E., Liu, F., & Ellis, B. (2011). Navigating a 3D avatar using a single switch. In Proceedings of the 6th International Conference on Foundations of Digital Games (pp. 154-160). ACM.

Gnanayutham, P., Bloor, C., & Cockton, G. (2004). Soft keyboard for the disabled. Computers Helping People with Special Needs, 999-1002.

Ghedira, S., Pino, P., & Bourhis, G. (2009). Conception and experimentation of a communication device with adaptive scanning. ACM Transactions on Accessible Computing (TACCESS), 1(3), 14, 23 pages.

Glennen, S. L. (1997a). Augmentative and Alternative Communication Systems. In S.L. Glennen and D.C. DeCoste (eds.) the handbook of augmentative and alternative communication. Singular Publishing Group, 59-96.

Glennen, S. L. (1997b). Introduction to augmentative and alternative communication. In S.L. Glennen and D.C. DeCoste (eds.) the handbook of augmentative and alternative communication. Singular Publishing Group, 3-20.

Grammenos, D., Kartakis, S., Adami, I., & Stephanidis, C. (2008). CAMILE: controlling AmI lights easily. In Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments (p. 35). ACM.

Grammenos, D., Savidis, A., & Stephanidis, C. (2005). Ua-chess: A universally accessible board game. HCI: Exploring New Interaction Environments, 7.

Grammenos, D., Savidis, A., & Stephanidis, C. (2009). Designing universally accessible games. Computers in Entertainment (CIE), 7(1), 8.

Han, Z., Jiang, H., Scucces, P., Robidoux, S., & Sun, Y. (2000). PowerScan: A single-switch environmental control system for persons with disabilities. In Bioengineering Conference, 2000. Proceedings of the IEEE 26th Annual Northeast (pp. 171-172). IEEE.

Haneman, B. and Bolter, D. (2007). gok - GNOME on-screen keyboard. Retrieved from: http://www.unix.com/man-page/OpenSolaris/1/gok

Henry, S. L. et al. (2005). Introduction to Web Accessibility. W3C – Web Accessibility Initiative, Copyright © 1994-2012 W3C® (MIT, ERCIM, Keio). Retrieved, November 1, 2012, from: http://www.w3.org/WAI/intro/accessibility.php

Jacob, R. J. (1991). The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS), 9(2), 152-169.

Jones, P. E. (1998). Virtual keyboard with scanning and augmented by prediction. In Proceedings of the 2nd European Conference on Disability, Virtual Reality and Associated Technologies (pp. 45-51).

Kahn, D. A., Heynen, J., & Snuggs, G. L. (1999). Eye-controlled computing: The VisionKey experience. In Proceedings of the Fourteenth International Conference on Technology and Persons with Disabilities (CSUN’99).

Kartakis, S., & Stephanidis, C. (2010). A design-and-play approach to accessible user interface development in Ambient Intelligence environments. Computers in Industry, 61(4), 318-328.

Komodo Open Lab (2012a). Tecla Access for Android User Guide version 0.3. Retrieved from: http://www.komodoopenlab.com/pub/media/pdfs/Tecla%20Shield%20for%20Android%20-%20User%20Guide%20v0.3.pdf

Komodo Open Lab (2012b). Tecla Access for iOS User Guide version 0.5. Retrieved from: http://www.komodoopenlab.com/pub/media/pdfs/Tecla%20Shield%20for%20iOS%20-%20User%20Guide%20v0.5.pdf

Komodo Open Lab (2012c). Android App Compatibility Reports. Retrieved from: http://komodoopenlab.com/tecla/support/android-app-compatibility/

Komodo Open Lab (2012d). iOS App Compatibility Reports. Retrieved from: http://komodoopenlab.com/tecla/support/ios-app-compatibility/

Lesher, G. W., Higginbotham, D. J., & Moulton, B. J. (2000). Techniques for automatically updating scanning delays. In proceedings of the RESNA 2000 Annual Conference (pp. 85-87).

Lesher, G., Moulton, B., & Higginbotham, D. J. (1998). Techniques for augmenting scanning communication. Augmentative and Alternative Communication, 14(2), 81-101.

Levine, S. H., & Goodenough-Trepagnier, C. (1990). Customised text entry devices for motor-impaired users. Applied ergonomics, 21(1), 55-62.

Lin, Y. L., Chen, M. C., Yeh, Y. M., Tzeng, W. J., & Yeh, C. C. (2006). Design and implementation of a chorded on-screen keyboard for people with physical impairments. Computers Helping People with Special Needs, 981-988.

Mackenzie, I. S., & Felzer, T. (2010). SAK: Scanning ambiguous keyboard for efficient one-key text entry. ACM Transactions on Computer-Human Interaction (TOCHI), 17(3), 11

Madentec Limited (2006). Discover Envoy v1.1 for Macintosh OS X: User Guide. Retrieved from: http://www.madentec.com/downloads/docs/envoy_manual_v1.1.pdf

Maguire, M., Elton, E., Osman, Z., & Nicolle, C. A. (2006). Design of a virtual learning environment: for students with special needs. Human Technology, 2 (1), pp. 119 – 153.

Majaranta, P., & Räihä, K. J. (2002). Twenty years of eye typing: systems and design issues. In Proceedings of the 2002 symposium on Eye tracking research & applications (pp. 15-22). ACM.

Mankoff, J., Dey, A., Batra, U., & Moore, M. (2002). Web accessibility for low bandwidth input. In Proceedings of the fifth international ACM conference on Assistive technologies (pp. 17-24). ACM.

Microsoft (2012a). Microsoft Kinect for Windows: product Features. Retrieved from: http://www.microsoft.com/en-us/kinectforwindows/discover/features.aspx

Microsoft (2012b). Type without using the keyboard (On-Screen Keyboard). Retrieved from: http://windows.microsoft.com/en-US/windows7/Type-without-using-the-keyboard-On-Screen-Keyboard

Miró-Borrás, J., & Bernabeu-Soler, P. (2009). Text entry in the e-commerce age: two proposals for the severely handicapped. Journal of theoretical and applied electronic commerce research, 4(1), 101-112.

Mourouzis, A., Boutsakis, E., Ntoa, S., Antona, M., & Stephanidis, C. (2007). An accessible and usable soft keyboard. Universal Access in Human-Computer Interaction. Ambient Interaction, 961-970.

Norte, S., & Lobo, F. G. (2007). A virtual logo keyboard for people with motor disabilities. In ACM SIGCSE Bulletin (Vol. 39, No. 3, pp. 111-115). ACM.

Ntoa, S., Margetis, G., & Stephanidis, C. (2009). FireScanner: A Browser Scanning Add-On for Users with Motor Impairments. Universal Access in Human-Computer Interaction. Applications and Services, 755-763.

Ntoa, S., Savidis, A., & Stephanidis, C. (2004). FastScanner: An accessibility tool for motor impaired users. In Proceedings of the 9th International Conference on Computers Helping People with Special Needs (ICCHP 2004), Paris, France, 7-9 July (pp. 796-804). Berlin Heidelberg: Springer-Verlag.

Ntoa, S., Savidis, A., & Stephanidis, C. (2009). Automatic Hierarchical Scanning for Windows Applications. In C. Stephanidis (Ed.), The Universal Access Handbook (pp. 35-1 - 35-16). Boca Raton, FL: Taylor & Francis (ISBN: 978-0-8058-6280-5, 1.034 pages).

Ntoa, S., & Stephanidis, C. (2005). ARGO: A System for Accessible Navigation in the World Wide Web. ERCIM News, 61, 53-54

Norte, S., & Lobo, F. G. (2008). Sudoku access: a sudoku game for people with motor disabilities. In Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility (pp. 161-168). ACM.

OneSwitch.org.uk Blog (2012). One switch Games. Retrieved from: http://switchgaming.blogspot.gr/search/label/one-switch%20games

Origin Instruments (2012). SwitchXS™. Scanning Keyboard and Mouse Emulation for Mac OS X. Retrieved 8 November 2012 from: http://www.orin.com/access/switchxs/

Owens, J. & Keller, S. (2000). MultiWeb Australian contribution to web accessibility. Proceedings of the 11th Australasian Conference on Information Systems, 6-8 Dec., Brisbane, Australia

Padeleris, P., Zabulis, X., & Argyros, A. A. (2012). Head pose estimation on depth data based on Particle Swarm Optimization. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on (pp. 42-49). IEEE.

Possum Controls Limited (1999). Freeway Environmental Control Unit User’s Guide. Retrieved from: http://www.possum.co.uk/static/products/45/files/guide.pdf

Possum Controls Limited (2009). HC6000 VIVO! User’s Guide. Retrieved from: http://www.possum.co.uk/static/products/50/files/guide.pdf

Prentke Romich Company (2012). Accent 1200. Retrieved from: https://store.prentrom.com/product_info.php/cPath/11/products_id/207?osCsid=1ta4e4tu2dp5opnd4e4iq0gsa3

Pretorian Technologies Ltd. (2012). Instructions Switch2Scan. Retrieved from: http://www.pretorianuk.com/images/datasheets/Switch2Scan.pdf

RJ Cooper & Associates (2012a). CrossScanner: 1-2 switch mouse emulator. Retrieved 26 October 2012 from: http://www.rjcooper.com/cross-scanner/index.html

RJ Cooper & Associates (2012b). iPad VO Controller. Retrieved from: http://rjcooper.com/ipad-vo-controller/#voiceover

RSLSteeper (2012). Evoassist 2.0: the evolution of home control. Retrieved from: http://assistive-technology.co.uk/uploads/files/EvoAssist2.0_RSLLIT314_for_web1.pdf

Saje Technology (2010). Pocket Mate. Retrieved from: http://www.saje-tech.com/brochure/PocketMate.pdf

Savidis, A., Grammenos, D., & Stephanidis, C. (2006). Developing inclusive e-learning systems. Universal Access in the Information Society, 5(1), 51-72.

Savidis, A., Vernardos, G., & Stephanidis, C. (1997). Embedding scanning techniques accessible to motor-impaired users in the WINDOWS Object Library. In G. Salvendy, M. J. Smith, & R. J. Koubek (Eds.), Design of Computing Systems: Cognitive Considerations, Proceedings of the 7th International Conference on Human-Computer Interaction (HCI International ‘97), Volume 1, 429–432. Amsterdam, The Netherlands: Elsevier Science.

Simpson, R., & Koester, H. H. (1999). Adaptive one-switch row-column scanning. Rehabilitation Engineering, IEEE Transactions on, 7(4), 464-473.

Simpson, R., Koester, H., & LoPresti, E. (2006). Evaluation of an adaptive row/column scanning system. Technology and disability, 18(3), 127-138.

Simpson, R. C., Mankowski, R., Koester, H. H., Kulyukin, V., Crandall, W., Coster, D., ... & Murphy, G. C. (2011). Modeling One-Switch Row-Column Scanning with Errors and Error Correction Methods. Open Rehabilitation Journal, 4, 1-12.

Spalteholz, L., Li, K. F., & Livingston, N. (2007). Efficient navigation on the world wide web for the physically disabled. In Proceedings of the 3rd International Conference on Web Information Systems and Technologies (pp. 321-326).

Spalteholz, L. (2012). KeySurf-A keyboard Web navigation system for persons with disabilities (Doctoral dissertation, University of Victoria).

SpecialEffect’s accessible GameBase (2012). Switch / One Button Games. Retrieved from: http://www.gamebase.info/magazine/category/18534

Antona, M., & Stephanidis, C. (2000). An Accessible Word Processor for Disabled People. In R. Vollmar & R. Wagner (Eds.), Proceedings of the 7th International Conference on Computers Helping People with Special Needs (ICCHP 2000), Karlsruhe, Germany, 17-21 July (pp. 689-696). Wien: Österreichische Computer Gesellschaft.

Stephanidis, C. (2006). A European ambient intelligence research facility at ICS-FORTH. ERCIM News, 31.

Stephanidis, C. (2009). Designing for all in ambient intelligence environments: the interplay of user, context, and technology. Intl. Journal of Human–Computer Interaction, 25(5), 441-454.

Stephanidis, C., Antona, M., & Grammenos, D. (2007). Universal access issues in an ambient intelligence research facility. Universal Access in Human-Computer Interaction. Ambient Interaction, 208-217.

Stephanidis, C., Paramythis, A., Sfyrakis, M., Stergiou, A., Maou, N., Leventis, A., … & Karagiannidis, C. (1998). Adaptable and adaptive user interfaces for disabled users in the AVANTI project. Intelligence in Services and Networks: Technology for Ubiquitous Telecom Services, 153-166.

Stephanidis, C., & Savidis, A. (2003). Unified User Interface Development. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook - Fundamentals, Evolving Technologies and Emerging Applications (pp. 1069-1089). Mahwah, New Jersey: Lawrence Erlbaum Associates

Stephanidis, C., & Savidis, A. (2001). Universal access in the information society: methods, tools, and interaction technologies. Universal Access in the Information Society, 1(1), 40-55.

Steriadis, C. E., & Constantinou, P. (2002). Using the scanning technique to make an ordinary operating system accessible to motor-impaired users. The “Autonomia” system. Group, 8(B7), B6.

Steriadis, C. E., & Constantinou, P. (2003). Designing human-computer interfaces for quadriplegic people. ACM Transactions on Computer-Human Interaction (TOCHI), 10(2), 87-118

Tao, C., Zhang, X., & Wang, X. (2008). Research of Environmental Control systems for Disabled people. In 7th Asian-Pacific Conference on Medical and Biological Engineering (pp. 476-479). Springer Berlin Heidelberg.

Tash Inc. (2000a). Mini Relax: User’s Guide #8205. Retrieved from: http://store.ablenetinc.com/downloads/manuals/minirelax.pdf

Tash Inc. (2000b). Relax II: User’s Guide #8200. Retrieved from: http://store.ablenetinc.com/downloads/manuals/relaxii.pdf

Tobii (2009). Tobii Communicator. Retrieved from: http://www.tobii.com/Global/Assistive/Product_Documents/Tobii_Communicator_Leaflet_us.pdf?epslanguage=en

Unique Perspectives Ltd. (2012). ClickToPhone Android App. Retrieved from: http://www.click2go.ie/resources/manuals/clicktophone-android-app/

Wandmacher, T., Antoine, J. Y., Poirier, F., & Départe, J. P. (2008). Sibylle, an assistive communication system adapting to the context and its user. ACM Transactions on Accessible Computing (TACCESS), 1(1), 6.

Wellings, D. J., & Unsworth, J. (1997). Fortnightly review. Environmental control systems for people with a disability: an update. BMJ: British Medical Journal, 315(7105), 409.

Westin, T., Bierre, K., Gramenos, D., & Hinn, M. (2011). Advances in Game Accessibility from 2005 to 2010. Universal access in human-computer interaction. Users diversity, 400-409.

Yamamoto, T., & Ide, M. (1996). Development of a multi-switch input controller of the electronic devices for the motor-disabled-person. In Engineering in Medicine and Biology Society, 1996. Bridging Disciplines for Biomedicine. Proceedings of the 18th Annual International Conference of the IEEE (Vol. 2, pp. 508-509). IEEE.

Yuan, B., Folmer, E., & Harris, F. C. (2011). Game accessibility: a survey. Universal Access in the Information Society, 10(1), 81-100.

Zyteq (2012). The Grid 2. Retrieved from: http://www.zyteq.com.au/products/software/the_grid_2

KEY TERMS & DEFINITIONS

Scanning

An interaction method providing sequential access to the elements of a graphical user interface and enabling users to interact with a GUI through even a single binary switch, by activating the switch when the desired interaction element receives the scanning focus (indicated visually through highlighting, or auditorily).

Block Scanning

A scanning technique, in which the GUI elements are grouped into categories (blocks), allowing users to easily bypass blocks of objects and reach the desired interactive object faster and more easily

Row/Column Scanning

A block scanning technique, in which the GUI elements are grouped in rows which sequentially receive the scanning focus; once the desired row is selected by the user, its columns are then sequentially scanned

Quadrant Scanning (or Three-Dimensional Scanning)

A block scanning technique, in which the GUI elements are divided into quadrants which sequentially receive the scanning focus; once the desired quadrant is selected by the user, its elements are then scanned either in groups or individually

Two-directional Scanning

A scanning technique, in which an object is selected by the user by specifying its coordinates on the screen, which is scanned first vertically (a line moves through the screen from top to bottom) and then horizontally (a pointer moves along the selected line)

Eight-directional Scanning

A scanning technique, in which the mouse pointer can be moved towards one of eight directions, according to the user’s preference, by selecting an appropriate button from the scanning control panel (e.g. move the pointer up, by selecting an up arrow button)

Cluster Scanning

A scanning technique, in which elements on the screen are divided into clusters of targets, based on their locations

Hierarchical Scanning

A scanning technique, in which access to windows and window elements is provided according to their place in the windows' hierarchical structure

Switch

A simple, usually pressure-activated, device (e.g. a button) with two states (on/off), which acts as an input device and allows users with severe motor disabilities to interact with a computational device (e.g. laptop, desktop computer, tablet, etc.)

Switch Interface

A device used to connect switches to a computational device, which may also offer mouse and keyboard emulation functions, so that when a user activates a switch connected to the switch interface, a specific mouse (e.g. click) or keyboard (e.g. tab key press) action is carried out

i http://www.ics.forth.gr/ami/

