
Design and Evaluation of a Flexible Interface for Spatial Navigation onboard an Intelligent Wheelchair

Emily Tsang
School of Computer Science
McGill University
Montreal, QC H3A 2A7
[email protected]

Sylvie C. W. Ong
School of Computer Science
McGill University
Montreal, QC H3A 2A7
[email protected]

Joelle Pineau
School of Computer Science
McGill University
Montreal, QC H3A 2A7
[email protected]

ABSTRACT

The paper tackles the problem of designing a graphical user interface for selecting navigational targets onboard an intelligent wheelchair navigating in a large indoor environment. We begin by describing the robot platform and interface design. We then present results from a user study in which participants were required to select navigational targets using a variety of input and filtering methods. We considered two types of input modalities (point-and-click and single-switch) to verify usability for individuals with a range of mobility impairments. The filtering methods are used to reduce the amount of information presented onscreen and thereby accelerate selection of the correct option; we consider both category-based and spatial filters.

1. INTRODUCTION

The design of user interfaces for the navigational command of a robot is an important problem, with applications across a wide range of robotic devices. In this paper, we focus on the question of graphical user interface (GUI) design for controlling an intelligent wheelchair. The challenges of this particular application include strict limitations on the size of the display, the need to accommodate individuals with a variety of impairments, and the need for accuracy and efficiency in command selection.

The work described in this paper stems from the SmartWheeler project, an initiative aimed at developing robotic technology for powered wheelchairs. The SmartWheeler project has been ongoing since 2006. It is one of the few smart wheelchairs that has undergone rigorous user testing in a controlled environment, including with a number of individuals with physical disabilities [10]. Navigational command of the wheelchair has so far been done primarily using voice commands. However, as we prepare to move the wheelchair into large public indoor environments, such as malls, museums, universities, and airports, it becomes imperative to develop a navigational command interface that is adapted to the noisy, crowded, and changing conditions of these environments. The role of that command interface is to allow the user to select specific navigational targets for the robot. For example, in a university setting, the user may wish to select a specific building and room number, or to request navigation to the closest bathroom or elevator.

There are many robotic challenges that arise when developing such a system. In this paper, we focus primarily on the design and validation of the navigational command GUI. The design of navigational GUIs has recently received attention from the HRI and HCI communities [3, 14]. Some of the principles arising from this literature are applicable in our case. However, we face additional challenges due to the nature of our target population. The design of navigational GUIs that support accessibility has received substantially less attention, though there are a few notable exceptions. Interfaces have been developed to control a robotic arm mounted on a wheelchair and direct it towards an object the user would like to grasp [16, 17]. These interfaces, like ours, accommodate a variety of input types. However, their primary focus is on the control and navigation of the robotic arm within the local space visible through a camera mounted on the wheelchair. In other work, a GUI was designed for cognitively impaired users, allowing them to set local navigational targets for their wheelchair through a tactile screen [8]. This interface was later adapted to function with electroencephalogram (EEG) signals as input [6]. This work also focuses on the problem of local navigation, as the user can only set targets within the portion of the environment that is currently visible from the user's point of view (as shown on a generated 3D map). In contrast, our work focuses on using the GUI to achieve global navigational tasks within a large indoor environment. Users are not restricted to selecting destinations in their immediate surroundings, which would require them to set multiple intermediate navigational targets to reach a desired goal. Rather, we assume the user is presented with a large set of possible navigational targets from the global map. We rely on several filtering techniques to reduce the set of targets so that the interface is manageable and the interaction efficient.

We begin the paper by describing the SmartWheeler system. We then describe the design and development of the graphical user interface used for the navigational control of the wheelchair. One of the contributions of this paper is the description and empirical comparison of various filtering methods that allow the user to select from a large set of global navigational targets. We consider both category-based and spatial filters. We compare the efficiency and accuracy of these methods using different types of input (point-and-click, single-switch) to accommodate individuals with a variety of mobility disorders. We measure standard performance indicators, including the time and number of clicks to selection, as well as the number of errors.

Figure 1: The SmartWheeler platform.

2. SMARTWHEELER PROJECT

The SmartWheeler project aims to develop a prototype of an intelligent wheelchair that can be used by individuals with mobility impairments to assist in their daily displacements [9]. Our first prototype, shown in Figure 1, is built on top of a commercial electric wheelchair. The chair has been outfitted with an additional onboard computer, front and back laser range-finders, an 8-inch touchscreen, and wheel odometers. The SmartWheeler was initially developed to process voice commands, with a complementary tactile interface. This system underwent a sequence of user testing according to the detailed Wheelchair Skills Test, which demonstrated that the intelligent system could correctly understand and carry out a variety of typical wheelchair driving tasks [10].

In the next phase of the project, the wheelchair will be deployed in an indoor mall environment, where it will be tasked with navigating this large, crowded space according to the commands of the user. In this setting, a dialog-based interface will be inadequate due to the noisy conditions [11]. We have therefore developed a new graphical interface to allow the user to select high-level navigational targets. It is important to note that, in contrast with other smart wheelchairs, where the user's input is limited to the usual set of joystick commands leading to local navigational targets (e.g. forward and backward motion, left and right turns), our system is designed to allow the user to select global navigational targets (e.g. go to location X, find the nearest exit, etc.). To allow the user to make maximal use of these capabilities, it is important that the navigational targets be easy to select for users with a wide range of motion impairments. The smart wheelchair's onboard computer system is equipped with standard robotic software (mapping, localization, path planning, and obstacle avoidance; see [1] for details) allowing it to reach the selected targets.

3. GUI DESIGN

This section describes the design and implementation of a new graphical interface for achieving global navigation using a smart wheelchair.

3.1 Guidelines and constraints

We considered certain general usability principles [13] to help guide our design.

1. Learning time: Have a system that is easy to learn without an extensive training period.

2. Performance speed: Design the GUI to be responsive and to provide the user with quick ways to set targets.

3. Error rate: Minimize the impact of errors by making them easy to reverse.

4. Subjective satisfaction: Make an interface that is enjoyable to use and that minimizes possible sources of frustration.

Our design is also bound by certain constraints that arise because we are developing a system for disabled users.

1. Adaptability to various input types: The huge spectrum of disabilities may require users to resort to very different input methods.

2. Limited display size: The screen on which the GUI is presented to the user must be mounted on a wheelchair and, therefore, has a constraint on its dimensions.

3.2 GUI layout

The GUI, as shown in Figure 2, is divided into five panels. The central (and largest) panel shows a map of the wheelchair's environment. We show here a map of the particular mall in Montreal, Quebec, where an upcoming deployment will take place. Only the second floor of the mall is presented at this stage, as we deemed it sufficient to evaluate the GUI design. The bottom panel contains a small space where user feedback may be displayed; this communicates the state of the smart wheelchair to the user. We have found in previous studies that this is an important component for usability of this device [10]. The left panel contains a list of locations that may be selected as navigational targets. These locations are simultaneously displayed and labelled on the map. Currently, the user is only allowed to select from a fixed set of pre-programmed targets. The top-center panel contains buttons with various icons. These buttons allow the user to filter the targets displayed on the map and in the list according to their category. For instance, if the button with the clothing icon is selected, only apparel stores will be displayed, both on the map (central panel) and in the list (left panel). Finally, the top-right panel contains buttons that allow zooming in and out on the map, thus allowing targets to be filtered according to their spatial location. These buttons function similarly to the category selection buttons: zooming restricts the targets displayed, both on the map and in the list, to those belonging to a particular region.
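As a concrete illustration of this filtering behaviour, the following minimal Python sketch shows how a set of pre-programmed targets might be filtered by category and mirrored in both the map and list panels. The data structure, category names, and example targets are our own assumptions for illustration; they are not taken from the SmartWheeler implementation.

from dataclasses import dataclass

@dataclass
class Target:
    """A hypothetical pre-programmed navigational target."""
    name: str        # store name shown in the list panel
    category: str    # e.g. "apparel", "food", "services"
    x: float         # map coordinates used to place the label
    y: float

# Illustrative targets; the real map holds 37 pre-programmed locations.
TARGETS = [
    Target("Example Clothing Co.", "apparel", 12.0, 40.5),
    Target("Example Cafe", "food", 55.2, 18.9),
    Target("Example Pharmacy", "services", 30.1, 62.3),
]

def filter_by_category(targets, category):
    """Return only the targets matching the chosen category button.

    Selecting the 'all' button corresponds to category=None,
    which leaves the full target set visible.
    """
    if category is None:
        return list(targets)
    return [t for t in targets if t.category == category]

# Both panels are refreshed from the same filtered set, so the map
# labels and the alphabetically sorted list stay consistent.
visible = filter_by_category(TARGETS, "apparel")
list_panel = sorted(t.name for t in visible)
map_labels = [(t.name, t.x, t.y) for t in visible]
print(list_panel)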

Figure 2: A screen shot of the GUI for single-switch input where the left panel is highlighted.

3.3 GUI features

We incorporated a set of features which took into account the design guidelines while working within the constraints. The major features are:

Visual-based and text-based navigational target selection

The user has the option of either selecting the navigational target via the map, by choosing among the labelled locations displayed on the map, or selecting the navigational target via the list, by choosing among the alphabetically-sorted buttons displayed in the list. Our conjecture is that providing both visual-based and text-based selection methods enhances the usability of the GUI. In particular, visual-based selection seems appropriate for interacting with a robot to specify navigational tasks. However, text-based selection may also be desirable in cases where an individual is not very familiar with the global layout of the environment (as prescribed by the first usability principle above).

Adaptation to a range of input devices

In order to accommodate various input devices, the GUI is designed in two different versions: point-and-click and single-switch. The first can be used with any point-and-click device such as a mouse, a touch-sensitive screen, or a joystick; the second is suitable for single-switch input (e.g. pushbutton, sip-and-puff device, etc.). The various input devices supported cater to a large spectrum of motor impairments.

Using a switch as input is equivalent to having a single action with which to interact with the interface. The use of switches is often the only viable option for users who lack the fine motor skills required to operate a joystick or to touch the screen directly. Among such users, some may be able to operate two or more switches. However, we chose to design our GUI to work with a single switch, since this is the most basic input type. A single-switch interface is suitable for users with severe motor impairments and can be easily augmented to work with devices with more degrees of freedom.

The main difference between the two GUI versions is the method for selecting items on the display. The information presented on the panels is the same in both versions.

Target selection for single-switch input

Selecting items on the display, such as buttons from the list or locations on the map, is straightforward for point-and-click devices, as the user has the freedom to point to any part of the display and 'click' for selection. The situation for single-switch devices is somewhat more complicated.

The challenge when using a single-switch input is to find an efficient way for the user to maneuver between items. Many single-switch-adapted interfaces use automatic scanning, where items are sequentially highlighted and the user activates the switch when the desired item is highlighted [5]. There is software available for overlaying on existing applications to achieve switch-based mouse emulation, for example WiVik (http://www.wivik.com). However, when using such software, the pattern of scanning is not tailored to the specific application, so the result can be slow and cumbersome [4]. Therefore, to maximize efficiency and speed of item selection, we implemented a custom scanning pattern suited to our GUI display layout. In the panel-selection mode, the three panels with buttons (left panel, top-center panel, top-right panel) are scanned through. Once a panel has been chosen, the buttons within that panel are scanned through in the button-selection mode. Each panel has a button that allows the user to exit the current panel and return to panel-selection mode. The map in the central panel is not included in the panel-selection mode and thus cannot be selected. However, when the left panel is selected and the list of navigational targets is scanned through, the corresponding labelled locations on the map are also highlighted. Hence, with switch input, there is no distinction between map-based and list-based navigational target selection: the two happen simultaneously.
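The two-level scanning pattern described above can be summarized as a small state machine. The sketch below is a simplified illustration under assumed panel contents, button labels, and scan delay; it is not the actual SmartWheeler code.

import itertools
import time

# Hypothetical panel layout: panel name -> buttons scanned within it.
# Each panel ends with an "exit" button that returns to panel selection.
PANELS = {
    "list":     ["Store A", "Store B", "Store C", "exit"],
    "category": ["apparel", "food", "services", "all", "exit"],
    "zoom":     ["zoom Q1", "zoom Q2", "zoom Q3", "zoom Q4", "zoom out", "exit"],
}

SCAN_DELAY = 1.0  # seconds each item stays highlighted (assumed value)

def scan(items, switch_pressed):
    """Highlight items in turn; return the item active when the switch fires."""
    for item in itertools.cycle(items):
        print(f"highlight: {item}")
        time.sleep(SCAN_DELAY)
        if switch_pressed():
            return item

def run_interface(switch_pressed):
    """Alternate between panel-selection and button-selection modes."""
    while True:
        panel = scan(list(PANELS), switch_pressed)        # panel-selection mode
        while True:
            choice = scan(PANELS[panel], switch_pressed)  # button-selection mode
            if choice == "exit":
                break                                     # back to panel selection
            print(f"activated: {choice} in {panel} panel")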

Navigational target filtering

In our application, there is quite a large set of possible navigational targets: our map currently includes 37 pre-programmed navigational targets, and we view this as a minimal set that is likely to grow following the initial deployment. This, together with the limited display size, results in the map being fairly cluttered when the full set of targets is displayed. Similarly, the full list of buttons corresponding to the targets cannot be accommodated on the display, so the user needs to scroll through the list to see all the listed targets. This potentially affects accuracy and efficiency for both map-based and list-based navigational target selection. A practical solution is to allow users to focus their search by filtering out navigational targets that do not interest them. This is of particular importance to ensure efficient GUI interaction for the single-switch interface. Scanning through a lengthy list is slow, and furthermore, missing the desired selection may be frustrating because the user must then wait for the entire list to be traversed before it is highlighted again.

We provide the user with two ways to filter the list of navigational targets: filtering by category (via the top-center panel) and filtering by region using the zoom (via the top-right panel). To ensure compatibility with single-switch input, the zoom function is quadrant-based [16]. Each quadrant of the map is highlighted when the corresponding zoom-in button from the top-right panel is highlighted, allowing the user to activate the switch to enlarge that portion of the map. The user may zoom in at most twice, which results in displaying 1/16th of the map.
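To make the quadrant-based zoom concrete, the sketch below computes the visible map region after each quadrant selection; two successive zooms leave 1/16 of the original area visible. The rectangle representation and function name are our own illustration, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned portion of the map currently shown on screen."""
    x: float
    y: float
    width: float
    height: float

def zoom_quadrant(region, quadrant):
    """Return the quadrant ('NW', 'NE', 'SW', 'SE') of the current region.

    Each call halves both dimensions, so one zoom shows 1/4 of the
    region and two zooms show 1/16 of the original map.
    """
    w, h = region.width / 2, region.height / 2
    offsets = {"NW": (0, 0), "NE": (w, 0), "SW": (0, h), "SE": (w, h)}
    dx, dy = offsets[quadrant]
    return Region(region.x + dx, region.y + dy, w, h)

# Example: full map, then two zooms into the north-east corner.
full_map = Region(0, 0, 100, 100)
once = zoom_quadrant(full_map, "NE")
twice = zoom_quadrant(once, "NE")
print(twice)  # Region(x=75.0, y=0.0, width=25.0, height=25.0) -> 1/16 of the area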

Error minimization and recovery

We implemented several features with a view to minimizing error and enhancing error recovery.

With point-and-click devices, accuracy may become an issue when the button size is too small. Due to the space constraints of our display, we put special consideration into button sizes. We wanted the buttons to be large enough to be selected easily when using a point-and-click device, but not so large as to encroach on the space for the map. In particular, with regards to the category-selection buttons, we strove to achieve an efficient balance between the number of categories (i.e. the number of buttons, which indirectly affects the button size) and the number of targets per category (i.e. the number of targets to search through per category). There was an average of just over five navigational targets per category.

To aid error recovery, in the point-and-click version we also added a feature whereby the map automatically zooms in when the user clicks an area of the map where there is no available target. This helps users who "miss" when trying to select a navigational target, by enlarging the portion of the map they are considering and making selection easier on the second try.
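A minimal sketch of this behaviour is given below: a click that lands on a target selects it, while a click that misses every target triggers an automatic zoom around the click point. The hit-test radius, callback names, and coordinates are assumptions for illustration, not details taken from the paper.

import math

HIT_RADIUS = 15.0  # assumed touch tolerance in pixels

def handle_click(click_x, click_y, targets, zoom_in_around, select_target):
    """Select the target under the click, or auto-zoom on a miss.

    `targets` is an iterable of (name, x, y) tuples in screen coordinates;
    `zoom_in_around` and `select_target` are callbacks into the GUI.
    """
    for name, tx, ty in targets:
        if math.hypot(click_x - tx, click_y - ty) <= HIT_RADIUS:
            select_target(name)      # direct hit: proceed to confirmation popup
            return
    # No target under the finger: enlarge the surrounding area so the
    # second attempt is easier, and give visual feedback that the click
    # was registered.
    zoom_in_around(click_x, click_y)

# Example usage with stub callbacks.
handle_click(
    100, 200,
    targets=[("Example Cafe", 300, 350)],
    zoom_in_around=lambda x, y: print(f"zooming around ({x}, {y})"),
    select_target=lambda name: print(f"selected {name}"),
)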

Another important error recovery feature is the addition of a confirmation step that appears as a small popup window when a navigational target has been selected. This applies to both GUI versions. The popup window additionally presents the name of the store as well as the icon of the category the store belongs to. Informing the user of the category is helpful for finding the same store again by using filtering by category. The concept of target validation has been used by other wheelchairs capable of autonomous navigation [6, 8].

Last but not least, to make the GUI easier to learn, we avoided using a nested menu system and instead opted to make all the buttons visible to the user. The only exception is the list in the left panel, which may need to be scrolled down if there are too many navigational targets to be displayed.

4. USER STUDY

The goal of our user study is to investigate the efficiency and intuitiveness of various ways of accessing navigational commands for the robot using the features provided.

4.1 Hypotheses

Hypothesis 1. Users prefer selecting the navigational targets via the map, rather than selecting the navigational targets via the list, because it is a more visual way of setting goals.

Hypothesis 2. Providing users with ways to filter out undesired targets improves their efficiency.

Hypothesis 3. Users prefer to filter targets with the categories, rather than by region with the zoom.

As was pointed out by an occupational therapist collaborating on the project, the input method used will be dictated by the user's available motor function. Therefore, we did not consider it pertinent to perform any direct comparisons between the single-switch and point-and-click input modes.

4.2 Participants

Nineteen participants were recruited to test the interface: 15 males and 4 females, between the ages of 19 and 35. All participants were university students, with no mobility impairment and without involvement in the project. Each participant tested both the point-and-click and the single-switch versions of the GUI.¹ The point-and-click version was implemented by tactile input to the display screen. The single-switch version was implemented using the space bar of a keyboard as the switch. The order in which each input mode was tested was randomized between subjects.

¹ There was a problem with the data collection for the switch-input interface for one of the participants. Therefore, the results only include data for 18 participants for this version of the GUI.

4.3 Task

The participants were required to interact with the GUI displayed on an 8-inch Lilliput touch-sensitive screen identical to the one mounted on the wheelchair. The touchscreen used for the testing was not connected to the wheelchair, to minimize burden and risk. Navigation to the target was simulated via the interface only.

Participants were prompted to select nine navigational targets per input type. These destinations were presented as flash cards, as shown in Figure 3, displaying the store name, the type of store (category), as well as its relative location on the map. Providing all this information ensured that people without any prior knowledge about this particular mall and set of stores were not at a disadvantage. Once presented with a flash card, participants had to use the GUI and prescribed input method (either single-switch or point-and-click input) to select the store listed on the card. All participants were given the same set of navigational targets; these were spread among different areas of the map and belonged to a variety of categories.

Figure 3: A sample flash card used to instruct the user study participants.

As noted in Section 3, the user can select a navigational target via the map or via the list when using the point-and-click version of the interface. The user can also filter the set of targets using either the categories, the zoom, or a mixture of both. To explore participant preference for these functionalities, the participants were first instructed in how to use each of these four features. The order in which the different aspects of the GUI were shown to them was randomized so that preference was not influenced by order effects. Participants were then prompted to navigate towards a few destinations as practice. The data for these practice tasks do not figure in the results; they were simply a means to ensure that the participants had properly grasped how the GUI functioned. Finally, participants were given the nine test navigational targets to access and were free to use the map or the list, including alternating between them, to complete the task. The participants were also allowed to use one or both filtering features (categories and zoom) should they want to. The same procedure was applied to test the single-switch version of the GUI, except that only the category and zoom features were compared, since with the switch-adapted GUI the sequential highlighting of the targets during the scanning phase happens simultaneously on both the map and the list.

4.4 Data collected

From automated logs, we gathered the time to task completion, the number of clicks (for point-and-click input) or switch activations (for single-switch input) required to complete the task, and the number of errors as well as their nature. There are two main types of possible errors when using a scanning interface: selection errors, which come from choosing the wrong item, and timing errors, which involve missing an element the first time it is highlighted [2]. We only counted selection errors. Timing errors are apparent through an increased time to task completion, but were not otherwise quantified.

Selection errors are also relevant to the point-and-click version of the interface. Apart from the obvious error of selecting the wrong target, we also considered the following selection errors: zooming into the incorrect portion of the map, zooming out then zooming into the exact same quadrant, selecting the wrong category, and reselecting the current category. Additionally, there are certain selection errors that are only applicable to single-switch input, including entering and exiting a panel without making a selection, and leaving a panel then returning to it without making a selection in between. In all cases, the incorrect selection, together with the action needed to undo it (if any), is counted as a single error. Hence, most errors result in a pair of additional clicks (or switch activations) rather than a single one.
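As an illustration of how such selection errors might be tallied from automated logs, the sketch below counts two of the error patterns listed above. The log format and event names are hypothetical; they are not the logging scheme used in the study.

def count_selection_errors(events):
    """Count selection errors in a hypothetical event log.

    `events` is a list of (action, argument) tuples, e.g.
    ("select_category", "food") or ("zoom_in", "NE").
    Counted patterns: reselecting the current category, and zooming
    out then back into the same quadrant.
    """
    errors = 0
    current_category = None
    last_zoomed_out_from = None
    for action, arg in events:
        if action == "select_category":
            if arg == current_category:
                errors += 1          # reselecting the current category
            current_category = arg
        elif action == "zoom_out":
            last_zoomed_out_from = arg
        elif action == "zoom_in":
            if arg == last_zoomed_out_from:
                errors += 1          # zooming back into the same quadrant
            last_zoomed_out_from = None
    return errors

log = [
    ("select_category", "food"),
    ("select_category", "food"),   # error: same category reselected
    ("zoom_in", "NE"),
    ("zoom_out", "NE"),
    ("zoom_in", "NE"),             # error: same quadrant after zooming out
]
print(count_selection_errors(log))  # 2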

We also collected questionnaire data from all participants. This was done after they had completed their interaction with the GUI using both input modes. We asked them to rate navigational target selection via the map and via the list, as well as the two filtering features, on a five-point Likert scale. We also included open-ended questions, as suggested by previous work [15]. Finally, we collected observer notes, including transcriptions of comments uttered aloud by the participants as they were fulfilling the task.

5. RESULTS

We compared mean values for the metrics of target selection via the map or the list, and of filtering by category or by zoom. The metrics considered were the time required to reach the nine navigational targets, the number of clicks or switch activations, and the number of errors. To establish whether there is a statistically significant difference between the compared means, we performed unpaired two-tailed t-tests at the 0.95 confidence level.
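For readers who want to reproduce this kind of comparison, the sketch below runs an unpaired two-tailed t-test with SciPy on two illustrative samples of completion times. The numbers are made up for the example; they are not the study's data.

from scipy import stats

# Hypothetical per-participant completion times (seconds) for two groups,
# e.g. participants who filtered only by category vs. all others.
category_only = [70.1, 81.3, 65.0, 77.8, 72.4, 79.9, 68.2, 74.5, 80.0, 71.6]
other = [88.2, 95.4, 79.1, 90.3, 84.7, 92.0, 86.5, 89.9, 83.3]

# Unpaired (independent samples), two-tailed t-test; a p-value below
# 0.05 indicates a significant difference at the 0.95 confidence level.
t_stat, p_value = stats.ttest_ind(category_only, other)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")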

For both input types, only a small number of people chose to filter by region using the zoom and never used the categories. Therefore, we compared the metrics for the users who filtered only by category to those of all the other participants, which gave us more reasonably sized samples. Hence, the column entitled "other" in Tables 2 and 3 refers to participants who used only the zoom as well as those who sometimes filtered by categories and sometimes by zoom.

5.1 Tactile input

As shown in Table 1, the mean time to task completion, number of clicks, and number of errors are lower for the participants who selected their navigational targets via the list, as opposed to via the map or by alternating between these two methods. However, these results were not found to be statistically significant.

Table 1: Mean time, clicks and number of errors for the selection of nine navigational targets with tactile input.

              Map    List   Both
Time (in s.)  82.6   75.9   92.2
Clicks        33.8   27.9   30.5
Errors        1.25   0.857  2
Sample size   8      7      4

Of the 19 participants who tested the GUI, 15 chose to use only the map or only the list to select all their targets (the other 4 participants used a mix of both). There were two recurring justifications for using the list: it is faster and easier to search through an alphabetically sorted set of names, and it can be tedious to situate a target on an unfamiliar map. Of those who selected their targets only on the map, some explained that they thought it was more efficient. However, a few cautioned that they only liked using the map when it was not too cluttered. This was a point brought up by seven people, three of whom refused to use the map directly for that reason.

Indeed, every participant resorted to at least one of the two filtering features at some point during the task. Figure 4 shows that filtering by category was used more often than filtering via the zoom. Of the 7 participants who selected targets using the list (not the map), all used only categorical filtering (no zoom) and used it for finding all targets. The proponents of category-based filtering said it was faster, required fewer clicks, and reduced the set of targets more than filtering by region with the zoom. These justifications applied to the use of the categories for both tactile and single-switch input. However, as can be seen from Table 2, the time and number of clicks required to complete the task when using only the categories was not shown to be significantly different from using just the zoom or both the zoom and the categories with tactile input.

Several reasons may explain why fewer participants used the zoom than the categories with the point-and-click version. Two people mentioned that they found the zoom buttons confusing, while three others maintained that it was difficult to determine which quadrant certain target locations were in when using tactile input. Regarding the zoom functionality, five people expressed their dislike for having to avoid clicking on nearby stores. Of the nine people who did use the zoom at some point with the touchscreen, three used the zoom-in buttons in the top-right panel, five zoomed in by clicking on the map, and one used both the buttons and the map to zoom in. Only two participants used the zoom for 8 or 9 of their targets, and both of them did so via the buttons in the zoom panel. These two participants were among the three who did not use the categories with the point-and-click GUI.

From the observer notes, we find that, with the exception of one person, all the people who clicked directly on the map to zoom in did so unintentionally because they had "missed" their target location with their click. Some people missed more than one target, but only one person missed again once the map had zoomed in to facilitate the task.

Figure 4: Number of targets selected by category and zoom filtering with tactile input (averaged over participants).

Table 2: Mean time, clicks and number of errors for the selection of nine navigational targets with tactile input.

              Categories  Other  P value (t-test)
Time (in s.)  75.6        88.7   0.301
Clicks        27.8        34.3   0.094
Errors        0.8         1.78   0.204
Sample size   10          9

5.2 Single-switch input

As shown in Table 3, we observe that filtering by categories with single-switch input is faster, requires fewer switch activations, and results in fewer errors than the alternatives. Furthermore, the difference between the mean time required to complete the task when using only the categories, as opposed to the available alternatives, is statistically significant. Therefore, in the case of single-switch input, participants were correct in stating that using the categories took less time than filtering with the zoom, or using a combination of zoom and categories.

Not only was filtering by categories faster than using only the zoom or both filtering features, but it was also the preferred choice of the participants. This was measured in their active choice, as shown in Figure 5, and also by the Likert rankings, which averaged 1.4 for the category filtering and 2.1 for the zoom filtering. We used a five-point Likert scale where 1 was most positive and 5 was most negative.

Table 3: Mean time, switch activations and number of errors for the selection of nine navigational targets with single-switch input.

                    Categories  Other  P value (t-test)
Time (in s.)        205         288    0.0137
Switch activations  57.3        67.2   0.438
Errors              3.33        4.5    0.097
Sample size         12          6

Figure 5: Number of targets selected by category and zoom filtering with single-switch input (averaged over participants).

Five people used both filter features during the full experiment, although only one person used them in conjunction to get to a particular target. No one alternated back and forth between filtering by categories and filtering by region. Some changed the filtering feature they were using halfway through because they encountered an aspect they disliked with whichever they had chosen first. For instance, participant 3 started off filtering by region with the zoom, but then lost track of where he was on the map, so continued the task using filtering by category. This is the opposite of the behaviour observed with the point-and-click version of the GUI, where several users switched back and forth between filtering by category and using the zoom.

6. DISCUSSION

The results neither confirm nor reject hypothesis 1: it is not clear whether users prefer map-based or list-based navigational target selection. Certain factors may influence a user's predisposition to choose targets directly on the map or from the list. For instance, it is possible that users who are familiar with the map being used may be more tempted to use it directly instead of the list. On the other hand, an application limited to a very small area for a map, or one with an extremely cluttered map, may entice users to opt for selecting items from a list. Thus we encourage designers to include both selection mechanisms when designing GUIs for spatial navigation.

Participant concern for map clutter indicates that hypothesis 2, which concerns the utility of means to filter the set of locations, is correct. Further evidence supporting this hypothesis is that every participant resorted to using a filtering feature at least once when using tactile input, even when given the choice not to, and regardless of whether they chose to select navigational targets via the map or the list. Indeed, both the categories and the zoom have the dual function of clearing some of the labels on the map and reducing the length of the list. It must be noted that, given a larger area for the map, or a map with more evenly distributed targets or a lower density of navigational targets, users may not have resorted to the filtering features as often.

Some of the results could not be shown to be significant; this may be due to our small sample sizes. This is not unusual when developing assistive technology. Nevertheless, we were able to observe a few convincing trends, particularly the strong appreciation for filtering by category, with both point-and-click and single-switch input. This finding ties in with hypothesis 3. Of course, it is rather easy to sort stores into various categories in a mall setting. However, this result suggests that finding an intuitive classification system for possible navigational targets may be useful in other environments, even when categorizing the set of potential targets is not as obvious.

Filtering by region with the zoom was not as popular with either version of the GUI. A possible explanation is that, in some cases, using the zoom may require first zooming out and then zooming into a new region, hence more steps. Despite a fairly obvious preference for filtering by category, a few things must be taken into consideration. First, the participants were not asked to drive around in an actual wheelchair. When faced with the actual task of navigating a mall, users may want a closer view of the map to see where they are headed. Second, alternating between filtering by category and using the zoom required the user to first display all targets by choosing the "all" button, then zoom into the appropriate quadrant. This is straightforward to do with a point-and-click device, but is more complex when using the single-switch input. It was not made any easier by the fact that the "all" button was the last of the category buttons and required a certain time for the scanning to reach. Our next design will feature the "all" button first in the set of categories.

While the quadrant-based zoom currently provided on the GUI worked well for the single-switch interface, our implementation of the zoom function is much less adequate for a point-and-click device. Several participants expressed some level of dissatisfaction concerning the way the zoom functioned with tactile input. A future design may incorporate a more flexible way of zooming in, where the user may click anywhere on the map and the portion selected will double in size while centering on the spot of contact. This would require the user to first set the GUI into zoom mode, by clicking an extra button, for example. This alternative implementation would differentiate the act of intentionally zooming in from that of selecting a navigational target. It also solves the problem of having to avoid navigational targets when clicking the map with the intention of zooming in, which some participants disliked. However, we will keep the feature where the map zooms in when the user accidentally clicks beside a target, rather than on it. This feature not only improves the user's chance of correctly selecting the target, but also provides visual feedback that the click was registered. Without such a visual cue, the user may believe the system is being unresponsive, which could be a source of frustration.
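The proposed "click anywhere to zoom" behaviour could look roughly like the sketch below, which doubles the magnification while recentering the visible region on the point of contact and clamping it to the map bounds. This is our own illustration of the idea, with assumed names and a fixed zoom factor; the paper does not specify an implementation.

def zoom_at_point(view_x, view_y, view_w, view_h, click_x, click_y,
                  map_w, map_h, factor=2.0):
    """Return a new view rectangle, half the size, centered on the click.

    (view_x, view_y, view_w, view_h) is the currently visible portion of
    the map; (click_x, click_y) is the contact point in map coordinates.
    The result is clamped so the view never leaves the map.
    """
    new_w, new_h = view_w / factor, view_h / factor
    new_x = click_x - new_w / 2
    new_y = click_y - new_h / 2
    # Keep the zoomed view inside the map bounds.
    new_x = min(max(new_x, 0.0), map_w - new_w)
    new_y = min(max(new_y, 0.0), map_h - new_h)
    return new_x, new_y, new_w, new_h

# Example: zooming in on a click near the top-left corner of a 100x100 map.
print(zoom_at_point(0, 0, 100, 100, click_x=10, click_y=15, map_w=100, map_h=100))
# -> (0.0, 0.0, 50.0, 50.0), clamped so the view stays on the map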

It is important to remember that the participants were able-bodied, and that we may see different patterns of interaction with disabled users. Therefore, it will be essential to validate our system with the target population in future studies. Nonetheless, our results are a valid starting point, as they can be used as an estimate of an upper bound of performance [18]. They also offer some useful findings regarding the usefulness of the filtering methods; we expect these results to hold with the target population.

It is also worth noting that, for the single-switch version of the GUI, it may well be possible to reduce the number of both selection and timing errors by tailoring the scanning speed to the user. Different methods already exist for automatically adjusting scanning delays to make them optimal for individual users [7, 12].

Although we conceived our GUI for an upcoming deployment in a large indoor mall, the design can be used for other wheelchair navigational tasks, such as driving around a home or apartment. Many single-switch users currently require the aid of a caretaker to get around their home. Giving them a simple way to navigate their environment could provide these individuals with an unprecedented level of autonomy.

Finally, we believe that some of our findings may be generalized to guide the design of other small navigational GUIs, including those not intended for disabled users. For instance, providing a way to filter potential targets with a set of categories, or enlarging the region of interest when the user clicks beside a possible target, could be useful features for a multitude of navigational tasks. Further research in this area is especially pertinent as robots become more ubiquitous in human-centered environments and task domains.

Acknowledgments

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank the participants of the user study, as well as the following colleagues, who contributed to the early development of the interface: Julieta Jabukowicz, Jeremy Cooperstock, Paula Stone, and Julien Villemure.

7. REFERENCES

[1] A. Atrash, R. Kaplow, J. Villemure, R. West, H. Yamani, and J. Pineau. Development and validation of a robust speech interface for improved human-robot interaction. International Journal of Social Robotics, 2009.

[2] S. Bhattacharya, D. Samanta, and A. Basu. User errors on scanning keyboards: Empirical study, model and design principles. Interacting with Computers, 20:406-418, May 2008.

[3] S. Burigat, L. Chittaro, and S. Gabrielli. Navigation techniques for small-screen devices: an evaluation on maps and web pages. International Journal of Human-Computer Studies, 66(2):78-97, 2008.

[4] S. Carter, A. Hurst, J. Mankoff, and J. Li. Dynamically adapting GUIs to diverse input devices. In 8th International ACM SIGACCESS Conference on Computers and Accessibility, Assets '06, pages 63-70, 2006.

[5] D. Harris and G. C. Vanderheiden. Augmentative communication techniques. In R. Schiefelbusch, editor, Non-Speech Language and Communication. University Park Press, Baltimore, Maryland, 1980.

[6] I. Iturrate, J. Antelis, A. Kubler, and J. Minguez. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Transactions on Robotics, 25(3):614-627, 2009.

[7] G. W. Lesher, D. J. Higginbotham, and B. J. Moulton. Techniques for automatically updating scanning delays. In Annual Conference on Rehabilitation Technology (RESNA), pages 85-87, 2000.

[8] L. Montesano, J. Minguez, M. Diaz, and S. Bhaskar. Towards an intelligent wheelchair system for users with cerebral palsy. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 18:193-202, Apr 2010.

[9] J. Pineau and A. Atrash. SmartWheeler: A robotic wheelchair test-bed for investigating new models of human-robot interaction. In AAAI Spring Symposium on Multidisciplinary Collaboration for Socially Assistive Robotics, pages 59-64, 2007.

[10] J. Pineau, R. West, A. Atrash, J. Villemure, and F. Routhier. On the feasibility of using a standardized test for evaluating a speech-controlled smart wheelchair. International Journal of Intelligent Control and Systems, to appear.

[11] L. P. Reis, R. A. M. Braga, M. Sousa, and A. P. Moreira. IntellWheels MMI: A flexible interface for an intelligent wheelchair. In RoboCup-2009, LNAI 5949, pages 296-307, 2010.

[12] D. Samanta and P. Biswas. Designing computer interface for physically challenged persons. In Proceedings of the 10th International Conference on Information Technology, pages 161-166, 2007.

[13] B. Shneiderman and C. Plaisant. Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th Edition). Pearson Addison Wesley, 2004.

[14] A. H. Siyong and C. W. L. Kenny. Evaluation of on screen navigational methods for a touch screen device. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 83-84, 2010.

[15] K. Tsui, D. Feil-Seifer, M. Mataric, and H. Yanco. Performance evaluation methods for assistive robotic technology. In Performance Evaluation and Benchmarking of Intelligent Systems. Springer US, 2009.

[16] K. Tsui and H. Yanco. Simplifying wheelchair mounted robotic arm control with a visual interface. In AAAI Spring Symposium on Multidisciplinary Collaboration for Socially Assistive Robotics, pages 247-251, 2007.

[17] K. Tsui, H. Yanco, D. Kontak, and L. Beliveau. Development and evaluation of a flexible interface for a wheelchair mounted robotic arm. Interfaces, 3:11, 2008.

[18] K. Tsui, H. Yanco, D. Kontak, and L. Beliveau. Experimental design for human-robot interaction with assistive technology. In Proceedings of the HRI Workshop on Robotic Helpers: User Interaction, Interfaces and Companions in Assistive and Therapy Robotics, 2008.

