Page 1

TAUCHI – Tampere Unit for Computer-Human Interaction

Blind Text Entry for Mobile Devices

Grigori Evreinov
Dept. of Computer Sciences
University of Tampere, Finland
[email protected]

Georgios Yfantidis
NOKIA R&D
Copenhagen, Denmark
[email protected]

AAATE'05
Henri Warembourg Faculty of Medicine
Lille, France, September 6-9, 2005

Page 2

Introduction

• Handheld devices with touchscreens are becoming widespread (smart phones and Personal Digital Assistants).
• They offer a diverse set of applications targeted at many users.
• Blind people could also benefit from using those touchscreen-based portable devices.
• Touchscreen interaction (combined with GUIs) is completely antithetical to blind users' skills: there are no feedback cues, and absolute pointing and selection is difficult.
• Virtual keyboard tapping is out of the question: it needs accuracy and point relocation after each character entry.
• Adapting technologies to the issues of challenged users. (Note: talk about this here.)

(Note: Arial 22-24 is better everywhere.)

Page 3

Introduction 2

• We developed a new text entry technique: the Gesture Driven Software Button (GDSB).
• Adequate typing speed + accessible for blind users. (Note: not needed here, maybe in conclusions.)
• Some of the inspiration came from Gedrics.

Page 4

Introduction 3

• Gedrics provide a way to manipulate a graphical user interface with the help of icons that respond to gestures made directly inside the icon.
• The most distinctive aspects of Gedrics were their "structuredness" and their intuitive style of gestural interaction.
• We wanted to maintain a similar simplicity that would make the GDSB gestures reliable, easy to learn and universally accessible.
• BUT, while Gedrics icons occupied fixed positions on a touchscreen, we envisaged that it could bring more benefits if the icon could follow the stylus and contain full text-entry functionality: "a full keyboard under the fingertip".

(Note: you can comment on this in the previous slide. You have to present the ppt, but the report/text can differ if needed.)

Page 5

Interface and Interaction Style

The gesture driven software button has a rectangular shape that contains eight characters positioned in the four basic arrow directions (up, down, left, right) and the four intermediate positions between them.

There are three layers in total, each containing a different set of characters. For the English alphabet, 24 characters are arrayed directly in the three layers, plus 2 characters embedded in certain positions.

(Note: this text is not needed on the slide.)
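As a sketch of the layout logic described above, the stylus displacement can be quantized into one of the eight directions and used to index a layer's character ring. Layer 1 and layer 2 membership below follows the substitution table later in the deck; the layer-3 set is inferred (the remaining letters) and the per-direction ordering is an assumption for illustration.

```python
import math

# Character rings per layer, indexed clockwise from "up".
# Layer 1/2 membership follows the deck's substitution table;
# layer 3 and the exact per-direction ordering are assumed here.
LAYERS = [
    ["E", "R", "A", "S", "T", "I", "O", "N"],  # layer 1
    ["C", "L", "P", "H", "D", "F", "G", "M"],  # layer 2
    ["B", "K", "Q", "U", "V", "W", "X", "Y"],  # layer 3 (inferred remainder)
]

def direction_index(dx, dy):
    """Quantize a stylus displacement into one of 8 sectors:
    0 = up, proceeding clockwise in 45-degree steps (screen y grows down)."""
    angle = math.degrees(math.atan2(dx, -dy)) % 360
    return round(angle / 45) % 8

def char_for_gesture(layer, dx, dy):
    return LAYERS[layer][direction_index(dx, dy)]
```

With this ordering, for example, a straight upward slide on layer 1 selects "E" and a slide to the right selects "A".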

Page 6

Layer Accession Methods/Modes

• GDSB has two different modes for layer accession.
• ADS mode changes layers cyclically, in an adaptive time loop.
• 2K mode bases the layer change on two physical arrow keys.
• In both modes, touching anywhere on the screen with the stylus/finger activates the software button's interface, with the first layer ready to use.

Page 7

ADS Mode

• One layout succeeds the other after a dwell time interval.
• Layer switching does NOT happen continuously; it is triggered by the user waiting in the central position without lifting the stylus after the initial tap. (Note: just your comments.)
• Dwell time starts after the following events:
  - initial touch of the screen (central position of any layer),
  - stopping after backtracking to the start position (cancelling the previous selection),
  - completion of a gesture (movement) towards a direction without lifting the stylus; in this case dwell determines when the substitution function will be activated.
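The dwell loop can be sketched as a small state holder that restarts its timer on the events listed above and cycles the layer whenever the stylus rests long enough at the center. The one-second dwell value and the method names are assumptions; the actual interval is adapted over time, as the Adaptation slide notes.

```python
class AdsLayerSwitcher:
    """Sketch of ADS layer cycling. Timestamps are passed in
    explicitly (seconds); dwell of 1.0 s is an assumed placeholder."""

    def __init__(self, n_layers=3, dwell=1.0):
        self.n_layers = n_layers
        self.dwell = dwell
        self.layer = 0
        self.t0 = None  # moment the current dwell interval started

    def on_touch(self, t):
        """Initial tap: first layer ready, dwell timing starts."""
        self.layer = 0
        self.t0 = t

    def on_return_to_center(self, t):
        """Backtracking cancelled a selection: dwell restarts."""
        self.t0 = t

    def tick(self, t):
        """Call periodically while the stylus stays down at the center;
        each elapsed dwell interval advances to the next layer."""
        if self.t0 is not None and t - self.t0 >= self.dwell:
            self.layer = (self.layer + 1) % self.n_layers
            self.t0 = t
        return self.layer
```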

Page 8

Adaptation

• The button adapts by changing its position and functionality,
• but time is the main factor that determines how this adaptation and the transitions happen.
• One of the design decisions in the development of the technique concerned which dwell time interval satisfies the user requirements and when it should be changed.
• The original algorithm [Evreinov and Raisamo, 2004] was modified and improved for the proposed technique. (Note: small fonts; just talk about this.)

Page 9

2K Mode

• The 2K (2-key) version of our technique features a layer-switching system based on two physical arrow keys. Interacting with the gesture driven button without pressing any physical key happens within the first layer.
• The Up and Down keys transfer interaction to the respective layer.
• In this way text entry can be concurrent: layer access happens in combination with the movement/gesture.
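A minimal sketch of the 2K rule. The mapping of Up to layer 2 and Down to layer 3 is an assumption; the slide only says the two keys select "the respective layer".

```python
# Assumed mapping: no key held -> layer 1, Up -> layer 2, Down -> layer 3.
KEY_TO_LAYER = {None: 0, "UP": 1, "DOWN": 2}

def active_layer(held_key):
    """The layer is read concurrently with the gesture: whichever
    arrow key is held while the stylus moves chooses the layer."""
    return KEY_TO_LAYER.get(held_key, 0)
```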

Page 10

Entering Text

• The first action the user has to plan when entering text or a command is to select the correct layer to which the character belongs. This can be a sequential or a concurrent action depending on the technique used.
• The actual text entry begins when the user slides the stylus towards one of the eight directions which encode the characters. After this sliding there is an auditory speech signal with the name of the character that is about to be entered.
• Blind and sighted users alike rely on this auditory feedback to interact smoothly and successfully with the application. Lifting the stylus signifies the end of the text entry gesture.
• If for any reason the user would like to cancel a selection, instead of lifting s/he can backtrack towards the start position.
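The entry flow above can be condensed into one function over stylus events. The event format, the dead-zone radius, and the layer-1 character ordering are assumptions for illustration; in the real application, passing the dead zone would trigger the spoken character name.

```python
import math

LAYER1 = ["E", "R", "A", "S", "T", "I", "O", "N"]  # ordering assumed

def _direction(dx, dy):
    # 0 = up, clockwise 45-degree sectors (screen y grows downwards)
    return round(math.degrees(math.atan2(dx, -dy)) % 360 / 45) % 8

def process_gesture(events, layout=LAYER1, dead_zone=8.0):
    """One text-entry gesture: ("move", dx, dy) events relative to the
    initial tap, ended by ("lift",). Sliding past the dead zone selects
    a direction (the application would speak the character here);
    backtracking inside it cancels; lifting confirms the selection."""
    selected = None
    for ev in events:
        if ev[0] == "move":
            dx, dy = ev[1], ev[2]
            if math.hypot(dx, dy) >= dead_zone:
                selected = layout[_direction(dx, dy)]
            else:
                selected = None  # backtracked toward start: cancel
        elif ev[0] == "lift":
            return selected  # None means the gesture was cancelled
    return None
```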

Page 11

Substitution Process

• The layers can only accommodate 24 characters in directly accessible positions.
• Other special characters, signs and operations have to be activated through a process of "substitution".
• The concept is that some of the primary characters may have a coupled function or another character in the same direction, which can substitute the basic character when the user decides so.
• What differentiates this from the normal way of selecting and entering is a waiting time that follows the sliding towards a direction.
• Instead of lifting the stylus after the first feedback cue, the user can wait in the same position to hear a second signal. Lifting the stylus at this point results in successful entry of the symbol or operation that "dwells behind" the primary character.

Page 12

Substitution 2

There are certain mnemonic rules that led us to couple certain characters and functions with each other. For example, "D" can be associated with "Delete", "S" with "Space" and "N" with "Next Line" through their initial letters.

Layer 1 | Substitut. function    Layer 2 | Substitut. function
E       | ?                      C       | Z
R       | UpCase                 L       | -
A       | Ä                      P       | -
S       | space                  H       | -
T       | ,                      D       | Delete
I       | .                      F       | Function
O       | Ö                      G       | J
N       | NextLine               M       | -
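The coupling table can be captured as a lookup keyed by the primary character, with the second dwell cue switching the entry to the substitute. The " " and "\n" encodings for space and NextLine are assumptions; rows marked "-" have no substitute and are omitted.

```python
# Substitution table from the slide, as a lookup.
SUBSTITUTIONS = {
    # layer 1
    "E": "?", "R": "UpCase", "A": "Ä", "S": " ",
    "T": ",", "I": ".", "O": "Ö", "N": "\n",
    # layer 2 (characters without a coupled function are omitted)
    "C": "Z", "D": "Delete", "F": "Function", "G": "J",
}

def resolve(primary, waited_for_second_cue):
    """Lifting after the first cue enters the primary character;
    waiting for the second cue enters the coupled substitute, if any."""
    if waited_for_second_cue and primary in SUBSTITUTIONS:
        return SUBSTITUTIONS[primary]
    return primary
```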

Page 13

Method Evaluation

• 16 volunteers (staff and students from UTA), divided into two groups.
• 2K mode testing: 6 male, 2 female.
• ADS mode testing: 3 male, 5 female.
• None of the participants had experience in text entry with gestures.
• 14 right-handed, 2 left-handed.
• Blindfolded users; GDSB layout hidden at all times.
• The study was carried out on an iPAQ Pocket PC 3800 series.
• Tests took place in a usability laboratory.

Page 14

Test

• One exploratory trial to get acquainted with the feature; during that trial, participants were tutored about the interface and the interaction style.
• One trial consisted of entering twenty words, randomly selected from a set of 150 words and displayed one at a time at the top line of the experimenter's screen.
• The blindfolded subjects only listened to the test word; they could repeat its playback on demand by clicking the bottom-left corner of the touchscreen.
• The test words were 6-13 characters in length, with a mean of 8.5.
• Each subject accomplished 10 trials, the last eight of which were taken for statistical analysis.
• 10880 characters were entered per mode.
• Key figures such as the number of errors per trial, motor reaction times, average reply time and dwell parameters were stored for each trial in a data array.
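The per-mode character count follows directly from the design (8 subjects per mode, 8 analyzed trials of 20 words, mean word length 8.5), a quick check:

```python
subjects = 8            # participants per mode
trials = 8              # the last 8 of 10 trials were analyzed
words_per_trial = 20
mean_word_length = 8.5  # characters

words = subjects * trials * words_per_trial   # 1280 words per mode
characters = words * mean_word_length         # 10880.0 characters per mode
print(words, characters)
```

The 1280-word figure also matches the trial counts quoted with the reaction-time chart later in the deck.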

Page 15

Results

• During the experiments with the 2K version, speed varied from 10 to 22 words per minute,
• and the average typing speed was about 15 wpm, or 75 characters per minute.
• The average typing speed with ADS was about 9 wpm (standard deviation 0.8), which translates to about 45 characters per minute.
• Taking into account that this is blind interaction through a touchscreen and also supports one-handed manipulation, this can be considered a reasonable typing rate.

Page 16

Errors

• Contrary to what might have been the logical expectation, errors proved not to affect typing performance in either version.
• The subject who achieved the maximal typing speed with the 2K version had the same number of errors as subject 7, who performed a lot worse.
• Subjects 3 and 6, who shared the same typing speed, differed widely in the number of errors they committed.
• The subjects with the second and third best typing performance had more errors than 50% of the participants in the experiment.

Page 17

The typing speed and errors averaged over 8 trials for all subjects and both modes of the technique (2 × 1280 words, 2 × 10880 tokens). [Chart: wpm (0-20, left axis) and errors (0-50, right axis) for subjects 1-8; series: 2K wpm, ADS wpm, 2K errors, ADS errors.]

Page 18

Errors 2

• About 30% of the errors during the experiment happened because the software was written in Visual Basic and some procedures could not run fast enough due to hardware restrictions; the processor speed of the PDA used was 200 MHz.
• Sometimes the users were typing so fast that the character was not entered and they had to repeat it, while continuity and rhythm are very important for typing.
• Since the errors occurred mostly for the faster users at the peak of their performance, they were able to rapidly correct an error by re-entering the character.
• Speed loss in error situations was limited by the inherent compactness of the software button (23 × 23 mm²), which has a very small "dead zone" (8 × 8 mm²) and provides fast gesture recognition.

Page 19

The average motor reaction time (and STD) per character, and the typing speed in words per minute (8 trials, 1280 words, 10880 characters) with the 2K technique. [Charts: reaction time (ms, 0-2000) and words per minute (0-20) for subjects 1-8.]

Reaction time and performance

Page 20

• Figure 2 shows that the inverse correlation between reaction time with the 2K version and the subjects' typing speed is high: −0.9898.
• That is, greater typing efficiency goes with a consistently reduced reaction time.
• Subjects also demonstrated a clear tendency to react faster as they proceeded through the test sessions.
• For instance, the average reaction time for subject 1 decreased from 970 ms in the first session to 640 ms by the last session.
• The rest of the subjects followed a similar pattern, gradually improving their response time. Furthermore, all of them managed to reach, at least once during the experiment, reaction times corresponding to about 18 wpm.
• When dwell time is used (ADS version), the reaction time is calculated dynamically. Since the reaction time value is part of the dwell time, there is no reasonable way to discuss the raw response values.

(Note: just comments to the previous slide.)
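The −0.9898 figure is a Pearson correlation between per-subject reaction time and typing speed. A sketch of the computation, using illustrative numbers (not the study's data):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative per-subject values: faster reactions go with higher wpm.
reaction_ms = [640, 700, 780, 850, 930, 1010, 1100, 1200]
wpm = [22, 20, 18, 16, 14, 13, 11, 10]
print(pearson(reaction_ms, wpm))  # strongly negative, near -1
```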

Page 21

Selection times per layer

• The average selection time of characters in the first layer was about 760 ms using the 2K version.
• The average values for the second and third layers were about 880 and 1180 ms respectively.
• Those differences derived mostly from the different frequencies with which the characters of each group (layer) were used during the test.
• In that sense, the results are expected; the slow selection time in the third layer came mostly because the characters in that layer occur infrequently in the English language (less than 9%).
• Users did not have the chance to practice entering those characters enough, and the layout of the third layer failed to become as "intuitive" in use as the commonly used first two layers.
• Thus, the figure for average selection time in the third layer should decrease significantly once experience with the technique is established.

(Note: do it like in the previous slide; comments need not be shown.)

Page 22

Summary

• We have developed two versions of a text entry technique that is suitable for blind people. The method can be adapted to many platforms.
• We tested the technique on a PDA to assess its suitability and effectiveness for smart mobile devices.
• The tests indicated that the average text entry speed is high for the 2K version (15-18 wpm), while the ADS method is a reasonable tradeoff that balances speed (8-12 wpm) and accessibility, offering single-handed manipulation.
• Another positive outcome of the testing is that typing errors with both versions of our technique do not threaten its effectiveness, as the majority of users can easily correct them on the spot.
• Having a strong tool for text entry is the first but definite step towards giving full accessibility of smart mobile devices to visually impaired users.

Thank you and Merci Beaucoup!

