
Interface and Infrastructure Design for Ubiquitous Pen-based Interaction on Paper

Chunyuan Liao University of Maryland, College Park

Human-Computer Interaction Lab & Computer Science Department College Park, MD 20742, USA

[email protected]

ABSTRACT
One of the goals of ubiquitous computing is to allow users to naturally access various digital functions through daily physical artifacts like paper. Existing paper-digital integration systems, however, either sacrifice paper's inherent flexibility or support very limited digital functionality on paper. This thesis explores how to enable rich pen-based interaction on paper without losing the original pen-paper affordances. Such an interaction paradigm imposes design challenges on the interface and infrastructure, such as weak visual feedback and the varying computing resources available to mobile users. I study a) supporting techniques and design principles, for instance, combining physical engagement, ink, and pen-top multimodal feedback for digital operations on paper; and b) applications of these techniques in areas such as document reviewing, note-taking, and 3D modeling, to demonstrate their feasibility and usability and to glean insights for more general use scenarios.

Categories and Subject Descriptors: H5.2 [Information interfaces and presentation]: User Interfaces - Graphical user interfaces.

Additional Keywords and Phrases: Ubiquitous computing, pen interaction, paper interface, multi-modal feedback, distributed interaction

INTRODUCTION
Ubiquitous computing aims to extend people's working space from screen-centered, fixed-position stations to an environment that seamlessly links the digital and physical worlds, in which users can easily and naturally access computing resources "any time and anywhere," for instance via the physical artifacts of daily life [18]. Among those physical artifacts are pens and paper. Although many researchers have studied pen computing on electronic screens [6, 21], fewer have done so on physical display surfaces like paper. Interestingly, paper possesses unique advantages over existing computers: a comfortable reading and writing experience, easy two-handed manipulation, flexible display layout, superior robustness, and low price [16]. Even the emerging electronic-paper displays [3] can hardly beat paper in terms of writing, robustness, and cost. Thus, it is important to study how to integrate these merits of paper with digital media's high efficiency in editing, duplicating, archiving, search, and distribution.

In this field, existing systems span a wide spectrum of trade-offs between digital functions and interface ubiquity. At one end, systems like DigitalDesk [19] support powerful digital interactions on paper with overhead projectors and cameras, but the non-portable infrastructure constrains paper's flexibility. At the other end, systems such as PaperPDA [5] and Anoto [1], although allowing natural pen-and-paper-only practices, provide only limited digital functions like form-filling. In the middle, systems like PaperLink [2] and Paper++ [14] use paper together with separate screens for simple functions like fixed "hot spots," at the cost of potential interaction interference from the extra devices.

My thesis explores how to bring rich digital functions to pen/paper interaction without losing their inherent merits. The core is a novel interaction paradigm, illustrated in Figure 1: a digital pen [1] and a paper document page compose a virtually standalone mobile computing device.

Figure 1: (a-c) An example of the ubiquitous pen interaction paradigm on paper: (a) a picture is copied from a printout, then (b) pasted into a note (marks are highlighted for clarity); (c) the result is shown on a PC after pen synchronization. (d) The architecture: written strokes and the printed document's ID are both captured; the corresponding digital copy is retrieved, and the pen gestures are interpreted as commands, which are executed accordingly and may interact with other devices.

Copyright is held by the author/owner.

UIST ’07, October 7–10, 2007, Newport, Rhode Island, USA.

ACM 978-1-59593-679-2/07/0010


It displays document content through printing, captures user input through the digital pen, and manipulates the associated digital documents and interacts with other paper and devices via an infrastructure. The updated digital documents can be printed again for the next task cycle. Aiming at ubiquitous computing, the interface consists of only paper and pens, highly mobile (up to a point: we assume the number of "work-on" paper documents stays below that threshold) and completely compatible with conventional pen-paper interaction; the infrastructure allows users to issue rich pen-gesture commands directly on paper, ranging from copy/paste to Google and keyword search [13].

With these system features, I hypothesize that such a virtual "pen-paper computer" interaction paradigm can improve user experience and working efficiency in tasks that interweave physical paper and digital media, such as document reviewing and note-taking. I conduct research on two fronts. First, supporting techniques: I a) explore general interface techniques and design principles to address the issues this paradigm raises, like weak visual feedback on paper. Specifically, I design a dedicated pen-gesture command system and a pen-top multimodal feedback platform, which are applicable not only to paper but also to other weak-visual situations like eyes-free operation [22] and low-refresh-rate bi-stable electronic displays [3]; and b) design a distributed infrastructure to translate pen actions into digital manipulations and to support cross-media operations, which can also serve other physical-artifact-based interaction, such as building a digital 3D model via a physical one [17]. Second, applications: I apply the interface and infrastructure to specific areas such as Active Reading and note-taking to evaluate the new paradigm's feasibility and usability and its impact on user behavior, and to explore insights for more general use.
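To make the task cycle concrete, here is a minimal Python sketch of the loop just described: ink and the printed page's ID are captured on the pen, and at synchronization time each stroke is mapped to its digital copy and treated as either an annotation or a command. All names (Stroke, PenPaperSession, and the two stub functions) are mine for illustration; the thesis does not prescribe this API.

```python
from dataclasses import dataclass, field

@dataclass
class Stroke:
    page_id: str    # the Anoto pattern uniquely identifies the printed page
    points: list    # (x, y) pen samples in pattern coordinates
    t_start: float  # pen timestamps; these also let separate marks be grouped
    t_end: float

@dataclass
class PenPaperSession:
    registry: dict = field(default_factory=dict)  # page_id -> digital document
    strokes: list = field(default_factory=list)   # ink logged on the pen

    def capture(self, stroke: Stroke) -> None:
        self.strokes.append(stroke)

    def synchronize(self) -> None:
        """On pen synchronization, replay strokes against the linked copies."""
        for stroke in self.strokes:
            doc = self.registry.get(stroke.page_id)
            if doc is None:
                continue  # unregistered printout: leave as plain ink
            if is_command_gesture(stroke):
                execute_command(doc, stroke)  # e.g. copy/paste, keyword search
            else:
                doc.setdefault("annotations", []).append(stroke.points)

def is_command_gesture(stroke: Stroke) -> bool:
    # Stub: PapierCraft actually uses an explicit hardware button to switch
    # between "annotation" and "command" pen modes (see below).
    return False

def execute_command(doc: dict, stroke: Stroke) -> None:
    pass  # command interpretation is sketched in the PapierCraft section
```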

CHALLENGES
For "accessing any time and anywhere," I focus on system mobility, a key to ubiquitous pen-paper interaction. It raises challenges for both interface and infrastructure design:

1) Weak visual feedback, due to paper's static nature and the usual lack of external rendering devices (e.g., a projector) in a mobile setting.

2) Reduced user visual attention in a distracting environment, where a visual-only interface may overload users or fail to inform them in time.

3) Varying computing resources available to mobile users. Depending on location, users may have only pen and paper (e.g., on a train) or may combine them with a nearby computer (e.g., in an office). The system should make full use of the accessible media and other resources without obtrusive or complex interaction, as sketched below.
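As one illustration of the third challenge, a sketch of a stroke uploader that streams in real time when a host computer is reachable and otherwise batches locally. The class and the assumed transport interface are hypothetical; the actual infrastructure is described later in the paper.

```python
import queue

class StrokeUploader:
    """Sketch: stream strokes to a nearby computer when one is reachable,
    otherwise batch them locally until the next pen synchronization."""

    def __init__(self, transport):
        self.transport = transport    # assumed: object with is_connected()/send()
        self.pending = queue.Queue()  # local buffer, e.g. the pen's own memory

    def submit(self, stroke) -> None:
        self.pending.put(stroke)
        if self.transport.is_connected():
            self.flush()              # real-time mode (e.g. in an office)

    def flush(self) -> None:
        # batch mode: also called when the pen is finally docked
        while not self.pending.empty():
            self.transport.send(self.pending.get())
```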

To address these issues, I first explore the supporting techniques and design principles.

A PEN-GESTURE COMMAND SYSTEM FOR WEAK-VISUAL-FEEDBACK DISPLAY SURFACES
Our "pen-paper computer" notion builds on PADD (Paper Augmented Digital Documents) [4], which adopts a fountain-pen-like Anoto digital pen [1] to capture handwriting on printouts, using a built-in camera that recognizes a special pattern printed in the paper background. PADD can automatically merge the captured marks with the corresponding digital documents, enabling digital annotation via paper. Although useful, the system does not support many frequently used operations, such as copying, pasting, or hyperlinking selected document segments, or keyword search. Such more complex functions usually require a command system.

A portable and natural pen-based command system
To retain existing natural pen/paper practices, the command system should be simple, natural, and easy to learn; at the same time, it must also be flexible and efficient. To this end, we designed a pen-gesture-based command system, PapierCraft [13], using the highly portable Anoto [1] digital pen. A PapierCraft command, similar to its counterpart Scriboli [6] on Tablet PCs, consists of three parts: a scope, a type, and a pigtail delimiter between them (see Figure 2). For easy access, we adopt only user-familiar marks like underlines, lassoes, margin bars, and cropping marks, which can be drawn anywhere within a document; the command type is specified through a marking menu [7], in which the direction of the stroke's tail indicates the desired menu item. For efficiency, adjacent parts can be drawn in one continuous stroke. Note that these spatially separate parts are grouped using the timestamps recorded by the digital pen, so the system supports cross-page operations like the copy/paste in Figure 1, which is hardly possible in scanning-based systems like PaperPDA [5]. A sketch of this grouping follows.
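The following sketch shows how such a command might be assembled from timestamped marks and how a marking-menu direction could select the command type. The Mark class, the four-item MENU table, and both functions are my own illustrative names, not PapierCraft's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Mark:
    kind: str     # 'scope' (underline, lasso, margin bar, crop marks),
                  # 'pigtail', or 'menu' (the command-type stroke)
    page_id: str  # marks on different pages can join one command
    points: list  # (x, y) pen samples
    t: float      # pen timestamp used for grouping

# Hypothetical four-item marking menu keyed by tail direction (degrees).
MENU = {0: "copy", 90: "paste", 180: "hyperlink", 270: "search"}

def menu_item(points) -> str:
    """Pick the menu item from the direction of the stroke's tail."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    angle = math.degrees(math.atan2(y0 - y1, x1 - x0)) % 360  # y grows downward
    def circular(a):  # angular distance on the circle
        return min(abs(angle - a), 360 - abs(angle - a))
    return MENU[min(MENU, key=circular)]

def parse_command(marks: list) -> tuple:
    """Group time-ordered marks into scope(s) plus a command type.
    Because grouping uses timestamps rather than position, the scopes may
    sit on different pages, enabling cross-page copy/paste as in Figure 1."""
    marks = sorted(marks, key=lambda m: m.t)
    scopes = [m for m in marks if m.kind == "scope"]
    menus = [m for m in marks if m.kind == "menu"]
    if scopes and menus:
        return menu_item(menus[-1].points), scopes
    return None, scopes
```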

Using physical engagement and ink as feedback
The key design principle of PapierCraft is to avoid traditional dynamic GUI widgets (e.g., popup menus, target highlighting, and marquee selection) and to rely instead on physical engagement and ink as the main feedback channels. For example, to switch pen modes between "annotation" and "command," users must manually press a hardware button, with the physical pressure serving as feedback; other (semi-)automatic approaches like [21] usually require a popup menu on screen for disambiguation. Then, in "command" mode, the gestures themselves are designed as strong visual cues: the command scope marks act as a placeholder, so that, for example, a "paste" scope signals that no more notes should go in the occupied area; the direction of a marking menu indicates the action applied to a scope; and the adjacency of the command scope and type further reinforces this effect. Such cues are usually not available in other paper command systems [1, 9], which use printed buttons on a separate card or in a fixed area on the paper.

Figure 2. PapierCraft command structure: (a) scope, (b) pigtail delimiter, (c) marking menu, (d) "shortcut" drawn in one stroke. Strokes are highlighted for clarity.


We conducted an early-stage user evaluation of PapierCraft. It showed positive feedback on the easy, intuitive gesture design, and revealed that stronger real-time feedback is desired for higher user confidence and for more interactive tasks such as keyword search. Thus, I turned to modalities beyond vision.

MULTIMODAL PEN-TOP FEEDBACK
I explore visual, tactile, and auditory information conveyed by the pen itself, for weak-feedback displays like paper and for eyes-free operation of handheld devices. In particular, I identify three general feedback categories, examine the characteristics of multimodal feedback through a hardware experiment platform, and then apply the resulting design principles to the PapierCraft command system as a case study, with a controlled experiment showing the effectiveness of pen-top feedback.

Feedback Types
We categorize interface feedback into three groups: 1) discovery feedback, which lets users discover functions not visible on the display (e.g., a popup menu); 2) status-indication feedback, for confirmation and error reports; and 3) task feedback, specific to an interactive task like keyword search.

Characteristics of Multimodal Feedback
Non-visual pen-top feedback has been proposed in the literature, such as the Haptic Pen [10] with haptic feedback and the LeapFrog Fly pen [9] with a voice-only interface. I explore multimodal feedback for mobile devices. To this end, I have built a hardware experiment platform [11] (Figure 3-a) by augmenting a digital pen with microcontroller-driven LEDs, solenoids, and audio (currently simulated by a separate PC). Further, based on user studies, we have established general design guidelines for pen-top multimodal feedback [11], including the following (see the sketch after this list):

1) Tactile feedback should be used parsimoniously because it becomes distracting over time, but it works well for short "warning" signals; tactile feedback used as positive feedback can actually slow down user responses.

2) LEDs work well for long-lasting modal feedback (e.g., pen mode, command-selection confirmation); when color-coding, the number of colors should be limited, to roughly four, to reduce the user's mental load.

3) Voice feedback should be reserved for feedback over more extended time frames, such as discovery and task feedback, due to its sequential nature.
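The dispatch below is my own encoding of these three guidelines as a simple modality chooser, not an API from the cited work; the Feedback categories mirror the three groups defined above, and the 0.5-second threshold is an assumed value.

```python
from enum import Enum

class Feedback(Enum):
    DISCOVERY = 1  # reveal functions with no visible on-paper affordance
    STATUS = 2     # confirmations, warnings, error reports
    TASK = 3       # task-specific guidance, e.g. keyword search

def choose_modalities(kind: Feedback, duration_s: float) -> list:
    """Sketch of the guidelines: tactile only for short warnings,
    LEDs for modal states, voice for extended sequential feedback."""
    out = []
    if kind is Feedback.STATUS:
        if duration_s < 0.5:
            out.append("tactile")  # guideline 1: short warning signals only
        out.append("led")          # guideline 2: modal states, at most ~4 colors
    else:
        out.append("voice")        # guideline 3: discovery and task feedback
    return out
```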

Application of Multimodal Feedback in PapierCraft
As a case study with PapierCraft, I explore how to combine the individual modalities to achieve an appropriate trade-off between ease and efficiency in feedback design (Figure 3). For example, for novice users we use slow, intuitive speech to help them discover the command-type menu (Figure 3-b); for experts, only a brief LED pulse gives final confirmation (Figure 3-c). The two schemes can be switched automatically with a time-out mechanism, as marking menus do [8]; a sketch follows. Similarly, a keyword search application (Figure 3-d) uses slow speech for the rough target location (like "page 3, left, top") and quick LED feedback for the more precise, line-level position.

We conducted a user evaluation in the context of a menu selection task [11], which showed the effectiveness of the multimodal feedback in helping novices discover and learn an invisible menu on paper. Compared to no-feedback settings, the feedback pen significantly improved error detection and correction with only a small performance penalty.

The pen-paper interface only captures user input and presents feedback; it is the infrastructure that actually performs the data processing in the background.
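The novice/expert switch can be realized with the same press-and-wait dwell used by marking menus [8]; a minimal sketch, assuming a hypothetical 0.6-second timeout and my own function names:

```python
import time

NOVICE_TIMEOUT_S = 0.6  # assumed dwell delay, in the spirit of marking menus [8]

def pick_feedback_scheme(pen_down_at: float, pen_moved: bool) -> str:
    """If the user hesitates before drawing the command-type stroke, fall back
    to the slow spoken novice scheme; otherwise give the expert one LED pulse."""
    if not pen_moved and time.monotonic() - pen_down_at >= NOVICE_TIMEOUT_S:
        return "voice-menu"   # speak menu items as the pen sweeps each direction
    return "led-confirm"      # brief pulse confirming the selected command
```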

INFRASTRUCTURE
The primary function of the infrastructure is physical-digital mapping. Our design is based on PADD [4], which identifies a physical page by its unique Anoto pattern and automatically registers each printout with the original digital page. With this registration information, strokes from paper can be mapped into the linked digital document and processed accordingly. To support ubiquitous access, the mapping is done at a remote central server separate from the pen-paper interface; a minimal sketch of such a lookup appears below. Further, considering the varying resources available to mobile users, such as an extra computer alongside the pen and paper, we extend the ideas of Pick-and-Drop [15] to support both paper-only and mixed-media document operations [13], with similar gesture commands across media. And for varying network connectivity, strokes can be submitted in real time or in batches.
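A PADD-style registration and lookup might look like the following; the Registration record, the affine print transform, and the function names are assumptions of mine, not the system's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Registration:
    doc_uri: str   # the original digital document
    page_no: int
    scale: float   # print transform: pattern coords -> document coords
    dx: float
    dy: float

REGISTRY = {}  # pattern page id -> Registration, kept on the central server

def register_printout(pattern_page_id: str, reg: Registration) -> None:
    """Recorded at print time so every printout stays linked to its source."""
    REGISTRY[pattern_page_id] = reg

def map_stroke(pattern_page_id: str, points):
    """Map pen samples on paper into the linked digital page, PADD-style."""
    reg = REGISTRY[pattern_page_id]
    mapped = [(x * reg.scale + reg.dx, y * reg.scale + reg.dy) for x, y in points]
    return reg.doc_uri, reg.page_no, mapped
```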

Figure 3: (a) A multimodal feedback pen prototype. (b) The feedback scheme for a novice user browsing an invisible PapierCraft menu on paper while drawing a gesture (red line) from left to right. (c) An expert selects the same command with feedback only for final confirmation. (d) Feedback for keyword search: within the target page, first announced by voice, the user draws a line from top to bottom; when the pen approaches a line containing a hit, the LED on the pen lights up (the thickness of the green line in the figure indicates brightness). For clarity, the target word "computer(s)" is underlined.


APPLICATIONS OF UBIQUITOUS PEN COMPUTING
My colleagues and I have built several applications to evaluate the feasibility and usability of this design. For example, in ButterflyNet [20] we explored the interaction between paper notebooks and multimedia devices: users can use PapierCraft gestures to link paper notes with digital data like pictures and GPS readings. A user study with 14 biologists found the system well aligned with existing practices and promising for integrated physical-digital data management.

PaperCP [12] is a system for classroom Active Learning, with a real-time pen-paper interface coupled to a digital data-transfer infrastructure. Students can submit written answers from paper to the instructor over the network, enjoying the combined paper and digital affordances. A preliminary experiment in a realistic classroom showed the system's feasibility and usability, as well as challenges such as printout layout and real-time feedback.

The infrastructure has also been extended to 3D physical surfaces by Song et al. in ModelCraft [17]: users can draw pen gestures on a physical model to, for instance, cut and extrude the corresponding digital model. The initial deployment suggested the advantage of combining physical artifacts with digital operations.

FUTURE WORK
My future work will focus on evaluating the hypothesis of my thesis. First, I will conduct an experimental study comparing the "pen-paper computer" paradigm with Tablet PCs and ordinary pen and paper in tasks like proofreading. With this side-by-side comparison, I expect to identify the key interface design factors, such as paper affordances and feedback, and to examine whether and how (if the hypothesis holds) the new design improves working efficiency. Second, to grasp the long-term effects of the design, I plan to deploy an application supporting Active Reading [13] in a university graduate seminar and conduct a longitudinal evaluation. Through iterative in-situ design cycles, I expect to identify usability issues, examine the impact of ubiquitous pen interaction on user behavior, and explore more general use scenarios.

RESEARCH CONTRIBUTION
In total, my thesis will contribute to ubiquitous computing: a) supporting techniques and design guidelines for weak-visual-feedback interfaces and pen-top multimodal feedback; b) a supporting infrastructure for ubiquitous pen interaction on physical surfaces in a mobile setting; and c) applications and evaluations that promote the new pen-paper interaction paradigm and advance the seamless integration of the physical and digital worlds for ubiquitous computing.

ACKNOWLEDGEMENTS
I would like to thank my advisor François Guimbretière for his great help. This work was supported by Microsoft Research and by NSF under Grants IIS-0414699 and IIS-0447730.

REFERENCES
1. Anoto. http://www.anoto.com.
2. Arai, T., Aust, D., and Hudson, S.E. PaperLink: a technique for hyperlinking from real paper to electronic content. Proceedings of CHI '97, pp. 327-334.
3. E-Ink. http://www.eink.com.
4. Guimbretière, F. Paper Augmented Digital Documents. Proceedings of UIST '03, pp. 51-60.
5. Heiner, J.M., Hudson, S.E., and Tanaka, K. Linking and messaging from real paper in the Paper PDA. Proceedings of UIST '99, pp. 179-186.
6. Hinckley, K., Baudisch, P., Ramos, G., et al. Design and analysis of delimiters for selection-action pen gesture phrases in Scriboli. Proceedings of CHI '05, pp. 451-460.
7. Kurtenbach, G. and Buxton, W. Issues in combining marking and direct manipulation techniques. Proceedings of UIST '91, pp. 137-144.
8. Kurtenbach, G. The Design and Evaluation of Marking Menus. PhD thesis, University of Toronto, 1993.
9. LeapFrog. Fly Pen. http://www.leapfrog.com, 2005.
10. Lee, C.J., Dietz, H.P., Leigh, D., et al. Haptic pen: a tactile feedback stylus for touch screens. Proceedings of UIST '04, pp. 291-294.
11. Liao, C., Guimbretière, F., and Loeckenhoff, C.E. Pen-top feedback for paper-based interfaces. Proceedings of UIST '06, pp. 211-220.
12. Liao, C., Guimbretière, F., Anderson, R., et al. PaperCP: Exploring the integration of physical and digital affordances for active reading. Proceedings of INTERACT '07.
13. Liao, C., Guimbretière, F., Hinckley, K., et al. PapierCraft: A gesture-based command system for interactive paper. ACM ToCHI, 2007 (in press).
14. Norrie, M.C. and Signer, B. Switching over to paper: A new web channel. Proceedings of Web Information Systems Engineering '03, pp. 209-218.
15. Rekimoto, J. Pick-and-drop: a direct manipulation technique for multiple computer environments. Proceedings of UIST '97, pp. 31-39.
16. Sellen, A.J. and Harper, R.H.R. The Myth of the Paperless Office. 1st ed. MIT Press, 2001.
17. Song, H., Guimbretière, F., Hu, C., et al. ModelCraft: capturing freehand annotations and edits on physical 3D models. Proceedings of UIST '06, pp. 13-22.
18. Weiser, M. Some computer science issues in ubiquitous computing. Communications of the ACM, 36(7), 1993, pp. 74-84.
19. Wellner, P. Interacting with paper on the DigitalDesk. Communications of the ACM, 36(7), 1993, pp. 87-96.
20. Yeh, R.B., Liao, C., Klemmer, S.R., et al. ButterflyNet: A mobile capture and access system for field biology research. Proceedings of CHI '06, pp. 571-580.
21. Zeleznik, R. and Miller, T. Fluid Inking: Augmenting the medium of free-form inking with gestures. Proceedings of Graphics Interface '06, pp. 155-162.
22. Zhao, S., Dragicevic, P., Chignell, M., et al. Earpod: eyes-free menu selection using touch input and reactive audio feedback. Proceedings of CHI '07, pp. 1395-1404.

