
© 2005 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

For more information, please see www.ieee.org/portal/pages/about/documentation/copyright/polilink.html.

MOBILE AND UBIQUITOUS SYSTEMS
www.computer.org/pervasive

An Architecture for Interactive Context-Aware Applications

Kasim Rehman, Frank Stajano, and George Coulouris

Vol. 6, No. 1, January–March 2007

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


An Architecture for Interactive Context-Aware Applications

Kasim Rehman, Cambridge Systems Associates
Frank Stajano and George Coulouris, University of Cambridge Computer Laboratory

The interactive behavior of context-aware applications, unlike that of desktop applications, depends on the physical and logical context in which the interaction occurs. A new architecture derived from the Model-View-Controller paradigm models such context in the frontend, helping users better understand application behavior.

Many current context-aware applications suffer from three usability problems: They make inferences on the user's behalf without communicating the assumptions on which those inferences are based; they fail to provide feedback that would let users evaluate why something did or didn't work; and they don't give users control.

Although researchers are aware of such deficiencies,1,2 we still have difficulty implementing known design principles. In our view, there are two main reasons for this. First, there has been some reluctance within the community to employ visualization for context-aware applications due to technical difficulties,1 despite visualization's important role in Mark Weiser's original conception of ubicomp as "calm technology."3 Second, we're missing an interaction model for context-aware application design, such as Donald Norman's Seven Stages model4 for traditional HCI.

The main difference between traditional HCI design and context-aware design is how we deal with context. Hence, any interaction model for the latter must address the question of what context is.

In our work, we pick up on recent ubicomp community trends, drawing from sociology and focusing on interaction's communicative aspects (see the Related Work sidebar for comparisons to other significant approaches).2

So, starting from the premise that interaction is communication, we propose a new interaction model for context-aware applications. We then derive an architectural framework that developers can use to implement our interaction model. The main benefit of our architecture is that, by modeling context in the user interface, developers can represent the application's inferences visually for users.

Research context
Our research into indoor location-aware applications motivated us to create a UI for context-aware applications. In our lab, location-aware applications support researchers in their daily interaction with computing, communication, and I/O facilities by adapting to changes in user and object locations.

However, the user experience has remained suboptimal because of the three usability problems we noted earlier. Our approach introduces visual interaction using augmented reality (AR). With AR, we can show users—who wear head-mounted displays—visualizations anywhere in space.5


Visualization of context-aware applications is a challenge the research community is only beginning to explore. Our prototype uses an AR interface (based on a bulky headset that introduces its own usability problems), but our architecture is independent of the visualization technology. As a result, it would work equally well with projector-based visualization, PDAs, or displays embedded in the environment.

Our location-aware applications use the SPIRIT (Spatially Indexed Resource Identification and Tracking) backend.6 SPIRIT maintains a world model based on sensors' real-world observations. For location tracking, we use an indoor ultrasound location system that can track Active Bats (small tags). Each lab member is equipped with an Active Bat that can also act as an interaction device if its two small buttons are used. SPIRIT evaluates spatial relationships between objects and people in the world model. Applications can subscribe to be informed through events when certain relationships are fulfilled.
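A minimal sketch of what such a subscription might look like from the application's side. All class, method, and relationship names here are hypothetical illustrations of the publish-subscribe pattern just described, not SPIRIT's actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of SPIRIT-style event subscription: an application
// registers interest in a named spatial relationship and receives a
// callback when the backend's world model reports that it holds.
public class SpiritSubscriptionSketch {
    interface SpatialListener {
        void relationshipFulfilled(String user, String zone);
    }

    static class Backend {
        private final Map<String, List<SpatialListener>> subscribers = new HashMap<>();

        void subscribe(String relationship, SpatialListener listener) {
            subscribers.computeIfAbsent(relationship, k -> new ArrayList<>()).add(listener);
        }

        // Called internally when sensor observations satisfy a relationship.
        void fire(String relationship, String user, String zone) {
            for (SpatialListener l : subscribers.getOrDefault(relationship, List.of())) {
                l.relationshipFulfilled(user, zone);
            }
        }
    }

    public static void main(String[] args) {
        Backend spirit = new Backend();
        spirit.subscribe("user-near-workstation",
                (user, zone) -> System.out.println(user + " entered " + zone));
        spirit.fire("user-near-workstation", "alice", "workstation-12"); // simulated sensor event
    }
}
```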

Our interaction model
Figure 1 shows our interaction model, which includes the physical environment (left) and the application (right). Users interact with the application while performing their daily tasks. The application is sensitive to the user's context and interprets all interaction against the current context. When a user changes context, the application reacts by changing its frame of reference for interpreting the user's actions. Such context changes can occur through implicit interaction, which is interaction not directly targeted at the application. For example, in a context-aware communication application, explicit interaction includes tasks such as setting up the connection, sending media, and so on, while implicit interaction consists of moving to another room.

This interpretation of what context is makes sense if you compare it with what context is in everyday life. In a conversation, for example, context isn't a piece of information about a person or object, it's a frame of reference that people use to interpret content. Similarly, in our case, explicit interaction is the content, and the context is maintained through implicit interaction.

So, what of "object context"? It appears we have only modeled user context. In our interaction model, real-world objects don't have "context," even though they have physical properties. For example, does a pair of computer loudspeakers sitting on a desk have context? In our view, you can't talk about context without an interpreter. So, the speakers don't have context on their own, but they can form part of the context in interactions between the user and, say, a follow-me music application. If, with such an application enabled, the user moved into the speakers' vicinity, both the user and application would see the speakers as contextually relevant and assume this shared understanding of each other.

In other words, we're working with a phenomenological view of context.7 Our approach might seem alien, given the ubicomp community's prevailing positivist/engineering viewpoint, but as Paul Dourish has successfully argued, the positivist view misinterprets the way people use context in everyday life.7 Essentially, the positivist view is concerned with representing context, whereas the phenomenological view emphasizes that people create and maintain context during interaction. And so, we believe that humans perceive context as a property of interaction, rather than of objects or people.

This distinction is important because a clear definition of context is required to make communication between humans and context-aware applications more intelligible. In particular, our aim is to model context in the interface.

Architecture
Our architecture is based on the Model-View-Controller paradigm.8 Figure 2 shows the MVC derivative we used to build our interactive context-aware applications. We represent the application domain using a set of models. Some of these models represent physical objects, while others represent abstract data structures that the application uses. Later, we present an example application with models for both physical objects (Active Bats) and abstract data structures, such as a list. We assume that models of physical objects can access the backend to update their states when properties of the corresponding physical objects change. In our implementation, the views are visible AR avatars of the entity mapped to the model.5 MVC lets you represent a model in multiple ways by attaching multiple views.8


Figure 1. Our interaction model. The user's explicit interaction is always interpreted against the current context. Implicit interaction can lead to changes of context.


Although we haven't implemented this in our prototype, this architectural aspect might be exploited to support multiple alternative visualization technologies—such as AR goggles, projectors, and handheld position-sensitive PDAs serving as "lenses"—and let users quickly switch to the most convenient representation.
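A minimal sketch of the "multiple views per model" idea: each view observes the model and redraws when notified, so an AR overlay and a handheld "lens" could present the same state. All names are illustrative, not our toolkit's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Observer-style sketch: one model, several attached views, every view
// kept in sync whenever the model's state changes.
public class MultiViewSketch {
    interface View {
        void modelChanged(TeleportServiceModel model);
    }

    static class TeleportServiceModel {
        private boolean enabled;
        private final List<View> views = new ArrayList<>();

        void attach(View view) { views.add(view); }

        void setEnabled(boolean on) {
            enabled = on;
            for (View v : views) v.modelChanged(this); // keep every representation in sync
        }

        boolean isEnabled() { return enabled; }
    }

    public static void main(String[] args) {
        TeleportServiceModel model = new TeleportServiceModel();
        model.attach(m -> System.out.println("AR overlay: teleporting " + (m.isEnabled() ? "On" : "Off")));
        model.attach(m -> System.out.println("PDA lens: teleporting " + (m.isEnabled() ? "On" : "Off")));
        model.setEnabled(true); // both representations update at once
    }
}
```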

The top of figure 2 shows a chain of context components; the application will use one of these components for each user context it wants to react to. For location-aware computing, the user's location determines the user's context, but the context evaluation function can account for other environmental factors as well.

Implicit and explicit event handling
When context components register for events from the backend, they receive notifications about implicit and explicit interaction events (a designer-specified distinction). The backend then delivers streams of these events.

If a component receives an implicit interaction event, it calls its evaluation function, which then checks the current environmental conditions against the conditions that characterize the context it represents. Notably, this evaluation function can account for the environment's physical properties—that is, the properties traditionally called "context." If the function returns false, the context represented by this component is not the current one. The component thus forwards the event to the chain's next context component. This continues until one context evaluates to true. The corresponding context component then becomes active, activates its controllers, and initiates an action to deactivate all other context components.

This is because, at any given time, we want the application to interpret our explicit interaction against exactly one frame of reference (context). The dotted lines in figure 2 show the references through which the activation and deactivation functions effect changes in the views.

When an explicit interaction event enters the chain, components forward the event through the chain until it reaches the active context component. The active context component then forwards the event down a level to its controllers. So, only controllers attached to the active context receive and interpret explicit interaction events, because all deactivated context components ignore and forward them. Explicit interaction events change model components. Such changes are instantly reflected in the view, which updates whenever the model changes.
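The following sketch condenses the two event paths just described into a chain-of-responsibility: implicit events travel down the chain until one component's evaluation function returns true and that component becomes the single active context; explicit events are forwarded until they reach the active component, which hands them to its controllers. All names and the event encoding are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class ContextChainSketch {
    interface Event {}
    record ImplicitEvent(String location) implements Event {}
    record ExplicitEvent(String button) implements Event {}

    static class ContextComponent {
        final String name;
        final Predicate<ImplicitEvent> evaluate;              // does this context hold now?
        final List<Consumer<ExplicitEvent>> controllers = new ArrayList<>();
        ContextComponent next;                                // next link in the chain
        boolean active;

        ContextComponent(String name, Predicate<ImplicitEvent> evaluate) {
            this.name = name;
            this.evaluate = evaluate;
        }

        void handle(Event e, List<ContextComponent> chain) {
            if (e instanceof ImplicitEvent ie) {
                if (evaluate.test(ie)) {
                    chain.forEach(c -> c.active = false);     // exactly one active context at a time
                    active = true;
                    System.out.println("Context now: " + name);
                } else if (next != null) {
                    next.handle(e, chain);                    // pass it down the chain
                }
            } else if (e instanceof ExplicitEvent ee) {
                if (active) controllers.forEach(c -> c.accept(ee)); // only active controllers interpret
                else if (next != null) next.handle(e, chain);
            }
        }
    }

    public static void main(String[] args) {
        ContextComponent inside = new ContextComponent("inside teleport region",
                ie -> ie.location().equals("desk-3"));
        ContextComponent outside = new ContextComponent("outside teleport region",
                ie -> true);                                  // catch-all context
        inside.next = outside;
        List<ContextComponent> chain = List.of(inside, outside);
        inside.controllers.add(ee -> System.out.println("connect next desktop on " + ee.button()));

        inside.handle(new ImplicitEvent("desk-3"), chain);    // activates "inside"
        inside.handle(new ExplicitEvent("top"), chain);       // interpreted by inside's controller
    }
}
```

The design point is that the controllers attached to a context contain no evaluation code at all; the chain alone decides which frame of reference is current.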

We write one controller per model for each context modeled in the application. These controllers encapsulate how the model reacts to interaction in each context. We then attach the controllers to the corresponding context components. The key is that each controller only "wakes up" when its context becomes active. Because we use many controllers, we can separate context handling from interaction handling—the controllers contain no context evaluation code. A context can be activated again if it receives an implicit interaction event that makes its evaluate function evaluate to true. (Even deactivated context components monitor implicit interaction events; the components simply pass them on if they evaluate to false.)

The result is context-aware interaction modeled in architecture. Interaction is always interpreted according to its context.

What this architecture seeks to enforce
Our architecture attempts to make the communication between the user and application more intelligible in several ways.

The designer's model is communicated. When modeling the world, application designers inevitably use approximations and assumptions such as, "if X is here, he wants to do Y." In previous work, we explored how not communicating such assumptions affected usability in a well-known location-aware application.5 When we presented a visual image of the actual model that the application designer used to build the application, the users understood the application's capabilities and limitations to a degree they themselves had often not expected.


Figure 2. The context-aware application architecture. Implicit interaction events are passed on from one context component to another until a context activates itself upon receiving the event. Only the active context component (shown in gold) passes explicit interaction events to controllers.


So, how can an MVC architecture help communicate the designer's model? First, it helps designers elicit the model they're using for the application by forcing them to specify the application using model objects. Second, it helps them communicate it by requiring that view objects remain faithful representations of model objects at all times. As we show later in our example, the application components effectively and elegantly communicate their operation to the user as the application reacts to the user's movement and interaction.

Context is not a piece of information, but rather a frame of reference. In our architecture, we use context to interpret interactions. Each time the context changes, the framework switches the set of active controllers, whose function is to interpret user interaction. In other words, our architecture does not implement "if x, do y," but rather, "as long as x, interpret all interactions taking place as relating to x." This is closer to our understanding of context in real life.

The current context—and entry and departure from it—is always visible. Victoria Bellotti and Keith Edwards first made this demand.9 In our framework, we designed the controllers' activation and deactivation functions to change the view when the context changes.

The system state is always visible and instant feedback is always provided. We represent all application entities as models. Because each model has a view, the system state is always visible. More importantly, the system instantly updates the view as the system state changes.

An example of visual interaction design
Our design process consists of two parts: creating an application domain model and designing the visual interaction. Here, we concentrate on the latter because creating a domain model is part of standard object-oriented design.

Our example application is a location-aware desktop teleporting application. Using it, people can spontaneously approach any computer in our lab, push a single button on their Bats, and thereby "teleport" their personal desktops to the computer.


Related Work

Paul Dourish believes that applications should always understand context as the work practices of humans.1 The difference between our interaction model and Dourish's model is that we've left the sociological domain and tried to apply his context interpretation to human-computer communication.

Victoria Bellotti and her colleagues proposed the idea of developing an interaction model that highlights interaction's communicative aspects.2 In designing our model, we use their comparison of human-computer interaction with human-human interaction.

More recently, Albrecht Schmidt3 has advocated introducing design principles to application building through "implicit human-computer interaction" (iHCI). Schmidt contrasts this with traditional HCI, wherein all interaction is explicit. Although we use this distinction of implicit and explicit interaction, our interaction model follows the conversational metaphor more closely.

Microsoft's EasyLiving project4 also aims to build interactive context-aware applications. Although its overall aims are similar to ours, we're more concerned with understanding what context means for humans and trying to implement this in systems.

Dourish has also elaborated on the importance of communicating the designer's model.5 He regards system design as a communication act between the designer and user that's intelligible only if there is a set of mutually agreed upon facts. For designers, "making a system usable" includes communicating "relevant aspects of the designer's model" to the user.5

Finally, Roy Want and his colleagues were instrumental in forwarding the context-aware computing concept.6,7

REFERENCES

1. P. Dourish, "What We Talk About When We Talk About Context," Personal and Ubiquitous Computing, vol. 8, no. 1, 2004, pp. 19–30.

2. V. Bellotti et al., "Making Sense of Sensing Systems: Five Questions for Designers and Researchers," Proc. Computer-Human Interaction (CHI), ACM Press, 2002, pp. 415–422.

3. A. Schmidt, "Interactive Context-Aware Systems—Interacting with Ambient Intelligence," Ambient Intelligence, G. Riva et al., eds., IOS Press, 2005, pp. 159–178.

4. B. Brumitt et al., "EasyLiving: Technologies for Intelligent Environments," Proc. 2nd Int'l Symp. Handheld and Ubiquitous Computing (HUC 00), LNCS 1927, Springer, 2000, pp. 12–29.

5. P. Dourish, Where the Action Is, MIT Press, 2001.

6. R. Want et al., "An Overview of the Parctab Ubiquitous Computing Experiment," IEEE Personal Communications, vol. 2, no. 6, 1995, pp. 28–43.

7. B. Schilit, N. Adams, and R. Want, "Context-Aware Computing Applications," Proc. 1st Ann. Workshop Mobile Computing Systems and Applications (WMCSA), IEEE CS Press, 1994, pp. 85–90.


Once the user presses the button, the location-aware application running in the background infers which computer the person is going to use and connects it to a server holding the user's desktop. If users have several desktops, all are potential teleport candidates. Finally, users can override this facility by turning the teleporting service on or off. We reengineered the teleport application to give it a visual interface, using AR overlays in space and on the Active Bats.

Generally, our location-aware applications take over Bats as interaction devices. So, our aim was to design an interface that clearly shows users how our application interprets explicit (Bat) interaction. The application's detailed workings are as follows: Users press the bottom button to enable teleporting. If users move into a teleport region, they can use the top button to first connect to the server and then to scroll through their desktops on the server. If they move outside of the teleport region, pressing the top button will have no effect. If (at any time) they press the bottom button, teleporting will be disabled and all subsequent top button presses have no effect, regardless of whether users are inside or outside the teleport region. Even this small application presents a reasonable design problem. To keep the user informed, the interface must track and show two different kinds of state transitions in parallel: transitions in the domain model occurring as an effect of explicit interaction (top button pressed, bottom button pressed) and changes of context.

From the description, we see that the application's behavior is characterized by whether

• teleporting is on or off,
• the user is inside or outside a teleporting region, and
• the nearby computer is connected to the server.

Figure 3 shows how we specify the interaction in controllers. We assume that three models have been previously identified:

• the Bat model, which describes the Bat;
• the desktops model, which lists the user's desktops; and
• the teleport service model, which describes the teleport service state.

Because we're dealing with two contexts (inside and outside a teleport region), we had to design six controllers. The three possible actions—top button pressed, bottom button pressed, and context changed—map to the controller functions topBatButtonPressed, bottomBatButtonPressed, and activate, because a context change is always accompanied by controller activation. We use the batMoved function to update the Bat's position for the AR overlay. The system calls the functions in rows 1–3 when the corresponding explicit interaction event from the backend reaches the controller; it calls the functions activate and deactivate whenever the context that holds the particular controller is entered or left.

Each of the six controllers operates on one of three model-view pairs. Looking at cell B1 we see that when the user is in the teleport region and presses the top button with teleporting enabled, the controller initiates an action to connect the next desktop.


Figure 3. Controller design. The columns (A–F) represent controllers and the rows (1–5) represent functions in those controllers that the framework invokes in response to events.

| # | Function | A: Bat controller (inside teleport region) | B: Desktops controller (inside) | C: Teleport service controller (inside) | D: Bat controller (outside teleport region) | E: Desktops controller (outside) | F: Teleport service controller (outside) |
|---|---|---|---|---|---|---|---|
| 1 | topBatButtonPressed() | – | If teleporting is active, connect next desktop | – | – | – | – |
| 2 | bottomBatButtonPressed() | – | – | Toggle teleporting state | – | – | Toggle teleporting state |
| 3 | batMoved(x, y, z) | Update Active Bat position | – | – | Update Active Bat position | – | – |
| 4 | activate() | – | If active and connected, maximize view; if active and not connected, minimize view; if teleporting is not active, deactivate view | – | – | – | – |
| 5 | deactivate() | – | Deactivate view | – | – | – | – |


This happens by effecting a change in the desktops model; this eventually results in an update of the desktops view. The desktops view consists of a label for the button that controls the desktops ("Desktop>>") and a menu that lists the user's desktops.

From cell B4, we see that whenever the user enters the teleport region and is connected to a desktop, the system displays the menu and label. However, if the user has just entered the region and is not yet connected, the menu is not displayed (because the user has no desktops to choose from). If teleporting is inactive, the button isn't functional, so the system displays neither the button nor the menu. Figure 4 shows these maximized, minimized, and deactivated states. Finally, if the user leaves the teleport region, the system deactivates the view because teleporting is no longer available (cell B5).
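A sketch of what the inside-region desktops controller could look like, mirroring column B of figure 3 (cells B1, B4, and B5). The model and view types are illustrative stand-ins, not the paper's actual classes:

```java
// Illustrative domain model: teleporting state plus the user's desktops.
class DesktopsModel {
    boolean teleportingActive = true;
    boolean connected = false;
    int current = 0;
    final String[] desktops = {"MyDesktop1", "MyDesktop2", "MyDesktop3"};

    void connectNext() { connected = true; current = (current + 1) % desktops.length; }
}

// Illustrative view with the three states shown in figure 4.
class DesktopsView {
    void maximize()   { System.out.println("show Desktop>> label and desktop menu"); }
    void minimize()   { System.out.println("show Desktop>> label only"); }
    void deactivate() { System.out.println("hide label and menu"); }
}

class InsideDesktopsController {
    final DesktopsModel model;
    final DesktopsView view;

    InsideDesktopsController(DesktopsModel m, DesktopsView v) { model = m; view = v; }

    // Cell B1: if teleporting is active, connect the next desktop.
    void topBatButtonPressed() {
        if (model.teleportingActive) model.connectNext();
    }

    // Cell B4: on entering the region, reflect the current interaction state.
    void activate() {
        if (!model.teleportingActive) view.deactivate();
        else if (model.connected)     view.maximize();
        else                          view.minimize();
    }

    // Cell B5: on leaving the region, teleporting is no longer available.
    void deactivate() { view.deactivate(); }
}

public class ControllerSketch {
    public static void main(String[] args) {
        DesktopsModel model = new DesktopsModel();
        InsideDesktopsController ctl = new InsideDesktopsController(model, new DesktopsView());
        ctl.activate();            // entered region, not yet connected -> minimized
        ctl.topBatButtonPressed(); // cell B1: connect next desktop
        ctl.activate();            // re-entry while connected -> maximized
    }
}
```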

When the Bat moves, the Bat controller updates the Bat model regardless of whether it is inside or outside the teleport region (cells A3 and D3). In our system, the Bat view is always a cuboid Bat overlay indicating where the system "sees" the Bat.

When a user presses the bottom button, the teleport service model updates regardless of whether the button press occurred inside or outside the teleport region (cells C2 and F2), eventually changing the view. The teleport service view displays a button label that shows whether the service is on or off.

The desktops view's design is perhaps complicated, but it not only reflects the functionality available to users at any point; it also serves another purpose. When there's a change of context, we must inform users in what explicit interaction state they are entering the new context. By looking at the interface, users can deduce the current context and interaction state.

Discussion
First and foremost, this is a frontend architecture. It therefore complements rather than competes with backend architectures that focus on the context acquisition and abstraction process, such as the Context Toolkit10 or SPIRIT. Our architecture aims to construct a UI layer between users and the backend that communicates the system's inference process. We now contrast our architecture with how developers previously built applications using SPIRIT.

Architectural benefits
Before using our architecture, developers would build the teleport application in a monolithic manner. It was a piece of application logic that the backend would call if it sensed a user action. Then, for each possible case of action, the application would check for all possible combinations of conditions that could affect its response, such as, for example, whether the user was inside or outside a teleport region. This was done using nested IF statements. One problem is that this kind of cross-checking exponentially increases control flow complexity as the application is required to account for additional conditions.

Our architecture succeeds in reducing such condition-checking code by abstracting the context-tracking and event-handling code common to all applications built within this framework. For example, it removes the need for the application developer to check what kind of event was received. This step is now performed in a "SPIRIT controller" superclass, resulting in a virtual function call of the event's handler. Similarly, developers no longer have to check for the current context for every action. Rather, they simply write a controller for each context and attach it to the corresponding context component, which ensures that the code piece runs only when the corresponding context is active. Developers still need to test domain-model-related state variables using IF statements, but each controller tests only variables that affect its model-view pair.
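A sketch of the kind of dispatch such a superclass could perform: the superclass decodes each event once and invokes the matching overridable handler, so application controllers never test event kinds themselves. The event encoding and all names are illustrative, not the toolkit's real interface:

```java
// Superclass-level dispatch replacing the application's nested IFs.
abstract class SpiritControllerSketch {
    // Single entry point the framework calls for every backend event.
    final void dispatch(String kind, double[] position) {
        switch (kind) {
            case "topButton"    -> topBatButtonPressed();
            case "bottomButton" -> bottomBatButtonPressed();
            case "batMoved"     -> batMoved(position[0], position[1], position[2]);
            default             -> { /* unknown event kinds are ignored */ }
        }
    }

    // Subclasses override only the handlers relevant to their model-view pair.
    protected void topBatButtonPressed() {}
    protected void bottomBatButtonPressed() {}
    protected void batMoved(double x, double y, double z) {}
}

class DemoController extends SpiritControllerSketch {
    @Override protected void topBatButtonPressed() {
        System.out.println("connect next desktop");
    }

    public static void main(String[] args) {
        new DemoController().dispatch("topButton", null); // routed without any application IFs
    }
}
```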

Our architecture's real power emerges when developers prototype UIs. To change how the application responds to events, developers need only edit the corresponding controllers, rather than find all affected IF branches. We've essentially automated the process of finding the correct branch when an event arrives. So, what developers once had to tediously program now occurs at the superclass level; developers simply implement appropriate virtual functions to fit their individual applications.

However, our architecture's benefits over the traditional implementation exceed mere refactoring. It's comparable to traditional UI architectures such as MVC.8 As such, it lets developers use object-oriented design to construct and communicate domain models. Furthermore, it provides a set of abstractions—most notably the context component—to simplify the process of communicating the application's workings to the user.


Figure 4. Three sets of views that users would see through AR goggles. The current context and explicit interaction state fully determine the look of the interface in each case. Case 1 (inside, teleporting on, connected): the desktop view is maximized, showing the "Desktop>>" label and a menu of the user's desktops (MyDesktop1, MyDesktop2, MyDesktop3), with the teleport service view reading "Teleporting is On." Case 2 (inside, teleporting on, not connected): the desktop view is minimized, showing the label only. Case 3 (inside, teleporting off): the desktop view is deactivated and the service view reads "Teleporting is Off."


Using the architecture
Our architecture underlies a toolkit we've used to build visually interactive location-aware applications. The toolkit itself requires system support,5 including a ubicomp backend (such as SPIRIT), interface components between the backend and the application that tag events based on interaction type, and a rendering engine (which, in our system, is AR-based).

To build an application, you first model the domain using object-oriented analysis. Then, you implement models for each domain object, making use of the facilities your ubicomp infrastructure provides. For each identified model, you write controllers and views. It might seem that writing so many controllers is a lot of work. But, as we showed in the monolithic version comparison, the controllers contain no more code than they'd need in the monolithic version anyway. In our case, we simply distribute that code among many more alternately activated components.
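A sketch of those build steps as wiring code; every type here is a minimal illustrative stub standing in for the toolkit's real facilities:

```java
public class TeleportAppWiring {
    static class Model {}                                   // domain object state
    static class View { View(Model m) {} }                  // visible representation of a model
    static class Controller { Controller(Model m) {} }      // per-model, per-context behavior
    static class ContextComponent {
        void attach(Controller c) {}                        // controller wakes up with this context
    }

    public static void main(String[] args) {
        // 1. Model the domain using object-oriented analysis.
        Model bat = new Model(), desktops = new Model(), teleportService = new Model();

        // 2. Give each model a view so the system state stays visible.
        new View(bat); new View(desktops); new View(teleportService);

        // 3. Write one controller per model per context and attach it to
        //    the corresponding context component (six controllers here,
        //    matching the teleport example).
        ContextComponent inside = new ContextComponent(), outside = new ContextComponent();
        for (Model m : new Model[] { bat, desktops, teleportService }) {
            inside.attach(new Controller(m));
            outside.attach(new Controller(m));
        }
    }
}
```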

In developing this architecture, we wanted to show users how the application uses context. Existing toolkits for context-aware applications can take abstraction too far, architecturally separating the inference-managing components from the application itself.9 Such components then make inferences without knowing how applications will use those inferences. Similarly, the applications don't know how the inferences were arrived at and thus can't communicate this to users. Our architecture successfully keeps the context-tracking process within the application.

Further development
How could we further develop this architecture? Our architecture can account for physical information from various sources when it evaluates the context. However, we still must provide a context component for each context-variable combination to determine the contextual state. This can exponentially increase the number of context components as the number of context variables grows. To counter this, we might be able to develop more complex—and possibly hierarchical—context components. However, the problem with introducing a hierarchical organization is that the context that people create and maintain in daily interactions isn't hierarchical.

Perhaps a greater limiting factor on the number of possible context variables is the number of dependencies users can actually understand. Also, we're not trying to model all information that the system can sense about the world (the backend can manage that). Our goal is to model only the context relevant to a particular application's interactions.

Another limit of our framework is that it assumes only one user. This is actually a limitation of our interaction model: we've looked at conversation as a one-to-one activity. We are planning to research how context is established and maintained in real-world multiparty conversations so we can model it accordingly.

We have derived our framework from an interaction model that was inspired by a particular sociological view of what context means for people in everyday life. So, your decision about whether to use the architecture must be partially based on how much of this view you accept. We believe our architecture is beneficial if you

• believe that context is a property of interaction, rather than objects or people,7
• want your applications to model this view, and
• want to visually convey to users how the application is using context.

If you believe that context is synonymous with physical information, you'll have difficulty applying this architecture. Applications such as electronic reminders that trigger only when users enter particular locations are called "context-aware" on this basis. To exploit our architecture's full potential, however, your application must contain a large interaction component and adapt its look and feel to different contexts. After all, it's an architecture for interactive context-aware applications.

ACKNOWLEDGMENTS

We are grateful to Andy Hopper for providing vision and support throughout this research project.

REFERENCES

1. K. Rehman, F. Stajano, and G. Coulouris, "Interfacing with the Invisible Computer," Proc. 2nd Nordic Conf. Human-Computer Interaction (NordiCHI), ACM Press, 2002, pp. 213–216.




2. V. Bellotti et al., "Making Sense of Sensing Systems: Five Questions for Designers and Researchers," Proc. Computer-Human Interaction (CHI), ACM Press, 2002, pp. 415–422.

3. M. Weiser and J.S. Brown, "The Coming Age of Calm Technology," Xerox PARC, 5 Oct. 1996; www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm.

4. D.A. Norman, The Design of Everyday Things, MIT Press, 1989.

5. K. Rehman, F. Stajano, and G. Coulouris, "Visually Interactive Location-Aware Computing," Proc. 7th Int'l Conf. Ubiquitous Computing (Ubicomp), LNCS 3660, Springer, 2005, pp. 177–194.

6. A. Harter et al., "The Anatomy of a Context-Aware Application," Proc. 5th Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking (Mobicom), ACM Press, 1999, pp. 59–68.

7. P. Dourish, "What We Talk About When We Talk About Context," Personal and Ubiquitous Computing, vol. 8, no. 1, 2004, pp. 19–30.

8. G. Krasner and S. Pope, "A Description of the Model-View-Controller User Interface Paradigm in the Smalltalk-80 System," J. Object-Oriented Programming, vol. 1, no. 3, 1988, pp. 26–49.

9. V. Bellotti and K. Edwards, "Intelligibility and Accountability: Human Considerations in Context-Aware Systems," Human-Computer Interaction, vol. 16, nos. 2–4, 2001, pp. 193–212.

10. A. Dey, G.D. Abowd, and D. Salber, "A Context-Based Infrastructure for Smart Environments," Managing Interactions in Smart Environments (MANSE 99), ACM Press, 1999, pp. 114–128.



THE AUTHORS

Kasim Rehman is an associate at Cambridge Systems Associates, where he works on financial software. His research interests include interaction design and visualization. He received his PhD in ubiquitous computing from the University of Cambridge. Contact him at Cambridge Systems Associates, 5-7 Portugal Place, Cambridge, UK; [email protected].

Frank Stajano is a tenured lecturer (associate professor) at the University of Cambridge Computer Laboratory. His research interests include computer security, ubiquitous computing, and privacy in the electronic society. He received his PhD in computer security from the University of Cambridge. He is the author of Security for Ubiquitous Computing (Wiley, 2002). Contact him at the Univ. of Cambridge Computer Laboratory, 15 J.J. Thomson Ave., Cambridge CB3 0FD, UK; [email protected]; www.cl.cam.ac.uk/~fms27.

George Coulouris is an emeritus professor of computer systems at the University of London and a senior visiting fellow in the University of Cambridge Computer Laboratory. His research interests include ubiquitous and sentient systems. He received a BSc in physics at University College London. Contact him at the Digital Technology Group, Computer Laboratory, Cambridge Univ., William Gates Building, Cambridge CB3 0FD, UK; [email protected].

