
Int. J. Human-Computer Studies 69 (2011) 496–508

Using behavioral patterns to assess the interaction of users and product

Stefanie Harbich a,*, Marc Hassenzahl b

a E D SGA-EM R&D, SIEMENS AG, Grundlacher Str. 260, 90765 Fürth, Germany
b User Experience and Ergonomics, Faculty of Design, Folkwang University of the Arts, Universitätsstraße 12, 45141 Essen, Germany

Received 8 February 2010; received in revised form 23 February 2011; accepted 23 March 2011

Communicated by S. Wiedenbeck

Available online 30 March 2011

Abstract

We hypothesized that users show different behavioral patterns at work when using interactive products, namely execute, engage, evolve and expand. These patterns refer to task accomplishment, persistence, task modification and the creation of new tasks, each contributing to the overall work goal. By developing a questionnaire measuring these behavioral patterns, we were able to demonstrate that these patterns do occur at work. They are not influenced by the users alone, but primarily by the product, indicating that interactive products are indeed able to support users at work in a holistic way. Behavioral patterns are thus accounted for by the interaction of users and product.

© 2011 Elsevier Ltd. All rights reserved.

Keywords: Evaluation method; Behavior; HCI; Motivation; Interaction

1. Introduction

A new concept emerges in the field of Human–Computer Interaction: the User Experience (UX), described and summarized in a number of approaches (e.g., Forlizzi and Battarbee, 2004; Hassenzahl, 2010; Hassenzahl and Tractinsky, 2006; McCarthy and Wright, 2004). While those approaches differ in detail, they all share a holistic notion of the interaction between user and product, thereby extending the task-oriented view beyond the mere instrumental. They emphasize the experiential with its situational, temporal and emotional aspects and the interplay of thinking, feeling and doing. Hassenzahl (2010), for example, bases his notion of experience on action theories, such as Activity Theory (Kaptelinin, 1995; Kuutti, 1995). It highlights the object-orientedness of activities as well as their mediation by tools, shaped and developed by usage, which in turn shape people's activities. Activity Theory and many other action theories postulate a hierarchical structure of activity, which can well be applied to the workplace. At work, several levels of goals have to be fulfilled. On the lowest level are operations on a sensorimotor level, such as clicking on a button (i.e., operations). They are part of higher-level, self-contained goals (i.e., actions), such as completing a spreadsheet analysis. Activities then consist of several of such goals and are motivated, for example, by the aspiration to be a good clerk (see also Carver and Scheier, 1998). To fulfill high-level goals, plans are defined (Miller et al., 1970), either to some extent beforehand or as action unfolds. There may even be several plans to choose from, each leading to the goal, but at different costs and with different results (e.g., Stock and Cervone, 1990). While action per se was always in the focus of Human–Computer Interaction, the inclusion of high-level goals and aspirations, motives and psychological needs, and related emotions and experiences is new. It is nevertheless a necessary step, especially important for the evaluation of interactive products. The object of investigation must be extended beyond product qualities to the users and their individual perception of and experience with the product. According to Carver and Scheier (1998), experience includes the users' behavior and is, thus, accessible through the users' behavior. In Information Science there has likewise been a shift from a system-centered approach to a person-centered

www.elsevier.com/locate/ijhcs

1071-5819/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.

doi:10.1016/j.ijhcs.2011.03.003

*Corresponding author. Tel.: +49 911 654 2417; fax: +49 911 654 3003.

E-mail addresses: [email protected], [email protected] (S. Harbich), [email protected] (M. Hassenzahl).


approach (Wilson, 2000). Ellis' (1989) behavioral model of information seeking strategies, for example, is concerned with behaviors like browsing, i.e., semi-directed searching, or extracting, i.e., selectively identifying relevant material. In our opinion, it is a promising approach to evaluate the interaction of user and product by taking into account the behavioral patterns users reveal when working with a product and the thoughts and opinions around these patterns.

In previous work (Harbich and Hassenzahl, 2008), we have focused on the work context, a context surprisingly neglected by User Experience research, as a recent review concluded (Bargas-Avila and Hornbæk, 2011). We identified four sets of behavioral patterns related to the use of interactive products at work: execute, engage, evolve and expand. We start with characterizing those sets in more detail. After formulating our research questions, we summarize our initial work aimed at developing a questionnaire. We will then describe an analysis of the behavioral patterns in our main study, present its results and suggest the implications for our questions.

2. Execute, engage, evolve, expand

In the domain of usability, it is widely accepted that interactive products can support users in changing their behavioral patterns and feelings for the better when executing tasks. In our opinion, other aspects of the work context are capable of such assistance, too. As described above, people have goals on different hierarchical levels and plan their steps to achieve these goals. To fulfill the higher-level goals, "be-goals" in the terms of Carver and Scheier (1998), people even have to generate lower-level goals themselves, as well as the plans to achieve them. To be a popular and well-accepted employee (be-goal) in marketing, for example, I might have to succeed in implementing the advertising campaign for a new film. Related lower-level goals (do-goals) might be to create film posters, to devise the storyboard for the TV commercial and to design toys for merchandising. The creation of the film posters again can be broken down into several steps, one of them being the assembly of photos, title, textual information, etc. Inserting a photo into the poster, scaling it and placing it are even lower-level goals that again can be broken down into single operations, such as selecting a photo by clicking on it, etc. Miller et al. (1970) do not speak of goals in this context but refer to plans that are hierarchically organized. These plans become more detailed the closer they are to the present situation (see also Cropanzano et al., 1995; Oesterreich, 1981). From a traditional usability perspective, an interactive product has to allow its users to pursue existing plans, that is, to complete their tasks (i.e., lower-level goals like inserting a photo) effectively, efficiently and with satisfaction (International Organization for Standardization, 1998, 2006). Together with adequate functionality, such interactive products enable users to implement the behavior necessary for pursuing their plans and attaining the low-level goals.

We call this behavior 'execute'. It is the first of a set of four behavioral patterns we assume useful for achieving the high-level goal of doing a good job with the aid of interactive products.

Our second behavioral pattern is called 'evolve'. To achieve a goal, regardless of its hierarchical level, people have to formulate a plan consisting of single steps. When doing routine tasks, they can make use of existing plans, but when they try to achieve a new goal, people have to make a new plan. To work out the details of the plan, people need to know as many alternatives and constraints as possible to attain the goal effectively and efficiently. For interactive products, this means knowing about most functions of the product and being able to use them. Only when users are aware of their possibilities are they able to tap the full potential of their product and therefore able to efficiently figure out the best plan. By using newly discovered functions, users may even reformulate their goals (Carroll et al., 1991) and evolve not only the plans to reach their lower-level goals but their higher-level goals as well, and thus their work in general. A higher-quality goal may come into reach, enabling users to enhance the quality of their work without increasing their effort, simply by choosing the best plan. For instance, when inserting a photo into the film poster, the plan might be to open the photo, scale it and position it (which can be broken down even further, of course). Those are several steps containing various possible sources of errors. The interactive product might provide a function for defining different areas and for automatically scaling the photo after it has been assigned to an area. First, the photo then has the right size and resolution, and, second, when trying another photo for the poster, it simply has to be substituted without the subsequent steps. This alteration of the lower-level goals of opening, scaling and positioning can result in a more efficiently reached goal and even in a better accomplished goal because of the easy possibility to try different photos, if the users know their interactive product well. So, in our opinion, interactive products are able to support their users not only by allowing them to pursue their goal as they had planned (execute), but also by helping them devise that plan, modify their goals and improve in efficiency and quality (evolve).

As described, we expect users to modify their existing goals because of functionality provided by their interactive product. However, goals such as creating a film poster are not always given at work. Employees may only be given the higher-level goal of implementing the advertising campaign for a new film and are left to figure out on their own how to realize it. Of course, they do not always have to invent new ways to do this but can rely on established methods. Nevertheless, sometimes it may be necessary to come up with a new, innovative lower-level goal to accomplish the higher-level goal (Hacker, 1986). If employees succeed outstandingly with this new method, they complement the be-goal of being a "good" clerk. Interactive products can help them by being flexible enough to allow implementing the new methods. Instead of the traditional methods of advertising, the employee might, for example, discover the functionality of embedding graphics as an overlay into the satellite view of Google Earth to make it available via the Internet, as was done for the film Pirates of the Caribbean: Dead Man's Chest (The Walt Disney Internet Group, 2009). A realistic-looking island in the shape of a skull was placed into the sea, with rotating gold coins on it opening different Internet pages with further information about the film or a lottery. Using this feature of Google Earth for advertising purposes was both an innovative new goal for the employee and an innovative new form of usage for the interactive product. The scope of both the user's work and the interactive product has been expanded. For Google Earth, the use of overlays is not extraordinary, but using them for advertising purposes is an innovative application that even the software developers might not have intended. The concept of innovativeness plays a major role in Diffusion of Innovations Theory (Rogers, 1995). Innovators are the first to buy a new product on the market and allow others to adapt to their behavior. Hirschman (1980) extended this theory and introduced use-innovativeness, which describes the deployment of a previously adopted product to solve a new problem. This is similar to our third behavioral pattern, called 'expand'. It describes the invention of new higher-level goals with the help of interactive products, which offer enough flexibility to implement the new goal or even inspire the invention of a new goal. With execute, evolve and expand, we cover most aspects from plans and low-level goals to higher-level goals, namely pursuing low-level plans without impairment (execute), formulating plans and eventually modifying goals (evolve) and even inventing new goals (expand) by the use of interactive products.

However, we have not covered one important aspect yet. To pursue their plans and to reach, evolve and expand low- and high-level goals, users have to be motivated. When they are motivated, they do not avoid the execution of their tasks or finish with the least possible effort (and corresponding results, see Norman, 2004). In the above example, the employee might, for instance, settle for the first photo inserted into the film poster without trying other alternatives. With more motivation, by contrast, he might have tried other photos as well and would have created a much nicer film poster. Intrinsic motivation enhances well-being in general, job satisfaction and performance (Baard et al., 2004; Gagné and Deci, 2005). It can be evoked by the fulfillment of basic needs (Ryan and Deci, 2000), i.e., the need for competence, the need for autonomy and the need for relatedness, forming a holistic approach similar to Hassenzahl's (2003) "hedonic aspects" in the context of interactive products. Isen et al. (1991) induced positive affect in medical students and let them decide which patient out of six might have cancer on the basis of a set of data. The positive-affect group performed as well as the neutral control group but reached their decision earlier and, thus, achieved their goal more efficiently. Additionally, they went beyond their assigned task, mentioned diagnoses for other patients and even considered treatments in some cases. In other words, they made use of the extra time gained by their fast decision process and were able to evolve their initial goal into a higher-level goal. In the context of interactive products, intrinsic motivation can mean examining interactive products to discover their functionality and using them in a playful way to be able to expand their scope. Google Earth, for example, is a very intriguing product, inviting exploration of the earth's surface and of the product itself, which has led to the expansion of its scope as described. Studies show that perceived enjoyment while working with a product can lead to more usage, even in spite of usability problems (Davis et al., 1992; Igbaria et al., 1994). Hence, we suggest that, when users are motivated at work, they show our three hypothesized behavioral patterns execute, evolve and expand more easily: they pursue plans, devise plans to reach or even exceed goals and invent new lower-level goals to complement higher-level goals. Being motivated thus forms our fourth behavioral pattern, called 'engage'. With it, our model of the four patterns execute, engage, evolve and expand, called e4, is complete.

With this study, we want to explore the model and examine whether the hypothesized behavioral patterns occur at all in connection with interactive products. Our approach differs from most approaches in HCI in that we do not ask the users about their perceptions of the product. Instead, we focus on their self-reported behavior. In our opinion, interactive products can help users to show the described behavioral patterns. Yet, one might argue that it is either the users themselves who account for their behavior through their individual user characteristics, or that it is the product that, through its product attributes, elicits and shapes the behavioral patterns of its users. If it were solely the users that accounted for their behavior, this would mean that interactive products could add nothing to facilitate work, which would contradict every theory of Human–Computer Interaction (e.g., Fogg and Nass, 1997; International Organization for Standardization, 1998; Nielsen, 1993; Shackel, 1991). Thus, our second question is what causes the behavioral patterns.

In an initial study, we developed a short questionnaire to be able to address two questions of our main study: Do the supposed behavioral patterns occur at work? Do they depend on the users, the products or both?

3. Initial work

When evaluating the interaction between users and interactive products, a wide range of different behaviors plays a role. Some of them can be observed frequently from the very beginning, but some behavioral patterns will only rarely occur and some are almost impossible to observe directly. Especially when investigating a large population of users and products and rare behaviors, we have to rely on the users' self-reports (Howard, 1994). An appropriate approach might have been to use momentary assessment techniques similar to the Experience Sampling Method (Csikszentmihalyi and Larson, 1987) or the Day Reconstruction Method (Kahneman et al., 2004), which better take into account the characteristics of human cognition and the ability to correctly recall the execution of a behavior by avoiding or minimizing retrospection. Yet we wanted to construct a method that can be easily used by researchers without much effort and that helps us examine several hundred users. For these purposes a questionnaire seemed more appropriate. Thus, in the first pilot study, we developed a short questionnaire to obtain users' behavioral patterns and to understand the relation of these behavioral patterns to user characteristics and product attributes, which were then examined in the main study. To address not only Human–Computer Interaction experts and to facilitate rating of the items, the items were formulated from a first-person view describing behaviors, feelings and situations. The items do not directly refer to the used interactive products but to the users' perceptions of themselves or of their use of the product. The first pilot study was implemented to provide a short, well-evaluated collection of items covering the four described sets of behavioral patterns: one set concerned with task-accomplishing behaviors (execute), one set regarding motivational and persistence aspects (engage), one set concentrating on the creation of plans and alteration of goals (evolve) and one set addressing the creation of goals (expand). We assume these behavioral patterns to occur in everyday work with computers, to support the accomplishment of routine tasks as well as of new and challenging goals and thus to enhance general job performance.

3.1. Initial item formulation

Two 3-h workshops with a total of 13 experts (seven usability engineers, six psychologists) were held to gather relevant behavioral items for the survey. Both workshops began with a brief discussion of supporting particular types of behavior when working with interactive products. Then each workshop participant was asked to individually create potential Likert items (seven-point, ranging from strongly disagree to strongly agree; all items and labels were in German) which capture relevant behavior. Subsequently, all items were discussed and reviewed by the whole group. Both workshops resulted in a pool of 246 items. The first author reviewed these items and excluded redundant and inappropriate items (for example, those representing questions about the interface instead of questions about the behavior of the users, e.g., 'Are there ways to copy and paste into other applications?', 'Do you miss any functionality?', 'Is the interface color changeable?'). The remaining 47 items were finally reviewed by an independent communication expert to check and fine-tune intelligibility and grammar.

3.2. Item exploration: first pilot study

We administered the 47 items from the workshops to a first pilot sample of 366 participants via an online survey (created with www.surveymonkey.com). Participants were recruited by flyers and emails with a link to the survey's website. A raffle for three book vouchers (value 20 € each) served as an incentive. Surveymonkey allows randomizing the order of items for each participant individually to minimize potential order effects. As the users' behavioral patterns or the users' perceptions of an interactive system might be influenced by the degree of experience the users have with the product (Venkatesh and Davis, 2000), attention was paid to not asking complete novices but users at least to some extent familiar with a product. Accordingly, a period of 1 month of product use was set as a precondition for the study. A minimum of expertise with a product is an important prerequisite for eliciting behavior, because participants have to rely on an existing sample of their own behavioral patterns to be able to respond to the items. Participants chose an interactive product used by them in a work context for at least a month and with a minimum of 3 h per week. The free choice of the product ensured a variety of interactive products, which allows for a broader generalization of respective findings and of reliability and validity issues (see Monk, 2004). After choosing the product, the participants were asked to indicate their agreement or disagreement (seven-point Likert scale, with strongly agree and strongly disagree as verbal anchors) with the 47 items that captured different facets of behavior (for example, see Fig. 1). In addition, participants were asked to rate their expertise with computers in general and their chosen product in particular (i.e., months of usage, hours of usage per week) and to provide demographic information (i.e., age, gender, profession).

Of the 366 participating persons, 255 completed the survey (70% retention rate; a response rate cannot be computed because the total number of invited people is unknown). Twenty-one responses were excluded because either participants used the rated interactive product for less than 3 h a week or the questionnaire was not filled in correctly (e.g., did not specify the rated product, had more than 5% missing answers, etc.). This left 234 responses for further analysis. The majority of participants were male (74%), with a median age of 32 years (min = 19, max = 62 years). The participants rated 115 different products or product versions.

We began the exploratory item analysis by identifying and excluding "skewed" items, i.e., items with a mean that tends toward either end of the scale. We set a skewness criterion of ±1.09 for each item. Seven items exceeded this criterion and were excluded from further analysis. The remaining 40 items were analyzed for substantive coherence by conducting a Principal Component Analysis (PCA) with Varimax rotation. A Scree Test recommended the extraction and rotation of five factors. Subsequently, all items loading with their highest factor loading on other than those five factors were excluded. Another Principal Component Analysis with Varimax rotation was conducted with the remaining items. This time, we set the extraction criterion to five


factors. Four of the five components clearly matched the four behavioral patterns (i.e., execute, engage, evolve and expand), with a range of 5–12 items loading on each of the four factors. The fifth represented the functional range of the product (e.g., 'I think I use only a small amount of the product's functionality') and was not related to a behavioral pattern at all. We excluded those three items (i.e., the fifth component) and ran a third PCA with Varimax rotation and an extraction criterion of four components. The structure of those four components remained stable. We then examined the validity of the four resulting scales. The results turned out to be unsatisfactory for the application of the scales as a questionnaire. An analysis of Cronbach's α of the items of each factor revealed that the factor related to the behavioral pattern engage had an internal consistency lower than .70. In addition, some items were not coherent in respect of content. For example, for the scale expand, those items had to be excluded that ensured the diversity of this scale and the coverage of the whole pattern. This left only rather similar items (e.g., 'Sometimes I use this product in an unusual manner to achieve my goal' and 'Sometimes I try to outsmart this product to achieve my goal'). Thus, in cooperation with the communication expert, the whole set of the remaining 37 items was further revised, refined or reduced and complemented by additional items, resulting in a set of 44 items capturing different facets of behavior (see Fig. 1 for examples). These items were then given to the second pilot sample and tested in a second online survey. They were again arranged in a questionnaire with randomized order and were answered on a seven-point Likert scale with the poles strongly disagree and strongly agree.

Fig. 1. Item structure. Loadings per scale from a Principal Component Analysis with Varimax rotation; loadings < .400 are not shown; N = 90.

execute:
- This product sometimes responds differently than expected. (.804)
- Sometimes I spend a long time searching for functions I need for my work. (.800)
- When working on tasks with this product I often need more time than intended. (.757)
- Sometimes I am surprised about the product's reactions to my entries. (.746)
- The work with this product is sometimes cumbersome. (.698)

engage:
- I tend to forget the time now and then when working with this product. (.801)
- This product allows approaching my tasks creatively. (.728)
- In my spare time I'm playfully exploring this product. (.677)
- Even if my actual task already is satisfactorily completed, I sometimes try to improve it with the aid of this product. (.515)

evolve:
- I can enhance my work's quality without additional effort by using this product. (.811)
- This product helps me to complete my tasks better than expected without additional effort. (.735)
- With this product, I can sometimes even exceed my aim without additional effort. (.723)
- I believe this product has many functions I may need eventually. (.602; secondary loading .414)

expand:
- I occasionally have "misused" this product for purposes beyond its usual range. (.850)
- Now and then I'm completing tasks with this product it isn't really intended for. (.804)
- Occasionally I use this product in an odd manner to complete my task. (.801)
- I'm sometimes using this product for tasks probably not typical for this product. (.773)
- I believe I sometimes use this product differently compared to other users. (.618)

Eigenvalues: execute 3.37, engage 2.53, evolve 2.40, expand 3.39.
Explained variance: execute 18.7%, engage 14.1%, evolve 13.4%, expand 18.8%.
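The item screening and extraction procedure described in this section (dropping items whose skewness exceeds a cutoff, then extracting principal components and rotating them with Varimax) can be sketched in code. This is only an illustration under our own assumptions, not the authors' analysis code: the function names are invented, the original analyses were presumably run in a standard statistics package, and the Varimax routine below is the common SVD-based algorithm rather than whatever implementation the authors used.

```python
import numpy as np
from scipy.stats import skew

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of an (items x factors) loading matrix,
    using the standard SVD-based iteration."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the Varimax criterion with respect to the rotation
        gradient = loadings.T @ (
            rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        )
        u, s, vt = np.linalg.svd(gradient)
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion - criterion < tol:
            break
        criterion = new_criterion
    return loadings @ rotation

def screen_and_extract(responses, skew_cutoff=1.09, n_factors=5):
    """Drop items whose |skewness| exceeds the cutoff, then extract the
    leading principal components of the remaining items and rotate them."""
    responses = np.asarray(responses, dtype=float)
    kept = np.abs(skew(responses, axis=0)) <= skew_cutoff
    items = responses[:, kept]
    corr = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]  # largest first
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    return kept, varimax(loadings)
```

Because Varimax is an orthogonal rotation, it redistributes loadings across factors while leaving each item's communality (row sum of squared loadings) unchanged, which is why the rotated solution can still be interpreted against the same extracted variance.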

3.3. Final item selection: second pilot study

One hundred and thirty individuals participated in the online survey for the second pilot study. Of those, 96 completed the survey (retention rate: 74%). Six responses were excluded because the rated interactive product was used less than our set minimum of 3 h a week. This left 90 participants (28 women, 31%) for further analysis. The sample's age ranged from 18 to 56 years, with a median of 33 years. Participants were recruited by flyers and emails, exactly as in the first pilot study.

Again, participants were asked to choose an interactive product they had been using in a work context for at least 1 month and a minimum of 3 h per week. They were presented with the 44 items in randomized order and were asked to rate their expertise with computers in general and to provide demographic information.

An item analysis revealed three items with a skewness exceeding ±1.09, so these items were excluded from further analysis.

To test the structure of the remaining 41 items, a PCA with Varimax rotation was performed. We set the extraction criterion to four components (i.e., a confirmatory approach) and obtained the expected structure. The best-fitting 18 of the 41 items were chosen (the criteria were the highest loading on the component and fit of content to the component) and another PCA with Varimax rotation was performed. The four expected components again emerged, and the validity of the scales was good, as we show in detail below. Therefore, the four components were set as the four scales capturing the behavioral patterns execute, engage, evolve and expand. They each explained 13–19% of the variance, 65% in total. Fig. 1 shows the loadings of each item, the Eigenvalues and the explained variance of the four components. Only the item "I believe this product has many functions I may need eventually" shows a secondary loading larger than .40 (16% explained variance).

Further analysis demonstrated the acceptable statistical characteristics of the item set (see Fig. 2). The scale means range around the midpoint of the theoretical scale (4). The standard deviations and the skewness indicate a normal distribution. All scales have good internal consistency (see Cronbach's α), so the items of each scale capture the same behavioral pattern. The interscale correlations show some correlation between scales. Especially engage seems to correlate with the other scales, with values ranging from r = .22 (execute) to r = .50 (evolve), suggesting an interdependency of motivation and the extent of being supported in goal attainment, in improving quality and in future goals. Whether motivation leads to working in a more focused and creative manner, or whether efficient work support is motivating, needs to be examined. The rather small correlations of execute underline the difference between these behavioral patterns related to goal attainment and the patterns related to goal creation. However, the interscale correlations never exceed the internal consistencies, and the PCA with the orthogonal Varimax rotation supports an acceptable discriminant validity of the four scales.

The initial work resulted in an 18-item questionnaire, which covers the targeted four aspects of user behavior at work when using interactive products (i.e., execute, engage, evolve, expand). It has satisfactory reliability, discriminant validity and acceptable distributional properties. In the following main study, the questionnaire was used to explore the occurrence of the different behavioral categories and their relation to product attributes and user characteristics.
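Internal consistency as reported above (Cronbach's α per scale) can be computed directly from an item-by-respondent matrix. A minimal sketch with synthetic data; the item responses, loadings and sample size below are assumptions for illustration, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic example: five items driven by one common factor
rng = np.random.default_rng(1)
factor = rng.normal(size=(90, 1))
scale = factor + 0.8 * rng.normal(size=(90, 5))
print(round(cronbach_alpha(scale), 2))
```

The closer the items track the common factor relative to their unique noise, the closer α approaches 1; fully redundant items yield exactly 1.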

4. Analysis of the behavioral patterns

In the main study, we addressed two related issues. First, we were interested in the mere occurrence of the behavioral patterns, their stability as well as their temporal development. Second, we examined the relation of the characteristics of the users and/or interactive products to the occurrence of the behavioral patterns. Of course, the differences in users (e.g., expertise) represent a source of variation, but the different products may also vary in the extent to which they evoke the different sets of behavioral patterns. Typically, in the field of Human-Computer Interaction, users are considered the "subjects" and sampled accordingly. This, however, downplays the interactive product as a source of variation, which is especially relevant if the issue is the interaction between user and product. To avoid falling for this "product as a fixed-effect fallacy" (Monk, 2004), one must of course take the interactive product into account as well; that is, one must sample products in the same way as one samples participants to provide substantial heterogeneity. One objective of this study was to determine whether there are structural differences when using the variance stemming from people versus the variance stemming from interactive products. Therefore, a second PCA was conducted with the products as objects of research, and a third analysis deliberately disregarded the variance produced by the rated products and thus observed solely the participants.

Fig. 2. Statistical characteristics of the questionnaire.

scale   | mean (SD)   | skewness | Cronbach's α | interscale correlation
        |             |          |              | execute | engage | evolve
execute | 4.36 (1.49) | -.31     | .84          |         |        |
engage  | 3.91 (1.51) |  .16     | .74          | .22*    |        |
evolve  | 4.58 (1.38) |  .64     | .78          | .35**   | .50**  |
expand  | 3.45 (1.51) |  .31     | .86          | .00     | .42**  | .25*

Note. N = 90. *p < .05. **p < .01.

S. Harbich, M. Hassenzahl / Int. J. Human-Computer Studies 69 (2011) 496-508

Concepts of interactive product use tend to focus on the product itself, but they also strongly take the user into account. Usability, for example, by definition of the widely accepted norm ISO 9241-11 (International Organization for Standardization, 1998), depends on specified users. Thus, apart from potential structural differences, we examined the relation between user characteristics, such as age, expertise or skills, and the behavioral patterns, as well as the relation between perceived product attributes and the behavioral patterns.

All data collected and analyzed are self-report data. It should be taken into account that the ratings might be biased. Especially when certain attributes are difficult to assess, for example, when users do not have enough experience with a product, other more easily accessible or more apparent criteria are consulted. Venkatesh and Davis (2000) found that perceived usefulness was influenced by subjective norms in the first month of system usage, before actual first-hand experience took over. Similarly, beauty has been found to influence judgments about interactive products because beauty is more directly accessible than, for example, usability (Hassenzahl and Monk, 2010). The overall evaluation of a product plays an important role, as it is often used to infer judgments about less accessible aspects. In the present study, we accordingly controlled for these effects.

4.1. Method

4.1.1. Participants

Three hundred and sixty-three individuals participated in the main study. They were recruited by flyers and emails with a link to the survey's website and were offered a reward in the form of a raffle for three book vouchers. Two hundred and seventy-eight completed the survey (77% retention rate) and had been working with their interactive products for at least 1 month and more than 3 h per week. Of these remaining participants, 65 were female (24%); they were 16-63 years old, with a median of 37 years. They had 16 years and 6 months of experience with computers on average (SD = 6.7, min = 1, max = 37) and used them 36 h per week (SD = 12, min = 4, max = 70). Seventy-six different interactive products were evaluated, and people had worked with them on average for 33 months (SD = 32, min = 1, max = 216) and 16 h per week (SD = 11, min = 3, max = 80). Participants estimated their skills in using their chosen products as 3.9 on average on a scale of 1 (not at all) to 5 (very good) (SD = .8, min = 1, max = 5).

To examine the stability of the behavioral patterns over time, we asked the participants to take part in the study again 6 weeks later. Forty-six participants responded to this call and filled out the survey again after 7-10 weeks. Forty-three of them remembered their previously evaluated product correctly, were using their products for at least 3 h per week and fully completed the survey. Their age ranged from 23 to 59 years (median = 37). Participants had been using their products for 35 months on average (SD = 42, min = 1, max = 200) and 17 h per week (SD = 11, min = 4, max = 45). They rated their skill in using their chosen products as 4.0 on average on a scale ranging from 1 (not at all) to 5 (very good) (SD = .6, min = 2, max = 5).

4.1.2. Procedure

We conducted the study as an online survey consisting of the 18 earlier identified items. Additionally, participants were asked to provide some information about themselves and their computer experience, as well as to answer the AttrakDiff2 questionnaire (Hassenzahl et al., 2003). This allowed us to address the relationship between product attributes, user characteristics and users' behavioral patterns. Participants could choose any interactive product used at work regularly for at least 3 h per week and for at least 1 month. At the end of the survey, participants were asked to leave an identifier and their email address for the second part of the survey. Six to ten weeks later, they received an email inviting them to fill out the survey for the same product a second time.

Hassenzahl (2003) argued that the perceived qualities of an interactive product can be divided into instrumental, pragmatic and non-instrumental, self-referential, hedonic aspects. Pragmatic quality refers to a judgment of a product's potential to support particular "do-goals" (e.g., to make a telephone call) and is akin to a broad understanding of usability as "quality in use". Hedonic quality is a judgment with regard to a product's potential to support pleasure in use and ownership, that is, the fulfillment of so-called "be-goals" (e.g., to be admired, to be stimulated). The AttrakDiff2 is a questionnaire to measure those perceived qualities and to evaluate an interactive product in general. It consists of a semantic differential with 28 bipolar items, constituting four scales with seven items each: perceived pragmatic quality (PQ), perceived hedonic quality-stimulation (HQS), perceived hedonic quality-identity (HQI) and appeal (AP).

4.2. Results and discussion

4.2.1. Occurrence of the behavioral patterns

Our first aim was to investigate whether the assumed behavioral patterns occur at work at all and whether users feel that the behavioral patterns are associated with the product they are using. In the present sample, 278 participants rated 76 different products or versions of products. The means and standard deviations for the 18 items are shown in Fig. 3. As some of the items describe behavioral patterns that are counterproductive at work, we adjusted the poles for the analysis. High values thus imply that those behavioral patterns which seem desirable at work were elicited. The means ranged from 2.92 (SD = 1.98, min = 1, max = 7) to 5.38 (SD = 1.49, min = 1, max = 7). The minimum and maximum of each item were 1 and 7, respectively; the overall mean was 3.94 (SD = .95). This shows that behavioral patterns and opinions of the four described behavioral patterns associated with a specific interactive product did in fact occur. So users experienced those behavioral patterns and felt that their products were or were not able to support them.
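Adjusting the poles amounts to reverse-coding the counterproductive items. The paper does not spell out the recoding formula; assuming the standard reflection on a 1-7 scale (new value = 8 - old value), a sketch:

```python
import numpy as np

def adjust_poles(ratings, reverse_mask, low=1, high=7):
    """Reflect reverse-keyed columns so that high values mean 'desirable at work'."""
    ratings = np.asarray(ratings, dtype=float)
    out = ratings.copy()
    # On a low..high scale, reflection is (low + high) - x
    out[:, reverse_mask] = (low + high) - out[:, reverse_mask]
    return out

# Two synthetic respondents, three items; the first item is reverse-keyed
# (e.g., "I often need more time than intended" is counterproductive)
ratings = np.array([[7, 2, 5],
                    [1, 6, 4]], dtype=float)
reverse = np.array([True, False, False])
print(adjust_poles(ratings, reverse))
```

Applying the reflection twice returns the original ratings, which is a convenient sanity check for the recoding.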

4.2.2. Structure

We examined the potential structural differences between the group of products and the group of participants by two separate Principal Components Analyses (PCA) with Varimax rotation on the 18 behavioral items. For the product analysis, we first averaged each item across those participants that rated the same product. This resulted in mean item ratings for 76 different products (see Fig. 3; prod.) and eliminated variance stemming from participants. If the behavioral patterns depended solely on users' personal characteristics and were not influenced by the products they use, a PCA of the mean product ratings should not reveal any meaningful structure. Quite the opposite: in line with our expectations, a pattern similar to the one in the initial study emerged (see Fig. 1). Only four out of the 18 items had their highest loadings on other scales than identified in the initial study, hence illustrating a not as well-defined structure ("I tend to forget the time now and then when working with this product", "I believe I sometimes use this product differently compared to other users" and "This product lets me approach my tasks creatively"). Overall, the initial study and the product analysis revealed a similar, four-component structure. The product analysis explained 13-22% variance for each component and 71% variance in total.

For the participant analysis, we subtracted the mean rating for the corresponding evaluated product from each individual rating. This eliminates variance stemming from the different products (i.e., centering). A second PCA on these data (see Fig. 3) again revealed a structure close to the structure in the initial study, except for the item "This product lets me approach my tasks creatively", which switched from the factor execute to evolve. The factors explained 11-18% variance and 59% in total, indicating a slightly decreased fit compared to the initial data, but nonetheless a similar structure. So the allocation of behavioral items to the four aspects of work accounts for both users and interactive products and, thus, describes their interaction. Unfortunately, this analysis could not be done reversely for the products' sample, as no participant rated several products.
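The two analysis variants differ only in how variance is removed before the PCA. A numpy sketch with synthetic ratings (the product assignments and values are made up for illustration) of the per-product averaging and the centering step:

```python
import numpy as np

rng = np.random.default_rng(2)
n_participants, n_items = 278, 18
product_ids = rng.integers(0, 76, size=n_participants)  # which product each person rated
ratings = rng.normal(size=(n_participants, n_items))

# Product analysis: average each item across participants rating the same product.
# The resulting matrix (one row per product) is the input to the product PCA.
products = np.unique(product_ids)
product_means = np.vstack([
    ratings[product_ids == p].mean(axis=0) for p in products
])

# Participant analysis: subtract each product's mean rating from the individual
# ratings (centering), which removes the variance stemming from the products.
centered = ratings - product_means[np.searchsorted(products, product_ids)]
```

After centering, the mean rating within each product group is zero by construction, so any structure a PCA finds in `centered` must come from the participants.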

4.2.3. Stability and development of the behavioral patterns

Roughly 8 weeks later (M = 56 days; SD = 12; min = 39, max = 79), users reported similar rates of behavior, confirming the stability of the behavioral patterns (see Fig. 4). Forty-three participants of the main sample rated the items again for the same product. The rated behavioral patterns were stable, as indicated by the significant correlations of the clustered behavioral patterns between the first and the second measurement, ranging from r = .73 (expand) to .86 (evolve and execute). Thus, participants showing the behavioral patterns at the first measurement did so 8 weeks later, too.

Comparing the means of the four patterns for t1 and t2 showed that execute and evolve did not differ significantly, whereas engage and expand revealed a significant increase from the first to the second measurement. This suggests that at the first point of measurement the patterns execute and evolve may have already been fully developed. Usage of the product beyond this point did not further increase the occurrence of these behavioral patterns. In contrast, engage and expand may require more time. However, for a definitive answer to the question of whether the observed differences represent systematic change over time, a longitudinal study is needed, which allows for controlling specific product expertise.
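The stability checks combine a test-retest correlation with a paired t-test per scale. A sketch on synthetic scale scores; the sample size matches the retest sample (n = 43), but the effect sizes are illustrative assumptions, not the study's values:

```python
import numpy as np

def paired_t(t1, t2):
    """Paired t statistic for two measurements of the same participants."""
    d = np.asarray(t2, dtype=float) - np.asarray(t1, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(3)
n = 43
t1 = rng.normal(4.0, 1.4, size=n)
t2_stable = t1 + rng.normal(0.0, 0.7, size=n)  # no systematic change (cf. execute)
t2_grow   = t1 + rng.normal(0.4, 0.7, size=n)  # mean increase at t2 (cf. expand)

r = np.corrcoef(t1, t2_stable)[0, 1]           # test-retest correlation
print(f"r = {r:.2f}, t(stable) = {paired_t(t1, t2_stable):.2f}, "
      f"t(grow) = {paired_t(t1, t2_grow):.2f}")
```

A high test-retest r with a non-significant paired t indicates a stable pattern; a high r with a significant t indicates a pattern that is stable in rank order but still developing in level, as reported for engage and expand.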

4.2.4. User characteristics and product attributes

In addition to the behavioral patterns, we obtained product perceptions with the AttrakDiff2 questionnaire and a number of user characteristics, such as age, general computer expertise in years, intensity of general computer usage in hours per week, specific product expertise in months, intensity of specific product usage in hours per week and self-rated product expertise. Fig. 5 shows the mean values for each measure, its standard deviation and their intercorrelations. Where the reported measure was a scale, Cronbach's α is reported in the diagonal. The analysis of the product attributes and user characteristics showed coherent results and interrelations of the measures.

To determine the relation of user characteristics, product attributes and behavioral patterns, we conducted four separate stepwise multiple regression analyses with each of the four behavioral patterns as criterion and six user characteristics (age, computer expertise [years], computer usage [hours/week], product expertise [months], product usage [hours/week], self-rated product expertise) and three product attributes (PQ, HQS, HQI) as predictors (see Fig. 6). A stepwise regression determines a set of meaningful predictors by a sequence of F-tests. It selects the variables with the largest probability of explaining the criterion variable and provides information about the contribution of each remaining predictor to explaining the criterion's variance. We chose this analytical approach because of the relatively substantial intercorrelation of the predictors.

For each behavioral pattern, at least one model emerged. As expected, execute is predicted by pragmatic quality (PQ) only. Execute is about accomplishing given tasks efficiently, and PQ subsumes those attributes which support users in this (i.e., the classic view of usability). In addition, the absence of any other variable in the model for execute emphasizes the construct validity of our measures, thereby lending credibility to the results for the other sets of behavioral patterns.

Fig. 3. Structure of the behavioral items in regard to both the products (prod.) and the participants (part.).

Item | N | M (SD) | loadings (prod. / part.)
This product sometimes responds differently than expected. | 278 | 3.95 (1.84) | .839 / .825
Sometimes I spend a long time searching for functions I need for my work. | 278 | 3.84 (1.77) | .799 / .752
When working on tasks with this product I often need more time than intended. | 278 | 4.77 (1.82) | .814 / .623
Sometimes I am surprised about the product's reactions to my entries. | 278 | 4.11 (1.91) | .812 / .758
The work with this product is sometimes cumbersome. | 276 | 3.87 (1.84) | .763 / .768
I tend to forget the time now and then when working with this product. | 273 | 4.07 (1.99) | .477 / .455 / .555
This product allows approaching my tasks creatively. | 276 | 4.28 (1.87) | .472 / .426 / .497 / .602
In my spare time I'm playfully exploring this product. | 277 | 2.92 (1.98) | .763 / .709
Even if my actual task already is satisfactorily completed, I sometimes try to improve it with the aid of this product. | 277 | 4.08 (1.91) | .699 / .776
I can enhance my work's quality without additional effort by using this product. | 272 | 5.00 (1.60) | .494 / .707 / .657
This product helps me to complete my tasks better than expected without additional effort. | 276 | 3.97 (1.57) | .709 / .612
With this product, I can sometimes even exceed my aim without additional effort. | 275 | 3.73 (1.75) | .686 / .604
I believe this product has many functions I may need eventually. | 278 | 5.38 (1.49) | .721 / .689
I occasionally have "misused" this product for purposes beyond its usual range. | 278 | 3.22 (1.99) | .868 / .858
Now and then I'm completing tasks with this product, it isn't really intended for. | 276 | 3.12 (1.88) | .884 / .854
Occasionally I use this product in an odd manner to complete my task. | 274 | 3.59 (1.95) | .486 / .684 / .720
I'm sometimes using this product for tasks probably not typical for this product. | 277 | 3.41 (2.08) | .821 / .828
I believe I sometimes use this product differently compared to other users. | 277 | 3.56 (1.86) | .581 / .554 / .519

Eigenvalue (prod. / part.): execute 2.35 / 3.16; engage 4.00 / 1.90; evolve 3.10 / 2.39; expand 3.27 / 3.19.
% explained variance (prod. / part.): execute 13.0 / 17.5; engage 22.2 / 10.5; evolve 17.2 / 13.3; expand 18.2 / 17.7.

Note. prod.: sample of products (averaged across participants), n = 76; part.: sample of participants (centered on products), n = 278. Items listing more than two values also showed secondary loadings.
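The stepwise procedure described above, forward selection by a sequence of F-tests, can be sketched as follows. This is a simplified illustration with synthetic data and an approximate F cutoff, not the exact routine used in the paper; the predictor names are taken from the study only for flavor:

```python
import numpy as np

def _rss(X, y):
    """Residual sum of squares of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((y - A @ beta) ** 2).sum())

def forward_stepwise(X, y, names, f_crit=4.0):
    """Forward selection by partial F tests; F ~ 4 roughly approximates
    p < .05 for 1 numerator df and a reasonably large sample."""
    selected = []
    n = len(y)
    while len(selected) < X.shape[1]:
        rss_cur = _rss(X[:, selected], y)
        best_j, best_f = None, 0.0
        for j in [c for c in range(X.shape[1]) if c not in selected]:
            rss_new = _rss(X[:, selected + [j]], y)
            df2 = n - len(selected) - 2          # residual df: intercept + predictors
            f = (rss_cur - rss_new) * df2 / rss_new
            if f > best_f:
                best_j, best_f = j, f
        if best_j is None or best_f < f_crit:    # stop when no candidate is significant
            break
        selected.append(best_j)
    return [names[j] for j in selected]

# Synthetic data: the criterion depends on the first two predictors only
rng = np.random.default_rng(4)
X = rng.normal(size=(278, 5))
y = 0.7 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=278)
names = ["PQ", "HQS", "age", "product usage", "computer expertise"]
print(forward_stepwise(X, y, names))
```

With correlated predictors, as in the study, the order of entry and the increments in explained variance (the R² changes reported in Fig. 6) depend on which variable enters first, which is why the authors report each model step.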

For engage, four predictors were identified, by far the strongest of them being hedonic quality-stimulation (HQS). Sheldon et al. (2001) as well as Reiss (2004) assume stimulation and curiosity to be basic human needs, and HQS captures precisely the product's ability to address those needs through novelty and creativity. The link to engage underlines the importance of need fulfillment per se and of stimulation in product interaction for fostering behaviors beyond mere task execution. The users' skill also adds to engage: being good at using their products motivates users and helps them to keep up with working with their interactive products and engaging in their work. Another predictor for engage was the intensity of product usage, although the increase in additional explained variation (R²) is rather small. The more hours participants worked with the product per week, the more engagement-related behavior was reported. Note, however, that the causal direction of effects cannot be determined here. More intense usage could rather be viewed as a consequence or even an integral part of engage, assuming that specific behavioral patterns (e.g., "In my spare time I'm playfully exploring this product") already imply more intense usage. Finally, the age of the participants emerged as a predictor for engage: the younger they were, the more extra motivation they invested.

For evolve, four predictors were identified. Here, PQ and HQS emerged as almost equally important. When users were easily able to use their products without impairment, they more easily implemented their tasks in new ways with their products. In addition, the more stimulating users rated their products, the more they explored them. Other than execute, evolve requires stimulation (e.g., novel functionality) in addition to pragmatic aspects. Its close relation to work tasks, however, distinguishes it from engage. With a rather small increase in additional explained variation (R²), the age of the users was the third predictor: younger participants showed more evolving behavior, i.e., found it easier to explore their products. As the fourth predictor, the participants' experience with their products facilitated establishing new ways of using them.

Fig. 4. Stability of the behavioral patterns.

Scale   | Cronbach's α (n = 278) | Mt1 (n = 43) | Mt2 (n = 43) | T (n = 43) | r re-test (n = 43)
execute | .84                    | 4.04 (1.45)  | 4.04 (1.31)  | 0.04       | .86
engage  | .70                    | 4.04 (1.40)  | 4.32 (1.18)  | 2.31*      | .82
evolve  | .76                    | 4.84 (1.13)  | 4.87 (1.13)  | 0.40       | .86
expand  | .86                    | 3.72 (1.42)  | 4.19 (1.28)  | 3.10**     | .73

Note. *p < .05. **p < .01.

Fig. 5. Intercorrelations of product attributes and user characteristics.

Measure                              | M (SD)      | Cronbach's α
Product
1 pragmatic quality                  | 4.3 (1.1)   | .87
2 hedonic quality-stimulation        | 4.3 (0.9)   | .82
3 hedonic quality-identification     | 4.8 (0.9)   | .83
Person
4 age                                | 37.5 (9.9)  |
5 computer expertise (years)         | 16.4 (6.7)  |
6 computer usage (hours/week)        | 36.5 (11.9) |
7 product expertise (months)         | 32.9 (32.2) |
8 product usage (hours/week)         | 15.7 (11.4) |
9 self-rated product expertise (1-5) | 3.92 (0.76) |

Selected intercorrelations: r(1,2) = .48**; r(1,3) = .72**; r(2,3) = .61**; r(4,5) = .71**. Note. *p < .05. **p < .01.

S. Harbich, M. Hassenzahl / Int. J. Human-Computer Studies 69 (2011) 496-508

Two predictors emerged for expand. Both of them showed an additional explained variation (R²) of less than .10 when a variable was added to the model, which equals a small effect size (Cohen, 1992). One predictor was self-rated product expertise. Only with good knowledge of the product may users be able to extend their product's scope and go beyond the functionality it already offers. In other words, expand requires rather out-of-the-box thinking, which benefits from specific product expertise. In contrast to the other behavioral patterns, expand is not related to any product attribute. This suggests that expand is less "designable" than execute, engage and evolve, or at least that the set of product attributes the AttrakDiff2 questionnaire provides does not capture the according product attributes.

The most remarkable finding in this analysis is that no model emerged suggesting user characteristics as a considerable predictor for any behavioral pattern except for expand, and there only with a very small effect size. Execute even resulted in a model with product attributes only. However, the lack of relation to general or specific expertise may also be due to the fact that a particular level of expertise was already a requirement for being included in the study. Thereby, major problems, which may occur when being new to a specific product or technology and which can be solved by general expertise and specific knowledge of the product's possibilities, may already have been resolved. For engage and evolve, the models containing the user characteristics specific product expertise, skill and age gained only a small increase in effect size compared to the model without any user characteristics. Similarly, the models for expand showed only quite small effect sizes, suggesting that even those user characteristic variables that remained in the models rather failed to explain the variation in the behavioral patterns. Thus, user characteristic variables and behavioral patterns are mainly independent of each other. Instead, product attributes and behavioral patterns are strongly related. This means that those behavioral patterns are associated with attributes of the product the users are working with. It does not mean that the behavioral patterns depend on the product alone, as the associations users have with a product are mainly based on their individual experiences with the product.

Fig. 6. Relation of user characteristics, product attributes and behavioral patterns.

Criterion | Model | Included variables             | R²   | β
execute   | 1     | PQ                             | .524 | .724**
engage    | 1     | HQS                            | .191 | .437**
engage    | 2     | HQS                            | .221 | .411**
          |       | self-rated product expertise   |      | .174**
engage    | 3     | HQS                            | .237 | .410**
          |       | self-rated product expertise   |      | .151**
          |       | product usage (hours/week)     |      | .130*
engage    | 4     | HQS                            | .249 | .407**
          |       | self-rated product expertise   |      | .146**
          |       | product usage (hours/week)     |      | .125*
          |       | age                            |      | -.108*
evolve    | 1     | PQ                             | .242 | .492**
evolve    | 2     | PQ                             | .315 | .342**
          |       | HQS                            |      | .310**
evolve    | 3     | PQ                             | .334 | .336**
          |       | HQS                            |      | .309**
          |       | age                            |      | -.136**
evolve    | 4     | PQ                             | .344 | .330**
          |       | HQS                            |      | .308**
          |       | age                            |      | -.131*
          |       | product usage (hours/week)     |      | .103*
expand    | 1     | self-rated product expertise   | .030 | .173**
expand    | 2     | self-rated product expertise   | .045 | .158*
          |       | computer usage (hours/week)    |      | .124*

Note. *p < .05. **p < .01.


5. Conclusion

We suggested four sets of behavioral patterns which occur in the interaction with products used at work. Together, these sets describe a holistic approach to supporting users when working with interactive products. Users must not only accomplish their tasks efficiently; interactive products should also help them to work focused and persistently on their tasks. Work usually requires finding a way, modifying a way or picking the best way to complete a task, as there is not always an established method of accomplishing a given task. To fulfill one's overall work goals, it might even be necessary to discover new tasks and sub-goals that supplement the overall work goal. We argued that interactive products have the potential to facilitate the accomplishment of tasks, persistence, the modification of tasks and even the creation of novel tasks.

We asked a large group of users about their behavioral patterns and opinions when working with an interactive product of their choice. All hypothesized behavioral patterns occurred and were stable over a period of 8 weeks. The participants associated these behavioral patterns with the products they used, thereby indicating a relationship between product and behavioral patterns.

We further examined the difference between the relation of the behavioral patterns to characteristics of the person and their relation to attributes of the interactive product used. We started with an analysis of the structure of the behavioral items, which revealed no striking difference between the results of an analysis based on variance stemming from people only (i.e., a subjects analysis) and one based on products only (i.e., a materials analysis). The behavioral patterns are not attributable to either the users or the interactive product alone, but to both.

We then examined the direct influence of the various user characteristics and product attributes. Overall, user characteristics had little influence on the behavioral patterns. Effects were found regarding participants' self-rated skill, their age and their expertise with the products or computers in general; nonetheless, these effects were very small. Regarding the product side, the interactive products do influence the behavioral patterns. So although it is the users' perceptions of product attributes that correlate with the behavioral patterns, leading to the assumption that the users themselves play an important role in determining a product rating, the user characteristics do not strongly affect this interrelation.

To summarize, the four suggested sets of patterns occur in everyday work with interactive products. As the structure of the behavioral items is stable across users and products, the structure holds for both and reflects their interaction. The behavioral patterns do not describe the users' work or the interactive products alone, but the users' work with these products as a whole. Nevertheless, this interaction is influenced mainly by the product attributes, whereas user characteristics play only a minor role. Of course, our four behavioral patterns do not exhaustively describe the requirements of the work context and may well be complemented by other aspects of the work context, such as helping co-workers with using their interactive products.

Our approach of exploring the users' behavioral patterns instead of evaluating the product alone has been successful, as it focuses on the interaction of user and product. It emphasizes what is important in the specific context of usage, namely how users do their work and how they behave when using their products. It is independent of the technological progress that makes adjustments necessary for conventional methods of measuring product quality, and it can be used by persons who do not have the expert knowledge to judge an interactive product. Asking users about their behavioral patterns and feelings when using an interactive product hence gives evident information about the product used and yet addresses the interaction between users and product. Trying to find out about desired behavioral patterns when using certain interactive products is thus a promising approach, which is easy to apply in research and practice. Marketing departments might define the behavioral patterns they aim for with their newest innovation, and researchers might analyze different contexts to apply the idea of behavioral patterns beyond the work context, e.g., to educational or recreational contexts, and establish their own models of desired behavioral patterns.

One limitation is the lack of detailed formative information to improve interactive products which do not support certain sets of behavioral patterns. This is certainly a drawback of any brief and structured method. However, the present work not only presents a questionnaire; it also presents a model of aspects which are important for the work context but not always the subject of a design effort. In this sense, the model and the questionnaire serve as a reminder that even in a work context, behavior beyond the mere execution of a given task needs to be addressed by a design. Products for work certainly must allow for the accomplishment of given tasks, but they also need to address motivational issues and must allow for changing the way one actually does the work. Nevertheless, deriving more detailed design strategies for each set of patterns surely is a future aim. A further interesting issue worth exploring is the temporal characteristics of the sets of behavioral patterns. In the present study, we restricted our sample to people with a minimum of 4 weeks of expertise with the product. Expanding the time frame will provide interesting opportunities to take a closer look at the formation and development of the behavioral patterns over time.

The present study is one of the first attempts to translate the notion of User Experience (UX) to a work context. While certainly limited, it nevertheless reminds us of the importance of behaviors and according product attributes beyond mere usability. It thereby questions the popular but obviously limited practice of distinguishing between work and leisure. In fact, people remain humans even in a work context (at least most of the time), and the particular tasks, especially when accomplished through interactive technologies, may be less different in a work context compared to a private one than expected. More importantly, it renders over-simplistic approaches, which picture work mainly as a sequence of determined, predefined tasks, as limited. People need to get their work done, but they also need to be motivated and to improve their ways of doing things. This must be properly addressed, hence "designed", by any interactive product which attempts to provide a good User Experience.

Acknowledgments

We would like to thank Steffi Heidecker and Nadja Zimmet for contributing considerably to the completion of this paper and for helping to revise the questionnaire items.


S. Harbich, M. Hassenzahl / Int. J. Human-Computer Studies 69 (2011) 496–508

