Fine-Tuning User Research to Drive Innovation

David Siegel, Google | [email protected]
Alex Sorin, SAP | [email protected]
Michael Thompson, Emailvision | [email protected]
Susan Dray, Dray & Associates | [email protected]

interactions, September + October 2013, Cover Story
User research that attempts to discover market-changing innovations faces many challenges. The more ambitious the innovation goal, the more difficult it can be to decide whom to study, what to look for, and how to make sense of the findings. Our reflections here are based on our experience collaborating on an ambitious project, in which we conducted in-depth contextual research with 54 people in eight enterprises. Its mission was to generate concepts for innovative solutions that would engage a large, new audience whose needs were not being addressed by existing products. In many respects, this was a dream project for researchers who wanted to introduce user-centered design into the product development process as early as possible. However, in planning the research we had to confront particular dilemmas, stemming from the combined goals of innovation and major market expansion, which we suspected were generalizable to projects with similar goals. We call this work market-changing innovation research (MCIR). Here we discuss two dilemmas that confront this type of work. Then we turn to four research challenges these dilemmas give rise to, discuss the limitations of common research practices in dealing with them, and describe our own approach. We also describe some of our findings to give an idea of what our approach yielded.

Our project’s goal was to make business intelligence (BI) information more available, relevant, and useful to a large population of so-called casual users. The focus was on usage of quantitative information, typically but not necessarily created by others using specialized BI tools. For those not familiar with the term, BI is any information captured in the course of business operations and made available to support business decision-making. BI can also include externally generated information such as partner-generated data and competitive intelligence.

Unlike BI users who are actively engaged with quantitative data as a primary part of their role, casual users were assumed to use BI in a limited way for decision-making, meaning they consume it only occasionally or with little variety or depth of exploration. For example, they would use BI only in reports defined and structured by others, without examining it on their own from different angles.

A number of things supported the belief that there was a significant population of such users who were eager to do more with BI. Anecdotes about people’s frustration when trying to access BI, and complaints about problems with integrating data across systems and organizational silos, were common. In addition, recent reports by McKinsey [1] and Forrester [2] claimed that demand for and adoption of BI is rapidly spreading beyond the ranks of professional BI analysts.

Furthermore, longstanding advocacy of data-based decision-making practices seems to call for richer use of BI. For example, in deciding which products to keep in a product catalog and which to discontinue, a decision always affected by strong vested interests and intangibles, BI should enable a company to use its own data to examine the sales trends for the product in markets segmented flexibly on many dimensions, to examine its statistical relationships with other products in the catalog, and to bring some quantitative rigor to a risk/benefits analysis. Figure 1 shows the working assumptions about the structure of the hypothesized audience.



[Figure 1. Working assumptions of the opportunity to address the casual user segment. The figure arranges the addressable market of information workers along a sophistication axis, from most sophisticated to largest segment:
• Pro content developers (<2%).
• Analysts/BI professionals (<8%): analysts and business report authors; job is defined by use of a BI tool.
• Active consumers (<25%): need light to medium interaction with BI content to answer evolving business questions; slice/dice, filtering, simple calculations and charts; collaborate, interpret, and share with other users; use data from different sources.
• Casual users (>65%): infrequent reliance on data, often a narrow set of tables or dashboards; receive content with guided analysis or predefined scope; content often embedded in other applications, a mix of structured and unstructured.]

Dilemmas of MCIR
Underlying the research challenges in MCIR are two basic dilemmas. The first affects decisions about how to target the research to optimize your “bets” about where the richest clues for major innovation opportunities lie. The second explains why it is particularly difficult to turn up these clues in relation to workplace tools and systems.

The circle of unknowns. In MCIR, you inevitably assume the existence of an audience experiencing unmet, addressable, and often latent problems, as well as the technological possibility of addressing them. This requires finding the intersection of an audience definition, a set of needs, and a product concept when, at the beginning, you may have only a vague idea of where this intersection lies. However, because you cannot possibly study every permutation of these interlocking unknowns, you must make some decisions about where to focus. Complicating things further, any decision about one factor will influence the others, potentially biasing the research in ways you may not recognize. If any one of these factors could be taken as a given—for example, a given attribute of the audience, a known “pain point,” or an identified technology you are looking to apply—the one area of certainty could help narrow the remaining unknowns in your research plans. Thus, while addressing an untapped market potentially has huge payoffs, the challenge is much greater than when doing research aimed at finding an innovative solution for your existing users.

Co-evolution of human behavior and technology. Another thing that complicates MCIR is the fact that people’s work practices and the tools they use to carry them out have co-evolved. This can create a conservative bias. When you study people using their habitual tools for their habitual practices, it often seems that parts of the work context fit together in ways that seriously constrain the possibilities for change. Similarly, if you already have a highly innovative concept and you go into field research trying to validate it, you have a high probability of finding that it does not appear to “fit” because work practices are adapted to the previous tool or process. Certainly, people can point you to annoyances and inefficiencies; however, paradoxically, the large problems built into the structure of processes and systems may not be perceived by people whose job is defined around them.

If research is too biased in the conservative direction, it’s natural to think the solution is more design creativity and imagination. This is part of the answer, of course. On the other hand, imagination can easily become self-deception. Our ability to imagine future users happily adopting our products is not a very sound basis for product development decisions, although too often it seems to be used that way.

Research challenges
Despite these dilemmas, we know that change does happen. Innovative products do take root, and when they do, they drive change in the ecosystem that surrounds them. When we see examples of successful innovation, we can always retrospectively identify how the various factors lined up to enable it. The question is, how can we maximize the contribution of user research to prospectively improve the odds?


The answer requires us to address four basic research challenges that arise from the dilemmas we have just discussed.

Whom do you study? Commonly, the population within which you hope to find a rich, new opportunity is very heterogeneous. If you try to cover the full spectrum of non-users looking for your opportunity, the research will be either very costly or very shallow. To increase the odds of success, you need a rational way of optimizing your bets, which means neither spreading them too thinly nor concentrating them prematurely.

In our case, the definition of casual users of BI was quite abstract and could be interpreted so broadly that it could seem to cover most workers except those with the most narrowly defined or routinized jobs. We did not assume that people of interest were concentrated in particular roles or levels of hierarchy, or that they would preferentially be found in corporate as opposed to operational roles. We believed they could be found in a wide range of occupations and industries. Our clearest criteria were ones of exclusion: We did not want BI professionals; people in analyst roles that focused on providing information to support the decision-making of others, as opposed to applying the information themselves to help guide their own decisions; or executives who had heavy analytical support. We also excluded the financial industry, thinking it would be so heavily quantitative in culture and so focused on BI (which could include any information about a customer’s finances) that it would not contribute much to our efforts to understand casual users.

We did believe that the kinds of people we were looking for would be within the broad category of “knowledge workers,” a term originally introduced by Peter Drucker [3]. While there are many interpretations of this concept, our working definition can be summarized as: people who do not simply follow procedures designed by others, but who use judgment in applying principles to specific complex cases, and who evaluate, modify, develop, or establish processes or policy. Their jobs are typically defined in terms of goals rather than tasks, and they have relative freedom to decide how to approach their work. Unfortunately, this was still very broad and abstract.

There are a number of common ways that companies try to learn about new audiences that we will not consider here, because we are focusing on in-depth user research. These include surveys, interviewing domain experts and “thought leaders,” and gathering “requirements” from business stakeholders. All of these may generate hypotheses about where to start, but they generally do not provide information that is detailed or contextualized enough to guide the design of solutions. However, there are some common user research sampling practices that are used in pursuit of innovation:

• Studying existing users. If you want to understand how a tool or system currently works in practice, there is no better way than to study actual users in their own context doing what they normally do. Understanding their confusions, frustrations, errors, and the inefficiencies they experience may be relevant if you are trying to deepen their engagement. Researchers and product teams often seem to assume that increasing the satisfaction of current users will tend to expand the market, at least into the population of people on the edge of adoption. However, this may not tell you much about how to serve a new group that you believe has different needs, work patterns, and perspectives than the audience with whom you are familiar.

• Relying on early adopters. Research for new products often relies heavily on studying so-called early adopters [4]. Identifying actual early adopters of a new technology retrospectively is a very different proposition from identifying likely ones prospectively. The latter requires you to make some assumptions about indicators of early-adopter status relevant to predicting adoption of your future innovation. Often, the concept is used as if it refers to a personality trait implying generalized fascination with technological innovation for its own sake, either across or within domains. What was more relevant to us was the desire for more usable and useful BI because of its business value. We did not believe that a history of past technology adoption per se would predict this.

• Lead users. The Lead User approach to innovation research [5] assumes that innovation is not necessarily driven from the top down; it may even more commonly and effectively arise among end users themselves. Some end users have the latitude, resources, capability, and initiative to modify their own processes. The assumption is that the solutions at which they arrive based on their personal experience may be applicable to others. The problem is that these people may have some idiosyncratic attributes, and those that specifically qualify them as lead users may make them unrepresentative of the larger audience. Their solutions may work for them in the context where they personally experience the problem, without being generalizable to others. Furthermore, one is likely to be more tolerant of limitations in a solution that one designs for one’s own use than one would be for a commercial solution.


• Studying dissatisfied current users or ex-users. Studying those who have experience with a solution, having used it and abandoned it, can be extremely useful, whether their dissatisfaction is with your product or a different one in the same genre. However, it can be dangerous to assume that these people, who have already gone through the early stages of adoption and abandonment, are similar to non-adopters or people who have not even been exposed to products like what you envision. This is especially true in the workplace, where end users rarely have free choice about what tools are made available to them—a trap that researchers and innovators often fall into. Also, abandonment implies limited trial use followed by almost total rejection. In our case, we were not concerned about people who had completely given up efforts to find value in BI, but rather those who extract less value from BI than they could in principle, given greater engagement. As discussed, these people might not expect more from their tools, and therefore may not identify themselves as dissatisfied.

Because all of the above approaches are flawed and because of the limitations in our current knowledge, we did not provide tight criteria to use for recruiting; we used a flexible approach for which fuzzy logic is a good metaphor. As is common in user studies within businesses, we had to work through contacts within companies who had a broad view of their organizations and could lead us on a path toward appropriate participants. Without stating “tight” criteria, we needed to give them guidance about what we were looking for so they would not default to the people who were the easiest to recruit, perhaps for the wrong reasons (e.g., they were the most available).

In our conversations with them, we intentionally avoided identifying our target users based on job title, role, or position. Instead we shared our highly conceptual description of casual users of BI and of knowledge workers, and then discussed with them their rationales for people they suggested. This was extremely informative for us, because the rationales were essentially hypotheses about what could indicate a person was a candidate for support in doing more with quantitative data. The process also let us see how they translated our concepts into descriptions that were more concrete and meaningful in the particular contexts of their companies—something that was more difficult in some companies than in others. Finally, it allowed us to correct the mistaken assumption that we were looking for power users of quantitative data or for technical people who produced data reports for consumption by others, as opposed to people who tried to extract meaning from the data to apply it to their work.

To help our contacts identify likely candidates, we suggested they consider people whose work involved:

• Managing or evaluating processes, performance, resources, etc.
• Leading current in-the-trenches change initiatives
• Temporary assignment to a task force that uses data to describe the current situation and to support their recommendations
• Using data to help them make business decisions, make recommendations, or contribute to the decisions of others.

We also suggested some behavioral indicators to help our recruiting contacts nominate specific participants:

• Requesting custom reports
• Expressing frustration with data that is available to them


• Bringing data questions to identified local experts

• Challenging generally accepted interpretations of existing quantitative data, or introducing alternative data to present a different picture.

This resulted in a diverse sample of participants from a range of jobs that seemed very consistent with our concept of target users. The sample also included people who were in a gray zone in our minds: either people whose jobs were already so inherently quantitative that they might be more active users than casual, or people whom we thought might be stretching the definition of knowledge worker because their jobs might be too routine in following prescribed procedures. This was exactly what we had hoped for, a sample that bracketed the boundaries of our concept.

What should you look for? In addition to deciding where in the vast universe of the potential audience you will aim your MCIR microscope, you need to decide what in the vast universe of content you could potentially explore. In any complex domain, you can’t possibly study all activities, tasks, and workflows in search of the most revealing use cases or the ripest opportunities for change. Nor would it be useful. Those common scenarios that are most central to people’s roles, or their modal tasks, are most likely to already be supported with job design and tools perceived as being in relative balance, because this is where it is most likely that jobs are designed around the limitations of tools. What is needed is a way to prioritize cases that do not fit comfortably with routine approaches.

User researchers tend to prefer approaches to data gathering (where feasible) in which users’ perspectives and concerns drive the exploration. This is consistent with the concept of user-centered design. However, when we are addressing non-users comfortably adapted to their current tools and processes, these approaches may need adjustment. Here are some of the common ones:

• Relying on known pain points or user-identified pain points. User research is often portrayed as a process of looking for so-called pain points and unmet needs. These clichés seem to imply collecting conscious grievances. However, in seeking opportunities for major innovations, this is of limited value because of the co-evolution issue described earlier. People are not necessarily aware of the ways in which they have adapted to the existing context and perceive it as normal. Their perceptions of problems are anchored by the capabilities and context in which they currently use their tools. This is not to deny that people can identify frustrations or inefficiencies, but even if they call for creative solutions, they rarely point to opportunities for quantum leaps.

• Contextual inquiry and contextual design. In theory, contextual design looks for opportunities for innovation. In practice, much user research that claims to be in this tradition seems to aim at capturing detailed but neutral descriptions of how people do their work, including ways that work processes vary depending on other factors. However, looking for opportunities for positive change requires more than just description; it requires an evaluative, diagnostic, and predictive orientation. While contextual design advocates looking for places where people’s work breaks down and their tools do not serve them well, the fact that it focuses on studying experienced workers doing their typical work while they explicate it seems to make it difficult for many practitioners to move from describing their work to critiquing it. And, of course, if you study users of your existing tools, this is not the market you are interested in.

• Ethnography. Design ethnographers strive to gain insight into fundamental dynamics of behavior and experience that have implications for design, rather than focusing on how people work given their current tasks and tools. The intent is to make it easier to envision fundamentally new approaches by removing a preoccupation with the tools themselves. There is certainly merit in this, but there are also some challenges. People’s observed behavior, artifacts, and ways of expressing their experience are the windows into these fundamental dynamics. However, these are all shaped by existing processes and structured by existing tools. Of course, you are also interested in people’s deep purposes, but these are abstractions that have to be inferred from broad patterns of behavior and from the rationales they express for them. That makes them several layers removed from observable behavior. But, ultimately, proposing an innovation in tools that you think people will adopt means predicting behavior with tools. Because of this, ethnography can be vulnerable to diffuseness. What can look like depth to an ethnographer sometimes looks to a product planner like overly vague information, in terms of its implications for the product.


Our basic approach resembled contextual inquiry. It began with semi-structured interviews regarding the participant’s role and function vis-à-vis larger business processes, the range of things they did to fulfill these, their motivations, and their current challenges. This enabled us to frame the context in which BI was used. Participants naturally provided brief examples of work scenarios, enabling us to choose ones for deeper exploration. This exploration usually involved observation or walkthroughs of current or upcoming tasks, while in other cases it focused on recent work examples anchored by exploring artifacts from that work.

During this exploration, we were able to probe the “leading edge” of their current uses of quantitative data. That is, we looked for points where people seemed to put aside quantitative analysis and shift to other forms of thinking. A critical success factor was that our understanding of the business and the person’s role and tasks enabled us to generate realistic, contextually relevant “what if” scenarios in which we guided the participant in envisioning taking their quantitative thinking one or two steps further.

To further clarify the leading edge of the participant’s data usage, we also looked for work scenarios where the person was most highly motivated to muster data to support an argument. Examples included:

• Internal controversies
• Determining when exceptions should be elevated to rules
• Internal and external accountability (e.g., audits)
• Balancing trade-offs in allocating limited resources
• Supporting and evaluating proposals and high-stakes decisions
• Disputes over the meaning of metrics (e.g., performance evaluations, setting quotas, tracking productivity, etc.).

The result of this was that we ended up with a collection of more than 120 fully contextualized case studies of real work challenges where people were pushed to the edge of their current practice.

Making sense: The challenge of analysis. Our research required us to “cast a wide net” in terms of both participants and the variety of use cases we explored. Each two-hour interview yielded many pages of notes and collected artifacts. The sheer volume and heterogeneity of unstructured narrative data significantly increased the challenge of identifying themes and opportunities at a strategically significant and actionable level. This required many iterative passes through the data.

Another challenge was the need to understand and group specific cases and instances across participants while preserving the team’s ability to contextualize them into the larger narrative of each user’s story. Our approach enabled us to group each case on multiple dimensions, while keeping the “story” intact. Without retaining this context for each instance, a team may be tempted to group superficially similar observations. We have seen this happen all too often with the affinity diagramming process, which is often the only form of analysis used for contextual field research data. It involves decomposition of observations into molecular “interpretive” comments, typically captured during debriefings, and then clustering these thematically. Often, clusters are developed by people who do not know the context of the comment. Decisions about how to group items can too easily become an exercise in semantic associations and lead to superficial insights. And although affinity diagramming can be as iterative as you like, resistance to breaking an evolving structure and revising it can be high, especially since many people on the team contribute to it and naturally become invested in it.

As in affinity diagramming, we used a clustering approach, but a key difference was in the type of elements we clustered: We began by iteratively clustering the more than 120 work scenarios we had gathered. The process was iterative, because each story was rich in implications, relevant to multiple topics, and indexed in many ways (by company, role, task type, business goal, etc.). Because they required complex interrelated categories, these outputs would have been very difficult to document and discuss with traditional affinity diagramming exercises alone.

For example, one output from the analysis was detailed descriptions for dozens of basic quantitative tasks we observed and grouped into business areas. It resulted in cases being consolidated into task groupings structured in the following way (a rough sketch in code follows the list):

• A definition of the quantitative task type (e.g., monitoring the data from a process metric and changes to it over time)
• Identification of the common quantitative thinking challenges that emerge in this task (e.g., how users assessed where a particular threshold in the data should be set)
• A collection of thumbnail scenario examples, based on our data, that showed the applicability of the task type to different business functions (e.g., observed examples from product management, merchandising, budgeting, etc.).
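To make the multi-dimensional indexing concrete, here is a minimal sketch of how such case records might be represented so they can be grouped on any dimension while each story stays intact. This is our illustration, not the authors’ actual tooling; all field names and sample values are assumptions.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class CaseStudy:
    """One fully contextualized work scenario. Field names are assumed."""
    participant: str         # links back to the full interview narrative
    company: str
    role: str
    business_function: str   # e.g., merchandising, budgeting
    task_type: str           # e.g., "monitor a process metric over time"
    thinking_challenges: list = field(default_factory=list)
    narrative: str = ""      # the intact "story," preserved verbatim

def group_by(cases, key):
    """Group cases on any dimension without stripping their context."""
    groups = defaultdict(list)
    for case in cases:
        groups[key(case)].append(case)
    return groups

# Hypothetical example record and two alternative groupings:
cases = [
    CaseStudy("P17", "RetailCo", "Merchandiser", "merchandising",
              "monitor a process metric over time",
              ["deciding where a threshold should be set"],
              "Full scenario narrative kept here..."),
]
by_task = group_by(cases, lambda c: c.task_type)
by_function = group_by(cases, lambda c: c.business_function)
```

Because each record carries its narrative with it, any cluster produced this way can always be traced back to the full story it came from, which is the property the authors say affinity diagramming alone tends to lose.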

Turning data into findings with strategic and design impact. Our research resulted in two conflicting findings that together painted a surprising picture of the audience and the opportunity.


First, many casual users of business information demonstrated a lack of motivation to go beyond their surprisingly rudimentary quantitative practices. Second, through our explorations with participants about what might be possible with their data, we observed countless opportunities where organizations could seemingly have benefited from the simplest of next steps in understanding their data.

That casual users did not display the expected data curiosity was almost universally true across the participants we saw—even with people who frequently worked with numerical data as part of their jobs. Our data revealed several clues as to why users might lack curiosity about their data. We saw many examples where users showed limitations in their fundamental quantitative thinking about everyday business questions and tasks, and so had difficulty seeing the potential value of going somewhat deeper in their analyses. For example, they did not take variability into account in making projections. They relied on unvalidated rules of thumb and other crude quantitative assumptions. They did not evaluate their estimating practices by seeing if the data showed they were consistently over- or underestimating. As a result of limitations like these, they did not take full advantage of the potential value in the quantitative information that was available to them. Instead, they resorted to impressionistic thinking surprisingly quickly and had difficulty adopting the data-based decision-making practices that so many businesses try to promote.
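As an illustration of the kind of next step participants rarely took, here is a minimal sketch of the two checks just described: whether past estimates were consistently biased, and what a projection looks like once variability is acknowledged. The numbers and names are invented for the example; the article reports only that such checks were skipped.

```python
from statistics import mean, stdev

past_estimates = [100, 120, 90, 110, 105]   # hypothetical forecasts
actuals        = [112, 131, 101, 118, 117]  # hypothetical outcomes

# Check 1: are we consistently over- or underestimating?
errors = [est - act for est, act in zip(past_estimates, actuals)]
bias = mean(errors)  # negative => consistent underestimation
print(f"average estimation error: {bias:+.1f}")

# Check 2: a projection that acknowledges variability,
# instead of extrapolating a single number.
growth_rates = [(actuals[i + 1] - actuals[i]) / actuals[i]
                for i in range(len(actuals) - 1)]
g_mean, g_sd = mean(growth_rates), stdev(growth_rates)
latest = actuals[-1]
print(f"next-period projection: {latest * (1 + g_mean):.0f} "
      f"(rough range {latest * (1 + g_mean - g_sd):.0f}"
      f"-{latest * (1 + g_mean + g_sd):.0f})")
```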

In addition, casual users of BI had difficulty integrating qualitative with quantitative thinking. For example, they tended to overly discount the quantitative data when evaluating an outlier in a quantitative trend if they could think of a qualitative fact that might partially explain it (e.g., “Our sales were down this Thanksgiving compared to last year, but the weather was bad this year”). They had difficulty thinking of a way to use related data to assess their qualitative explanations. Qualitative information that was relevant to interpreting quantitative patterns tended to live in individuals’ heads, rather than being shared.
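One hedged sketch of what using related data to assess such a qualitative explanation could look like for the Thanksgiving example: compare this year’s drop against what bad weather has typically cost in past years. All figures are invented for illustration.

```python
# Hypothetical holiday sales and weather records by year.
holiday_sales = {2009: 480, 2010: 510, 2011: 495, 2012: 520, 2013: 430}
bad_weather   = {2009: False, 2010: False, 2011: True, 2012: False, 2013: True}

# Baseline: average sales in past good-weather vs. bad-weather years.
with_bad    = [s for y, s in holiday_sales.items()
               if bad_weather[y] and y != 2013]
without_bad = [s for y, s in holiday_sales.items() if not bad_weather[y]]

typical_weather_hit = (sum(without_bad) / len(without_bad)
                       - sum(with_bad) / len(with_bad))
observed_drop = sum(without_bad) / len(without_bad) - holiday_sales[2013]

# If this year's drop far exceeds what bad weather usually costs,
# weather alone probably does not explain it.
print(f"typical bad-weather hit: {typical_weather_hit:.0f}")
print(f"this year's drop:        {observed_drop:.0f}")
```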

Our research showed that these challenges occur within a wide cross section of industries and user roles. Though this might be interpreted as a huge barrier to wider usage of BI tools, it in fact suggests that if the challenges can be addressed, the solutions could potentially be generalizable to a broad audience. Likewise, our detailed categorization of challenges, some of which are described here, created a list of issues that can be overcome. Because we observed opportunities for improved business thinking in even the most incrementally deeper engagement by BI users, we are confident that tools that address the quantitative thinking challenges we identified have the potential to guide users in modest but targeted steps to discover new value in their data.

Conclusion
We have argued that the challenge in innovation research we have focused on is fairly ubiquitous where systems and jobs have co-evolved. Our strategy for addressing this was to systematically look for edge cases, both within the experience of individuals and across individuals. Although UX professionals often talk about the importance of understanding edge cases, it seems we often use the term as synonymous with exceptions. In our case, we were looking for something more specific than parts of a job that are not well supported in the existing system, creating extra work. Rather, we systematically looked for situations where there is some motivation for individuals to push against the limits (i.e., the “edges”) of their jobs. This is where you will find the intersection between the evolution of technology and the evolution of job and organizational design. It is at that intersection that true innovation always brings change.

Endnotes

1. Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., and Byers, A. Big Data: The Next Frontier for Innovation, Competition, and Productivity. McKinsey Global Institute, 2011.

2. Schadler, T. The State of Workforce Technology Adoption: US Benchmark 2009. Forrester, 2009.

3. Drucker, P. Landmarks of Tomorrow. Harper, 1959.

4. Rogers, E. Diffusion of Innovations. Free Press of Glencoe, 1962.

5. von Hippel, E. The Sources of Innovation. Oxford University Press, 1994.

About the Authors

David Siegel is a senior user experience researcher at Google. This work was done while he was a user experience researcher and consultant with Dray & Associates, Inc.

Alex Sorin has more than 20 years of experience designing innovative and strategic software products for leading software companies. He authored several U.S. and E.U. design patents. Currently, he is a director and user experience architect at SAP.

Michael Thompson has been working in product management, product marketing, and product design for more than 20 years in companies such as Apple, Business Objects, SAP, and several startups. He currently leads the user experience function at Emailvision, a provider of cloud-based digital marketing tools. This work was done while he was director of product management at SAP.

Susan Dray, president of Dray & Associates, is a practitioner and consultant carrying out both generative and evaluative field research, and has taught many practitioners how to design, conduct, and interpret field research, among other things.

DOI: 10.1145/2503774. Copyright held by authors. Publication rights licensed to ACM.


