
CASRO PANEL CONFERENCE 2008

DEFINING & DELIVERING QUALITY

FEBRUARY 5TH & 6TH, 2008

Maximizing respondent engagement through survey design

PREPARED BY

Vision Critical & Angus Reid Strategies


Introduction

This paper is aligned with the topic “Survey Design”, specifically focused on the impact of visual and interactive (user interface) design factors on respondent engagement.

Deploying online surveys presents a number of challenges and opportunities in the area of user interface design and usability. A traditional telephone survey is a conversation, in which questions and answers are communicated as linear speech, while allowing respondents to clarify and create dialogue with the researcher. However, a survey presented in a Web browser is a self-paced experience that requires thought and discipline in interaction design and usability in order to create an effective dialogue with the targeted respondent. An online survey is in many senses a “walk-up-and-use” application, similar to a kiosk, where guidance from the researcher is limited. The user experience for a first-time panelist needs to be balanced between becoming familiar with the user interface and being engaged, which translates into a more efficient and predictable experience for longer-term panelists. Vision Critical has the experience in interaction design, rich media and back-end development that permits us to achieve an effective balance in these respects. We believe that increased respondent engagement and improved response rates are possible with effective user interface design.

The objective of this study was to compare behavioral and attitudinal data across two identical online surveys, one based on traditional user interface widgets and the other incorporating more advanced feedback and interaction design techniques. The “traditional” user interface design utilized components such as check boxes and radio buttons from the host operating system, presented within the familiar “flash card” navigational metaphor of a series of independent screens or pages. The “advanced” design comprised elements of question presentation and navigational feedback designed to be more engaging, and to improve the context and meaning of questions and question groups. The survey incorporating rich media and advanced design shall be referred to as the “Fusion” survey, and the survey using the traditional user interface shall be referred to as the “flat” survey in this paper.

The three hypotheses we tested in this study were1:

• Respondents will report higher satisfaction with the Fusion survey than with the flat survey. To evaluate this hypothesis, we have fine-tuned our approach to focus on respondent engagement as it relates to the usability and user interface design of the Fusion survey.

• The Fusion survey will be perceived as shorter than the flat survey, and respondents will be willing to spend longer on it as a result.

• The completion rates for the advanced Fusion design will be higher than the traditional flat design.

Methodology

A split sample research design was used for this survey, in which half of the sample responded to a Fusion survey and half responded to a flat survey. In total, 1246 individuals participated in the Fusion survey and 1227 participated in the flat survey. The margin of error for each sample was ±2.8% (at a 95% confidence interval).
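The reported margin of error is consistent with the standard formula for a proportion at maximum variance (p = 0.5); as a worked check for the Fusion sample (this calculation is ours, not the paper's):

$$\text{MOE} = z_{0.975}\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1246}} \approx 0.0278 \approx \pm 2.8\%$$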

Individuals were randomly recruited from the Angus Reid Forum, our proprietary consumer online panel of approximately 50,000 Canadians. To ensure data were representative of the demographic makeup of the population, samples were balanced by age, gender and region using Census Canada figures.

1 Unless otherwise noted, any differences between the Fusion and flat survey refer to statistically significant differences at p<.05. We used Student’s Independent Samples T Test to test differences between point estimates (%), the Mann-Whitney Test to test differences between medians, and the F-Test to test differences in the variability of responses to a scale (variance).
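For reference, and assuming the standard pooled-variance form of the independent samples t test (the paper does not state which variant was used), the statistic is:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}, \qquad s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$$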


In order to participate in the study, individual respondents were required to have the Flash Player (minimum version 8) installed on their computer system. Flash compatibility was evaluated by showing a Flash image at the beginning of the survey. Individuals who could see the image qualified to take part, while those who could not see the image were terminated.

Individuals participated in a survey on the topic of food. The study explored Canadians’ approach to food, including what and where they eat, how they shop, what they know and how they feel about food. The questionnaire included:

• Flash compatibility
• Food consumption (behavior)
• Purchasing of food (behavior and knowledge)
• Attitudes toward food
• Demographics
• Attitudes to participating in this particular survey (e.g., easy to complete, fun to complete, survey more enjoyable than most)
• Appeal of survey topic and design
• Future intentions to participate in online surveys
• Perceived length of this particular survey

User Interface Design

Several considerations were made when attempting to improve the respondent user interface through the advanced Fusion design. Our focus was on how respondents answer and interact with a variety of question types: single choice, multi-choice and grids.

The overall design of the Fusion survey incorporated larger controls than the operating-system native widgets used in the flat survey, as seen in Figure 1.0.

Figure 1.0 Size of user interface controls, flat (left) vs. Fusion (right)

The increased target size provides an easier means of selecting options. In some cases, a graphic was included to further differentiate the options, as illustrated in Figure 2.0.

Figure 2.0 Example of Fusion button with graphical image

To increase the effectiveness of the survey user interface, a specific design pattern was applied to each question type. Relationships between and among questions were an area of particular focus.

There were three general design patterns used in the Fusion survey to address traditional user interface concerns in online surveys: combined questions, pop-up questions and sort questions.


Question Type 1: Combined Questions

Combined questions utilize the entire real estate of the user interface by combining two or more questions onto a single page. In the flat survey, a majority of these related questions were presented on sequential pages, in which questions are presented one to a screen. A series of two unique flat questions regarding purchase locations and the subsequent amount spent at each location is illustrated below in Figure 3.0.

Figure 3.0 Two flat questions in series

The Fusion equivalent of this series presents these two questions on a single screen, as illustrated in Figure 4.0.

Figure 4.0 Combined Fusion questions on a single page

By designing the user interface using a combined approach, the purchase location and expenditure become in a sense part of the same question; there may be less perceived “narrowing” of the available choices, as they are answered in pairs. The expenditure menus are enabled and disabled as the locations are selected or de-selected.
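The paper describes this pairing behavior but includes no code (the surveys were built in Flash). Purely as an illustrative sketch of the enable/disable logic, in TypeScript with hypothetical names:

```typescript
// Hypothetical model of a combined question: selecting a purchase
// location enables its paired expenditure menu; de-selecting disables
// and clears it, so answers are always given in location/amount pairs.
interface LocationRow {
  location: string;            // e.g. "Supermarket"
  selected: boolean;           // checkbox state
  amountSpent: string | null;  // meaningful only while selected
}

function toggleLocation(rows: LocationRow[], location: string): LocationRow[] {
  return rows.map((row) =>
    row.location === location
      ? {
          ...row,
          selected: !row.selected,
          // De-selecting a location clears its paired expenditure answer.
          amountSpent: row.selected ? null : row.amountSpent,
        }
      : row,
  );
}

// A row's expenditure menu is interactive only while its location is selected.
const isAmountEnabled = (row: LocationRow): boolean => row.selected;
```

The design point is that location and expenditure stay in lockstep, which is the “answered in pairs” behavior described above.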


Question Type 2: Pop-up Questions

A second pattern, pop-up questions, was designed to handle the occasional pair of individual questions that are directly related to each other and must be completed in a linear sequence. For these types of questions, a simple subsidiary question was required conditionally upon a positive answer. In this example, the respondent is asked whether food was brought from home or purchased, after the source of the meal is identified, as illustrated in Figure 5.0.

Figure 5.0 Sequential flat follow-on questions

In the Fusion design, this was presented as a pop-up modal "balloon", as shown in Figure 6.0.

Figure 6.0 Fusion follow-on question (pop-up question)

The goal of this design pattern was to provide greater context to the respondent and increase respondent engagement by associating the two questions in direct sequence, rather than first answering the location at which the meal was consumed (e.g., “At Work”) and then subsequently answering whether it was purchased (e.g., “Brought it from home”, “Bought it”).
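Again as a hypothetical sketch (names and trigger answers are ours, not the paper's), the conditional display rule behind the balloon might be modeled as:

```typescript
// Hypothetical pop-up (follow-on) question: the subsidiary question is
// shown as a modal balloon only after a qualifying answer to the parent.
interface PopUpRule {
  parentQuestionId: string;
  triggerAnswers: string[];   // answers that open the balloon
  followUpQuestionId: string;
}

const mealSourceRule: PopUpRule = {
  parentQuestionId: "meal_location",         // where the meal was eaten
  triggerAnswers: ["At work", "At school"],  // illustrative trigger answers
  followUpQuestionId: "brought_or_bought",   // "Brought it from home" / "Bought it"
};

function shouldShowBalloon(rule: PopUpRule, answer: string): boolean {
  // The balloon opens immediately on a qualifying answer, keeping the two
  // related questions in direct sequence rather than on separate pages.
  return rule.triggerAnswers.includes(answer);
}
```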

Question Type 3: Sort Questions

Finally, a sort exercise was used in the Fusion survey to replace the traditional scale questions in the flat survey (Figure 7.0). In the Fusion version, the categories were used as destinations or containers for “cards” containing the statements (Figure 8.0), the hypothesis being that the drag-and-drop interactivity would encourage respondents to give greater thought to each statement, given that they are presented one at a time.
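A minimal sketch of the card-sort state, in TypeScript; the scale labels here are illustrative placeholders, since the paper does not reproduce the exact scale:

```typescript
// Hypothetical card-sort model: each statement "card" is dragged into a
// scale-point container; the exercise is complete when every card is placed.
type ScalePoint =
  | "Strongly disagree"
  | "Disagree"
  | "Agree"
  | "Strongly agree";

interface SortState {
  remaining: string[];              // statements not yet placed
  placed: Map<string, ScalePoint>;  // statement -> chosen container
}

function dropCard(state: SortState, statement: string, target: ScalePoint): SortState {
  if (!state.remaining.includes(statement)) return state; // already placed
  return {
    // Cards are presented (and removed) one at a time, which is the
    // mechanism the paper credits with focusing attention per statement.
    remaining: state.remaining.filter((s) => s !== statement),
    placed: new Map(state.placed).set(statement, target),
  };
}

const isComplete = (state: SortState): boolean => state.remaining.length === 0;
```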


Figure 7.0 Traditional flat grid question

Figure 8.0 Fusion drag and drop “sort” question

Respondent Engagement

Engagement with the Survey Experience

To address respondent engagement, the study anticipated that “respondents would be more engaged by the Fusion than the flat survey”.

At the end of the questionnaire, respondents were asked to evaluate their survey experience by answering six attitudinal statements. Items were designed to evaluate questionnaire ease, enjoyment, topic interest, and question design.

Fusion outperformed the flat survey on all measures (Figure 9.0). Not only was the Fusion experience more engaging (i.e., fun, enjoyable), but respondents also found the Fusion survey easier to complete. The sole difference between the two surveys was the display of the questions themselves, which indicates that the Fusion-designed user interface was seen as far more engaging. Interestingly, there also appears to be a halo effect, potentially influencing the Fusion group to report greater enjoyment in taking surveys in general and to find the survey topic more interesting.


Figure 9.0 Engagement in the survey experience

Encouraging Future Participation

Fusion is clearly a more powerful vehicle for promoting participation in future research than the more traditional flat survey. Fusion survey takers are much more optimistic about their future participation in similarly designed surveys (Figure 10.0).

Base: Total (Fusion=1246, Flat=1227)
Q. If all surveys were like this I would be: More likely to take part in online surveys / No changes from my current participation in online surveys / Less likely to take part in online surveys

Figure 10.0 Future participation in research

Perceived Survey Length

The primary hypothesis of the paper was that the perceived length of the Fusion survey would be “shorter” than the flat survey and that respondents would be willing to spend longer on it as a result. However, the results show that both the Fusion and flat surveys were thought to be, on average, 10 minutes in length. In reality, the Fusion version actually took significantly longer to complete than the flat, but was not perceived as being longer (Figure 11.0).

Actual Time = Start time subtracted from stop time in the electronic survey file
Estimated Time = “How long would you estimate it took you to complete this survey?”

Figure 11.0 Actual vs. estimated length of survey.

Completing the Survey

Overall participation and disqualification rates did not differ between the Fusion and flat surveys (see Table 1.0).

Table 1.0 Participation and completion rates

One of our hypotheses included higher completion rates for the Fusion survey than the flat survey. This hypothesis did not hold true, although completion rates were quite high, exceeding 90% for both surveys (Appendix: Table A1).

However, of the 7% Fusion/4% flat who dropped out, the analysis revealed three notable stop points:

• Initial Page (11% Fusion vs. 21% flat; p<.05)
• Page Introduction (23% Fusion vs. 4% flat; p>.05)
• Second Last Question (1% Fusion vs. 15% flat; p<.05)

The remaining individuals dropped out at sporadic points throughout the survey, with no differences between the Fusion and flat surveys. Other research has speculated that the lower completion rates for Fusion-type surveys could be due to individuals without broadband access dropping out of the Fusion survey because of difficulties loading Flash pages2. We, however, found no significant differences between the flat and Fusion groups in terms of the Internet connection speed of respondents who dropped out after the Flash compatibility question.

One plausible explanation for the difference between completion rates could be the Flash Player detection process. Respondents were explicitly asked if a Flash image was visible, to determine whether they had the correct Flash Player installed on their computers. However, respondents could in fact incorrectly click “Yes” even if they could not see the Flash image. If this were the case, all subsequent Flash questions would not be viewed, which could explain the drop-off rates observed immediately after the Page Introduction.

Impact on Survey Data

For the majority of questions we found no significant differences between the Fusion and flat surveys. That is, the same conclusions were reached across both surveys regarding Canadians’ eating and shopping patterns. The Fusion and flat groups did not differ markedly in terms of demographics (income, education, children in the home, etc.), and only minor differences were noted between Fusion and flat regarding the speed of Internet connections (dial-up: 7% Fusion vs. 9% flat). Groups were equally likely to have cable, ADSL and other high-speed connections. The technical profiles of respondents, analyzed from web-server logs, indicated no significant difference across the Fusion and flat samples with respect to browser type, OS platform, screen resolution and Flash Player version.

There were three exceptions to this pattern: (1) a completely open-ended, verbatim question, (2) a number of single-choice food knowledge questions that included a “don’t know” option, and (3) the attitudinal attribute question.

Open-Ended Questions

We asked individuals what they ate yesterday for one of the following meals: breakfast, lunch or dinner. In both surveys, respondents were asked to indicate up to 4 items they ate and to be as specific as possible, as shown in Figures 12.0 and 13.0 below:

Figure 12.0 Flat version of open-ended question

2 Reid J., Morden M. and Reid A., Maximizing Respondent Engagement: The Use of Rich Media, Presented at ESOMAR Congress 2007, p.6.


Figure 13.0 Fusion version of open-ended question

Our results revealed that Fusion encouraged more detail than the flat design, with respondents providing more answers (Figure 14.0).

Base: Ate Meal Yesterday (Fusion=1229, Flat=1218)
Q. What did you eat for [MEAL]? (Please write up to 4 items that you ate. Be as specific as possible.) Individuals had to provide at least one response.

Figure 14.0 Completeness of open-ended information. Fusion encourages more detailed responses.

There are at least two possible explanations for the more detailed responses in the Fusion question, based on user interface design considerations. Firstly, the four fields were presented in a horizontal series in the Fusion questionnaire. This may have suggested to some respondents that they were more of a “list,” perhaps implying easier tabbing or movement. For those using the mouse to move from one field to another, the shorter distance between them may have been a factor. Secondly, the presence of the second question (“Did you finish the entire meal?”) may have provided greater context and resulted in increased understanding or engagement, as intended.

Single-Choice Knowledge Questions

In the Shopping section of our study, we asked individuals about specific types of products they bought during their last shopping trip. For us, it was important to explore respondents’ knowledge of the products they bought. The inclusion of a “Don’t know” response, therefore, was critical.

In the flat survey we used grids/tables and traditional radio buttons (Figure 15.0). In Fusion, we used drop-down lists, pop-up questions, and larger single-select button controls (Figure 16.0).


Figure 15.0 Flat questions with “Don’t know” buttons

Figure 16.0 Fusion question with “Don’t know” buttons

Although there were no differences in behavioral information between Fusion and flat when we used drop-down lists and pop-up questions, the Fusion survey resulted in more “Don’t know” responses than the flat survey when we used larger single-choice button controls (Appendix: Figure A2).

There are a number of possible explanations for these differences. Firstly, the inclusion of a “Don’t know” option was somewhat downplayed in the flat design (i.e., the “Don’t know” column is in fact less wide than the initial two options; Figure 15.0), while it was given equal weighting in the Fusion design of the question (Figure 16.0). Respondents may therefore be inclined to answer more honestly, since the “Don’t know” option is equally weighted in appearance. Alternatively, it is possible that the layout of the Fusion question was not as clear to respondents as the flat; however, it is difficult to isolate the exact reason for these differences, and further research will be conducted.


Attitudinal Questions

Attitudes towards food were explored by asking respondents to indicate their level of agreement/disagreement with a series of 18 statements covering topics such as nutrition, organics, comfort, price, and weight. Statements were randomly divided across two questions to reduce respondent fatigue. The format of these questions differed in flat versus Fusion (see Figures 7.0 and 8.0). The Fusion survey resulted in broader use of the attitudinal scale than the flat survey (Appendix: Table A2).

Furthermore, the Fusion survey resulted in greater use of the “Strongly disagree” response than the flat survey for 11 of the 18 statements, and greater use of the “Strongly agree” response for 2 of the statements. A more detailed look at the results reveals that the Fusion survey moved respondents from a moderate to a stronger position, and not from agreement to disagreement overall (Appendix: Table A3).

The higher response variability of the Fusion sort exercise may be explained by the drag-and-drop interaction. The design of the question draws the eye and the respondent’s attention to the individual statement, putting the scale into a different visual context. We, however, have found the same tendency toward greater scale use with sorting exercises that are organized vertically, with “Strongly agree” at the top and “Strongly disagree” at the bottom3. Individuals may be making more informed decisions and could be somewhat more comfortable providing socially undesirable answers like “Strongly disagree”. Another possibility is that in the flat version of these questions, the “Strongly disagree” option was presented furthest from the question statement, potentially introducing some difficulty in mapping the radio button back to the statement.

3 Reid J., Morden M. and Reid A., Maximizing Respondent Engagement: The Use of Rich Media, Presented at ESOMAR Congress 2007, p.9.

Conclusions

In conclusion, our comparison study of traditional “flat” user interface design to advanced “Fusion” user interface design suggests that Fusion surveys have a positive effect on respondent engagement. Results indicated that:

• Respondents reported higher enjoyment and engagement in the Fusion survey process, particularly due to the question format.
• The Fusion and flat surveys were perceived to be the same length; however, respondents actually spent more time on the Fusion survey.
• The completion rate for both surveys was high, but was lower for the Fusion survey than for the flat.

There were very few differences between the Fusion and the flat survey in terms of behavior and knowledge questions, with the exception of three unique question types: open-ended verbatim, knowledge questions with “Don’t know” options, and the attitudinal attributes question. We believe these results have important and positive implications for panel research and panel health. The promise of Fusion bodes well for the future of panels by helping to keep panelists interested and engaged, which will presumably encourage panelists to stay on panels longer and to provide thoughtful, detailed answers to research questions. In addition, we feel these results challenge the mantra of “shorter is better” and believe that the benefits afforded by greater engagement outweigh the burden of a longer survey.

A few key methodological considerations have been identified to further explore the significance of advanced user interfaces in online surveys:

1. Flash Player Detection. To further understand the difference in completion rates between Fusion and flat surveys, we have revised the Flash Player detection process to implicitly “sniff” out the Player rather than explicitly asking respondents whether a Flash image can be viewed or not (see the sketch after this list). By modifying this process, we can confirm that all respondents had the Flash Player needed to participate in the survey, which could explain the difference in completion rates across the two versions. Since respondent engagement is higher in Fusion surveys, we need to further understand why completion rates are not relatively higher as well.

2. Usability Testing. The next step in evolving the interaction design of our question types will involve one-on-one usability testing. In-person tests with representative users should help us better understand how respondents are interacting with both the traditional “flat” questions and the advanced Fusion design. More specifically, we could learn more about the challenges they face and the benefits of each design. In order to balance efficiency and engagement, we will be examining surveys at the question level as well as the overall experience; it may be that some combination of design approaches, along with new concepts, will allow us to achieve even greater levels of respondent engagement.

3. “Don’t Know” Variability. Further tests to evaluate the difference in responses to questions that include “Don’t know” options will increase our understanding of these results, and of whether respondents in fact tend to be more “honest” about their knowledge because of user interface design.
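The paper does not describe how the implicit “sniff” is implemented. One plausible browser-side approach from that era, sketched in TypeScript (ActiveXObject is IE-only, and the ProgID shown is the standard one for the Flash control; the overall routine is an assumption on our part):

```typescript
// Hypothetical implicit Flash Player detection: probe the browser for the
// plugin instead of asking the respondent whether a test image is visible.
declare const ActiveXObject: { new (progId: string): unknown } | undefined;

function hasFlashPlayer(): boolean {
  // Netscape-family browsers (Firefox, Safari, Opera) list installed plugins.
  if (navigator.plugins.namedItem("Shockwave Flash")) return true;
  // Internet Explorer exposes Flash as an ActiveX control instead.
  try {
    if (typeof ActiveXObject !== "undefined") {
      new ActiveXObject("ShockwaveFlash.ShockwaveFlash");
      return true;
    }
  } catch {
    // Control not installed; fall through.
  }
  return false;
}

// Qualify or terminate the respondent before the survey begins, rather than
// relying on a self-reported "Yes, I can see the image" answer.
const qualified = hasFlashPlayer();
```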

We have identified specific areas for improvement in user interface design while opening up further discussions on how online surveys can incorporate rich media techniques. The opportunity to explore additional design patterns in the future will expand our knowledge of respondent engagement.

The Authors:

Su Ning Strube is Vice President of Fusion Services, Vision Critical, Canada.
Yola Zdanowicz is Senior Vice President, Angus Reid Strategies, Canada.
Chris Ryan is Director of Usability, Vision Critical, Canada.
Katrina Tough is Vice President, Angus Reid Strategies, Canada.


Appendix: Tables and Figures

Table A1.0 Drop-offs by question

Base: Shopper Purchase Items During Last Shopping Trip (various)
Q. Please answer each question for each of the items you purchased. How Grown. All comparisons significant at p<.05 with the exception of pears and strawberries.

Figure A1. Behavioral questions (How Fruit is Grown). More likely to get a “don’t know” response to some questions without Fusion.


Table A2.0 Attitudinal data variability4

4 Standard deviation is a measure of the variability or spread of responses. An F Test is used to test whether the variability of responses across 2 groups is equal. A significant result indicates that the variability of responses in one group is larger than the variability of responses for the second group.
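In its textbook form (our addition, not stated in the paper), the F statistic is simply the ratio of the two sample variances, with $n_1 - 1$ and $n_2 - 1$ degrees of freedom:

$$F = \frac{s_1^2}{s_2^2}$$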


Table A3.0 Attitudinal scale usage

