CONTINUING EDUCATION Going to the Source:
A Guide to Using Surveys in Health Care Research
Kathleen A. Fairman
OBJECTIVE: To review techniques for producing useful and objective surveys about health care consumers' experiences and opinions.
DATA SOURCES: Literature references.
CONCLUSION: The following principles should guide
project planning and questionnaire construction:
1) enlist the respondent's interest and trust; 2) maintain trust by keeping respondent burden to a minimum; 3) provide an attractive product; 4) avoid confusing, threatening, or biased questions; and 5)
ensure that the questionnaire is consistent with
planned data analyses. Several additional procedures
enhance the usefulness of survey results: pretesting,
AUTHOR
KATHLEEN FAIRMAN, M.A., is Outcomes Research Manager at Express
Scripts/ValueRx, Tempe, AZ.
AUTHOR CORRESPONDENCE: Kathleen A. Fairman, M.A., Express
Scripts/ValueRx, 1700 North Desert Drive, Tempe, Arizona 85281.
ACKNOWLEDGEMENT: The author appreciates the helpful comments of
Catherine Roe, Ph.D.
CE CREDIT: This is article number 233-000-99-02-H04 in AMCP's continuing education program. It affords 2.0 hours (0.20 CEU) of credit. Learning objectives and test questions follow on page 160.
150 Journal of Managed Care Pharmacy JMCP March/April 1999 Vol. 5, No. 2
follow-ups with initial nonrespondents to increase
response rate, and performing tests for reliability and
validity.
When survey results are to be used for important
organizational purposes, or when a questionnaire
deals with potentially sensitive topics, employing a
well-trained professional survey researcher is wise.
Depending on project scope and complexity, survey
researchers may be used either as reviewer/consul-
tants or project administrators.
KEYWORDS: Survey, Questionnaire, Outcomes,
Reliability, Validity
J Managed Care Pharm 1999;5:150-59
A few weeks after the scandal involving President Bill Clinton and Monica Lewinsky became public, a caller on an Arizona radio talk show expressed his frustration with the results of recent polls. "I don't understand who they talked to," he complained. "Why, I was just at the gun show today, and everyone there thought the president ought to be kicked right out of office. The people at that gun show-that's who they should've asked."
The caller's proposed sampling technique was flawed, because gun show attendees in Arizona do not constitute a representative subset of the US population, particularly in an assessment of President Clinton, a gun control proponent.
Nonetheless, the caller raised important questions that are frequently asked about political polls but also apply to surveys performed for market research, consumer satisfaction assessment, and health outcomes measurement: Can decision makers rely on survey data? Do survey findings accurately represent the target population?
These questions are becoming increasingly important in the health care industry. Several factors-including the development of the outcomes research field, the use of "report cards" to assess managed care organizations, and growing competition among service providers-have led to a burgeoning interest in objective measurement of health care consumers' opinions and experiences.1-4 This article, in fact, was requested by the Journal of Managed Care Pharmacy as a result of the positive response to the recent three-part series on outcomes research measurement and methodology.5-7
Surveys contribute unique information to the health care field because they go directly to consumers, who are the source of many important decisions. Consumers choose health plans and physicians, request specific drug products, decide whether to comply with prescribed medication regimens, and terminate treatments. Surveys can examine the reasons underlying those decisions. In outcomes research, surveys can provide critical "process measures," sometimes explaining findings from other types of studies such as analyses of claims databases or medical records. For example, assuming that Product X and Product Y are equally efficacious in clinical trials, a finding of better effectiveness for Product X in actual clinical practice might initially seem anomalous. A survey might reveal that Product Y produces intolerable side effects leading to substantial noncompliance.
Thus, survey studies are potentially powerful tools.
However, like any tool, a survey must be designed and used
appropriately to be effective. This article will review techniques for developing and carrying out survey research projects that produce useful and objective results. Although
health care consumers are the primary focus, the techniques
outlined can be used in surveys of executives, physicians, or
anyone else who uses or works within the health care system.
QUESTIONNAIRE PLANNING
Literature Review
Assuming the investigator has developed an empirically measurable hypothesis or research question, the first step in
any survey project is to search the literature for previous
investigations of similar research questions. This search has
two purposes. First, if an acceptable survey tool has already
been developed, it might be unnecessary to incur the expense
and effort of writing a questionnaire. Using an already existing
survey tool is particularly appropriate if it has been subjected
to tests of reliability (the degree to which a tool measures a
phenomenon consistently) and validity (the degree to which a
tool measures what it is intended to measure).7-9 For copyrighted instruments, it is important to incorporate user fees, if
any, into the project budget, and to allow sufficient time to
obtain permission. If it is necessary to write a questionnaire,
however, a solid understanding of previous work gained
through a literature search will help the investigator to write
appropriate questions.
Critical Decisions
The investigator must at this phase make important project decisions, including how many people will be surveyed (sample size), the method of survey administration (telephone, mail, or in person), and how much the survey is going to cost. To do this, the investigator must remember the importance of a figure that is typically calculated as the data are collected: the response rate. The response rate for a survey is defined as
the proportion of eligible respondents (those who meet study
criteria) who complete the questionnaire. Respondents who
are found to be inappropriate for study are excluded from the
calculation.
For example, assume that Health-R-Us HMO is conducting a member satisfaction survey. Because the HMO has determined in advance that, for privacy reasons, members cannot be telephoned at work, an eligible respondent is defined as a Health-R-Us member with an accurate home telephone number. Of 1,000 members telephoned, 600 complete the interview, 100 are not reached, 100 advise the interviewer that they no longer belong to Health-R-Us, and 100 refuse to complete the survey. For an additional 100, the telephone number provided to the health plan is determined to be a business, so the call is terminated without conducting an interview. The response rate is 75%, derived as follows:
600 completed interviews / [1,000 attempts - (100 nonmembers + 100 no home telephone number)] = 600/800 = 75%
Note that only those ineligible for study are removed from the denominator. Refusals and members who were not reached are included in the denominator, since there is no information to suggest they are ineligible for study. Only completed interviews are included in the numerator.
Response rates are important as one measure of the degree to which results are representative of the target population. A low response rate may mean that only those who felt strongly about the survey topic took the time to respond.8 Additionally, nonrespondents tend to be older than respondents, a fact that is particularly troubling in a health care survey because it suggests that nonresponse may introduce age bias.10,11
In determining the sample size (the number of people to be surveyed), the investigator must remember that not everyone contacted will complete the questionnaire. Assume, for example, that a brief in-person interview with patients leaving a physician's office is planned. All patients are eligible for study. The investigator has used a standard formula for sample size calculation12 to determine that the number of cases needed for analysis is 300. If the expected response rate is 80%, the number of attempted interviews will be 300/0.80 = 375.
Table 1 provides a summary comparison of the two most frequently used survey administration methods, mail and telephone. Generally, in choosing between the two, the investigator must balance budgetary considerations and staff availability against the anticipated use of the information. For a telephone interview project to be successful, trained interviewers
phone interview project to be successful, trained interviewers
must be used. To achieve optimal response rates for a mailed
survey, staff must be available to make telephone calls and/or
carry out follow-up mailings to initial nonrespondents. These
activities require money and human resources. Nonetheless, if
the study results are to be used to make critical policy decisions (for example, by a Pharmacy and Therapeutics
Table 1. Key Features of Mail and Telephone Surveys

Feature | Mail | Telephone
Response rate without follow-up to initial nonrespondents | Low (<5%-~30%) | Moderate (~40%-60%)
Response rate with well-done follow-up | Moderate (~50%-80%) | High (~70%-90%)
Staff effort level assuming no follow-up will be done | Low | Moderate
Staff effort level assuming follow-up | High | High
Staff specialized expertise needed | Little | Much
Questions presented exactly as worded by investigator | Always | Depends on training
Expense | Low-Moderate | High
Overhead | Little (standard office equipment) | Same as mail plus sufficient telephones
Committee), or are intended for publication, high response
rates are particularly important and the resources necessary to
achieve them should be made available.
EXERCISE #1
Assume that a satisfaction survey is mailed to a sample of 1,000 mail order pharmacy customers. An eligible respondent is defined as a current plan member with a valid home address. A total of 45 completed surveys are returned. Of those, 20 customers complain about the service, and five indicate that the member is no longer with the plan. An additional
150 surveys are returned by the post office as undeliverable
with no forwarding address.
Question 1. What is the response rate?
Question 2. What conclusions can be drawn about satis-
faction with the plan?
Answers:
1. The response rate is 4.7%: (45 returns - 5 nonmembers)/(1,000 - 150 with no address - 5 nonmembers) = 40/845 = 4.7%. One caution here: Some people who failed to respond to the survey actually might be nonmembers and therefore ineligible for study. Thus, the actual response rate might be higher than 4.7%, but there is no way for the investigator to quantify this possibility. The example illustrates the importance of using mailing lists that are as accurate as possible.
2. Despite the dissatisfaction rate of 50% (20 of 40 eligible respondents), it is not appropriate to conclude that dissatisfaction with the plan is high. It is more likely that dissatisfied customers were more likely than satisfied customers to respond to the survey. A response rate of less than 5% is much too low for the findings to be useful to decision makers.
WRITING THE QUESTIONNAIRE
Five principles of questionnaire construction (see Figure
1) should guide the survey investigator:
• Enlist the respondent's interest and trust.
• Maintain trust by keeping respondent burden (the degree of difficulty in completing the survey) to a minimum.
• Provide an attractive product.
• Avoid terms or questions that will confuse, threaten, bias, or annoy the respondent.
• Ensure that questionnaire content and format are consistent with the planned data analysis.
If the first four of these principles sound like marketing advice, it's because they are; conducting a survey is, in a sense, a sales task. The investigator wants something (information) and in return must provide something-the opportunity to be involved in an interesting project, the rewarding feeling of helping others, and/or the chance to express an opinion. For the respondent to feel confident that the promised "goods" will be delivered, the investigator, associates (e.g., telephone interviewers), and all written materials must be highly professional and must appeal to the respondent.
Enlist the Respondent's Interest and Trust
The old saying that first impressions are important applies
nowhere more than in an introduction to a survey. The initial
study description provided to prospective respondents tells
them what the project and the investigator are all about. Their decision whether to participate will be based on the beliefs generated by that description. A number of techniques enhance the likelihood of enlisting the respondent's trust. It is
usually not feasible to use all of these, but as many as possible
should be employed.13,14
First, it is important to make the prospective respondent
aware that the investigator is credible. If the survey is sponsored or commissioned by a respected organization, particularly one to which prospective respondents belong, this should be mentioned ("The Pharmaceutical Treatment Association asked me to conduct this survey because. . ."). Researchers using mailed surveys commonly put the cover letter on the sponsoring organization's official stationery or use its official seal (with permission, of course). Mentioning the investigator's credentials in the introduction is helpful as well ("As a Professor of Pharmacology with Halls-of-Learning University, I know the importance of. . .").
Second, the cover letter should explain the importance of the project. For example, the introduction to a survey of enrollee attitudes toward a proposed benefit change might mention that the change is being considered for implementation next year. The introduction to a survey measuring satisfaction with a particular subcontractor might mention that the contract is up for renewal. These descriptions must be completely honest. Additionally, they must not bias the respondent in any way. For example, it would be inappropriate to tell respondents, "We would like to know what problems people are having with We-B-PBM so that we can decide whether to terminate their contract." Instead, it would be better to say, "We would like to know about your experiences with We-B-PBM, so that we can make an informed decision when it is time to reissue our contracts on July 1."
Third, the cover letter should be personalized with the respondent's name if possible, and should emphasize the respondent's value to the project ("I know that your time is valuable, but hearing your point of view would help us greatly"). This lets respondents know that you respect them and would not allow them to incur unnecessary burdens. The respondent's individual contribution should be emphasized ("We must have your answers to these important questions if we are to develop sound policy"). If the respondent is part of a carefully selected sample of, for example, leaders in the physician community or successful health plan executives, emphasize this.8,13,14
Fourth, the cover letter should never imply in any way that the respondent is obligated to complete the survey. Ethical principles dictate that all research participation is voluntary. In addition, attempting to coerce respondents may annoy them. It is better to inform the respondent fully, including a truthful assessment of how long it takes to complete the survey. Let the respondent know of the right to refuse in a way that emphasizes the respondent's importance to the project ("You do not have to answer my questions, but your answers will help us to better understand the experiences of people who belong to the Health-R-Us Healthplan.").
Fifth, ethical principles also require that assurances of
confidentiality be provided. Investigators must maintain strict
control over study data to ensure that these promises are kept.
Finally, providing a material incentive to participate is a proven technique for enlisting respondents.13,15-17 Some researchers include one or two dollars in every mailed survey. Others enter participants in a drawing for a prize. Additionally, respondents may be offered a copy of the report of study results, as long as the report will present no information that would violate the confidentiality of any other respondents.
Maintain Trust by Minimizing Respondent Burden
Several techniques strengthen the trust and interest of
respondents by minimizing their cost of participation. First,
the survey should be as brief as possible. However, the need
for a longer survey in order to cover a topic adequately is not
cause for alarm. Respondents will complete lengthy surveys if
they are interesting and attractively presented. It is not
uncommon for good survey research organizations to achieve
response rates in excess of 80% for interviews of one half hour
or longer. However, questions should not appear repetitious, a
flaw that makes respondents feel that they are wasting time.
Figure 1. Principles of Questionnaire Construction

Respondent's interest and trust
• Investigator's credentials, organizational sponsorship
• Importance of project
• Personalized communications
• Ethical principles of voluntarism and confidentiality
• Material incentives

Minimizing respondent burden
• Survey as brief as possible, without repetition
• Minimal effort with no financial cost for respondents
• Contingency questions when appropriate

Attractive, professional product
• Weight, color, postage of mailed materials
• Plenty of blank space in written materials
• Carefully trained interviewers

Clarity and ease of completion
• Terms, questions which respondents are competent to understand
• Refer to specific time frames, events
• Response choices mutually exclusive and appropriate for population
• No "double-barrelled" items
• Questionnaire in logical order
• Introduction to every major section
• Items with similar topics, response choices grouped together
• Attention to "order effects"

Consistency with data analysis
• Open-ended items "precoded" if necessary
• Response scales consistent with planned analytic techniques
Second, respondents should never incur any financial cost
for participation, and time expenditures for administrative
tasks should be kept to a minimum. Mailed surveys should
include a pre-addressed, postage-paid envelope for easy
return. If respondents need to return calls to participate in a
telephone survey, a toll-free number should be provided. The
number of interviewers should be sufficient to ensure that
respondents do not spend time on hold. Finally, respondents should never have to expend effort
figuring out whether a question applies to them. To avoid this
problem, surveys use "contingency questions." For example, in a satisfaction survey of HMO members, there might be a series
of items regarding pharmacy services. The investigator might
want answers from those members who have used the pharmacy within the past month. Asking, "If you used a pharmacy
within the past month, did you find the pharmacist to be
friendly? If you used a pharmacy within the past month, was it
conveniently located?" is incorrect. This question format is
repetitious, and requires respondents to make decisions about
whether to answer the questions, which can be confusing and
time-consuming, particularly in a multiquestion series. In contrast, the correct format tells the respondent exactly what to
do. For example:
4. Have you used the Health-R-Us pharmacy within the past month?
[ ] Yes (Continue with Question 5) [ ] No (Skip to Question 10)
5. I'd like you to think about your most recent visit to
the Health-R-Us pharmacy. Would you rate the
pharmacist as Very Friendly, Somewhat Friendly,
Somewhat Unfriendly, or Very Unfriendly?
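In an automated telephone or web survey, this skip pattern is exactly what the interviewing software implements. A minimal sketch follows; the question numbers match the example above, while the function and answer values are hypothetical.

```python
def next_question(current, answer):
    """Route the respondent using the contingency question:
    Question 4 is the filter, and the pharmacy series (Questions
    5-9) is asked only of those who used the pharmacy recently."""
    if current == 4:
        return 5 if answer == "Yes" else 10  # skip to Question 10 on "No"
    return current + 1  # default: proceed in order

print(next_question(4, "Yes"))  # prints 5
print(next_question(4, "No"))   # prints 10
```

Encoding the skip in the instrument, rather than in the respondent's head, is the point of the contingency format: the respondent never decides whether a question applies.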
Provide an Attractive Product
To promote respondents' trust and willingness to partici-
pate, a professional appearance is essential. Surveys mailed
using first-class postage generate higher response rates than
bulk-rate mailings. Colored paper generates higher response
rates than white. The weight of the paper should feel professional, not flimsy. An attractive logo often is helpful.8,13,17 For the return envelope (in which the survey will be mailed back), sometimes researchers use a first-class stamp instead of business-reply postage. The advantage of this approach is that the stamp may prompt the prospective respondent to feel more obligated to complete the questionnaire. The disadvantage is the higher cost of purchasing a first-class stamp for every prospective respondent, because business-reply postage is paid by the addressee only if the questionnaire is returned.
The organization of the questionnaire itself makes a particularly important contribution to a professional appearance.
It is sometimes tempting to think that cramming as many questions as possible into a small number of pages will make
the questionnaire appear shorter and thus enhance response
rate. In fact, the opposite is true; respondents are likely to be irritated or overwhelmed by a messy or difficult form. Questionnaires should have plenty of blank space between each question, with easy-to-see brackets for marking response options. Booklet formats are easy to construct and make an attractive presentation.8,13,17
In telephone interviews, the primary way to enhance professional appearance is using an effective interviewer. The interviewer's demeanor affects the quality of information obtained in several ways. First, the interviewer's voice characteristics, including the speed and pitch of speech as well as accuracy of pronunciation, affect the likelihood that a prospective respondent will participate in the survey. The interviewer's attitude about the project and expectations about the interview affect both the likelihood and quality of response to questions. The interviewer's technique for "probing" (asking follow-up questions when necessary) also affects results. For example, in response to the question, "How would you rate your health today-would you say it is excellent, good, fair, or poor?" a respondent might answer "Oh, it's all right." A correct interview technique would be to politely probe by repeating the response options: "But would you say it is excellent, good, fair, or poor?" It would be incorrect technique to assume that the respondent meant to say "fair" or to ask, "So do you mean your health is fair?" These approaches could bias study results. Findings are particularly affected by interviewer technique when sensitive questions are being asked, as is common in health care surveys.18-21
"Interviewer effect"22 refers to the unhappy situation in which the findings of an interview are the result not only of the questions asked, but also of the demeanor of the particular interviewer who happens to be on duty that day. To avoid interviewer effect, it is essential for a qualified instructor to provide formal structured training of every interviewer. Training should include methods for reading questions and for asking follow-up questions in an unbiased and nonjudgmental way. A detailed review of each survey question should be included, and practice interviews are also important. Although training may appear to be time consuming, it is better to spend the time in training than to struggle to interpret baffling results after data collection, only to find that those results are attributable to the effect of one or more poorly trained interviewers.
Avoid Confusing, Threatening, Biasing, or Annoying the Respondent
The most important principle to remember in writing questionnaire items is that respondents must be asked only those questions that they are competent to answer, and presented with only those terms that they are competent to understand. A typical health care enrollee is capable of telling you whether the waiting time in an emergency room was satisfactory, but likely has no knowledge of whether the triage system was medically appropriate. Typical patients can answer
questions about how they have been feeling, but most cannot
explain the medical reasons for their sensations. Questions
should be worded in a way that respondents understand them,
and terms should mean the same thing to all respondents.23
For example, the question "Did you ever experience a situation in which you stood up and suddenly felt dizzy or lightheaded?" is more appropriately worded than "Did you ever experience orthostatic hypotension?" When asking about diagnoses, it might be helpful to review language from the National Health Interview Survey, which is extensively validated.24
Questions should be as specific as possible. For example,
instead of asking about satisfaction with physician office visits
in general, it's better to direct the respondent's attention to a
particular visit: "I'd like you to think back to your most recent
visit to your Health-R-Us primary care provider." If there is a
concern that the most recent visit might be atypical, respon-
dents can be askedif it was a typical visit. Since respondents'
recall diminishes with time, it is better to ask about recent
rather than distant events, particularly when eliciting detaiPS
For clarity and accuracy, response choices must be mutually exclusive. For example, the following response choices for age are incorrect: 1-20 years; 20-30 years; 30-40 years; 40 years or older. Respondents who are 20, 30, or 40 years old fit into two categories. Correct response choices are: 1-20 years; 21-30 years; 31-40 years; 41 years or older.8
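A quick way to check that response choices are mutually exclusive is to confirm that every possible value maps to exactly one category. The sketch below uses the corrected age choices; the function itself is illustrative, not from the article.

```python
def age_category(age):
    """Return the single response choice covering this age.
    With the corrected bounds, no age falls into two categories,
    and every age from 1 up falls into exactly one."""
    if 1 <= age <= 20:
        return "1-20 years"
    if 21 <= age <= 30:
        return "21-30 years"
    if 31 <= age <= 40:
        return "31-40 years"
    if age >= 41:
        return "41 years or older"
    raise ValueError("age below range")

# The boundary ages that were ambiguous under the incorrect
# choices (20, 30, 40) now each fit exactly one category.
for a in (20, 30, 40):
    print(a, age_category(a))
```

Under the incorrect choices, the same check would return two categories for ages 20, 30, and 40, which is precisely the flaw the text describes.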
Additionally, the range of response choices should be appropriate for the population. For example, the age breakdowns for a survey of retirees might be: <65 years; 65-<75 years; 75-<85 years; 85 years or older. In contrast, the age breakdowns for an employed population probably will include only one or two categories for those age 65 or older, but multiple age categories for those younger than age 65. Although this is a relatively straightforward example, it is sometimes more difficult to identify the appropriate range. For example, in measuring income in a Medicaid population, it might be helpful to know the financial requirements for eligibility so that the range of possible incomes is understood. Failure to identify the appropriate range can result in either confusing the respondents or missing potentially important information. For example, in a senior citizens' survey with a single category for the oldest ages, all respondents would check a box marked "age 65 or older," yielding no information about age differences within the sample.
It is important to avoid "double-barrelled"8 questions, which confuse and sometimes frustrate respondents by including more than one concept in the same question. For example, the question, "Do you believe that Health-R-Us Healthplan's
laboratory test results are accurate and timely?" not only calls
for respondents to assess factors of which they probably have
no knowledge, but also asks them to answer two questions at
once. A respondent who believes that test results are accurate
but not timely, or timely but not accurate, does not know how
to answer this question. Although separating this question
into two items takes more time and space, it yields much
more interpretable results.
In addition to being clear, terms used in questionnaires
must be nonthreatening and nonbiased. The question, "What
do you think of the outrageously excessive dispensing fees
being charged by pharmacies today?" is both threatening in
tone and unclear. What are the definitions of the terms "exces-
sive" and "outrageous?" Does the respondent have any way to
know what fees are charged by pharmacies? The question, "What is your opinion of a policy which compels physicians
to prescribe generic products even if their medical judgment
tells them that the brand product is better?" uses value-laden
terms (compels and medical judgment), and biases the
respondent toward a negative assessment of a generic substitution program. A better approach would be to describe the program briefly and neutrally, including a description of prior
gram briefly and neutrally, including a description of prior
authorization options if they are available, and then ask,
"What is your opinion of a program like this one?" Similarly,
the question, "How many problems have you experienced
with the We-B-PBM customer service office?" biases the
respondent to indicate that problems have been experienced.
A respondent who has had no problems would be unsure of
how to answer the question. A better approach would be to
say, "We would like to know about your experiences in using
our PBM's services. During the past month, have you tele-
phoned the We-B-PBM customer service office?" Respondents
who have contacted the office can then be asked about their
satisfaction with that call.8,26-28
To further enhance respondents' willingness and ability to
complete the survey, the organization of the questionnaire
should be logical, interesting, and nonthreatening. To be logical, the questionnaire should present items in an order that
will make sense to the respondents. For example, in asking
about satisfaction with Health-R-Us primary care services,
items might progress chronologically through questions about making appointments, spending time in the waiting room, having conversations with the nurse and doctor, and filling prescriptions at the in-house pharmacy following the visit.
Questions about each major topic area (satisfaction, utilization, compliance, symptoms, subjective well-being) should be
grouped together. When possible, items with similar scales
should be grouped together as well, because respondents
become confused by repetitive changes from one scale format
(Excellent, Good, Fair, Poor) to another (Very Satisfied,
Somewhat Satisfied, Somewhat Dissatisfied, Very Dissatisfied).
Major sections should begin with a brief description of
what the group of questions is about and how it fits into the
overall theme of the questionnaire. For example, the opening section might begin with: "First, I would like to ask you about your opinions about the services provided by Health-R-Us." The next section might begin: "Now, I would like to know
about some of our specific services. To begin, I have some
questions about your opinions of our pharmacy." The next
section might begin, "My next set of questions asks about your use of Health-R-Us services during the past year."
Generally, the questionnaire should progress from easier
to more difficult, less threatening to more threatening, less
specific to more specific, and more interesting to less interest- ing items. It is often helpful to open the questionnaire with an
easy item that will apply to and interest every respondent, such
as, "We would like to begin by asking your opinion about the
pharmacy benefits provided by our company Would you rate
them as excellent, good, fair, or poor?" Questions such as this
suggest to respondents that you are interested in hearing their
points of view. They also encourage respondents to begin
thinking about the survey topic. Typically, demographic items such as age and gender should be in the final major section of the questionnaire, because demographic items are boring and occasionally can be threatening, as in questions about income. A single, final open-ended item asking for comments leaves a pleasant impression with the respondents.

It is also important to consider "order effect," a circumstance in which the answer to a question is based not only on the wording of the question itself but also on where the question is placed in the survey. For example, if a survey asks postpartum women 15 questions about difficulties they might have experienced getting prenatal care, and then asks their opinion
about whether prenatal care is important, the women's response
to the last question will be biased. Respondents could con-
clude, "After all, if prenatal care wasn't important, why did
that interviewer just ask me all these questions about it?"
Similarly, respondents' ratings of general satisfaction (e.g., with services provided to them) will sometimes differ depending on where in the questionnaire the general satisfaction item is placed. If specific satisfaction items are asked first, before the general satisfaction question, respondents will tend to frame their assessment of general satisfaction based on the specific items.29-31
For instance, if asked, "How satisfied are you with the speed
of our mail-order delivery service?" before being asked about
their overall satisfaction, respondents will tend to consider
mail-order delivery speed in making their overall assessment
of service. Whether this tendency is harmful depends on the
circumstances. If respondents' memories on a particular topic
might be weak, it is appropriate to mention specifics before
trying to measure a general opinion. In other circumstances, it
may be important for the respondents' opinions to be completely fresh and unaffected by the investigator's preconceived notions of which specific items lead to general satisfaction.
Questionnaire Consistency with the Planned Data Analysis
In writing questions and response choices, it is helpful to
consider how the information will be analyzed and, if applicable, communicated to decision makers. As with other choices made in this process, it usually is necessary to balance staff availability against the intended use of the information.

Open-ended items (questions without response choices)
sometimes pose a challenge. There are a number of possible
uses for the information gleaned from an item such as, "We
welcome any comments you have about our services." If the
information is merely to be used for quotations that will illustrate some of the major points made in the data presentation, clerical staff are needed to type and organize the quotations. Even with this minimal use, a senior researcher should be responsible for reviewing all quotations intended for presentation, to ensure that none reveals the identity of any participant ("I was the only employee in my company to have triplets last year, and I thought the OB service was terrific!").

A more sophisticated use of the open-ended questions
requires more human resources. For example, if the investigator wishes to quantify the responses received, two qualified staff members should independently assign the open-ended responses to categories (precode) for data analysis. To ensure that the precoding is reliable, the codes assigned by the staff members should be compared for consistency.
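As an illustration, the comparison between coders can be quantified with a chance-corrected agreement statistic. The following Python sketch (not part of the original article; the category labels are hypothetical) computes Cohen's kappa for two coders' precodes:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of responses on which the coders agree.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected if each coder assigned categories
    # independently at their own marginal rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical category codes assigned to ten open-ended comments.
a = ["praise", "complaint", "praise", "other", "complaint",
     "praise", "praise", "other", "complaint", "praise"]
b = ["praise", "complaint", "praise", "other", "praise",
     "praise", "praise", "other", "complaint", "praise"]
print(round(cohens_kappa(a, b), 2))  # about 0.83
```

A kappa near 1.0 indicates that the coding scheme is being applied consistently; a low value suggests the categories need clearer definitions before full-scale coding proceeds.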
It also is important to consider the data analytic techniques that will be used. If no multivariate techniques are
planned, then the choice between nominal, ordinal, interval,
or ratio scales can be based solely on how respondents are
likely to understand or respond to the question. For example,
respondents will likely respond more favorably to a question
about annual income when it is measured in broad ordinal
categories (<$20,000; $20,000-<$40,000; $40,000-<$60,000; or $60,000 or more) than when an exact (ratio scale) figure is
requested ("What was your annual family income last year before taxes?"). For other questions, an interval scale ("What is
your age in years?") will make more sense than a nominal one
("Are you under or over age 20?").
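When an exact figure is available from another source (for example, administrative data), mapping it into the broad ordinal categories used on the questionnaire is straightforward. A minimal Python sketch using the income categories above (the function name is an illustrative assumption):

```python
def income_category(annual_income):
    """Map an exact (ratio-scale) income figure into the broad
    ordinal categories described in the text."""
    if annual_income < 20_000:
        return "<$20,000"
    if annual_income < 40_000:
        return "$20,000-<$40,000"
    if annual_income < 60_000:
        return "$40,000-<$60,000"
    return "$60,000 or more"

print(income_category(35_000))  # $20,000-<$40,000
```

Note that the reverse mapping is impossible: once income has been collected in ordinal categories, the exact figure cannot be recovered, which is one reason the scale decision must be made before fielding the survey.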
If multivariate analysis is to be used, the task of writing
response choices becomes a bit more complex. If software or staff are not available to use techniques such as logistic regression or discriminant function analysis (which can handle nominal scale dependent variables), then the dependent variable must be measured on an interval or ratio scale so that linear regression can be used. Similarly, independent variables should be measured on a scale that is appropriate for the data analysis techniques that will be employed.
EXERCISE #2
Critique the following questionnaire items:
1. "Five people in our company have complained
frequently to us about the service provided by
We-B-PBM. What problems have you had?"
2. "Taking all antibiotic medication until it is gone is one of the most important determinants of successful bacterial eradication. People who prematurely terminate antimicrobial treatment risk expensive and often painful recrudescence. Did you take all your medication?"
3. "Please provide the dates and a detailed description of every physician office visit you have had since
1982."
Answers:
1. The question is biased because it begins by advising
the respondents of other employees' complaints. Note that the
word "frequently" is used, even though five people are perhaps
not very many, depending on the size of the company. The
question also asks what problems have been experienced
instead of first asking whether problems have been experienced. A respondent who has had no problems will be unsure of how to respond.
2. The terms used (e.g., bacterial eradication, antimicrobial) will baffle a typical patient. The question essentially tells the respondent that people who fail to take antibiotic medication are irresponsible and/or not very bright. This approach puts considerable pressure on the respondent to answer that all medication has been consumed. A better approach would be to begin the item with: "Some patients take medicine exactly as prescribed, while others do not. We would like to know about your experiences."
3. The question is excessively burdensome and asks for
an impossible level of recall.
PRETEST
A pretest or pilot test is an assessment of a questionnaire made before full-scale implementation to identify and correct problems such as faulty questions, flawed response options, or interviewer training deficiencies. To conduct a pretest, the investigator administers the questionnaire to a small sample of 10 to 50 cases, depending on the size and complexity of the project. Different approaches are used to determine whether changes are needed. Sometimes, the clarity, intrusiveness, and meaning of the items are discussed directly with the pretest sample members. Sometimes, in debriefings following completion of the pretest interviews, interviewers are asked about respondents' reactions. A pretest is often a good way to resolve disputes that arise during questionnaire construction. For example, a disagreement over whether a particular item is confusing can be resolved easily by asking pretest respondents or interviewers their opinions about how the item was perceived.
Pretest surveys should never be used in calculating the
final project results. These surveys were obtained under different circumstances, and in some cases using different questions, than the production interviews.
FOLLOW-UPS TO INCREASE RESPONSE RATE
Repeated attempts to obtain completed questionnaires
from initial nonrespondents will yield higher response rates
and more accurate results than if no follow-ups are performed.
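Because the achievable response rate also determines how many initial contacts a project must make to reach its target number of completions, it can help to make that arithmetic explicit during planning. A minimal Python sketch (the figures are hypothetical illustrations, not from the article):

```python
from math import ceil

def contacts_needed(target_completions, anticipated_response_rate):
    """Initial sample size needed to yield the target number of
    completed questionnaires at the anticipated response rate."""
    return ceil(target_completions / anticipated_response_rate)

# Hypothetical project: 300 completions wanted, with a 60%
# response rate expected after two follow-up mailings.
print(contacts_needed(300, 0.60))
```

A lower anticipated response rate inflates the required initial sample, which is one practical reason the follow-up procedures described here pay for themselves.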
For telephone interviews, a majority of a project's interviewers
should be available to work at night or on weekends, since
many respondents will be home at those times. Attempts to
reach respondents should be made at different times of day, to
allow for a variety of schedules.32 If a respondent is reached but is unable to conduct the survey at that time, the interviewer should attempt to make an appointment to complete the interview. Consistent with the principle of maintaining respondent trust, and as a matter of basic courtesy, the interviewer or a colleague must keep that appointment at any cost.
For mailed surveys, it is sometimes helpful to send a
reminder postcard to all respondents one to two days after the
original mailing. The postcard can remind respondents that a
survey was recently sent to them, briefly reiterate the importance of the project, and offer a telephone number to call with any questions, concerns, or requests for a replacement questionnaire. The first set of initial responses usually comes in a
cluster sometime during the first week or two after mailing. A
second mailing, including another copy of the questionnaire if the budget permits, can be sent when that first set of returns begins to taper off. To avoid annoying respondents who have
already returned their questionnaires, it is important to
include in these materials a statement such as, "If this letter and your completed questionnaire have crossed in the mail, please accept this as our sincere thank you for your participation!" Another option is to telephone nonrespondents and
offer to send another copy of the survey to those who have
discarded or mislaid the original. The telephone call is often
an opportunity to answer questions that potential respondents
might have, or to provide reassurances about confidentiality,
use of information, or other concerns.
RELIABILITY ASSESSMENTS
Several types of reliability (consistency) are potentially
important in health care surveys, depending on study methods and questions. In studies that will document trends such
as changes in participants' perceived well-being, "test-retest"
reliability-the consistency of measurement at two different
points in time-is important. Measuring "inter-rater" reliability is important in interview studies. Because an interviewer's
demeanor affects the likelihood and quality of responses, it is
important to compare response rates and key findings, such as
overall satisfaction levels or reported compliance rates, across
interviewers. Additionally, if interviewers are responsible for
making assessments, such as a respondent's ability to perform
certain tasks, it is important to have two or more interviewers
independently assess a subsample of respondents, and make
inter-rater comparisons of the findings. Internal reliability refers to consistency between conceptually related items, such as components of a scale, within a survey. Statistics to measure test-retest and inter-rater reliability include Pearson correlations, intraclass correlations, kappa, and weighted kappa. Internal reliability is commonly assessed using the Cronbach alpha statistic.33,34
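As one illustration of these statistics, Cronbach's alpha can be computed directly from item-level data. The following Python sketch uses hypothetical 4-point satisfaction ratings (a real analysis would typically use a statistical package, but the formula itself is simple):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Internal-consistency reliability for a set of scale items.

    item_scores: list of lists; item_scores[i][j] is respondent j's
    answer to item i. All items must use the same numeric scoring.
    """
    k = len(item_scores)
    # Sum of the individual item variances.
    item_var_sum = sum(pvariance(item) for item in item_scores)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical ratings from six respondents on three related items.
items = [
    [4, 3, 4, 2, 3, 4],
    [4, 3, 3, 2, 3, 4],
    [3, 3, 4, 1, 2, 4],
]
print(cronbach_alpha(items))
```

Values approaching 1.0 indicate that the items move together and plausibly measure a single underlying construct; low values suggest the scale's items should be reconsidered.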
VALIDITY ASSESSMENT
As in other types of research, the validity of the survey
questions is critical. If survey questions are not measuring
what they are intended to measure, misleading information
may be disseminated. For example, a survey might ask
patients who filled antibiotic prescriptions whether they took
all medication as prescribed. Some patients may believe that
missing doses does not remove them from the "as prescribed"
category; in this case the item does not mean to the patient
what the investigator thinks it means. To assess the extent of
this validity problem, patients who answer "yes" to the "as prescribed" question can be asked another, more specific question: "Does that mean that you finished all the medication and did not miss any doses?"
Validity also can be assessed by comparing results based
on patient reports against information about actual behaviors
and outcomes. For example, investigators have compared
patient-reported perceptions of distress or health status with
medical utilization,35 subsequent mortality,36 laboratory values,37 medical records,38 or other clinical criteria.39 Depending
on the project and its budget, validation of this type could be
done in the pretest, or on a small subsample. Validation of survey findings can be performed even for an anonymous survey (one in which respondents' identities are completely
unknown even to the investigator). For example, if survey findings suggest that a particular group reported increased utilization in response to a benefit design change, a claims analysis could confirm that, in the aggregate, utilization grew for
that group.
WHEN TO EMPLOY A PROFESSIONAL SURVEY RESEARCHER
A review of any survey study by a well-trained professional survey researcher will likely enhance the quality of results
obtained. However, such a review may be unnecessary or too costly, especially for small or lower budget projects. How does
a decision maker know whether the expenditure and effort to
employ a professional are necessary? Generally, the services of a survey researcher should be employed when results are to be
used to make particularly important decisions, such as those
made by a Pharmacy and Therapeutics Committee, or to fulfill
statutory, contractual, or external quality review requirements,
such as for accreditation. Obtaining professional services is
prudent when results are intended for publication, and critical
when the questionnaire will discuss sensitive or potentially threatening topics. Services could range from full-scale project administration to brief review/consultant arrangements, depending on project budget, complexity, and scope.
CONCLUSION
Objectively measuring health care consumers' experiences is a time-consuming, difficult, and often expensive task.
However, in today's rapidly changing health care environment, characterized by a seemingly infinite number of proposals to provide better quality health care at a lower cost, the consumer's perspective has become increasingly important.
Decisions will be better informed if we remember that the best
way to learn what consumers want from their health care sys-
tem is to ask them.
References
1. Davies AR, Ware JE. Involving consumers in quality of care assessment. Health Affairs 1988; 7: 33-48.
2. Lohr KN. Outcomes measurement: concepts and questions. Inquiry 1988; 25: 37-50.
3. MacKeigan LD, Larson LN. Development and validation of an instrument to measure patient satisfaction with pharmacy services. Medical Care 1989; 27: 522-36.
4. Ware JE. Conceptualizing and measuring generic health outcomes. Cancer 1991; 67: 774-79.
5. Schafermeyer KW, Hurd PD. Research methodology: designing a research study. Journal of Managed Care Pharmacy 1998; 4(5): 504-16.
6. Hurd PD. Research methodology: some statistical considerations. Journal of Managed Care Pharmacy 1998; 4(6): 617-21.
7. Motheral BR. Research methodology: hypotheses, measurement, reliability, and validity. Journal of Managed Care Pharmacy 1998; 4: 382-88.
8. Babbie E. The Practice of Social Research, 4th edition. Belmont, CA: Wadsworth Publishing Company; 1986.
9. Bartko JJ, Carpenter WT. On the methods and theory of reliability. Journal of Nervous and Mental Disease 1976; 163: 307-17.
10. Cohen SB, Carlson BL. An analysis of the characteristics of reluctant respondents in the National Medical Expenditure Survey. Proceedings of the Social Statistics Section, American Statistical Association, Alexandria, VA, 1992.
11. O'Neil M. Estimating the nonresponse bias due to refusals in telephone surveys. Public Opinion Quarterly 1979; 43: 218-32.
12. Blalock HM. Social Statistics. New York: McGraw-Hill Book Company; 1972.
13. Dillman DA. The Total Design Method. New York: John Wiley and Sons; 1978.
14. Dillman DA, Sinclair MD, Clark JR. Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opinion Quarterly 1993; 57: 289-304.
15. Church AH. Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opinion Quarterly 1993; 57: 62-79.
16. Yammarino FJ, Skinner SJ, Childers TL. Understanding mail survey response behavior: a meta-analysis. Public Opinion Quarterly 1991; 55: 613-39.
17. Fox RJ, Crask MR, Jonghoon K. Mail survey response rate: a meta-analysis of selected techniques for inducing response. Public Opinion Quarterly 1988; 52: 467-91.
18. Billiet J, Loosveldt G. Improvement of the quality of responses to factual survey questions by interviewer training. Public Opinion Quarterly 1988; 52: 190-211.
19. Oskenberg L, Coleman L, Cannell CF. Interviewers' voices and refusal rates in telephone surveys. Public Opinion Quarterly 1986; 50: 97-111.
20. Singer E, Frankel MR, Glassman MB. The effect of interviewer characteristics and expectations on response. Public Opinion Quarterly 1983; 47: 68-83.
21. Singer E, Kohnke-Agurre L. Interviewer expectation effects: a replication and extension. Public Opinion Quarterly 1979; 43: 245-60.
22. Tucker C. Interviewer effects in telephone surveys. Public Opinion Quarterly 1983; 47: 84-95.
23. Fowler FJ. How unclear terms affect survey data. Public Opinion Quarterly 1992; 56: 218-31.
24. Centers for Disease Control/National Center for Health Statistics. Vital and Health Statistics. US Department of Health and Human Services, Hyattsville, MD, 1992.
25. Bachman JG, O'Malley PM. When four months equal a year: inconsistencies in student reports of drug use. Public Opinion Quarterly 1981; 45: 536-48.
26. Bishop GJ, Oldendick RW, Tuchfarber AJ. Effects of presenting one versus two sides of an issue in survey questions. Public Opinion Quarterly 1982; 46: 69-85.
27. Bradburn NM, Sudman S, Blair E, Stocking C. Question threat and response bias. Public Opinion Quarterly 1978; 42: 221-34.
28. Smith TW. That which we call welfare by any other name would smell sweeter: an analysis of the impact of question wording on response patterns. Public Opinion Quarterly 1987; 51: 75-83.
29. Benton JE, Daly JL. A question order effect in a local government survey. Public Opinion Quarterly 1991; 55: 640-42.
30. McClendon MJ, O'Brien DJ. Question-order effects on the determinants of subjective well-being. Public Opinion Quarterly 1988; 52: 351-64.
31. McFarland SG. Effects of question order on survey responses. Public Opinion Quarterly 1981; 45: 208-15.
32. Weeks MF, Kulka RA, Pierson SA. Optimal call scheduling for a telephone survey. Public Opinion Quarterly 1987; 51: 540-49.
33. Feinstein AR. Clinimetric perspectives. Journal of Chronic Diseases 1987; 40: 635-40.
34. Main DS, Pace WD. Measuring health: guidelines for reliability assessment. Family Medicine 1991; 23: 227-30.
35. Ford DE, Anthony JC, Nestadt GR, Romanoski AJ. The General Health Questionnaire by interview: performance in relation to recent use of health services. Medical Care 1989; 27: 367-75.
36. Idler EL, Kasl SV, Lemke JH. Self-evaluated health and mortality among the elderly in New Haven, Connecticut, and Iowa and Washington Counties, Iowa, 1982-1986. American Journal of Epidemiology 1990; 131: 91-103.
37. Kaplan SH. Patient reports of health status as predictors of physiologic health measures in chronic disease. Journal of Chronic Diseases 1987; 40 (Suppl 1): 27S-35S.
38. Burns RB, Moskowitz MA, Ash A, Kane RL, Finch MD, Bak SM. Self-report versus medical record functional status. Medical Care 1992; 30: MS85-MS95.
39. McHorney CA, Ware JE, Raczek AE. The MOS 36-item short-form health survey (SF-36): II. Psychometric and clinical tests of validity in measuring physical and mental health constructs. Medical Care 1993; 31: 247-63.
CE EXAM Going to the Source: A Guide to Using Surveys in Health Care Research
Upon completion of this article, the successful participant should be able to:
1. Calculate survey response rates and
understand their importance.
2. Elicit prospective respondents' participation in a survey.
3. Write nonthreatening and nonbiased
questionnaire items.
4. Understand how to order and introduce questionnaire items.
5. Appreciate the need to consult a professional survey researcher for sensitive or important survey projects.
SELF-ASSESSMENT QUESTIONS
1. Which of the following is not an
appropriate purpose for a survey of an HMO's patients?
a. Assess accuracy of physicians'
diagnoses.
b. Measure member satisfaction
with wait time for specialist
appointments.
c. Assess patients' perceptions of
most recent physician office
visit.
d. Gauge enrollee reaction to proposed increase in copayments.
2. Achieving a high response rate is
important because:
a. a high response rate suggests
that results are representative of
the target population.
b. nonrespondents may differ
demographically from respondents.
c. respondents may feel more strongly about the topic than nonrespondents.
d. All of the above.
3. An investigator conducts a study of
users of "Fountain of Youth," a new anti-aging tablet. Interviewers telephone
1,000 patients identified as "Fountain of
Youth" users based on prescription
records. Of these, 825 complete the
interview, 50 refuse the interview, 100
are never reached, and 25 say that they
never took "Fountain of Youth"; the prescription records were incorrect. What is the response rate?
a. 82.5% b. 84.6% c. 97.1% d. 92.5%
4. How many cases should be contacted if the target number of completions is
200 and the anticipated response rate is
90%?
a.180 b.220 c. 222
d. None of the above
5. Which of the following is/are effective
in enlisting survey respondents' trust?
a. Advising prospective respondents
that other respondents' answers will be made available to them upon request.
b. Discussing survey sponsorship
and the principal investigator's
credentials in the cover letter
accompanying the survey.
c. Offering respondents money for
completing the questionnaire.
d. band conly.
e. a, b, and c.
6. Which of the following is not effective
in minimizing respondent burden in a
mailed questionnaire?
a. Using nonthreatening terms in
writing the questionnaire items.
b. Fitting as many questions onto
each page as possible, so that the
questionnaire will appear short.
c. Keeping the questionnaire as brief
as possible.
d. Providing a preaddressed postage
paid envelope for returns.
7. The presence of "order effect":
a. depends on the sequence in which
questions are asked.
b. ensures that questions are asked in
the correct order.
c. is always harmful.
d. is always beneficial.
8. Which of the following describes an
inappropriate progression of items in a
questionnaire?
a. Begin with easier questions, end
with harder questions.
b. Begin with less-threatening questions, end with more-threatening questions.
c. Begin with demographic questions,
since these are easy.
d. Begin with less-specific questions,
end with more-specific questions.
9. If you were asked in a survey, "Did
this article provide clear and interesting
information?" you would have trouble
answering the question because:
a. of interviewer effect.
b. the question uses technical terminology.
c. the question is "double barreled."
d. you did not read the article.
10. A pretest:
a. is unnecessary if the range of
response choices is appropriate for each question.
b. is closely linked with the post-test.
c. may identify problems with
questionnaire items or interviewer
training.
d. is a better technique than inter-
viewer training.
See learning objectives at the beginning of this article on page 150 of this issue of JMCP. This article qualifies for 2 hours of continuing pharmaceutical education (0.20 CEU). The Academy of Managed Care Pharmacy is approved by the American Council on Pharmaceutical Education as a provider of continuing pharmaceutical education. This is program number 233-000-99-02-H04 in AMCP's educational offerings.
CE EXAM
DEMOGRAPHIC INFORMATION (not for scoring)
11. In what type of setting do you work (leave blank if none of the responses
below applies)?
a. HMO. b. PPO.
c. Indemnity insurance.
d. Pharmacy benefits management.
e. Other.
12. Did this program achieve its educational objectives?
a. Yes.
b. No.
13. How many minutes did it take you
to complete this program, including the
quiz (fill in on answer sheet)?
14. Did this program provide insights
relevant or practical for you or your work?
a. Yes.
b. No.
15. Please rate the quality of this CE
article.
a. Excellent.
b. Good.
c. Fair.
d. Poor.
INSTRUCTIONS This quiz affords 2.0 hours (0.20 CEU) of continuing pharmaceutical education in all states that recognize the American
Council on Pharmaceutical Education. To receive credit, you must score at least 70% of your quiz answers correctly. To record an answer, darken the appropriate block below. Mail your completed answer sheet to: Academy of Managed Care Pharmacy, 100
N. Pitt Street, Suite 400, Alexandria, VA 22314. Assuming a score of 70% or more, a certificate of achievement will be mailed to you within 30 days. If you fail to achieve 70% on your first try, you will be allowed only one retake. The ACPE Provider Number for this lesson is 233-000-99-02-H04. This offer of continuing education credits expires April 30, 2000.
[Answer grid for questions 1-15]
Participant Identification: Please type or print
Social Security # (For Identification Purposes Only)
Date
Name (Last, First, Middle)
Company
Work Phone #
Address: Street (with Apt. No.) or P.O. Box
City, State, Zip
State and Lic. No.
Member Type: Active / Student / Supporting Associate / Nonmember
Signature