25 Issues in Survey Design: Using Surveys of Victimization and Fear of Crime as Examples
Sue-Ming Yang and Joshua C. Hinkle
25.1 Introduction
Surveys are one of the most widely used tools in social science research due to their versatility, efficiency, and generalizability (Bachman and Schutt 2008). The history of surveys can be dated back as far as ancient Egypt (Babbie 1990). Early surveys tended to be used for political purposes as a tool to understand citizens' attitudes (e.g., Marx 1880), a task they are still widely used for today. Surveys are versatile and can be used to study a wide variety of topics, and, compared to many other types of research, collecting survey data is efficient and less time-consuming. Additionally, when combined with a random sampling strategy, results from survey research tend to have high generalizability (Schuman and Presser 1981; Tourangeau et al. 2000). While surveys clearly have many practical benefits for social science research, this is not to say that survey construction and administration is a minor undertaking. The use of self-report surveys of criminal victimization has dramatically changed the definition of crime and the availability of information about crime (Cantor and Lynch 2000, p. 86). Prior to the launch of the National Crime Victimization Survey (NCVS), information about crime was primarily obtained through official crime data collected by the FBI in the Uniform Crime Report. The application of victimization surveys within our field has allowed a more direct measure of the prevalence rates of crime and victimization.

With the rise of victimization surveys as a measure of crime, it is important to realize that the design of surveys can affect the results. Deming (1944) first cautioned scholars that the usefulness of surveys could be undermined by 13 types of errors; thus, "…the accuracy supposedly required of a proposed survey is frequently exaggerated…and is in fact often unattainable" (p. 369). Among these potential errors, he argued that imperfections in the design of questionnaires are one major source of bias. Ever since Deming raised the flag on the utility and accuracy of surveys, more scholars have turned their attention to different components of survey error (Cochran 1953; Kish 1965; Dalenius 1974). Andersen et al. (1979) systematically summarized studies
We would like to thank Charles Hogan and Daniel Jarvis for their hard work in obtaining and coding surveys of fear of crime and victimization. We also want to thank Lior Gideon for his thoughtful comments on an earlier draft.
S.-M. Yang (&)
Department of Criminology, National Chung Cheng University, 168 University Rd., Min-Hsiung, Chia-Yi, Taiwan
e-mail: [email protected]; [email protected]
J. C. Hinkle
Department of Criminal Justice and Criminology, Georgia State University, P.O. Box 4018, Atlanta, GA 30302, USA
e-mail: [email protected]
L. Gideon (ed.), Handbook of Survey Methodology for the Social Sciences, DOI: 10.1007/978-1-4614-3876-2_25, © Springer Science+Business Media New York 2012
examining survey errors associated with individual non-response, measurement, and processing. His results were published in a book titled Total Survey Error: Applications to Improve Health Surveys. The term "total survey error" has been used ever since to indicate all sources of error, both observational and non-observational, in survey research (see Groves and Lyberg 2010 for a comprehensive review of the history of total survey error).

Gradually, the field of criminology and criminal justice also began to notice the potential impacts survey design can have on study outcomes. For example, findings from the NCVS in 1975 demonstrated that if a fear of crime question was asked prior to the screening interview, the likelihood of reporting victimization increased drastically (Murphy 1976; Gibson et al. 1978). This result signifies a "warm-up" effect, where the initial question reinforced the answers given to subsequent questions (Lynch 1996). Cantor and Lynch (2000) cautioned victimization scholars who use survey instruments that "[d]ata from victim surveys are heavily influenced by their design" (p. 128) and pointed out that a better understanding of the impact of survey design is imperative for advancing the field. In order to achieve high reliability and validity, it is important to take caution at every step of survey research.

The example cited above is just one illustration of how survey design can produce different research outcomes. Response effects have many plausible sources, including problems in understanding the questions, difficulties in recalling information, producing appropriate answers, and other cognitive processes (Groves and Lyberg 2010; Walonick 1993; Tourangeau et al. 2000). Specifically, researchers can influence survey results in a variety of ways related to how they choose to construct their questionnaires (Kury 1994). In this chapter, we will discuss some important, but often overlooked, topics related to survey research. Three sources of possible error in survey design will be reviewed and discussed: (1) design of response options; (2) wording of the questions; and (3) question-order (context) effects.

The first relates to what options survey respondents are given to answer the questions. In particular, we will focus on whether or not respondents are given a neutral (or middle) option on Likert scale-type questions, and whether a "don't know" (DK) option is provided. The second issue relates to the wording of the questions. The way in which a question is worded can bias findings, or even lead to a situation where the question is not actually measuring the construct the researcher intended to measure. Finally, the third issue deals with bias that may be introduced by the order in which the questions are asked. That is, sometimes asking certain questions first may influence the answers given to later questions. This occurs when the questions in the survey are interrelated rather than independent; responses to later questions are therefore conditioned by answers to earlier ones. Thus, the order of the questions can result in different responses (Walonick 1993; Bishop 1987; Tourangeau and Rasinski 1988; Ayidiya and McClendon 1990; Lorenz et al. 1995; Kury 1994). This chapter will discuss each of these three survey design issues in detail and will use examples from victimization and fear of crime surveys as illustrations. Thirty-five surveys of victimization and fear of crime collected as part of an ongoing study (Yang and Hinkle, in progress) were examined and used as examples for this chapter (see Appendix A).¹ A handful of state-level crime victimization surveys are used as examples as well.
25.1.1 I. Response Options
The formatting of the response options is one element of survey design that may affect the outcome. For instance, one could get very
¹ As part of this ongoing study, these surveys were identified through database searches for studies of fear of crime and victimization. To be eligible for inclusion, the studies must have tested the relationship between victimization and fear of crime (i.e., at least reported the correlation between the two). However, that did not have to be the main focus of the study; it was only necessary that the published results statistically examined the relationship between fear and victimization in some way.
different estimates of fear of crime from answers to open-ended versus close-ended questions. Fattah (1993) pointed out that "the element of suggestibility inevitably present in close-ended questions is probably responsible for yielding a higher percentage of respondents declaring being afraid than when open-ended questions are used" (p. 53). Other recent research has also found discordance between fear of crime (and other emotions like anger about crime) when comparing close-ended survey measures to responses from the same respondents in open-ended interviews (see Farrall et al. 1997; Farrall and Ditton 1999). Like Fattah, these studies tend to find higher levels of fear and other emotional responses in data collected with close-ended surveys than in open-ended interviews. This is just one example of how the design of a survey can change the findings to a great extent. Considering that most victimization surveys use close-ended questions, our discussion below will focus on response options for close-ended questions only.

Among close-ended questions, the response options provided naturally affect the results. This issue is obvious, as the majority of survey questions are multiple-choice questions which ask respondents to choose an answer from a list provided. For example, the most common fear of crime question over the years has been some variation of "How safe would you feel walking alone outside at night in your neighborhood?" Response options provided for this question tend to be ratings of safety along the lines of "very safe," "somewhat safe," "somewhat unsafe," or "very unsafe." The first impact of the response options provided is that they clearly limit the responses survey participants can give. As such, it is important to select meaningful and appropriate response options for each question.
Beyond that broad concern, the three most commonly seen issues relating to response options involve whether to offer a middle alternative, whether to provide a don't know (DK) category, and the arrangement of response options. The impact of these factors is often difficult to detect, because the response options tend to remain the same across questions within the same survey. Nonetheless, this does not mean that the impact of these issues should be ignored. The issues related to the arrangement of response options (e.g., logical order) have already been reviewed extensively in research methods textbooks and many other works, and are thus well understood (e.g., see Maxfield and Babbie 2007; Bachman and Schutt 2010; Kury 1994; Schuman and Presser 1981; Tourangeau et al. 2000).² As such, we will focus only on the first two issues and discuss how they might affect scholars who study criminal justice topics.
Starting with the middle alternative issue, scholars have different opinions on whether to offer a middle alternative when asking attitudinal questions. Scholars who support providing a middle option believe that very few people are truly indifferent about any issue (Sudman and Bradburn 1982); thus, offering a middle option should not change the survey results in any substantial way. On the other hand, some critics worry that respondents might not be willing to reveal their true feelings if the middle option is available as an easy way out. For example, Bishop (1987) argued that offering middle alternatives to subjects results in a substantial change in conclusions compared to data collected without a middle alternative, because people tend to choose middle alternatives when they are available. This tendency is especially strong when the middle alternative is shown as the last response option of the question. Ayidiya and McClendon (1990) also found that more people tend to choose the middle option when it is provided. However, they pointed out that the overall conclusions drawn from the research are unlikely to be affected, as the ratio of respondents choosing either end of the response spectrum does not change. That is, the availability of a middle alternative was found to
² We are not saying that the study of response sequence is unimportant. For example, research has found that simple changes in the sequencing of close-ended responses produced very different results (see Kury and Ferdinand 1998). However, it is more common for one survey to arrange all response options in a similar fashion across questions, and thus the effects are hard to detect.
draw about the same number of people who would otherwise have chosen a positive or negative answer. As such, offering a middle alternative has no substantial impact on conclusions drawn from survey results. Nonetheless, many researchers and practitioners still prefer not to provide middle alternatives as response options. In general, most victimization surveys adapted from the NCVS (like the Utah Crime Victimization Survey 2006, Minnesota Crime Survey 1999, and Maine Crime Victimization Survey 2006) tend not to include middle alternatives in their response options.
Moving on to the second response option issue, the provision of a DK option is another common choice survey designers regularly face. A common worry regarding the inclusion of a DK option is that it will cause people to refrain from disclosing their true feelings. However, the research findings regarding the inclusion of a DK option are very similar to those for middle alternatives. Studies show that providing respondents with the option of answering DK does reduce the number of people who would otherwise choose substantive responses; even so, when the DK category is excluded, the overall pattern of responses is not affected (Sudman and Bradburn 1982). That is, providing DK as an option does not necessarily pull respondents out of other possible response categories. Thus, the concern that the DK option biases survey results might not be warranted. For example, the Utah Crime Victimization Survey 2006 began including DK as a possible response option for many questions for which it was not offered in previous versions of the survey. A breakdown of the collected data shows that fewer than 2% of respondents (out of 1,199 subjects) chose DK rather than other possible options when asked to describe their overall perception of fear. For questions related to their perception of safety in the community, the percentage of respondents selecting DK was even lower. Based on the response patterns across four biannual surveys, providing the DK option did not really pull those potential "fence-sitters" (or "floaters," as Schuman and Presser 1981 call them) from their original response category. Rather, fewer than 1% of respondents chose DK when they were asked to report their fear of crime and likelihood of victimization.
A review of surveys on victimization and fear of crime shows no consensus on whether to include DK as a response option (see Appendix A for the surveys used in the review). For instance, a victimization survey in Maine (2007 version) did not include DK as a possible response option. A crime victimization survey in Alaska allowed respondents to answer DK; however, the DK responses were not coded in the database and were simply treated as missing values. In the Minnesota Crime Survey (1999 version), a DK option was not provided for questions related to perceptions of fear of crime; however, respondents could choose "unknown" when answering factual questions about victimization details.
One additional concern that has been raised is that the middle category and the DK option can interact with each other. That is, offering a middle option might divert people who would otherwise have chosen the DK option, because choosing the middle option seems to be a more substantive take on the issue than giving a DK answer. Thus, for scholars who use surveys it is important to design response options in ways that will generate the most complete and accurate information. Doing so requires tailoring response options to the topic at hand. For questions that ask for pure information (e.g., questions asking how many past victimization experiences the respondent had, or asking about their level of fear of crime), one might not want to provide the DK option, as respondents should be able to make a judgment or recall their personal experience. However, when the questions of interest relate to knowledge that may not be accessible in every case, such as details regarding victimization or value judgments, then it might be reasonable to allow respondents to choose the middle alternative or answer DK. In such cases, a DK response may be the true answer of a respondent who lacks knowledge related to the given question; not providing DK as an option may force them to merely guess at a response.
In sum, the choice of whether to include middle or DK responses depends upon the focus of the particular study and the nature of the inquiry. Research has generally shown that including these response options does not bias conclusions in any substantive way; people choosing them were found to have been equally likely to choose positive or negative response options otherwise (see Schuman and Presser 1981). However, there are other concerns related to response options. For example, it is important to take into account the potential interaction between middle and DK responses, and to decide whether to treat DK responses as missing values. The latter is a concern for data analysis, as removing those cases lowers the sample size and reduces statistical power. That said, the number one concern of surveys is collecting accurate information, and researchers should not exclude DK responses solely to force respondents to choose answers for analysis' sake. Researchers must examine each question on a case-by-case basis and decide whether DK and middle response options are necessary to collect accurate information.
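For researchers weighing how to handle DK responses at the analysis stage, the sample-size cost of treating them as missing is easy to quantify before deciding. The short sketch below uses invented data purely for illustration (the response codes are our own convention, not from any survey discussed here):

```python
# Hypothetical illustration (invented data): the analyzable sample shrinks
# when "don't know" (DK) answers are treated as missing and dropped.
# Codes 1-4 are substantive Likert answers; 9 codes a DK response.
responses = [1, 2, 3, 4, 2, 9, 3, 1, 9, 4, 2, 3, 1, 2, 4, 3, 2, 1, 3, 9]

DK = 9
n_total = len(responses)
substantive = [r for r in responses if r != DK]  # listwise deletion of DK
n_analyzable = len(substantive)
dk_rate = (n_total - n_analyzable) / n_total

print(f"total N = {n_total}, analyzable N = {n_analyzable}, "
      f"DK rate = {dk_rate:.0%}")
```

With DK rates as low as those reported for the Utah survey (under 2%), the loss of cases and statistical power from dropping DK responses is minimal, but the same check is worth running whenever DK rates are higher.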
25.1.2 II. Question Wording
A second major issue in survey design comes from the wording of the questions. Schuman and Presser (1981) suggest that question wording can affect results in two different ways: it can affect the marginal distribution (i.e., response tendency), or it can affect the size or direction of the relationships between responses given to questions. More specifically, the wording of a question often serves to frame the issue of inquiry, and thus the way in which a question is worded can influence the responses to it (Kahneman and Tversky 1984).
As an example of the influence wording can have, customers are much more positive about ground beef labeled "75% lean" than "25% fat," despite the fact that the two labels mean exactly the same thing (Levin 1987). Another classic example of the potential influence of wording choices on survey responses comes from studies that asked respondents whether they agreed or disagreed with banning public speeches against democracy. These studies found that Americans were much more likely to agree with statements that did "not allow" the speeches than with statements that would "forbid" them (see Rugg 1941; Schuman and Presser 1981). While "forbid" and "not allow" are synonymous at face value, these studies clearly show they are not equal when it comes to generating responses to a survey question, at least in the case of the speeches-against-democracy example. Numerous studies have replicated the wording effect and consistently found that question wording affects people's decisions (for example, see Levin et al. 1998; Walonick 1993; Heaps and Henley 1999). Additionally, question wording effects can be conditioned by background factors such as the education level of respondents (Schuman and Presser 1981, p. 6), which can further affect relationships between attitudinal and demographic variables and thus bias conclusions drawn from the data.
A good practical example of the importance of question wording comes from recent survey research on fear of crime. The most commonly used survey measures of fear of crime are variations of the fear question used by the Bureau of Justice Statistics' National Crime Survey. These questions ask respondents to report their levels of fear of crime with items along the lines of "How safe do you, or would you, feel walking alone outside at night in your neighborhood?" Response options are typically "very unsafe," "somewhat unsafe," "somewhat safe," and "very safe." In short, these questions assume that asking someone "how safe" they feel is analogous to asking them "how fearful" or "how afraid" they are. However, as the "forbid" and "not allow" example shows, respondents are easily affected by the exact wording used in survey questions, and such assumptions are not always correct.
These types of questions have recently come under attack, with critics arguing that they are poor measures of fear of crime due to issues with their wording. Some have argued that this question taps more into perceptions of risk and safety than
fear of crime (e.g., Ferraro 1995; Farrall 2004; Farrall and Gadd 2004a, b). The critique seems reasonable, as the wording of these questions clearly asks people to rate their perceived level of safety and makes no mention of "fear" or being "afraid." Thus, the key concern here is whether asking people "how safe" they feel leads to different responses than if the questions had asked "how afraid" or "how fearful" they were.
Research thus far has suggested that these wording differences do indeed lead to different response distributions of fear of crime. In the past, studies using these types of questions have consistently found that between a quarter and two-thirds of citizens are fearful about crime (see Farrall 2004). This paints a dire picture of modern society as an era in which a significant portion of the population lives in fear of crime and stays home behind locked doors (especially at night!). However, some have argued that the high prevalence of fear of crime is likely just an artifact of question wording problems embedded in the standard fear of crime measure. Thus, Farrall and colleagues have suggested that the use of such questions has led to an overstatement of the actual level of fear of crime in society (Farrall 2004; Farrall and Gadd 2004a, b).
As one example, Farrall and Gadd (2004a, b) asked one sample of 995 individuals a safety question: "How safe do you feel walking alone in this area after dark?" The question had typical response options of "very safe," "fairly safe," "a bit unsafe," and "very unsafe." To demonstrate the effects of question wording, another sample of 977 individuals was given a new fear question: "In the past year, have you ever felt fearful about becoming a victim of crime?" This was a simple yes/no question, and if the respondent answered yes, they were asked two follow-up questions dealing with the frequency of fear of crime ("How many times have you felt like this in the past year?") and the magnitude of fear ("And on the last occasion, how fearful did you feel?"). The magnitude question had response options of "not very fearful," "a little bit fearful," "quite fearful," and "very fearful."
In short, their study found that while both types of questions showed that similar proportions of the sample had felt fearful or unsafe, the responses to the frequency and magnitude questions suggested that looking only at those proportions could overstate the severity of the fear of crime problem in society. Specifically, one-third of the sample who were asked the safety question reported feeling unsafe ("a bit unsafe" and "very unsafe" combined). This proportion is fairly typical of research on "fear" which uses such safety questions. Turning to the specific fear question, they found that 37% of that sample reported having felt fearful of becoming a crime victim in the past year on the initial yes/no question. Thus, looking only at the simple wording change, there does not appear to be much bias in the marginal distribution, as a relatively similar number of people reported feeling "fearful" as reported feeling "unsafe" in the other sample.
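The similarity of these two marginal distributions can be checked formally with a two-proportion z-test. The sketch below reconstructs approximate counts from the percentages reported above (roughly 33% of 995 and 37% of 977); the exact counts are our approximation, not figures taken from the study:

```python
# Two-proportion z-test on counts reconstructed from the reported figures:
# ~33% of 995 respondents felt "unsafe" (safety wording) vs. ~37% of 977
# who felt "fearful" (fear wording). Counts are approximations.
from math import sqrt

n1, x1 = 995, 328   # safety-question sample, ~33% "unsafe"
n2, x2 = 977, 361   # fear-question sample,   ~37% "fearful"

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
z = (p2 - p1) / se

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.2f}")
```

With these approximate counts, |z| falls just below the conventional 1.96 cutoff, consistent with the observation that the simple wording change did not much alter the marginal distribution.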
However, when they turned to the follow-up questions, they found that questions worded only to ask whether a person has felt afraid or unsafe may overstate the level of fear in daily life. Specifically, 20% of the sample had experienced only one fearful event in the past year, while nearly half of the sample had only one to four fearful experiences during that time frame. Furthermore, the magnitude question revealed that those who had reported at least one fearful event did not rate the experience as particularly frightening, with only 15% of the sample reporting their experience to have been "quite fearful" or "very fearful." Perhaps more importantly, the study also found several other differences between the two types of fear measures when examining responses by demographic group. As is typical in past fear of crime research using the perceived safety at night question, the data showed that older people and females are more fearful (see Ferraro 1995 for a review of studies on gender/age and fear). However, using the new fear measure, the age gap disappeared, and the gender gap was reduced from females being 28% more "unsafe" than males to only 13% more fearful.
This is a good illustration of the importance of carefully considering the impacts of question wording. While sometimes the bias is readily apparent, as in the aforementioned examples involving "forbidding" or "not allowing" speeches or forming opinions about ground beef that is "75% lean" or "25% fat," it may also be more subtle and only reveal itself upon closer examination. In the case of the fear of crime example, one could easily, but mistakenly, conclude that asking respondents "how safe" or "how fearful/afraid" they are has little consequence, since Farrall and Gadd (2004a, b) found that nearly identical percentages (33% and 37%, respectively) reported being unsafe or fearful. However, further inspection showed that such generally worded questions likely overstate levels of fear in a sample by ignoring the frequency and magnitude of fear of crime events. Most people who reported feeling fearful at least once had experienced only one to four fearful events in the past year, and on average they did not rate the event(s) as particularly frightening.
More importantly, their examination of demographic variables showed important differences in levels of fear compared to levels of safety by age and gender, as discussed above. This suggests that even though similar proportions of the sample were identified using either measure, the two questions may still measure two distinct constructs. Indeed, other studies have found this to be the case. For example, in a review of past work in this area, Ferraro and LaGrange (1987) found general correlations between perceptions of safety/risk and fear of crime ranging from 0.32 to 0.48. Thus, while there is some relationship between the two, the low correlations and differential demographic effects on responses to fear of crime and perceived safety questions suggest that they are different constructs. One therefore has to decide carefully on the wording of questions so that they reflect what one truly wants to measure. The findings also point to problems with past research on fear of crime that used measures of perceived safety or perceived risk while interpreting the findings under a general framework of fear.
Moreover, the above discussion focused on only one particular critique of the wording of the standard NCVS "fear" question and its variants. This illustrates the complexity of the question wording issue. Fear of crime is by no means a clearly operationalized concept. In addition to the "fear" versus "safety" issue discussed above, research has shown that fear of crime must be distinguished from more global fears and that levels of fear vary across different types of crime. Thompson et al. (1992) utilized three measures of the fear of crime: "global fear," "fear of property crime," and "fear of violent crime." They found that fear varied across these different measures and argued that different methods of measurement could be a source of error. In another study, Keane (1992) drew a distinction between "concrete fear" and "formless fear" by using the standard NCVS fear question to assess these concepts.
For each type of crime listed below, please indicate how likely you think it is to happen to you during the next year (or in the future). If you feel certain that it will not happen to you, then circle the number 0 beside the crime. If you feel certain that it will happen to you, then circle the number 10.
This item was followed by a list of specific crimes. Keane's (1992) finding that fear varied by crime type points out that the contradictory results in fear of crime research are at least partially due to different methods of operationalizing the concept of fear of crime.
In short, these studies, along with others, suggest that fear of crime is not a uniform construct and that many of the inconsistencies in fear of crime studies may come from different ways of wording the questions. In particular, several scholars have suggested that it is preferable to ask about fear of specific types of crime rather than "crime" as an omnibus construct, given that levels of fear (and perceptions of safety and risk) have been found to vary across types of crime (see Farrall et al. 1997; Ferraro and LaGrange 1987). Differences in the magnitude of fear by crime type are ignored if one asks only about "crime" rather than about specific types of crime such as robbery, rape, or burglary.
Another issue regarding the wording of survey questions deals with problems that can arise when questions are too vague. Vagueness in question wording can affect the validity and reliability of survey data in general. Several critiques of the standard NCVS fear question have pointed out problems related to its vagueness (e.g., see Ferraro and LaGrange 1987). In particular, the question is criticized as being too vague in three ways. First, in many cases the reference to "crime" tends to be left out of the question wording; the question merely asks respondents how safe they feel walking alone at night in their neighborhood. It thus becomes entirely subjective whether respondents associate their fear with crime or with something else. Second, the contextual reference is not always specified and often is not well defined. For example, the term "neighborhood" can mean different things to different people, so findings that levels of fear vary among respondents living in the same neighborhood may be an artifact of respondents imagining different geographic areas. The validity of fear of crime measurements could likely be enhanced by wording questions to mention crime specifically and by using less ambiguous geographic areas as the reference. Lastly, the time frame of the scenario, "at night," is also vague and misses the possibility that levels of fear may differ, for example, right after sundown versus at 1 a.m.
A lack of clear context in question wording can also lead to biased results. In a study surveying college students' victimization experiences and fear of crime, Yang and Wyckoff (2010) found that questions with a clearer time and geographic reference (e.g., walking on campus at night) tended to elicit more specific responses, while questions with no reference were more likely to be subject to question-order effects (which we discuss in detail in the following section).
Another critique of the standard NCVS fear question concerns its reliance on hypothetical situations (Ferraro and LaGrange 1987). Specifically, these critiques note that the question often asks respondents how safe they "would" feel if walking alone at night, rather than how safe they actually feel when doing so (and some variants of the question mix the two together and ask "How safe do you, or would you, feel…"). It is argued that this risks overstating levels of fear by asking people to imagine situations they were not actually in. The critics also note that this problem of using hypothetical scenarios may be worsened by asking about an activity, like walking alone at night, that many people may seldom or never engage in. Thus it is suggested that more accurate measures of fear should aim to ask about episodes of fear of crime that were actually experienced: for instance, asking respondents whether they were ever fearful of being a crime victim in the past year, as Farrall and Gadd (2004a, b) did, or grounding the question in actual experience by starting it with "In your everyday life…," as suggested by Ferraro and LaGrange (1987).
25.1.2.1 Examples of Fear of Crime Wording in Actual Surveys
A key concern that arises from this discussion is just how much current research on fear of crime may have been plagued by surveys using questions subject to the wording problems outlined above. As part of an ongoing study (Yang and Hinkle, in progress), we have collected surveys from studies exploring the relationship between fear of crime and victimization (see Footnote 1 and Appendix A). Currently we have identified 35 unique surveys used across the studies we have collected to date, and an examination of these surveys allows us to gauge how many studies of ‘‘fear’’ of crime have really asked about fear of crime and how many have asked about perceptions of safety/risk or other constructs like ‘‘worry’’ about crime.
Out of these 35 surveys, only 14 (40%) include a measure of ‘‘real fear,’’ defined as the question specifically including the terms ‘‘fear,’’ ‘‘fearful,’’ ‘‘afraid,’’ or ‘‘scared.’’ Of these 14 items, six asked respondents how fearful or afraid they were walking alone at night, four asked about fear of becoming a victim of a series of specific crimes, two asked about fear of crime or of becoming a victim in general (no reference to specific crime types), one asked students about fear of being attacked in school, and one asked respondents whether people in their neighborhoods could leave property out without fearing that it would be damaged or stolen.

450 S.-M. Yang and J. C. Hinkle
The key takeaway is that 60% of the surveys collected did not include a real measure of emotional fear, despite these surveys being identified through a specific search for academic studies of fear of crime and victimization. The next step is to see what types of questions were asked in place of true fear of crime measures. The most commonly asked questions were measures of perceived safety. A total of 21 surveys included some form of perceived safety question; 15 of these were surveys that did not ask a real fear question, while six asked a safety question in addition to a fear question. The most common safety items were variations of the standard NCVS question asking respondents how safe they felt walking alone at night; a total of 13 surveys included such a question. In terms of the geographic context of these safe-walking-at-night questions, six referred to the respondent's neighborhood and two provided no geographic reference, and are thus subject to the issue of vagueness discussed earlier. The other five such items referred to how safe respondents felt in their ‘‘area’’ or ‘‘the environment in which you live.’’ The remaining eight safety questions included three items that generally asked whether respondents felt their neighborhood was safe and five items that asked about specific circumstances, such as how safe they felt when home alone at night or whether they avoided unsafe areas during the day because of crime.
The other two types of questions we identified as related to the ‘‘fear’’ issue were measures of perceived risk of victimization and of the level of ‘‘worry’’ about crime or victimization. Ten surveys included measures of perceived risk: nine featured questions asking respondents how likely they felt they were to become victims of crime in the future, and one asked them to rate the level of risk in the area in which they lived. Perceived risk questions tended to be asked in combination with fear or safety questions, with only three surveys including a risk measure with no other measures of fear, safety, or worry.
There were also 10 surveys that included measures of respondents' ‘‘worry’’ about crime. Seven questions asked respondents to rate their level of worry about specific types of crime (one of which asked how ‘‘anxious’’ rather than how ‘‘worried’’ they were), one referred to worry about crime in general, one asked how worried they would be staying home alone at night, and one asked how worried they were about their personal safety when walking alone at night. As with risk, questions about worry tended to be asked in combination with the other constructs examined here, with only three surveys including worry as the sole ‘‘fear’’-related question.
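The kind of coding exercise described above can be approximated with a simple keyword rule. The sketch below is a hypothetical illustration in Python: the construct keyword lists and example items are ours, not the coding protocol actually used in the study, which coded full instruments by hand. It also shows how a single wording can touch more than one construct, as with items that mix worry and safety.

```python
import re

# Hypothetical keyword lists approximating the four constructs discussed
# in the text (fear, safety, risk, worry); illustration only.
CONSTRUCTS = {
    "fear":   ["fear", "fearful", "afraid", "scared"],
    "safety": ["safe", "safety"],
    "risk":   ["likely", "risk", "chance"],
    "worry":  ["worry", "worried", "anxious"],
}

def constructs_in(wording: str) -> set:
    """Return every construct whose keywords appear in an item's wording."""
    text = wording.lower()
    found = {construct
             for construct, keywords in CONSTRUCTS.items()
             if any(re.search(r"\b" + kw, text) for kw in keywords)}
    return found or {"other"}

# Illustrative items (not drawn from any specific survey):
items = [
    "How safe do you feel walking alone at night in your neighborhood?",
    "How afraid are you of becoming a victim of burglary?",
    "How likely is it that you will be a victim of crime next year?",
    "How worried are you about your personal safety when walking alone?",
]
for q in items:
    print(sorted(constructs_in(q)), q)
```

Note that the last item is flagged as both ‘‘safety’’ and ‘‘worry,’’ echoing the chapter's point that some wordings blur the line between constructs.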
In sum, our review shows a wide variety of question wordings used to measure the construct of fear of crime in studies of fear and victimization. Only 14 of 35 surveys included measures of fear that specifically asked about being fearful or afraid. Not surprisingly, given the impact the NCVS has had on the development of other surveys, the most commonly used questions asked respondents how safe they felt walking alone at night. A number of questions also asked respondents to rate their perceived risk of becoming victims of crime or their level of worry about crime and victimization. These questions tended to be asked in combination with safety or fear items; however, there were three cases each in which perceived risk or worry was the only ‘‘fear’’-related question asked. In addition to four potentially different constructs (fear, safety, risk, and worry) being measured by these items (all of which were interpreted as measures of fear of crime by at least some studies using these surveys), there are also numerous wording differences within each category, as outlined above. With so many variations in how ‘‘fear’’ of crime is measured in surveys, it is not surprising that findings on the topic have long been equivocal and widely debated.
25.1.3 III. Question Ordering
25 Issues in Survey Design 451

Another equally important issue in survey construction is question order effects. This is a form of bias that may be introduced when the order in which material is presented leads to different responses. Question order effects are not caused by the ambiguity of the questions themselves discussed above; even questions that avoid the problems with question wording and response options discussed earlier may be affected. This is because the order in which questions are asked may influence responses given to items placed later in the survey. For instance, keeping with the fear of crime and victimization example we have been using, it is possible that asking victimization questions first could influence the level of fear reported, or vice versa (see Yang and Wyckoff 2010 for a review).
Question order effects (sometimes referred to as context effects) are due to the nature of language and how people make sense of conversation. Specifically, Schuman and Presser (1981) argue that ‘‘…words and sentences take part of their meaning from the context in which they occur’’ (p. 30). For surveys, one important context is the order in which questions are asked. The experiences a person is asked to recall earlier in a survey, or the scenarios one is asked to consider first, form the context for answering later questions. Earlier questions may bring to mind memories or imagined situations that in turn influence responses to subsequent questions, because those may be memories or situations respondents would not have considered otherwise. Having thought about the information once in previous questions, respondents find that information readily available (the recency principle). McClendon and O'Brien (1988), in proposing the recency principle, noted that the location of specific questions (e.g., past victimization experience) relative to general questions (e.g., opinion on how safe your neighborhood is) also makes a difference. They argued that when the general questions are asked last, there are larger effects for the specific items closest to the general question and smaller effects for those further away.
The mechanism of the question order effect can be explained through a chain of human cognitive and memory retrieval processes. Answering a question involves many stages: interpreting the question; retrieving the relevant information from long-term memory; making judgments based on the information retrieved; and finally, deciding on a proper answer (Tourangeau and Rasinski 1988). The same procedure is used when people complete a survey: a respondent retrieves information from his or her long-term memory to render a judgment and then provides an answer. However, if the respondent has already answered a question on a similar topic, that previous judgment, already used once and now stored in short-term memory, takes priority over other information in long-term memory and is drawn upon to answer the new question (Matlin 1994; McClendon and O'Brien 1988; Wyer 1980).
Studies have found that subjects tend to rely on their most recent experience when making judgments about their attitudes and beliefs, rather than using other information that may be equally or more relevant (Bem 1972; Bem and McConnell 1970; Wyer 1980). In the context of survey research, Bradburn and Mason (1964) referred to this as the ‘‘saliency effect,’’ which occurs when a set of questions concerning a general subject is asked early in the survey. The early and repeated appearance of questions examining the same topic gives salience to the topic and sensitizes the subject to the issue, thus increasing the subject's likelihood of expressing approval, concern, or worry in subsequent questions about the topic (see also Johnson et al. 2006). Bradburn and Mason (1964) suggested three other possible question order effects: redundancy, consistency, and fatigue. However, these other types of question order effects are not relevant to the focus of the current chapter and thus are not reviewed here (for more information, see Bradburn and Mason 1964, p. 58; Rossi et al. 1983).
Question order effects are much more common and salient than is often thought. Schuman and Presser (1981) pointed out that in some cases, order effects can produce marginal differences as large as 20% in the responses given. Moreover, question order effects are complex phenomena, and the direction of the effect varies: different arrangements of questions can create either more consistent or more contrasting results, depending on the content of the questions. Among discussions of question order effects, two notions are tested most often and considered important: the assimilation effect (or consistency effect) and the contrast effect (Babbie 1990; Bishop et al. 1985; Schuman et al. 1983b; Schwarz et al. 1991; Strack et al. 1988; Tourangeau et al. 2000). Assimilation (consistency) effects represent a situation where questions presented earlier lead respondents to interpret or respond to the target questions in a way similar to the preceding questions. Contrast effects, on the other hand, represent a situation where the order of questions diversifies the responses to target questions relative to the preceding questions by driving respondents to give responses divergent from those given to the earlier questions (Tourangeau et al. 2000; Schuman and Presser 1981).
25.1.3.1 Assimilation Effects

As mentioned above, assimilation effects occur when the target questions bear information that is unfamiliar to the respondents. The preceding questions provide respondents with some contextual information and help them interpret the meaning of the unfamiliar questions that follow. Subjects tend to make a connection between the target questions and nearby questions they have just answered in order to determine the nature of questions about unfamiliar topics. Tourangeau and Rasinski (1988) demonstrated such an effect using the Monetary Control Bill as an example. They found that when a block of inflation questions preceded the target question about the Monetary Control Bill, respondents tended to associate the two and answer the question assuming it was inflation related. Upon further examination, it appears that the assimilation effect occurred only when the contextual questions were positioned in a cluster rather than scattered throughout the survey. The assimilation effect has important implications for those who want to examine less familiar concepts in criminal justice, such as newly implemented policies or practices that might be unfamiliar to the general public.
Assimilation effects can also occur with questions involving value judgments. This type of effect has also been referred to as ‘‘part–part consistency,’’ which involves two questions with different contexts but similar logic; subjects' judgment of one condition will affect their judgment of the other. Hyman and Sheatsley (1950) conducted a widely cited split-ballot experiment demonstrating this type of effect and how the order in which questions are presented can affect the results. They presented the following two questions in alternate order to different respondents and then compared the responses of the two halves of the sample to see if the question ordering affected the results.
Communist reporter question: Do you think the United States should let communist newspaper reporters from other countries come in here and send back to their papers the news as they see it?

American reporter question: Do you think a Communist country like Russia should let American newspaper reporters come in and send back to America the news as they see it?
In this study, Hyman and Sheatsley found that American respondents were more likely to support letting communist reporters into the United States after having answered the American reporter question first. Conversely, they were less likely to support allowing American reporters into communist countries after having answered the Communist reporter question first. Essentially, respondents tried to be fair by applying the same standard to both questions. Schuman et al. (1983a) replicated Hyman and Sheatsley's experiment using a randomly assigned form in a telephone survey, and their findings confirmed Hyman and Sheatsley's conclusion regarding the assimilation effect. Though these types of opposing questions are less often seen in victimization surveys, it is nevertheless important to carefully examine the placement of questions that bear parallel structure in a survey.
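The size of a split-ballot order effect like Hyman and Sheatsley's can be summarized as the marginal difference between the two forms. The sketch below uses invented counts (chosen so the gap matches the roughly 20% marginal difference Schuman and Presser cite as an upper range) and a standard two-proportion z-test; it is an illustration of the arithmetic, not a reanalysis of the original data.

```python
from math import erf, sqrt

def two_prop_ztest(yes_a: int, n_a: int, yes_b: int, n_b: int):
    """Two-sided z-test for the difference between two sample proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)   # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF expressed via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, z, p_value

# Invented counts for illustration only: form A asked the American-reporter
# item first, form B asked the Communist-reporter item first.
diff, z, p = two_prop_ztest(yes_a=370, n_a=500, yes_b=270, n_b=500)
print(f"marginal difference = {diff:.0%}, z = {z:.2f}, p = {p:.4g}")
```

A 20-point gap on samples of this size is far larger than chance variation, which is why order effects of the magnitude Schuman and Presser describe are hard to dismiss as noise.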
25.1.3.2 Contrast Effects

As described earlier, contrast effects represent a situation where the order of questions results in more contrasting opinions, rather than more consistent opinions, on ratings of general attitudes. A major cause of contrast effects is presenting questions concerning a specific domain (e.g., past personal victimization experiences) before later questions regarding a general attitude (e.g., how safe is your neighborhood), such that the specific questions influence responses to the general questions. Questions with a general focus usually emphasize the rating of an omnibus construct or experience, while specific questions focus on a particular dimension of the same overall construct or experience (DeMoranville and Bienstock 2003). As described in cognitive process theory, when answering a general question, respondents first need to define the meaning of the question to help them narrow down its focus (Bradburn and Danis 1984; McClendon and O'Brien 1988). Specific questions are usually written with a very clear focus and context, which makes them easier to answer without drawing on other cues. Thus, when both types of questions are presented in the same survey, respondents tend to use the specific questions to give context to the general questions and to direct their attention so as to interpret the general questions accurately (Benton and Daly 1991; McFarland 1981; Schuman and Presser 1981; Sigelman 1981; Wyer 1980). The decision to use both types of questions in the same survey can be a strategic design choice to ensure response accuracy, as in the design of the Conflict Tactics Scales (CTS) for intimate partner violence studies (Ramirez and Straus 2006).
In the CTS, the earlier questions about intimate partners are worded in a neutral way to increase the likelihood of subjects disclosing their violent actions against their partners, which are assessed in the later questions. Nonetheless, it is important to note that surveys may invoke this principle unintentionally, when the contextual effects created by questions preceding the attitudinal questions inadvertently affect responses. In both scenarios the survey results may be altered; however, the priming effect is intentional in the first approach and unintended in the second, which makes the latter more problematic.
Many factors determine whether the question order effect provoked will be in the direction of an assimilation or a contrast effect. Reviewing earlier studies, Tourangeau et al. (2000) summarize two principles determining the direction of question order effects. First, when questions require deep processing of information, the design of a survey tends to produce contrast effects rather than assimilation effects. Second, vaguely defined scales, extreme scenarios identified in questions, and a clear focus in both contextual and target questions all promote contrast effects (pp. 220–221). The opposite conditions (clearly defined scales, common situations, and questions without a clear focus) promote assimilation effects (Tourangeau et al. 2000; Herr et al. 1983).
25.1.3.3 Question Order Effects in Victimization Research
In victimization research, it is unclear whether having experienced a significant event in the past, such as victimization, affects subjects' later responses about perceptions of safety or fear of crime. Compared to other factors, salient life events or fundamental beliefs are most likely to affect people's decision-making processes. For example, social psychologists have found that events which are memorable, informative, and recalled with strong emotion have the strongest impact on people's probability judgments (Tyler and Rasinski 1984, p. 309). By focusing on these salient events, people tend to overestimate the probability that related events will actually occur in the future (Carroll 1978). As such, respondents who are focused on past victimization experiences from first answering such questions may then overestimate the likelihood of future victimization, and thus report higher levels of fear of crime than they would have had the victimization questions not been asked first.
To explore the impact of question order effects in victimization surveys, Yang and Wyckoff (2010) conducted a randomized experiment using a telephone survey, administered to a sample of college students, which included common victimization and fear of crime questions. They adapted items from Smith and Hill's (1991) survey testing fear of crime and victimization on campus.
The survey used in the study asked specifically about personal experiences with, acquaintances' experiences of, or knowledge of victimization on campus, and asked generally about whether respondents felt safe on campus under different circumstances. The experiment randomly assigned respondents to two groups: the control group received a survey that placed the general questions about perceptions of safety before the specific questions about victimization, while the treatment group received a survey in which the specific victimization questions preceded the general safety questions.
Overall, Yang and Wyckoff did not find a significant difference between the two groups on four questions relating to perceived safety. However, further examination showed that questions providing a more detailed reference (e.g., walking on campus at night) tended to elicit more consistent responses, while questions with no reference were more likely to be subject to contrast effects. Additionally, respondents' gender and victimization experience were associated with significant differences in perceived safety: female subjects and prior victims perceived the college campus to be less safe than their counterparts. For the two safety questions that contained specific details (walking alone in the daytime or after dark), there was no interaction effect between gender or victim status and question arrangement. For the other two questions, which asked for a general assessment of the safety of the college campus, the study found interaction effects in which nonvictims and females demonstrated the expected question order effects.
That is, presenting the victimization questions first reduced females' and nonvictims' perceptions of safety. Females, who according to crime statistics are also less likely to be crime victims, were more likely to be influenced by the early presence of victimization questions and to report lower perceptions of safety (Baumer 1979; del Carmen et al. 2000; Ferraro 1996; Gordon and Riger 1989; Hale 1996; McConnell 1997; Pain 1995; Rountree and Land 1996; Warr 1985). Nonvictims, who lacked personal victimization experiences, were also more likely to feel unsafe when asked the victimization questions first. Thus, for both females and nonvictims, the findings suggest that presenting the victimization questions first served as a reference that caused them to think about the possibility of being victimized on campus and subsequently led to lower ratings of perceived safety. As expected, past victims tended to report lower ratings of perceived safety than nonvictims. However, when the victimization questions were presented first, victims' perceived safety actually increased to a level comparable to that of the nonvictims. Yang and Wyckoff (2010) argued that this points to potential interaction effects between respondent status and question ordering: perhaps people with different victimization experiences use different anchors when determining their levels of fear of crime.3
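The design step of an experiment like this boils down to randomly assigning each respondent one of two instrument versions. The sketch below is a hypothetical simplification in Python: the item wordings and two-item pools are ours, not the actual Yang and Wyckoff instrument, and a fixed seed is used so the treatment and control assignment is reproducible.

```python
import random

# Hypothetical item pools for illustration; the actual instrument adapted
# items from Smith and Hill (1991) and contained more questions.
SPECIFIC_VICTIMIZATION = [
    "Have you been the victim of a crime on campus in the past year?",
    "Do you know anyone who was victimized on campus in the past year?",
]
GENERAL_SAFETY = [
    "How safe do you feel walking on campus after dark?",
    "Overall, how safe do you think this campus is?",
]

def build_instrument(respondent_id: int, rng: random.Random) -> dict:
    """Randomly assign a respondent to one of the two question orderings."""
    victimization_first = rng.random() < 0.5   # coin-flip assignment
    if victimization_first:
        order = SPECIFIC_VICTIMIZATION + GENERAL_SAFETY   # treatment group
    else:
        order = GENERAL_SAFETY + SPECIFIC_VICTIMIZATION   # control group
    return {
        "id": respondent_id,
        "condition": "victimization_first" if victimization_first else "safety_first",
        "questions": order,
    }

rng = random.Random(42)   # fixed seed makes the assignment reproducible
instruments = [build_instrument(i, rng) for i in range(200)]
n_treatment = sum(inst["condition"] == "victimization_first" for inst in instruments)
print(f"{n_treatment} of 200 respondents received the victimization items first")
```

Because the two versions differ only in ordering, any systematic difference in safety ratings between the groups can be attributed to question order rather than question content.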
This conditional question order effect can perhaps explain the inconsistent findings in studies of the relationship between past victimization and one's level of fear. Among the extensive literature focusing on fear of crime, some studies argue that having been victimized increases fear of crime, as past victims tend to be more distrustful, more cautious, and more fearful (e.g., Balkin 1979; Skogan 1987; Smith and Hill 1991). Rountree and Land (1996) found that previous burglary victimization increased one's fear of future burglary victimization and produced a general unsafe feeling toward one's living environment. Though this finding sounds reasonable, others have concluded just the opposite: studies by Sparks et al. (1977) and Smith (1976) found a negative relationship between victimization and fear of crime; that is, some people show reduced fear of crime after being victimized. Mayhew (1984), meanwhile, using the 1982 British Crime Survey, found only a tenuous relationship between victimization and fear of crime. The source of these inconsistent findings could be variations between samples, differences in survey design (Lynch 1996; Murphy 1976)4 or variations in the definition and measurement of ‘‘fear of crime.’’

3 A similar mechanism was found in Smith and Jobe's (1994) study, in which they argued that generic knowledge of food consumption worked as an anchor to produce the base rate while episodic memories helped to adjust the base rate.
Others have further pointed out that question order effects differ by survey administration method. Willits and Ke (1995) argued that responses to mail questionnaires are less likely to show question order effects than telephone surveys, since respondents have more time to consider their answers as well as the opportunity to preview and review the items. Strack et al. (1988), on the other hand, found that order effects were reduced in telephone surveys when subjects were told in advance about the areas the survey would cover. Willits and Ke (1995) further concluded that although placing specific items first influenced responses to the general questions, browsing the questionnaire before answering any question nonetheless reduced the potential order effect. It is therefore reasonable to expect that research using mail surveys may reach different conclusions than research using telephone surveys.
In our research in progress, we are examining victimization surveys closely to see whether survey design could contribute to the inconsistent findings on the relationship between fear and victimization. Of the 35 victimization surveys identified in our ongoing study (see Appendix A), we are currently able to determine the specific order in which the victimization questions and the ‘‘fear’’ questions were asked for 27 of them.5 Of these 27, 16 (59%) ask the fear/safety/risk/worry item first, and 11 (41%) ask the victimization questions first. These differing practices across surveys offer some direction for further examination of the extent to which the inconsistent findings in research testing the link between victimization and fear of crime may result from survey design decisions about question ordering.6
25.2 Conclusions
Surveys have become commonplace in the field of criminology and criminal justice and are used as tools in studies of a variety of topics, such as evaluating police services and interventions, detecting trends in community well-being (e.g., general social surveys), and measuring the prevalence of victimization both in the general population (e.g., NCVS) and in subgroups such as students. With so much of the knowledge base in our field driven by findings from survey data, it is crucial to be sure our measures are accurate and reflect what we think they are measuring. To gather accurate information, it is extremely important to ensure that questions are designed in ways that actually measure the phenomenon of interest, and do so with high validity and reliability.
To understand some common problems in survey design, three possible issues were identified and discussed in this chapter: response option effects, question wording effects, and question order effects. In terms of response options, most victimization surveys tend to omit middle alternatives to attitudinal questions based on a belief that offering middle alternatives might draw respondents away from their actual attitudes. The Utah victimization survey results show that the percentage of respondents who choose middle alternatives when they are provided is actually very small. Thus, offering middle alternatives does not necessarily lead to biased results in surveys of victimization and fear of crime. On the other hand, the DK (‘‘don't know’’) option is generally included in victimization surveys. The finding regarding the effect of including the DK option is similar to that for offering middle alternatives: though some respondents will choose DK or middle alternatives when those options are provided, the respondents drawn to these options are fairly evenly divided among subjects who would otherwise express positive or negative attitudes. Thus, the overall ratio of ‘‘pros’’ and ‘‘cons’’ should not be affected dramatically, and such options should not introduce much bias into research findings.

4 The order in which questions were presented varies from study to study. For example, the British Crime Survey presents the fear of crime questions (general questions) before the specific victimization questions, while Rountree and Land (1996) presented the specific victimization questions first. Smith and Hill (1991), however, did not specify the order in which the questions were presented.

5 For some surveys we have merely identified them and been able to code question wording from articles or reports published analyzing their data, but cannot yet code question ordering, as we are still attempting to obtain the full survey instruments.

6 Yang and Hinkle (in progress) are currently conducting research to further examine whether the empirical evidence supports this speculation.
The effects of question wording are best illustrated by the fear of crime example. It demonstrates the importance of question wording by showing that a large portion of research in this area is hampered by the use of variations of the standard NCVS ‘‘how safe do you feel walking alone at night’’ question. This question has been shown to have a host of wording problems that call its validity into question and thus challenge the conclusions we can draw from the body of research in this area. For instance, even the long-accepted finding that the elderly are more fearful of crime is now suspect, given that at least one study which asked specifically about fear rather than safety did not find an age effect (Farrall and Gadd 2004a, b). It is therefore important for researchers to examine question wording closely to make sure questions capture the intended information. The examples above show that asking people ‘‘how safe’’ they feel is not the same as asking them ‘‘how fearful’’ or ‘‘how afraid’’ they are of crime. As such, researchers must decide when designing a survey whether they are really interested in measuring fear of crime or perceptions of safety (or other constructs such as perceived victimization risk or worry about crime) and design their questions accordingly.
Question order effects, on the other hand, are perhaps the survey design issue most often overlooked, yet they can have important impacts on survey results. As described in the previous section, the order in which questions are asked can lead either to assimilation effects, which amplify response tendencies, or to contrast effects, which lead to divergent results among respondents. Moreover, question order effects are more likely to occur for questions involving general attitudes with no specific reference provided. Yang and Wyckoff (2010) also found that respondents' characteristics (such as gender, education level, etc.) can influence their susceptibility to order effects. It is thus important to take into account both question ordering itself and its potential interaction with respondents' characteristics when examining data for question order bias. An examination of actual victimization surveys showed that there is no consistent practice on whether to present victimization questions or fear/perceived safety/risk questions first. The lack of attention to question order effects could therefore partly explain the lack of consensus in research on whether past victimization is related to higher levels of fear of crime. Overall, the review of the literature shows that questions with a more specific reference or context, clearer wording, and clear scales are less likely to generate biased results.
The examples cited throughout this chapter are of course limited to the survey research on fear of crime and victimization we identified, but they nonetheless provide a clear illustration of current practices in victimization survey design and highlight the importance of being cautious when designing surveys. Moreover, they reveal a number of common survey design issues relevant to surveys on every topic in criminology and criminal justice, and they show the importance of considering factors such as what response options to provide, how to word questions, and how question ordering may affect responses. Paying attention to such details is a crucial part of generating the quality data on crime and related issues needed to continue advancing our field.
Appendix A: List of 35 Example Surveys
The table provides a list of the 35 surveys of fear of crime and victimization used in the examples in this chapter (see Footnote 1 for more details on the collection of these surveys). These do not include the state-level surveys used as examples, as those are self-explanatory. Many of these 35 surveys were unnamed and are referred to by the author's name or a descriptive name we gave them. Where possible we provide a link to a website for the survey (i.e., for major surveys like the ICVS or British Crime Survey); for all others we provide a reference to a publication that uses data from the survey.
Survey name | Citation for single use surveys

Beck and Robertson survey | Beck, A., & Robertson, A. (2003). Crime in Russia: Exploring the link between victimization and concern about crime. Crime Prevention and Community Safety: An International Journal, 5(1), 27–46
British crime survey | http://www.homeoffice.gov.uk/science-research/research-statistics/crime/crime-statistics/british-crime-survey/
Cambodian UNICVS | Broadhurst, R., & Bouhours, T. (2009). Policing in Cambodia: Legitimacy in the making? Policing and Society, 19(2), 174–190
Canadian general social survey | http://www.statcan.gc.ca/dli-ild/data-donnees/ftp/gss-esg-eng.htm
Canadian urban victim survey | Weinrath, M., & Gartrell, J. (1996). Victimization and fear of crime. Violence and Victims, 11(3), 187–197
Caribbean victimization survey | Painter, K. A., & Farrington, D. P. (1998). Criminal victimization on a Caribbean island. International Review of Victimology, 6(1), 1–16
Chiricos, Padgett and Gertz survey | Chiricos, T., Padgett, K., & Gertz, M. (2000). Fear, TV news and the reality of crime. Criminology, 38(3), 755–785
Chockalingam and Srinivasan survey | Chockalingam, K., & Srinivasan, M. (2009). Fear of crime victimization: A study of university students in India and Japan. International Journal of Victimology, 16(1), 89–117
Community living and integration survey | Ditton, J., & Chadee, D. (2005). People's perceptions of their likely future risk of criminal victimization. British Journal of Criminology, 46(3), 505–518
Survey of community, crime and health | http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/04381
Dukes and Hughes survey | Dukes, R. L., & Hughes, R. H. (2004). Victimization, citizen fear and attitudes towards police. Free Inquiry in Creative Sociology, 32(1), 50–57
European union international crime survey | http://www.europeansafetyobservatory.eu/euics_fi.htm
European social survey | http://www.europeansocialsurvey.org/
General social survey | http://www.norc.uchicago.edu/GSS+Website/
International crime victimization survey (ICVS) | http://rechten.uvt.nl/icvs/
International labor organization survey | Dammert, L., & Malone, M. F. T. (2003). Fear of crime or fear of life? Public insecurities in Chile. Bulletin of Latin American Research, 22(1), 79–101
Kleinman and David crime survey | Kleinman, P. H., & David, D. S. (1973). Victimization and perception of crime in a ghetto community. Criminology, 11(3), 307–343
Miethe Seattle crime survey | Rountree, P. W., & Land, K. C. (1996). Perceived risk versus fear of crime: Empirical evidence of conceptually distinct reactions in survey data. Social Forces, 74(4), 1353–1367
Italian multipurpose survey | Miceli, R., Roccato, M., & Rosato, R. (2004). Fear of crime in Italy: Spread and determinants. Environment and Behavior, 36(6), 776–789
National crime victimization survey (NCVS) | http://bjs.ojp.usdoj.gov/index.cfm?ty=dcdetail&iid=245
NCVS (school crime supplement) | http://bjs.ojp.usdoj.gov/index.cfm?ty=dcdetail&iid=245
Nikolic-Ristanovic crime survey | Nikolic-Ristanovic, V. (1995). Fear of crime in Belgrade. International Review of Victimology, 4(1), 15–31
Observatory of the North-West survey | Amerio, P., & Roccato, M. (2007). Psychological reactions to crime in Italy: 2002–2004. Journal of Community Psychology, 35(1), 91–102
Oklahoma city survey | Baba, Y., & Austin, D. M. (1989). Neighborhood environmental satisfaction, victimization and social participation determinants of perceived neighborhood safety. Environment and Behavior, 21(6), 763–780
Queensland crime victims survey | http://www.police.qld.gov.au/programs/cvs/
Reid, Roberts and Hillard survey | Reid, L. W., Roberts, J. T., & Hillard, H. M. (1998). Fear of crime and collective action: An analysis of coping strategies. Sociological Inquiry, 68(3), 312–328
Scottish crime and victimization survey | http://www.esds.ac.uk/findingData/scsTitles.asp
Smith and Torstensson survey | Smith, W., & Torstensson, M. (1997). Gender differences in risk perception and neutralizing fear of crime. British Journal of Criminology, 37(4), 608–634
Stiles, Halim and Kaplan survey | Stiles, B. L., Halim, S., & Kaplan, H. B. (2003). Fear of crime among individuals with physical limitations. Criminal Justice Review, 28(2), 232–253
The Fear of Crime in America | Ferraro, K. F. (1995). Fear of crime: Interpreting victimization risk. Albany, NY: State University of New York Press (Note: See Appendix A for survey details)
Theall, Sterk and Elifson crime survey | Theall, K. P., Sterk, C. E., & Elifson, K. W. (2009). Perceived neighborhood fear and drug use among young adults. American Journal of Health and Behavior, 33(4), 353–365
Trinidad crime survey | Chadee, D., & Ditton, J. (1999). Fear of crime in Trinidad: A preliminary empirical research note. Caribbean Journal of Criminology and Social Psychology, 4(1), 112–129
Tseloni and Zarafonitou crime survey | Tseloni, A., & Zarafonitou, C. (2008). Fear of crime and victimization: A multivariate multilevel analysis of competing measurements. European Journal of Criminology, 5(4), 387–409
Violence against women survey | http://www.nij.gov/nij/pubs-sum/172837.htm
Women's safety survey | http://www.ausstats.abs.gov.au/ausstats/subscriber.nsf/0/F16680629C465E03CA256980007C4A81/$File/41280_1996
References
Andersen, R., Kasper, J., & Frankel, M. R. (1979). Total survey error: Applications to improve health surveys. San Francisco, CA: Jossey-Bass.
Ayidiya, S., & McClendon, M. (1990). Response effects in mail surveys. Public Opinion Quarterly, 54(2), 229–247.
Babbie, E. (1990). Survey research methods. Belmont, CA: Wadsworth.
Bachman, R., & Schutt, R. K. (2008). Fundamentals of research in criminology and criminal justice. Thousand Oaks, CA: Sage.
Bachman, R., & Schutt, R. K. (2010). The practice of research in criminology and criminal justice (4th ed.). Thousand Oaks, CA: Sage.
Balkin, S. (1979). Victimization rates, safety, and fear of crime. Social Problems, 26, 343–358.
Baumer, T. L. (1979). Research on fear of crime in the United States. Victimology, 3, 254–263.
Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 6, pp. 1–62). New York: Academic Press.
Bem, D. J., & McConnell, H. K. (1970). Testing the self-perception explanation of dissonance phenomena: On the salience of premanipulation attitudes. Journal of Personality and Social Psychology, 14, 23–31.
Benton, J. E., & Daly, J. (1991). A question order effect in a local government survey. Public Opinion Quarterly, 55, 640–642.
Bishop, G. F. (1987). Experiments with the middle response alternative in survey questions. Public Opinion Quarterly, 51(2), 220–231.
Bishop, G. F., Oldendick, R. W., & Tuchfarber, A. J. (1985). The importance of replicating a failure to replicate: Order effects on abortion items. Public Opinion Quarterly, 49, 105–114.
Bradburn, N., & Danis, C. (1984). Potential contributions of cognitive research to survey questionnaire design. In T. B. Jabine, M. L. Straf, J. M. Tanur, & R. Tourangeau (Eds.), Cognitive aspects of survey methodology: Building a bridge between disciplines (pp. 101–129). Washington, DC: National Academy Press.
Bradburn, N. M., & Mason, W. M. (1964). The effect of question order on responses. Journal of Marketing Research, 1(4), 57–61.
Cantor, D., & Lynch, J. P. (2000). Self-report surveys as measures of crime and criminal victimization. Criminal justice 2000: Measurement and analysis of crime and justice (Vol. 4, pp. 263–315). Washington, DC: United States Department of Justice.
Carroll, J. S. (1978). The effect of imagining an event on expectations for the event. Journal of Experimental Social Psychology, 14, 88–97.
Cochran, W. G. (1953). Sampling techniques. New York: Wiley.
Dalenius, T. (1974). Ends and means of total survey design. Report in "Errors in Surveys." Stockholm: Stockholm University.
del Carmen, A., Polk, O. E., Segal, C., & Bing, R. L., III. (2000). Fear of crime on campus: Examining fear variables of CRCJ majors and nonmajors in pre- and post-serious crime environments. Journal of Security Administration, 21(1), 21–47.
Deming, W. E. (1944). On errors in surveys. American Sociological Review, 9, 359–369.
DeMoranville, C. W., & Bienstock, C. C. (2003). Question order effects in measuring service quality. International Journal of Research in Marketing, 20, 217–231.
Farrall, S. (2004). Revisiting crime surveys: Emotional responses without emotions? International Journal of Social Research Methodology, 7, 157–171.
Farrall, S., & Ditton, J. (1999). Improving the measurement of attitudinal responses: An example from a crime survey. International Journal of Social Research Methodology, 2(1), 55–68.
Farrall, S., & Gadd, D. (2004a). Research note: The frequency of fear of crime. British Journal of Criminology, 44, 127–132.
Farrall, S., & Gadd, D. (2004b). Fear today, gone tomorrow: Do surveys overstate fear levels? Unpublished manuscript. Retrieved April 15, 2008, from http://www.istat.it/istat/eventi/2003/perunasocieta/relazioni/Farral_abs.pdf.
Farrall, S., Bannister, J., Ditton, J., & Gilchrist, E. (1997). Questioning the measurement of the 'fear of crime': Findings from a major methodological study. British Journal of Criminology, 37(4), 658–679.
Fattah, E. A. (1993). Research on fear of crime: Some common conceptual and measurement problems. In W. Bilsky, C. Pfeiffer, & P. Wetzels (Eds.), Fear of crime and criminal victimisation. Stuttgart: Ferdinand Enke Verlag.
Ferraro, K. F. (1995). Fear of crime: Interpreting victimization risk. Albany, NY: State University of New York Press.
Ferraro, K. F. (1996). Women's fear of victimization: The shadow of sexual assault. Social Forces, 75(2), 667–690.
Ferraro, K. F., & LaGrange, R. (1987). The measurement of fear of crime. Sociological Inquiry, 57(1), 70–101.
Gibson, C., Shapiro, G. M., Murphy, L. R., & Stanko, G. J. (1978). Interaction of survey questions as it relates to interviewer-respondent bias. Proceedings of the Section on Survey Research Methods. Washington, DC: American Statistical Association.
Gordon, M., & Riger, S. (1989). The female fear. New York, NY: Free Press.
Groves, R. M., & Lyberg, L. (2010). Total survey error: Past, present, and future. Public Opinion Quarterly, 74(5), 849–879.
Hale, C. (1996). Fear of crime: A review of the literature. International Review of Victimology, 4, 79–150.
Heaps, C. M., & Henley, T. B. (1999). Language matters: Wording considerations in hazard perception and warning comprehension. The Journal of Psychology, 133(3), 341–351.
Herr, P., Sherman, S. J., & Fazio, R. (1983). On the consequences of priming: Assimilation and contrast effects. Journal of Experimental Social Psychology, 19, 323–340.
Hyman, H. H., & Sheatsley, P. B. (1950). The current status of American public opinion. In J. C. Payne (Ed.), The teaching of contemporary affairs: Twenty-first yearbook of the National Council of Social Studies (pp. 11–34). New York: National Education Association.
Johnson, P. K., Apltauer, K. W., & Lindemann, P. (2006). Question-order effects in the domains of language and marriage. Columbia Undergraduate Science Journal, 1(1), 43–49.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341–350.
Keane, C. (1992). Fear of crime in Canada: An examination of concrete and formless fear of victimization. Canadian Journal of Criminology, 34, 215–224.
Kish, L. (1965). Survey sampling. New York: Wiley.
Kury, H. (1994). The influence of the specific formulation of questions on the results of victim studies. European Journal on Criminal Policy and Research, 2(4), 48–68.
Kury, H., & Ferdinand, T. (1998). The victim's experience and fear of crime. International Review of Victimology, 5(2), 93–140.
Levin, I. P. (1987). Associative effects of information framing. Bulletin of the Psychonomic Society, 25, 85–86.
Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149–188.
Lorenz, F., Saltiel, J., & Hoyt, D. (1995). Question order and fair play: Evidence of evenhandedness in rural surveys. Rural Sociology, 60, 641–653.
Lynch, J. P. (1996). Clarifying divergent estimates of rape from two national surveys. The Public Opinion Quarterly, 60(3), 410–430.
Marx, K. (1880). Introduction to the French edition of Engels' Socialism: Utopian and scientific. In F. Engels, Socialisme utopique et socialisme scientifique. Paris.
Matlin, M. W. (1994). Cognition (3rd ed.). Fort Worth, TX: Harcourt Brace.
Maxfield, M. G., & Babbie, E. R. (2007). Research methods for criminal justice and criminology. Belmont, CA: Wadsworth Publishing.
Mayhew, W. P. (1984). The effects of crime: Victims, the public and fear. Strasbourg, France: Council of Europe.
McClendon, M. J., & O'Brien, D. J. (1988). Question-order effects on the determinants of subjective well-being. Public Opinion Quarterly, 52, 351–364.
McConnell, E. H. (1997). Fear of crime on campus: A study of a southern university. Journal of Security Administration, 20(2), 22–46.
McFarland, S. G. (1981). Effects of question order on survey responses. Public Opinion Quarterly, 45, 208–215.
Murphy, L. R. (1976). Effects of attitude supplement on NCS-cities sample victimization data. Washington, DC: U.S. Bureau of the Census, Demographic Surveys Division, National Crime Surveys Branch.
Pain, R. (1995). Elderly women and fear of violent crime: The least likely victims? A reconsideration of the extent and nature of risk. British Journal of Criminology, 35(4), 584–598.
Ramirez, I. L., & Straus, M. A. (2006). The effect of question order on disclosure of intimate partner violence: An experimental test using the conflict tactics scales. Journal of Family Violence, 21(1), 1–9.
Rossi, P. H., Wright, J. D., & Anderson, A. B. (Eds.). (1983). Handbook of survey research. Orlando, FL: Academic Press.
Rountree, P. W., & Land, K. C. (1996). Burglary victimization, perceptions of crime risk, and routine activities: A multilevel analysis across Seattle neighborhoods and census tracts. Journal of Research in Crime and Delinquency, 33, 147–180.
Rugg, D. (1941). Experiments in wording questions: II. Public Opinion Quarterly, 5, 91–92.
Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys: Experiments on question form, wording, and context. Orlando, FL: Academic Press.
Schuman, H., Presser, S., & Ludwig, J. (1983a). Context effects on survey responses to questions about abortion. Public Opinion Quarterly, 45(2), 216–223.
Schuman, H., Kalton, G., & Ludwig, J. (1983b). Context and contiguity in survey questionnaires. Public Opinion Quarterly, 47(1), 112–115.
Schwarz, N., Strack, F., & Mai, H. P. (1991). Assimilation and contrast effects in part-whole question sequences: A conversational logic analysis. Public Opinion Quarterly, 55, 3–23.
Sigelman, L. (1981). Question-order effects on presidential popularity. Public Opinion Quarterly, 45, 199–207.
Skogan, W. G. (1987). The impact of victimization on fear. Crime and Delinquency, 33(1), 135–154.
Smith, D. L. (1976). The aftermath of victimization: Fear and suspicion. In E. Viano (Ed.), Victims and society. Washington, DC: Visage Press.
Smith, L. N., & Hill, G. D. (1991). Victimization and fear of crime. Criminal Justice and Behavior, 18, 217–239.
Smith, A., & Jobe, J. (1994). Validity of reports of long-term dietary memories: Data and a model. In N. Schwarz & S. Sudman (Eds.), Autobiographical memory and the validity of retrospective reports (pp. 121–140). New York: Springer.
Sparks, R. F., Genn, H. G., & Dodd, D. G. (1977). Surveying victims. Chichester: Wiley.
Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life-satisfaction. European Journal of Social Psychology, 18, 429–442.
Sudman, S., & Bradburn, N. M. (1982). Asking questions. San Francisco, CA: Jossey-Bass.
Thompson, C. Y., Bankston, W. B., & St. Pierre, R. (1992). Parity and disparity among three measures of fear of crime: A research note. Deviant Behavior: An Interdisciplinary Journal, 13, 373–389.
Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin, 103(3), 299–314.
Tourangeau, R., Rips, L. J., & Rasinski, K. A. (2000). The psychology of survey response. Cambridge: Cambridge University Press.
Tyler, T. R., & Rasinski, K. (1984). Comparing psychological images of the social perceiver: Role of perceived informativeness, memorability, and affect in mediating the impact of crime victimization. Journal of Personality and Social Psychology, 46(2), 308–329.
Walonick, D. S. (1993). Do researchers influence survey results with their question wording choices? (Doctoral dissertation). Walden University.
Warr, M. (1985). Fear of rape among urban women. Social Problems, 32(3), 238–250.
Willits, F. K., & Ke, B. (1995). Part-whole question order effect: Views of rurality. Public Opinion Quarterly, 59, 392–402.
Wyer, R. S., Jr. (1980). The acquisition and use of social knowledge: Basic postulates and representative research. Personality and Social Psychology Bulletin, 6, 558–573.
Yang, S., & Wyckoff, L. A. (2010). Perceptions of safety and victimization: Does survey construction affect perceptions? Journal of Experimental Criminology, 6(3), 293–323.