This article was downloaded by: [University of Wyoming Libraries]
On: 05 September 2013, At: 06:33
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Journal of the American Planning Association
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/rjpa20

To cite this article: Heather Macdonald (2006) The American Community Survey: Warmer (More Current), but Fuzzier (Less Precise) than the Decennial Census, Journal of the American Planning Association, 72:4, 491-503, DOI: 10.1080/01944360608976768

To link to this article: http://dx.doi.org/10.1080/01944360608976768

Published online: 05 Mar 2008.

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions

The American Community Survey

Warmer (More Current), but Fuzzier (Less Precise) than the Decennial Census

Heather MacDonald

Dealing with the U.S. Census of Population and Housing is almost unavoidable for local planners. The decennial census offers the only social and economic information that is spatially detailed and consistent across time and space, and available at minimal cost. Census data have a special legitimacy that local surveys rarely have (Starr, 1987). They are institutionalized in much social and economic legislation, and many federal (and state and local) resources are distributed according to formulae anchored in decennial census data. Census data underpin arguments about how to target resources, how to prioritize needs, and how to evaluate programs. Precisely because they provide quantified, apparently neutral evidence, they are crucial to our modern policymaking rhetoric; they form an information infrastructure (Skerry, 2000).

But census data have at least two important limitations. First, decennial data are usually from 2.5 to 12 years out of date, requiring us to construct or purchase updated estimates, or to make cases with outdated evidence. Because social and economic trends are difficult to predict, and administrative data sources (such as building permits or immigration records) are usually incomplete at best, such updating inevitably introduces error, which is usually compounded for smaller areas. The official status of census data is also no guarantee of their accuracy. Most of the census data planners use (such as travel-to-work times or housing costs) are estimates based on surveys of the sample of households who complete the long form. Often, the measures of greatest interest to planners (such as the number of transit riders by block group) are based on very few respondents, which means they are more approximate than many users realize. Even decennial enumerations obtained from the short form are not the exact counts the name implies.

The traditional decennial census has also become more expensive to conduct as the complexity of counting a diverse and rapidly growing population has multiplied. The American Community Survey (ACS)1 addresses both cost and currency concerns, fundamentally re-engineering most aspects of the census from a point-in-time survey to continuous measurement. In 2010, the decennial census will consist of the enumeration only (the short form); averaged ACS data will provide what the long form once did. The Census Bureau began in 2005 to survey three million addresses each year to provide annually updated, but less precise information. Annual estimates are released for all places, but the estimates are averages rather than the previous year's data for smaller places (3-year averages for places with populations between 20,000 and 65,000, and 5-year averages for places with populations less than 20,000), as Figure 1 shows. As with long form data, a Public Use Microdata Sample (PUMS) file will be released annually for census-defined PUMAs (Public Use Microdata Areas) of 100,000 people.2

The American Community Survey, which will replace data many planners rely on from the decennial Census long form, is finally in progress. The first nationwide data for places of 65,000 or more was released in the summer of 2006. It has several interesting implications for planning. On the one hand, more current data will eliminate many of the inaccuracies introduced by projection-based updates of stale census data. On the other, smaller sample sizes will mean we will have less precise estimates. Because the ACS will use averaged rather than point-in-time data, it will measure slightly different things than the decennial census. Finally, planners should be alert to the opportunities they will have to improve local data quality by improving the address file from which the sample is drawn.

Heather MacDonald ([email protected]) is an associate professor in the graduate program in urban and regional planning at the University of Iowa. Her recent work has included analyses of housing and community development needs for state planning, and a book, Unlocking the Census with GIS (ESRI Press, 2004, co-authored with Alan Peters).

Journal of the American Planning Association, Vol. 72, No. 4, Autumn 2006.
© American Planning Association, Chicago, IL.

Clearly, the ACS offers several improvements over decennial data. This article traces the history of and rationale for the ACS, investigating the social, political, and economic context in which it evolved, and then discusses its key implications for planning.

A Brief History of the ACS

Social and economic planning expanded rapidly during the 1930s, and planners struggling to respond to the massive changes of the Depression demanded both more detailed and more current data on housing conditions and employment (Anderson, 1988). The 1940 Census was the first to use a sample to collect a much-expanded menu of detailed data (although the separate long form was introduced only in 1960). More current (intercensal) data were provided by the Current Population Survey (CPS) and the InterCensal Population Estimates (ICPE) program. But the CPS provided only national- and regional-level estimates, and ICPE small-area estimates were developed for only a narrow range of characteristics. As estimates based on projections from the previous census and administrative data, they were also much less accurate for small areas. Interim Census Bureau Director Philip Hauser first proposed an "annual sample census" in 1941, to provide better quality intercensal data. It was Census Bureau statistician Leslie Kish, however, who developed and championed the idea of continuous measurement on which the ACS is based, beginning in the late 1970s, arguing that rolling samples could fully replace the decennial census (Alexander, 2001).

During the 1970s and 1980s, as more resources came to be distributed through block grants, and formulae tied to census data dictated eligibility and funding levels (de Neufville, 1987), demands for more spatially detailed and accurate intercensal data grew. Greater policymaking responsibility devolved to localities under the New Federalism of the 1970s. Local governments needed data to design and manage new programs, while state and federal governments needed data to oversee and evaluate the same programs (de Neufville, 1987). Census data offered local governments a way to target funds that appeared insulated from charges of patronage or bias (Innes, 1990; Starr, 1987). (For a recent example, see Galster, Tatian & Accordino in this issue.)


Figure 1. ACS data releases.

Source: U.S. Census Bureau (2004b, p. 5). Reprinted with permission.

[Chart showing the summer (2003 through 2010+) in which data for the previous year are first released, by type of data and population size of area:
Annual estimates: areas of 250,000 or more; areas of 65,000 or more
3-year averages: areas of 20,000 or more
5-year averages: census tracts and block groups*
Data reflect American Community Survey testing through 2004.]

*Census tracts are small, relatively permanent statistical subdivisions of a county averaging about 4,000 inhabitants. Census block groups generally contain between 600 and 3,000 people. The smallest geographic level for which data will be produced is the block group; the Census Bureau will not publish estimates for small numbers of people or areas if there is a probability that an individual can be identified.


But if data are to depoliticize contentious decisions, they must be widely perceived as accurate and timely. Although decennial census data may have the legitimacy of being official, stale data are easily challenged.

A planned mid-decade census in 1985 was never funded. In its place, the Census Bureau began work in earnest on a continuous measurement approach (Alexander, 2001). Congress provided the first round of funding to develop the concept in 1995. The Bureau proposed replacing the 2000 long form with the ACS, but the 1995 National Academy of Sciences (NAS) census advisory panel argued this was premature:

. . . the work to date on continuous measurement has overestimated the savings from dropping the long form, understated the cost of a continuous measurement system, and not sufficiently examined feasible alternatives for meeting the nation's needs for more timely long-form-type data at reasonable overall cost. We conclude that it will not be possible to complete the needed research in time to make the critical decisions regarding the format of the 2000 Census. (Edmonston & Schultze, 1995, p. 135)

By 1998 the Bureau began planning instead to replace the long form in the 2010 census. The plan expanded the initial four ACS test counties to 36 (at 31 sites), and initiated a special supplementary survey to test ACS methods during the 2000 Census (Census 2000 Supplementary Survey, or C2SS).3 But though the 2004 NAS advisory panel expressed strong support for the ACS as "fundamental" to a re-engineered census, it noted that many of the concerns raised by the 1995 panel were still unresolved, and argued that work remained, particularly on estimation techniques and relationships to other federal surveys, if the Census Bureau was to win full support for the ACS (Cork, Cohen, & King, 2004, p. 6).

Congressional support, and thus full funding, was indeed difficult to mobilize. Privacy concerns and anxieties about government intrusiveness have been the chief criticisms of census opponents since the U.S. Census' inception in 1790 (Anderson, 1988; Prewitt, 2004; Singer, Mathiowetz, & Couper, 1993). As a new program, the ACS attracted some attention; it did not obviously fall under the constitutional mandate for the decennial enumeration, and was regarded with suspicion by many congressional conservatives. Opponents dramatized centralized information's potential to enable government social control, invasion of privacy, and social programs, as the following from Congressman Ron Paul, a Texas Republican, illustrates:

You may not have heard of the American Community Survey, but you will. The national census, which historically is taken every ten years, has expanded to quench the federal bureaucracy's ever-growing thirst to govern every aspect of American life. . . . It contains 24 pages of intrusive questions concerning matters that are simply none of the government's business, including your job, your income, your physical and emotional health, your family status, your dwelling, and your intimate personal habits.

The questions are both ludicrous and insulting. The survey asks, for instance, how many bathrooms you have in your house, how many miles you drive to work, how many days you were sick last year, and whether you have trouble getting up stairs. It goes on and on, mixing inane questions with highly detailed inquiries about your financial affairs. One can only imagine the countless malevolent ways our federal bureaucrats could use this information. At the very least the survey will be used to dole out pork, which is reason enough to oppose it. (Paul, 2004)

As a result of such opposition, it took 3 years to secure full funding for the ACS. Finally, in the fall of 2004, just in time to provide an acceptable substitute for the 2010 long form, Congress appropriated $146 million of the total requested $165 million. Consequently, although household surveys began nationwide in 2005, counting the group quarters population was deferred until 2006 (U.S. Census Bureau, 2004a, 2006a). Falling refusal rates in Figure 2 suggest that popular opposition to the ACS may have diminished somewhat between 2000 and 2004, perhaps due to better marketing, interviewers becoming more persuasive and skilled with experience, or a diminution of the high profile opposition that seems to accompany each decennial census.4

Why Continuous Measurement?

The central argument for the ACS is that users need more timely data to capture and demonstrate the effects of rapid growth and dramatic change.5 And current local data do not have the same legitimacy as census data. Without official data, even knowledgeable people who understand the local situation cannot demonstrate it to higher levels of government and funding agencies:

Within a year, Fulton County at one point went from having the highest rate of employment in the State to the lowest rate of employment in the State. The census data didn't show us that. You know, living in Fulton County we know that this is the case, but we can't express that to funders. . . . (American Community Survey, Reardon testimony, 2003, pp. 32–33)

Businesses, too, rely on consistent, spatially detailed data to make location, marketing, and other decisions. As I will discuss later, privately provided data do not substitute for reliable, official census data. In the words of a representative of the U.S. Chamber of Commerce, ". . . the ACS is an important part of our country's economic infrastructure" (American Community Survey, Naymark testimony, 2003, p. 51).

But demand for timely data only partly explains the push to develop the ACS. The rising cost of the decennial census made fundamental restructuring imperative. In constant 2000 dollars, it cost $13 to count each household in 1970 and $56 in 2000. By 2010, this cost would be $72 if the structure of the census remained unchanged (U.S. Government Accountability Office [GAO], 2004a, p. 3). Costs have increased largely because it has become much more difficult to count a particular subset of households, and accuracy has suffered for the same reason (Edmonston, 2001, p. 44; Singer et al., 1993). For 2000, the Census Bureau had proposed sampling non-respondents to contain costs and improve accuracy. Sampling offered a way to correct for the differential undercount of groups such as renters, African-American men, and immigrants (Anderson & Feinberg, 1999; Prewitt, 2003). But sampling became a lightning rod for partisan conflict, with Republicans arguing that "statistics" would be used to expand population estimates in Democratic districts, and Democrats arguing that without sampling, the differential undercount would deprive difficult-to-count minority groups of significant rights and resources.6 In January, 1999, the Supreme Court ruled that the plan to produce a "one-number" census (one which incorporated statistical adjustments into the final enumeration figures) was unconstitutional, requiring a major last-minute reorganization of preparations for the 2000 census (Prewitt, 2003, p. 25).

Supporters claim that, at best, the ACS may cost the same as would conducting a traditional decennial census in 2010 (American Community Survey, Kincannon testimony, 2003, p. 23). However, if response rates on the short form (the basic enumeration) improve, this could help contain costs. In 2000, short-form response rates were 13% higher than those for the long form. The Bureau argues that eliminating the long form from the decennial count may increase response rates, improving both accuracy and cost-effectiveness (American Community Survey, Kincannon testimony, 2003, p. 23).7

Figure 2: ACS survey refusal rates by state, 2000 and 2004.

Source: Maps created by author; data from U.S. Census Bureau (2006b).

[Two choropleth maps (2000 and 2004) of percent ACS refusals by state, classed from lower than 1% to 2.5% and higher.]

Finally, the Bureau argues that the ACS may improve data quality. Sampling error will be higher in the ACS (as I discuss below), but non-sampling error may be reduced because the ACS will follow up with non-respondents using a small professional staff, rather than a large workforce of temporary and poorly paid enumerators (U.S. Census Bureau, 2002, 2004c). Comparisons of ACS, the 2000 C2SS, and 2000 Census results provide some support for this claim. ACS interviewer follow-up resulted in higher rates of survey completion, and evaluations concluded that this may reduce non-sampling error because fewer answers were imputed or obtained from neighbors (Gage, 2004, pp. 12–13; Hough & Swanson, 2004, p. 17; U.S. Census Bureau, 2004c).8 While reducing non-sampling errors will not compensate for the higher sample error in the ACS, improved data quality may allow users to estimate, and thus manage, total error better.

Another source of non-sampling error is the currency and completeness of the master address file from which the sample frame for both the ACS and future decennial censuses will be drawn. The U.S. Postal Service delivery sequence files form the basis for this address list, but they have important gaps. Many households (especially in rural areas) do not get mail where they live, not all units in multi-unit structures get mail separately, and new or converted units may not be added until they are occupied. ACS field interviewers equipped with handheld GPS units will update addresses for the ACS coverage program, also known as the Community Address Updating System (CAUS; American Community Survey, Census Bureau Director Louis Kincannon testimony, 2003, p. 28; Cork et al., 2004, pp. 76–77). The CAUS is part of a broader reform to the Census Bureau's geographic database systems, a major emphasis of Census 2010 planning (Cork et al., 2004, chap. 3; Waite & Reist, 2005, p. 16).

Implications of Re-engineering

The ACS is central to the Bureau's efforts to re-engineer the traditional census. It is clear why its threatened derailment in the 2004 budget deliberations mobilized such a wide range of supporters, many of them planners. But the flurry of political activity has deflected attention from the details of the proposal. I consider the implications the re-engineering will have for planners in greater detail below.

Smaller Sample Sizes Will Mean Larger Sample Errors

Sample size most obviously represents the tradeoffs between costs on one hand and the accuracy and timeliness of small-area data on the other. Any sample entails some loss of precision. We call this sampling error: In formal statistical terms, it is the estimated amount by which the average of any single sample differs from the average we would observe if we took infinite samples of the same population, or the true mean. A complete census of the population, on the other hand, does not have a sample error (by definition), but it may have many other kinds of non-sample error: Respondents may misunderstand questions (or lie), interviewers may bias responses, or some respondents may be missed or refuse to be interviewed. So, although a larger sample may reduce sample error, it may do little about non-sample error. Sample error is easier to deal with than non-sample error; we can estimate the effect it is likely to have and adjust for it, most usually by constructing confidence intervals around sample estimates.9

To aid the user of ACS data, which are based on a muchsmaller sample than the long form data, the Census Bureaupublishes 90% confidence intervals alongside estimates.
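The confidence-interval mechanics are straightforward. A minimal sketch (with hypothetical numbers, not actual ACS figures, and treating the sample as a simple random sample, which ignores the ACS's more complex design) of the 90% interval around an estimated proportion:

```python
import math

Z90 = 1.645  # z value for a 90% confidence interval

def ci_90(p, n):
    """90% confidence interval for an estimated proportion p
    from a simple random sample of size n."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    moe = Z90 * se                   # margin of error
    return p - moe, p + moe

# The same 20% estimate is far fuzzier when the sample is small:
print(ci_90(0.20, 4000))  # roughly (0.190, 0.210)
print(ci_90(0.20, 250))   # roughly (0.158, 0.242)
```

The published ACS bounds play the same role: the narrower the interval, the more precisely the estimate pins down the true value.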

The original ACS proposal for a sample size of 6 million households provided reliability equivalent to that of the long form, but at double or triple the cost (Diffendahl & Weidman, 1995; Edmonston & Schultze, 1995). A significantly scaled down sample of 3 million housing units a year (plus 2.5% of the group quarters population) was settled on, and the Census Bureau estimates that sampling error will be about 1.33 times that of the long form (U.S. Census Bureau, 2002). Over 5 years, approximately 12.5% of all addresses will be surveyed, compared to just under 17% surveyed for the 2000 long form.10 But only one third of non-responding addresses will be followed up with personal visits, so the actual "interviewed" sample size (including people who refuse to respond or who could not be contacted) may be only 56% of that for the long form (Hubble, quoted in Van Auken, Hammer, Voss, & Veroff, 2004, p. 19).
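The Bureau's 1.33 figure is roughly what the square-root law of sampling error implies. As an illustration (again treating the interviewed sample as a simple random sample, which understates real design effects), standard error scales with the inverse square root of sample size:

```python
import math

def se_inflation(sample_ratio):
    """Factor by which standard error grows when the sample
    shrinks to `sample_ratio` of its original size.
    Sampling error grows as 1/sqrt(n): halving the sample
    inflates the standard error by sqrt(2), not by 2."""
    return 1 / math.sqrt(sample_ratio)

# An interviewed ACS sample about 56% the size of the long form's
# implies standard errors about 1.34 times as large -- close to
# the Bureau's estimate of 1.33.
print(round(se_inflation(0.56), 2))  # -> 1.34
```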

Areas with very low survey response rates, small places, and rural areas without city address structures would have higher sample errors if sample sizes were not adjusted. Table 1 compares ACS and Census 2000 self-response rates (the percent of households who mail surveys back without requiring follow-up) and occupied unit non-response rates (the percent of households not responding to interviewer follow-up) for four ACS test counties for which detailed comparisons have been completed.

There is wide variation. Bronx County has one of the lowest self-response rates among ACS test sites, but it may be typical of many other difficult enumeration areas.11

Although better ACS follow-up resulted in lower non-response rates for the ACS than for the Census long form, Salvo, Lobo, and Calabrese (2004) point out that this is a costly operation, and caution that ". . . five years of data may not be enough to generate reliable estimates at the census tract level if mail return rates do not improve" (p. 22). Compensating for these problems requires larger samples in difficult-to-survey places, and corresponding reductions in sample size elsewhere. Actual sampling rates will vary between 8.5% and 50% over a 5-year period (Waite & Reist, 2005, p. 16).

Sampling error will have a greater impact not only on estimates for smaller places, but also on estimates for smaller groups of people.12 Some separate items are highly correlated within the same household (for instance, race and Hispanic origin, language spoken at home, and citizenship status). A smaller sample will include fewer independent observations of such households, and thus we can expect higher sampling errors for these items. Confidence intervals will vary for different types of information, and the adequacy of small-area estimates will vary depending on the type of data examined. Table 2 shows 2004 ACS poverty data for San Diego County; compare the width of the confidence intervals around the estimates of the percentage of families in poverty with those around the estimates of the percentage of single-woman-headed households with children under 5.
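The contrast can be read directly off the published bounds in Table 2: the half-width of the 90% interval is the estimate's margin of error. A quick sketch using three rows from the table:

```python
# (estimate, lower bound, upper bound) from Table 2, in percent
rows = {
    "All families": (9.5, 8.3, 10.7),
    "Female householder, children under 18": (31.5, 25.8, 37.2),
    "Female householder, children under 5 only": (35.3, 18.3, 52.3),
}

for label, (est, lo, hi) in rows.items():
    moe = (hi - lo) / 2  # half the interval width = 90% margin of error
    # Relative margins: about 13% of the estimate for all families,
    # but nearly half the estimate for the smallest group.
    print(f"{label}: +/-{moe:.1f} points ({moe / est:.0%} of the estimate)")
```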

Clearly, planners will be dealing with highly variable levels of accuracy, and data on more vulnerable small groups are likely to be less precise than data on large groups. Of course, this is already the case with the long form data on population and housing characteristics, but with the decennial data these consequences are lessened by the larger sample sizes and resulting smaller confidence intervals.

Table 1. Household self-response and non-response rates, ACS and Census 2000.

                       Self-response rates      Occupied unit non-response rates
County                 ACS      2000 Census     ACS      2000 Census
Multnomah, OR          65.0%    70.4%           3.8%     5.1%
Tulare, CA             50.1%    63.4%           3.9%     10.1%
San Francisco, CA      57.9%    65.7%           6.4%     12.0%
Bronx, NY              36.0%    53.0%           11.0%    21.0%

Note: Self-response rates are the proportion of households that returned the survey by mail. Occupied unit non-response rates are the proportion of households (occupied units) that neither mailed back the survey nor responded to follow-up by enumerators.

Sources: Hough and Swanson (2004); Gage (2004); Salvo, Lobo, and Calabrese (2004).

Table 2. Percentage of San Diego County families whose prior year's income was below the poverty level, 2004.

                                                       Estimate   Lower bound(a)   Upper bound(a)
All families                                           9.5        8.3              10.7
  With related children under 18 years                 14.2       12.1             16.3
  With related children under 5 years only             10.2       5.9              14.5
Married-couple families                                5.8        4.6              7.0
  With related children under 18 years                 8.6        6.4              10.8
  With related children under 5 years only             4.8        1.8              7.8
Families with female householder, no husband present   23.4       19.2             27.6
  With related children under 18 years                 31.5       25.8             37.2
  With related children under 5 years only             35.3       18.3             52.3

Note: a. For a 90% confidence interval.

Source: U.S. Census Bureau, American Factfinder.

How adequate will the ACS be to answer the questions planners deal with, compared to the decennial census? The following hypothetical example illustrates a typical challenge planners could face. During hearings on a 2005 Consolidated Housing Plan for County X (a medium-sized metropolitan area with relatively high rates of home ownership), planners prepare graphs showing how several housing characteristics changed between 2000 and 2004. Figure 3 shows a striking finding: the number of severely cost-burdened households (those paying more than half their income in rent) appears to have increased dramatically between 2000 and 2004. In each lower cost-burden category, Figure 3 also shows fewer renters in 2004 than in 2000. People attending the public meetings are impressed with this graph. They conclude that the situation has worsened considerably for renter households since 2000, and recommend that the county finance a generous rental assistance fund to aid severely cost-burdened renters.

As the planner who will be arguing for this proposal before the county board, you need to know how solid the evidence really is that many more renter households face unreasonable and unsustainable housing costs. Renters make up a fairly small share of all households (31.2% nationally), and slightly more than half of all renter households pay less than 25% of income for housing. Thus the group you are interested in is a relatively small proportion of households (15.6%). Once we take into account the effect of sample error by constructing confidence intervals around these estimates (in the case of the ACS, using the confidence intervals supplied in the published tables), will we still have a solid argument that dramatic changes have occurred? Figure 4 suggests not.

In statistical terms, Figure 4 provides no basis forrejecting the null hypothesis that the number of severelycost-burdened renters increased between 2000 and 2004.Although the overlap in confidence intervals is small, it ispossible that the true average of sample means in 2004 isno different from the true average in 2000. Figure 4 also

graphically illustrates the effect of moving to the smaller sample sizes of the ACS: confidence intervals, and thus the range of likely true values, are far wider for the ACS data than for the 2000 long form data. For variables that apply to very small groups of people, such as means of transportation to work for some of the smaller racial groups, ACS samples may simply be too small to produce usable estimates.
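The comparison behind Figure 4 can be sketched in a few lines. The renter counts and standard errors below are hypothetical stand-ins for County X (they are not taken from the published tables); the point is the mechanics: build 90% intervals, then check whether they overlap.

```python
Z_90 = 1.645  # z-value for a 90% confidence interval

def ci_90(estimate, standard_error):
    """90% confidence interval around an estimate."""
    margin = Z_90 * standard_error
    return (estimate - margin, estimate + margin)

def intervals_overlap(a, b):
    """True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical counts of severely cost-burdened renters in County X:
census_2000 = ci_90(9_000, 300)   # long form: larger sample, smaller SE
acs_2004 = ci_90(10_200, 650)     # ACS: smaller sample, wider interval

print(intervals_overlap(census_2000, acs_2004))   # True: cannot conclude a change
```

Overlapping intervals are a conservative screen: two estimates whose 90% intervals overlap slightly can still differ significantly under a formal difference-of-estimates test, so such a test is preferable when the call is close.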

One outcome of special importance to planners is that smaller samples and streamlined ACS follow-up procedures will make data on vacant units less precise. Vacant units will be identified during the in-person follow-up with nonrespondents. This follow-up will cover one in three addresses and occur in the third month after the survey is distributed. Thus two months will pass before a unit that was vacant as of the time of the survey is rechecked to determine vacancy status, during which time it might have become occupied. Consequently, vacant units will be under-sampled compared to occupied units, and information collected for vacant units, such as asking price, number of bedrooms, and type of unit, will have a higher average coefficient of variation (estimated at 16.5%, compared to 12.2% for occupied units) and thus a higher sampling error (U.S. Census Bureau, 2002, p. 28). Although the 2010 Census will count vacant

MacDonald: The American Community Survey 497

Figure 3. Simple comparison of renter households in County X by percent of income spent for housing: 2000 Census vs. 2004 ACS.

Source: Author’s calculation of a hypothetical case based on 2000 Census of Population and Housing SF3 detailed tables, and ACS 2004 detailed tables.

[Figure 3 is a bar chart: renter households (y-axis, 0 to 12,000) by percent of income spent for housing (25% to 29.9%, 30% to 34.9%, 35% to 39.9%, 40% to 49.9%, 50% or more), comparing the 2000 Census and the 2004 ACS.]

homes, it will not collect other information about them. Given how often vacancy rates are used as a leading indicator of neighborhood distress, housing availability, or growth rates, less accurate vacancy rates will be a real loss. But reduced precision should be set in context; more current data could eliminate other common sources of error.
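The precision gap described above is easiest to see through the coefficient of variation, the standard error expressed as a share of the estimate. A short illustration (the unit counts and standard errors are invented; only the 16.5% vs. 12.2% averages come from the Bureau's report):

```python
def coefficient_of_variation(estimate, standard_error):
    """Relative precision: standard error as a fraction of the estimate."""
    return standard_error / estimate

# The same absolute standard error is far worse, relatively, for the
# smaller universe of vacant units (all figures hypothetical):
occupied_cv = coefficient_of_variation(50_000, 1_500)   # 0.03  ->  3.0%
vacant_cv = coefficient_of_variation(4_000, 1_500)      # 0.375 -> 37.5%
print(occupied_cv, vacant_cv)
```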

More Timely Data Will Mean Fewer Projection-Based Errors

Until now, planners typically have had two options for dealing with outdated census data. They could run their own models to project from the last census, using local administrative data and sometimes surveys, to estimate current conditions. Or, they could purchase updated estimates from a variety of private firms that use small-area projection techniques to provide updated census estimates.

Unless they are doing the work themselves, planners may not be aware that most estimates of current conditions are in reality projections based on the last decennial census. Projection methodologies differ,13 and detailed evaluation is difficult because of the limited technical information

disclosed by firms.14 Yet without thoughtful adjustment based on local knowledge, particularly of migration, the most volatile component of population change, such efforts are likely to be quite inaccurate, especially for the smallest areas. Isserman (1993) states the too-often-neglected truism: “Any projection method that extends observed trends or rates into the future will fail to forecast the future accurately when the trends or rates change” (p. 57). This applies to projection models taught in planning schools as well as to the commercial models many planners use as ready-made alternatives. For instance, some data providers use the Census Bureau’s Intercensal Population Estimates (ICPE) as the basis for projections that update a wide range of demographic characteristics. But although the ICPE are official, historically they have been far from accurate. And discrepancies between the 2000 ICPE and the 2000 Census counts were far higher for small places and for more rapidly changing areas (Devine & Coleman, 2002; Harper, Coleman, & Devine, 2002).
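Isserman's truism is easy to demonstrate with invented numbers: a projection that carries a county's 1990s growth rate forward drifts badly as soon as the underlying rate changes.

```python
# Hypothetical county: steady 1990s growth, then migration stalls after 2000.
pop_1990, pop_2000 = 40_000, 50_000
annual_rate = (pop_2000 / pop_1990) ** (1 / 10) - 1    # ~2.26% per year

projected_2004 = pop_2000 * (1 + annual_rate) ** 4     # trend extrapolation
actual_2004 = 51_000                                   # invented "truth"

error = (projected_2004 - actual_2004) / actual_2004
print(round(projected_2004), round(100 * error, 1))    # 54668 people, 7.2% high
```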

Local knowledge may be key not only to producing reasonably accurate forecasts (Isserman, 1993), but also to

498 Journal of the American Planning Association, Autumn 2006, Vol. 72, No. 4

Figure 4. Estimates of renter households in County X with 90% confidence intervals, by percent of income spent on housing: 2000 Census vs. 2004 ACS.

Source: Author’s calculation of a hypothetical case based on 2000 Census of Population and Housing SF3 detailed tables, and ACS 2004 detailed tables; confidence intervals calculated by author.

[Figure 4 is a chart: renter households (y-axis, 0 to 16,000) with 90% confidence intervals around the 2000 Census and 2004 ACS estimates in each percent-of-income category (25% to 29.9% through 50% or more).]

choosing the projection approach most appropriate to each local area (Smith & Cody, 2002; West & Fullerton, 1996). Local knowledge appears to be highly correlated with more accurate projections (Lenze, 2000; Smith & Cody, 2002). Yet mass-produced projections from commercial vendors take only the most dramatic sources of local variation into account. (For instance, Claritas claims to account for major natural disasters and military base closings.) Private sector estimates would be vastly more expensive than they are if they were customized for localities. For most business users, cheap, off-the-shelf projections of current and future conditions are good enough (given the other uncertainties private businesses face) to guide broad-brush marketing decisions. But local public policy decisions, which may potentially affect a much larger volume of investment, both public and private, should be based on a more accurate understanding than these generic models offer.

Most commercial estimates and projections do incorporate recent administrative or other data into models to deal with some of the more variable components, such as migration, housing unit growth, or job growth. In most cases, however, the data are chosen from what is readily available, and do not necessarily provide a good proxy for the real trends the model should attempt to capture. For instance, one private data provider updates small-area median household income using the “average sum of household credit limits” from a consumer marketing database, arguing that this measure is highly correlated with tract-level income. The likelihood that cards with high credit limits are aggressively marketed in tracts the previous census identified as high income is not prominently discussed. Basing projections on possibly inverted causality in this way may be no better than using outdated data.

Thus planners often introduce significant errors when they attempt to update stale census data. Though I have specifically criticized the estimates and projections of private providers, local efforts to update census data can suffer many of the same problems. This is an important counterpoint to the increased sample error the ACS will cause. Increased sample error may be a lesser problem than the unsystematic and unpredictable error introduced by the projection techniques we have been using. Even though we will still struggle with the limitations of projection and forecasting when we venture into discussions of the future, it may be desirable to abandon the widespread practice of projecting what we call current conditions.

Continuous vs. Point-in-Time Measures

The ACS changes what we measure because it depicts communities over time rather than once a decade. It redefines resident population by sampling those who are currently living in a location and who say they are there for at least two months or have no other residence. By contrast, the decennial census samples residents at their “usual place of residence” (where they live for the majority of the year). Redefining residence will clearly improve data for counties with large seasonal fluctuations in population (Van Auken et al., 2004, p. 3). This more fluid concept of residence may also do better at capturing the presence of footloose retirees in a community, and the increasing mobility of some kinds of labor (construction workers and consultants as well as the more typical migrant farm workers).

However, this change also complicates converting sample numbers to population totals. In previous decennial censuses, long form sample results were adjusted to population totals based on the short form enumeration totals. But there is no similar benchmark for the ACS because its definition of residence is unique.15 The ICPE, discussed previously, estimate population totals based on 2000 Census counts that use the “usual place of residence” concept. Although ACS data are expected to improve ICPE estimates (Alexander & Wetrogan, 2000), questions still remain about controlling ACS population totals to ICPE estimates given their different operational definitions of the resident population. As a consequence, ACS estimates of politically charged variables, such as poverty rates among the children of seasonal migrants, may be more vulnerable to criticism than similar data from the last decennial census.

A moving average may produce quite a different picture of a community than a point-in-time estimate. For example, comparing median household incomes from the ACS to those from the 2000 Census, ACS incomes appeared relatively lower in some communities with more low-wage migrant farm workers (Gage, 2004, p. 6), and higher in retirement destination counties with more affluent seasonal residents (Van Auken et al., 2004, p. 24). Using the ACS to redefine the number of people in poverty, a measure used to allocate funds such as HOME, may be an extremely controversial process in some places, especially given the sample error issues discussed above.

Continuous measurement has other implications. Although the Bureau will release multiyear averages annually for all sizes of places, these should not be compared year to year because overlapping data would distort trends. Instead, one set of 3-year averages (for instance, 2005–2007) should be compared to the previous non-overlapping set of 3-year averages (2002–2004, not 2004–2006). Planners should also use consistent data when comparing communities, which means using the most recent data available for the smallest community in the comparison. For instance, a state may decide to target housing tax

credits to communities with low vacancy rates and high proportions of cost-burdened renters, based on ACS data. This is precisely the kind of policy design improvement the ACS is meant to enable. However, targets will have to be based on 5-year averages for these variables if developers in different size communities are to compete fairly. An applicant in a larger city with a suddenly tighter rental market may have current ACS data to demonstrate his or her project should be ranked higher than it is using 5-year average data, but fairness dictates that more current information cannot be considered in awarding points based on local need. This is likely to be a contentious issue.
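The non-overlapping comparison rule above can be encoded directly. A sketch (the function names are my own, not Bureau terminology):

```python
def windows_overlap(a_start, a_end, b_start, b_end):
    """True if two multiyear averaging windows share any data year."""
    return a_start <= b_end and b_start <= a_end

def previous_nonoverlapping(start, end):
    """Most recent earlier window of the same length sharing no years."""
    length = end - start + 1
    return (start - length, end - length)

print(windows_overlap(2005, 2007, 2004, 2006))   # True: 2005 and 2006 are shared
print(previous_nonoverlapping(2005, 2007))       # (2002, 2004): the valid comparison
```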

Of course, it is still unclear what effect the ACS measurement changes will have on allocation formulas or eligibility cutoffs. Averaging data over several years will smooth sharp changes in need; some communities could benefit by remaining eligible for funding in years when need dropped, and others will lose out by not qualifying for funding despite a temporary increase in need (Cork et al., 2004, p. 110). Only empirical research will show the trade-off between the bias introduced by averaging data over time and the bias introduced by using outdated estimates (or estimates that have been updated through less-than-perfect projection methods).

Both the U.S. Department of Housing and Urban Development (HUD) and the GAO raise a further averaging issue for the ACS. The Census Bureau plans to release monetary data adjusted for inflation, using the Consumer Price Index (CPI). It will adjust monthly data to reflect an average for the calendar year, and inflate annual data from previous years to current-year values as each new set of ACS data is released. Multiyear rolling averages will be expressed in current-year values. Critics have pointed out that adjusting monthly data for inflation to produce an annual estimate is not the same as adjusting monthly data for trends to produce an annual estimate. The former assumes that purchasing power remains constant over the year, inflating incomes to reflect changes in prices. The latter adjusts monthly data to reflect trends in income, distinct from trends in prices (ORC Macro, 2002, p. 16). However, once again the ACS’s shortcomings should be set against the shortcomings of previous methods of updating monetary values from the decennial census, which relied on much smaller national surveys such as the CPS, the American Housing Survey, Bureau of Labor Statistics surveys, and local rent surveys (Peters & MacDonald, 2004, pp. 208–209). Despite the shortcomings of an inflation-based model, the ACS will probably offer more locally specific data on monetary variables.
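The Bureau's inflation adjustment amounts to rescaling each year's dollars by a CPI ratio, which holds purchasing power constant rather than tracking real income trends. A sketch with illustrative index values (stand-ins, not verified CPI figures):

```python
# Illustrative annual price index values (hypothetical, not verified CPI data).
cpi = {2002: 179.9, 2003: 184.0, 2004: 188.9}

def to_current_dollars(amount, from_year, current_year, index=cpi):
    """Express an earlier year's dollar amount in current-year values.
    This holds purchasing power constant; it is NOT an adjustment for
    real income trends (the distinction the critics draw)."""
    return amount * index[current_year] / index[from_year]

print(round(to_current_dollars(30_000, 2002, 2004)))   # 31501
```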

Boosting Accuracy by Improving the Address File

An accurate address file is essential for a successful sample survey (or enumeration). A sampling frame that excludes some households cannot be the basis of a truly random sample. Inaccurate address files have been an important source of non-sampling error in the long form portion of the census, and of the inaccuracy that plagues the enumeration. This is not an issue unique to the ACS, but it takes on added importance given the smaller sample from this point forward. A more current and complete Master Address File (MAF) would help limit bias in the sample frame. The Local Update of Census Addresses (LUCA) program will involve local and tribal governments in address file updates to a much greater extent than previously (Cork et al., 2004, p. 76; Waite & Reist, 2005).16 Planners are likely to play a key role in this, which provides an opportunity to improve the quality of local data. Integrating new subdivisions, conversions, and property tax records into local geographic databases (where this does not yet occur) is one way to enhance currency and expand coverage. But the challenges of improving address files go far beyond merely technical issues.

Ethnographic research suggests address files may systematically exclude some types of households. In a review of ethnographic research commissioned by the Census Bureau in the early 1990s, de la Puente (1995) estimated that across the 29 sample areas examined, 40% of residents who should have been enumerated in the 1990 Census were not (p. 10). The most significant reasons were the failure of the address list and enumerators to identify households living in “irregular” housing units, and the “irregular” nature of many households in low-income and minority enclaves:

In one example, Velasco [an interviewer] points to a historic mansion built in 1890 that used to be a single family home but is now subdivided into 22 apartments covering four different floors. According to Velasco, the layout of this building was very confusing. Moreover, the mailboxes did not have the same numbering system as the housing units. It took Velasco and his research assistants over ten visits to identify all the housing units in the building. The census missed four housing units and a total of eight individuals. . . . (de la Puente, 1995, p. 13)

Illegal apartments, unmarked rural roads, residents with a strong desire to pass undetected, and the transitory nature of non-traditional households create problems that make most technical data managers despair.

But local governments have a strong interest in ensuring accurate counts of real needs, and planners arguably have a professional responsibility to ensure that the community profile on which social programs are based is a more rather than less complete picture. Updating the address file, then, is far more than a technical problem; in many places, it will raise significant political and social challenges about how best to ensure inclusion of the most marginalized while protecting their confidentiality and privacy. Developing stronger partnerships with local community leaders and community-based organizations where these exist will be crucial for a successfully updated address list (de la Puente, 1995, p. 28). But these partnerships may not be sufficient. In many places, researchers reported that neighbors were even less likely to be trusted than outsiders. Local planners could face ethical problems in their positions as employees of bureaucracies that enforce occupancy rules and building codes, if not taxation and immigration laws. Will it be possible for planners, as agents of local government, to identify a more complete address list? Will they be able to separate this task from their other responsibilities to enforce local regulations designed to protect public health, safety, and welfare? This is not a dilemma raised by the ACS alone; the issue will be even more important for the 2010 Census.

Conclusions

The ACS will re-engineer the information infrastructure planners rely on. More timely data will improve accuracy by doing away with the projection models we have relied on in the past, both those we created and those we purchased from private vendors. But these timely data will be much less precise than the decennial sample data on which we used to base our projections. Local planners will face the challenge of accurately communicating the limitations of ACS data, without undermining their value compared to the unknown error involved in updated guesstimates or outdated information.

The ACS will also change the basis on which funding formulas and eligibility thresholds are calculated, but it is not yet clear how these changes will be distributed. Given the complex issues raised by averaging and the lower levels of precision involved, ACS estimates may create as much contention as decennial population counts. Once more, planners will need to educate their audiences to ensure data are used appropriately.

Finally, local governments should participate actively in updating address lists. This, along with encouraging mail-back response, is one of the most important things a

locality can do to improve the quality of local ACS data. This entails far more than just the technical challenges of integrating geographic databases among government departments. Constructing detailed, accurate address lists that include informal and illegal dwellings as well as new construction and small-scale apartment conversions may pose some real ethical challenges for planners. Closer partnerships with community-based organizations may help, but the responsibility cannot be subcontracted out entirely.

The ACS offers exciting new ways to describe and analyze our communities, and the tradeoffs it makes between timeliness and precision may enable us to manage and explain error more clearly. Because it offers a video stream rather than a snapshot, it may equip us to model and explain change more effectively, which may be a step toward managing change better. But we will be able to use the ACS effectively only if we have a clear grasp of its limitations and are able to communicate these clearly.

Notes
1. The Census Bureau has a helpful web site including both a wide range of information on the ACS and links to available data: http://www.census.gov/acs/www/
2. More information about the PUMS file, which will be similar to the PUMS files released from decennial long form data, can be found at http://www.census.gov/acs/www/Products/PUMS/index.htm. Note that during the test phase (until 2004), PUMS data were released only at the state level, because the sample size was too small to release them by PUMAs.
3. Plans to integrate other surveys such as the CPS into the ACS were abandoned because it would be difficult to collect the same quality of information through a mail survey, and conducting the ACS through in-person interviews would be too expensive (Alexander, 2001, pp. 7–8).
4. Refusal rates are the percentage of addresses where a potential respondent was contacted but refused to complete the survey; they are distinct from other reasons for non-response, such as not being able to contact a responsible household member. Legally, people are required to respond to the ACS, as they are to the decennial census.
5. Supporters mobilized during the appropriations debates, arguing that:

• ACS data and projections will provide the basis for thousands of data tools and models developed by the private sector to market new products, discern new markets, and respond to changes in America’s communities.

• Reliable, up-to-date information will level the playing field for small communities. . . .

• Fast growing and rapidly changing metropolitan areas will benefit from the availability and relevance of ACS data. (Sabety, Reamer, & Clark, 2004)

6. In fact, several commentators have pointed out that it is unclear what the effect of sampling for non-response would be on political districts, voting rights, or resources, and whether it would favor Democrats, as Republicans claimed (Brunell, 2001; Skerry, 2000).
7. Some supporting evidence may be found from the experience of other countries. Britain does not include a question on income in its census,

and tests of the effect of adding an income question found that response rates dropped overall, and by a greater margin (10%) in low-income inner-city neighborhoods (Tunstall, 2005, p. 13).
8. Nationally, mail-back rates for Census 2000 were higher than ACS mail-back rates (68.1% compared to 55.3%), but more rigorous follow-up resulted in total non-response rates that were lower for the ACS (4.4%, 5.2% for occupied units) than for Census 2000 (9.7%, 8.7% for occupied units; U.S. Census Bureau, 2004b, pp. 14, 20). However, it is important to remember that ACS non-response rates are based on the one-in-three households selected for in-person follow-up, while Census 2000 non-response rates are based on all sampled households. Allocation rates show the proportion of questions with missing data, for which answers are imputed based on other household characteristics. ACS allocation rates were lower than Census 2000 rates in all 36 ACS test counties for all types of questions, except those related to vacant units (U.S. Census Bureau, 2004b, p. 29). The Bureau expects non-sampling error in the ACS to be lower than that of the decennial census for both of these reasons.
9. Confidence intervals are ranges within which we can expect the true population value to fall, with a given level of probability. For instance, we could expect (with a 95% probability we will be correct) that the true value will fall within a confidence interval that is two standard errors on either side of our estimate. To aid users, the Census Bureau publishes calculations of standard errors for variables collected through samples, which can be used to construct confidence intervals around the estimates provided in published data tables.
10. These are approximate percentages only; as the number of total housing units expands, the fixed, 3-million-address sample will constitute a shrinking proportion of all households.
11. Comparing response rates to the Census 2000 Supplementary Survey (which used the same methodology as the ACS) in tracts with high concentrations of a single racial or ethnic group, Griffin (2002, p. 6) found that 57.1% of responses from Hispanic households and 52% from Black households were collected through personal interviews, compared to 31.7% for Whites.
12. The coefficient of variation (CV) measures the size of the standard error of an estimate compared to the total value of the estimate. Thus, the same size standard error will have a much smaller CV for an item that applies to approximately 50% of the population (such as being female) than it will for an item that applies to approximately 10% of the population (such as being in poverty; U.S. Census Bureau, 2002, p. 29). Items with larger CVs will thus have wider confidence intervals. While the average coefficient of variation for uncorrelated items such as school enrollment and employment is 12.2%, the average CV for correlated variables is estimated by the Census Bureau at 19.3% (U.S. Census Bureau, 2002, p. 28).
13. For instance, Woods and Poole Economics uses an export-base model to project regional employment, which becomes the basis for the migration component of the cohort-survival population projection model they appear to use for small area estimates (Woods & Poole Economics Inc., 2005). Claritas, another large provider, uses the Census Bureau’s Intercensal Population Estimates (ICPE) as the starting point, distributing change among census tracts and even block groups using administrative (such as the U.S. Postal Service deliverable addresses) and private source data (Claritas Inc., 2005).
14. There is very little scholarly examination of the methodology used by private data providers. One assessment comparing Woods and Poole Economics projections with those produced by state demographers concluded that Woods and Poole Economics produced somewhat more accurate population projections, but that the projections had a “substantial and statistically significant bias,” negative in the case of jobs and income and positive in the case of population (Lenze, 2000, p. 219).
15. The GAO and the Census Bureau continue to battle over this issue. The GAO (2004b) argues that adjustments to the ICPE are “critical to the reliability of the ACS estimates” (p. 11). The Bureau responds that it is not clear these are essential, but it is continuing to investigate the question (U.S. GAO, 2004b, p. 90).
16. Several problems reduced the effectiveness of the LUCA during preparations for the 2000 Census (Cork et al., 2004, p. 64). These may be avoided by better preparation this decade, and by the related TIGER (Topologically Integrated Geographic Encoding and Referencing system, the Census Bureau’s geographic information system) enhancements that are in progress.

References
Alexander, C. H. (2001). Still rolling: Leslie Kish’s “rolling samples” and the American Community Survey. Proceedings of Statistics Canada Symposium 2001, Achieving Data Quality in a Statistical Agency. Ottawa, ON: Statistics Canada.
Alexander, C. H., & Wetrogan, S. (2000). Integrating the American Community Survey and the intercensal demographic estimates program. Proceedings of the American Statistical Association Survey Research Methods Section, 295–300.
American Community Survey: The challenges of eliminating the long form from the 2010 Census, Hearing before the Committee on Government Reform, Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census. House of Representatives, 108th Cong., Serial No. 108–197 (2003, May 13).
Anderson, M. J. (1988). The American census: A social history. New Haven, CT: Yale University Press.
Anderson, M. J., & Feinberg, S. E. (1999). Who counts? The politics of census-taking in contemporary America. New York: Russell Sage Foundation.
Brunell, T. L. (2001, November/December). Science and politics in the census. Society, 11–16.
Claritas Inc. (2005). Demographic update 2005 methodology. Retrieved August 24, 2006, from http://www.tetrad.com/pub/documents/clarimeth-detailed.pdf
Cork, D. L., Cohen, M. L., & King, B. F. (Eds.). (2004). Reengineering the 2010 Census: Risks and challenges. Washington, DC: National Academy Press.
de la Puente, M. (1995). Using ethnography to explain why people are missed or erroneously included by the census: Evidence from small area ethnographic studies. Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association. Retrieved August 24, 2006, from http://www.census.gov/srd/papers/pdf/mdp9501.pdf
de Neufville, J. I. (1987). Federal statistics in local governments. In W. Alonso & P. Starr (Eds.), The politics of numbers (pp. 343–362). New York: Russell Sage Foundation.
Devine, J., & Coleman, C. (2002). People might move but housing units don’t: An evaluation of the state and county housing unit estimates. Paper presented at the Annual Meeting of the Southern Demographic Association, Austin, TX.
Diffendahl, G., & Weidman, L. (1995, August). Simulation of continuous measurement for small area estimation. Paper presented at the Annual Meeting of the American Statistical Association. Retrieved August 31, 2005, from http://www.census.gov/acs/www/AdvMeth/Papers/ACS/Paper2.htm

Edmonston, B. (2001, November/December). The case for modernizing the U.S. census. Society, 42–53.
Edmonston, B., & Schultze, C. (Eds.). (1995). Modernizing the U.S. census. Washington, DC: National Academy Press.
Gage, L. (2004). Comparison of Census 2000 and American Community Survey 1999–2001 estimates: San Francisco and Tulare Counties, California. Sacramento: California Department of Finance.
Griffin, D. H. (2002). Measuring survey nonresponse by race and ethnicity. Washington, DC: U.S. Census Bureau.
Harper, G., Coleman, C., & Devine, J. (2002). Evaluation of 2002 subcounty population estimates. Paper presented at the meeting of the Population Association of America, Atlanta, GA. Retrieved August 24, 2006, from http://www.census.gov/population/www/documentation/twps0070/twps0070.pdf
Hough, G. C., & Swanson, D. A. (2004). The 1999–2001 American Community Survey and the 2000 Census data quality and data comparisons: Multnomah County, Oregon. Portland, OR: Population Research Center, Portland State University.
Innes, J. E. (1990). Knowledge and public policy: The search for meaningful indicators. New Brunswick, NJ: Transaction Publishers.
Isserman, A. M. (1993). The right people, the right rates. Journal of the American Planning Association, 59(1), 45–64.
Lenze, D. G. (2000). Forecast accuracy and efficiency: An evaluation of ex ante substate long-term forecasts. International Regional Science Review, 23(2), 201–226.
ORC Macro. (2002). The American Community Survey: Challenges and opportunities for HUD (Final report). Washington, DC: U.S. Department of Housing and Urban Development.
Paul, R. (2004, July 16). Ron Paul warns of American Community Survey national census: None of your business! Retrieved February 21, 2005, from http://www.anomalynews.com/phorum/read.php?f=2
Peters, A., & MacDonald, H. (2004). Unlocking the census with GIS. Redlands, CA: ESRI Press.
Prewitt, K. (2003). Politics and science in census taking. New York: Russell Sage Foundation.
Prewitt, K. (2004, June). What if we give a census and no one comes? Science, 4, 1452–1453.
Sabety, P., Reamer, A., & Clark, L. (2004, November 30). Understanding our communities: Funding the American Community Survey. Retrieved January 30, 2006, from http://www.brookings.edu/views/op-ed/reamer/20041130.htm
Salvo, J., Lobo, P., & Calabrese, T. (2004). Small area data quality: A comparison of estimates, 2000 Census and the 1999–2001 ACS, Bronx, New York test site. New York: New York Department of City Planning, Population Division.
Singer, E., Mathiowetz, N. A., & Couper, M. P. (1993). The impact of privacy and confidentiality concerns on survey participation. Public Opinion Quarterly, 57, 465–482.
Skerry, P. (2000). Counting on the census? Race, group identity, and the evasion of politics. Washington, DC: Brookings Institution Press.
Smith, S. K., & Cody, S. (2002). An evaluation of population estimates in Florida: April 1, 2000. Paper presented at the annual meeting of the Southern Demographic Association. Retrieved January 26, 2006, from http://www.bebr.ufl.edu/Articles/SDA_2002.pdf
Starr, P. (1987). The sociology of official statistics. In W. Alonso & P. Starr (Eds.), The politics of numbers (pp. 7–57). New York: Russell Sage Foundation.
Tunstall, R. (2005, February). Using the U.K. and U.S. censuses for comparative research. Retrieved August 24, 2006, from http://www.brookings.edu/metro/pubs/20050208_comparativecensus.htm
U.S. Census Bureau. (2002, May). Meeting 21st century demographic data needs—Implementing the American Community Survey: Report 2: Demonstrating survey quality. Retrieved August 24, 2006, from http://www.census.gov/acs/www/Downloads/Report02.pdf
U.S. Census Bureau. (2004a, December 10). American Community Survey alert no. 28. Retrieved August 22, 2005, from http://www.census.gov/acs/www/Special/Alerts/Alert28.htm
U.S. Census Bureau. (2004b). American Community Survey: A handbook for state and local officials. Washington, DC: Author.
U.S. Census Bureau. (2004c, June). Meeting 21st century demographic data needs—Implementing the American Community Survey: Report 7: Comparing quality measures: Comparing the American Community Survey’s three-year averages and Census 2000’s long form sample estimates. Retrieved August 24, 2006, from http://www.census.gov/acs/www/AdvMeth/acs_census/creports/Report07.pdf
U.S. Census Bureau. (2006a, January 26). American Community Survey alert no. 35. Retrieved January 31, 2006, from http://www.census.gov/acs/www/Special/Alerts/Latest.htm
U.S. Census Bureau. (2006b). Using the data: Quality measures. Retrieved January 31, 2006, from http://www.census.gov/acs/www/UseData/sse/index.htm
U.S. Government Accountability Office. (2004a, January). 2010 Census: Cost and design issues need to be addressed soon (GAO-04-37). Washington, DC: Author.
U.S. Government Accountability Office. 
Smith, S. K., & Cody, S. (2002). An evaluation of population estimatesin Florida: April 1, 2000. Paper presented at the annual meeting of theSouthern Demographic Association. Retrieved January 26, 2006 fromhttp://www.bebr.ufl.edu/Articles/SDA_2002.pdfStarr, P. (1987). The sociology of official statistics. In W. Alonso & P.Starr (Eds.), The politics of numbers (pp. 7–57). New York: Russell SageFoundation.Tunstall, R. (2005, February). Using the U.K. and U.S. censuses forcomparative research. Retrieved August 24, 2006, from http://www.brookings.edu/metro/pubs/20050208_comparativecensus.htmU.S. Census Bureau. (2002, May). Meeting 21st century demographicdata needs—Implementing the American Community Survey: Report 2:Demonstrating survey quality. Retrieved August 24, 2006, fromhttp://www.census.gov/acs/www/Downloads/Report02.pdfU.S. Census Bureau. (2004a, December 10). American CommunitySurvey alert no. 28. Retrieved August 22, 2005, from http://www.census.gov/acs/www/Special/Alerts/Alert28.htmU.S. Census Bureau. (2004b). American Community Survey: A hand-book for state and local officials. Washington, DC: Author.U.S. Census Bureau. (2004c, June). Meeting 21st century demographicdata needs—Implementing the American Community Survey: Report 7:Comparing quality measures: Comparing the American CommunitySurvey’s three-year averages and Census 2000’s long form sample estimates.Retrieved August 24, 2006, from http://www.census.gov/acs/www/AdvMeth/acs_census/creports/Report07.pdfU.S. Census Bureau. (2006a, January 26). American Community Surveyalert no. 35. Retrieved January 31, 2006, from http://www.census.gov/acs/www/Special/Alerts/Latest.htmU.S. Census Bureau. (2006b) Using the Data: Quality Measures.Retreived January 31, 2006 from http://www.census.gov/acs/www/UseData/sse/index.htmU.S. Government Accountability Office. (2004a, January). 2010Census: cost and design issues need to be addressed soon (GAO-04-37).Washington, DC: Author.U.S. Government Accountability Office. 
(2004b). American CommunitySurvey: Key unresolved issues (GAO-05-82). Washington, DC: Author.Van Auken, P. M., Hammer, R. B., Voss, P. R., & Veroff, D. L. (2004,March). American Community Survey and census comparison final analyticreport: Vilas and Oneida Counties, Wisconsin, Flathead and Lake counties,Montana. Madison: Applied Population Laboratory, University ofWisconsin.Waite, P. J., & Reist, B.H. (2005). Reengineering the census of popula-tion and housing in the United States. Statistical Journal of the UnitedNations Economic Commission for Europe, 22 (1), 13–23.West, C. T., & Fullerton, T. M. Jr. (1996). Assessing the historicalaccuracy of regional economic forecasts. Journal of Forecasting, 15, 19–36.Woods & Poole Economics Inc. (2005). Summary technical descriptionof the Woods & Poole Economics Inc. 2005 regional projections anddatabase. Washington, DC: Author.

MacDonald: The American Community Survey 503
