University of Iowa
Iowa Research Online
Theses and Dissertations
Fall 2013
Continuity of care among Medicare beneficiaries: the development of patient-reported measures, their association with claims-based measures, and the prediction of health outcomes

Suzanne Elizabeth Bentler
University of Iowa
Copyright 2013 Suzanne Elizabeth Bentler
This dissertation is available at Iowa Research Online: http://ir.uiowa.edu/etd/1952
Recommended Citation: Bentler, Suzanne Elizabeth. "Continuity of care among Medicare beneficiaries: the development of patient-reported measures, their association with claims-based measures, and the prediction of health outcomes." PhD (Doctor of Philosophy) thesis, University of Iowa, 2013. http://ir.uiowa.edu/etd/1952.
CONTINUITY OF CARE AMONG MEDICARE BENEFICIARIES: THE DEVELOPMENT OF PATIENT-REPORTED MEASURES, THEIR ASSOCIATION
WITH CLAIMS-BASED MEASURES, AND THE PREDICTION OF HEALTH OUTCOMES
by
Suzanne Elizabeth Bentler
A thesis submitted in partial fulfillment of the requirements for the Doctor of
Philosophy degree in Health Services and Policy in the Graduate College of
The University of Iowa
December 2013
Thesis Supervisor: Professor Fredric D. Wolinsky
Copyright by SUZANNE ELIZABETH BENTLER
2013 All Rights Reserved
Graduate College The University of Iowa
Iowa City, Iowa
CERTIFICATE OF APPROVAL
PH.D. THESIS
This is to certify that the Ph.D. thesis of
Suzanne Elizabeth Bentler
has been approved by the Examining Committee for the thesis requirement for the Doctor of Philosophy degree in Health Services and Policy at the December 2013 graduation.
Thesis Committee:
Fredric D. Wolinsky, Thesis Supervisor
Robert O. Morgan
Keith Mueller
Thomas E. Vaughn
Beth A. Virnig
Robert B. Wallace
To my husband and my parents
ACKNOWLEDGEMENTS
From the start, I would like to express my gratitude and appreciation to my
advisor, Dr. Fred Wolinsky. There are not enough words to express how lucky I am that
he has been my mentor throughout my time in the doctoral program and on this
dissertation journey. I would also like to thank my entire committee – Drs. Fredric
Wolinsky, Robert Morgan, Keith Mueller, Tom Vaughn, Beth Virnig, and Bob Wallace –
for all of the advice, support, and encouragement they provided to me during this process.
I am truly thankful that each of you agreed to help guide me along the way. And, a
special thanks goes to Dr. Robert Morgan and Dr. Beth Virnig for allowing me access to
their data so that I could pursue this project.
I have an incredible family and they have been with me every step of the way. I
especially want to thank my wonderful and unbelievably patient husband, Steve, for his
love and unwavering support. A special thank-you goes to my father, William Bentler,
whom I credit with sending me on this course when he insisted that I take Honors Algebra
instead of Freshman Choir during my first year of high school. Thanks Dad! And, I want
to recognize and thank my mother, JoAnn Bentler, who passed away 15 years ago this
year. She was an incredible woman and no one in my life had more confidence in me
than her. Thanks also to my mother-in-law, Diane Frohling, who has always been a
friend to me in times of need. Finally, I would like to thank my beautiful niece, Amelia
Akers, whose courage in the face of incredible odds has taught me how to be strong even
when I think I am not.
I have so many friends and colleagues to thank that I cannot do it justice here.
However, I want to especially thank my office mate and good friend, Paula Weigel, for
her incredible sense of humor and for being my sounding board. Thanks also to Lois
Albrecht, Drs. Elizabeth Momany, Peter Damiano, and Mary Charlton, and the late Grace
Piro who each encouraged me to keep at it. Finally, I want to thank several people from
the Department of Health Management and Policy who helped and supported me along
the way – Drs. Marcia Ward, Samuel Levey, Brian Kaskie, and George Wehby, Fred
Ullrich, Jean Sheeley, Torrie Malichky, Diane Schaeffer, and Karrey Shannon. This
doctoral research was supported by an Alvin R. Tarlov & John E. Ware Jr. Doctoral
Dissertation Award in Patient Reported Outcomes received by Suzanne Bentler and funds
from the University of Iowa John W. Colloton Chair.
ABSTRACT
Continuity of patient care is an essential element of primary care because it
should result in better quality care and disease management, especially for older adults
who often have multiple chronic illnesses. Even though continuity of care has been
studied for decades, it remains difficult to define and quantify, and there is no consensus
about best practices for assessing whether or not a patient experiences it or a practitioner
provides it. Moreover, no theoretically-driven measures for the assessment of continuity
of care exist, and there have been few rigorous evaluations of its association with
subsequent health and health service utilization outcomes. The principal purpose of this
dissertation research was to better understand continuity of care for older adults by
identifying the components of the patient-provider relationship that are important from
the patient perspective, understanding how commonly used provider-proxy continuity
measures relate to the patient experience, and evaluating whether the patient experience
or provider-proxy assessments are associated with improved health and health services
utilization. I used survey data from the 2,997 Medicare beneficiaries who participated in
the 2004 National Health and Health Services Use Questionnaire (NHHSUQ) linked to
their Medicare claims for 2002-2009. The NHHSUQ contained patient-reported data on
usual primary provider, usual place of care, and the quality and duration of the
relationship with their provider. By linking this information to their Medicare claims, I
was able to evaluate both patient-reported and provider-proxy (claims-based) measures of
continuity of care from two years prior to the survey, and evaluate the impact of
continuity on health and health service utilization for five years after the survey. Study
results indicate that the older adult patient experience of continuity is reflective of both
relationship duration and patient-provider interaction during the care visit, and that most
provider-proxy continuity assessments did not relate to patient perceptions. And, the
patient and provider-proxy experiences of continuity had different relationships with
important health outcomes. These results enhance our understanding of continuity of
care for older adults and inform policymakers and researchers about aspects of continuity
that are important for the health of older adults and the appropriate use of health care
resources.
TABLE OF CONTENTS
LIST OF TABLES ix
LIST OF FIGURES x
LIST OF ABBREVIATIONS xi

CHAPTER
1. INTRODUCTION 1
    The Importance of Continuity of Care 1
    Conceptual Components of Continuity 3
    The Patient Experience of Continuity 4
    Comparing Claims-Based to Patient-Reported Measures 5
    Continuity of Care and Health Care Outcomes 7
    The Structure of this Dissertation 8
2. EVALUATION OF A PATIENT-REPORTED CONTINUITY OF CARE MODEL FOR OLDER ADULTS 10

    Introduction 10
    Methods 11
        Study Design 11
        Measures 12
        Statistical Analyses 15
    Results 17
        Respondent Characteristics 17
        Confirmatory Factor Analyses 17
        Factorial Invariance 20
    Discussion 22
3. DO CLAIMS-BASED CONTINUITY OF CARE MEASURES REFLECT THE PATIENT PERSPECTIVE? 26

    Introduction 26
    New Contribution 26
    Conceptual Framework 27
    Methods 30
        Sample and Data Sources 30
        Claims-Based Continuity of Care Measures 31
        Patient-Reported Continuity of Care Measures 34
        Covariates 37
        Statistical Analyses 38
    Results 39
        Sample Characteristics 39
        Continuity of Care Measure Characteristics 39
        Exploratory Factor Analysis 39
        Patient-Reported and Claims-Based CoC Association 41
    Discussion 41
4. THE ASSOCIATION OF LONGITUDINAL AND INTERPERSONAL CONTINUITY OF CARE WITH EMERGENCY DEPARTMENT USE, HOSPITALIZATION, AND MORTALITY AMONG MEDICARE BENEFICIARIES 46

    Introduction 46
    Methods 48
        Study Design, Data Sources, and Sample 48
        Outcome Measures 49
        Continuity of Care Measures 50
        Covariates 52
        Statistical Analyses 53
    Results 53
        Respondent Characteristics 53
        Continuity of Care Measures 55
        Emergency Department Use 55
        Hospitalization 58
        Mortality 62
    Discussion 64
5. DISCUSSION AND CONCLUSION 68

    Overview of the Studies’ Findings 69
    Study Limitations 72
    Policy Implications 75

REFERENCES 80

APPENDIX
    THE 2004 NATIONAL HEALTH AND HEALTH SERVICES USE QUESTIONNAIRE 89
LIST OF TABLES
Table
2.1 Characteristics of the 2,620 Respondents to the NHHSUQ Survey 18

2.2 Multiple Group Analysis Fit Indices by Sex, Race/Ethnicity, Medicare Type, and Health Status for the Four-Factor Final Model 22

3.1 Summary of Claims-Based Continuity of Care Measures obtained for the NHHSUQ Survey Respondents 32

3.2 Descriptive Statistics for the Continuity of Care Measures obtained from the 1,219 NHHSUQ Survey Respondents in Medicare FFS 36

3.3 Factor Loadings and Communalities based on a Principal Components Analysis for 16 Claims-Based CoC Measures 40

3.4 Linear Regression of each Patient-Reported Continuity Scale by the Claims-Based Factor Scores 42

4.1 Characteristics of the Sample (N=1,219 weighted) 54

4.2 Average Continuity of Care Scores 56

4.3 Eighteen Proportional Hazards Models of Time to First ED Visit 57

4.4 Thirty-six Proportional Hazards Models of Time to First Hospitalization 59

4.5 Eighteen Proportional Hazards Models of Time to Death 61

4.6 Summary of Results for ED Use, Hospitalization, and Mortality for each of the Patient-Reported and Claims-Based Continuity of Care Indicators 63
LIST OF FIGURES
Figure
2.1 A Theoretically-Derived Model of Patient-Reported Continuity of Care 13

2.2 CFA Model of Continuity 21
LIST OF ABBREVIATIONS
ACO Accountable Care Organization
ACSC Ambulatory Care Sensitive Condition
AHR Adjusted Hazard Ratio
CFA Confirmatory Factor Analysis
CFI Comparative Fit Index
CoC Continuity of Care
CPT Current Procedural Terminology
DF Degrees of Freedom
E&M Evaluation and Management
ED Emergency Department
EFA Exploratory Factor Analysis
FFS Medicare Fee-for-Service
GFI Goodness of Fit Index
IOM Institute of Medicine
MMC Medicare Managed Care
NFI Normed Fit Index
NHHSUQ National Health and Health Services Use Questionnaire
PCMH Patient-Centered Medical Home
RMSEA Root Mean Square Error of Approximation
TLI Tucker-Lewis Index
CHAPTER 1
INTRODUCTION
The Importance of Continuity of Care
Continuity of patient care is widely considered to be an essential element of
quality primary care (Starfield, Shi, & Macinko, 2005). Nearly two decades ago, the Institute of Medicine (IOM, 1996) identified continuity of care as
a core attribute of primary care that should result in higher quality patient care and
disease management, especially for older adults with multiple chronic conditions. In
2003, the IOM elevated continuity of care to the status of a primary aim within its
comprehensive call for national action to transform health care quality (IOM, 2003). As
a result, continuity of care has become a core component of new health care delivery
models such as the patient-centered medical home (PCMH), which have become integral
to health care reform initiatives enacted under the Affordable Care Act and are currently
under evaluation by the Center for Medicare and Medicaid Innovation. Yet, the concept
of continuity of care is difficult to define and quantify, and there is no consensus about
best practices for assessing whether or not a patient experiences it or a practitioner
provides it. Moreover, there is no theoretically-driven measure for the standard
assessment of continuity of care and there have been few rigorous evaluations of the
association of continuity of care with subsequent health and health service utilization
outcomes.
Continuity in the provider-patient relationship is especially important for older
adults, who by 2030 will number about seventy million and account for nearly 20% of the
total population (US Census Bureau, 2004). The fastest growing segment of the
population is the “oldest old” (those 85 years of age and older), who are
disproportionately impacted by the burden of chronic disease and face many other age-
related problems (e.g. falls, incontinence, functional and cognitive decline). These
problems require coordinated and comprehensive health care, which should be a hallmark
of a continuous care relationship (Cabana & Jee, 2004; Center for Policy Studies in
Family Medicine and Primary Care, 2007; Wolff, Starfield, & Anderson, 2002). Thus, it
is important to fully operationalize and validate a measure of continuity of care specific
to older adults. Few continuity of care studies, however, have focused on older adults, and those that have provide only limited and inconsistent evidence of the value of a continuous
care relationship for health outcomes and appropriate health services use.
The evaluation of continuity of care has important and broad implications for
public policy. Because it is a key component of Affordable Care Act health care reform
initiatives such as the PCMH, evaluating reliable and valid measures of continuity is a
necessary step toward subsequently identifying PCMHs for performance-based or shared-
savings reimbursement (Carrier, Gourevitch, & Shah, 2009). PCMH accrediting
organizations like the National Committee on Quality Assurance or the Joint Commission
use claims-based continuity of care measures because these are the easiest for
organizations to calculate (Stanek & Takach, 2010; O’Malley, Peikes, & Ginsburg, 2008).
Yet, there is increased advocacy for the use of patient-reported indicators of quality and
outcomes (Patient-Centered Outcomes Research Initiative (PCORI), 2012; Gray, Weng,
& Holmboe, 2012), and there is no clear evidence about which method of assessment
adequately captures continuity of care. Therefore, determining an appropriate
operationalization of continuity of care would allow organizations to better plan for how
to accurately and efficiently account for their provision of quality care.
Research into the effects of high continuity of care on outcomes has often lacked
methodologic rigor, which has led to mixed conclusions as to its value (van Walraven,
Oake, Jennings, & Forster, 2010; Carrier et al., 2009; Saultz, 2003). If the value of
continuity of care is understated and health system changes to promote it are not
advocated, the chance for continuity of care to improve patient health is compromised.
And, if the value is overstated, advocating for improving continuity of care through
procedural and structural changes to primary care practices (Gupta & Bodenheimer,
2013) could prove costly to providers. Thus, a more rigorous and thorough evaluation of
how the components of continuity are associated with better health outcomes and service
utilization in the older adult population is important for planning an effective health
policy agenda.
Conceptual Components of Continuity
Continuity of care is a foundational principle that underlies primary care, chronic
care, geriatric medicine, and the PCMH (IOM, 1996; Wagner et al., 2001; Starfield et al.,
2005). But what constitutes continuity of care? Unfortunately, the answer to that
question is not straightforward. The conceptualization of continuity of care has been
discussed and debated for decades without consensus about how best to define it
(Haggerty et al., 2003; Christakis et al., 2004; Reid, Haggerty, & McKendry, 2002;
Ettner, 1996; Lambrew, DeFriese, Carey, Ricketts, & Biddle, 1996; Mainous, Baker,
Love, Pereira Gray, & Gill, 2001). Consensus has emerged, however, that continuity is a
multidimensional concept from which multiple benefits should accrue.
In his notable literature review on defining and measuring continuity, John Saultz
(2003) concluded by proposing a hierarchical definition of continuity that included three
major dimensions: informational, longitudinal, and interpersonal. Informational
continuity reflects the archiving and broad accessibility of patients’ medical histories.
Longitudinal continuity describes the physical site or “medical home” where the patient
receives the majority of care by either an individual provider or team of providers over
time. Interpersonal continuity reflects the ongoing relationship between the patient and
the provider that should arise from having longitudinal continuity of care, and is
considered the essential component for improving patient outcomes. These conceptual
ideals, however, are where the consensus over continuity of care ends (Haggerty et al.,
2003; van Walraven et al., 2010).
Defining continuity of care for research, policy, performance, or quality
assessment purposes is extremely complex. In a 2002 report to the Canadian Health
Services Research Foundation, Reid and colleagues (2002) detailed the difficulties in
defining continuity of care and highlighted the complexity of measurement by compiling
an inventory of 22 different continuity of care indices (Reid et al., 2002). A year later
Saultz (2003) identified 21 different instruments that were designed to measure
continuity of care, including several that had never been used. And three years after that
Jee and Cabana (2006) identified 32 separate indices for measuring continuity of care and emphasized their applicability to pediatric and chronic-disease populations.
Three years later, Carrier and colleagues (2009) described the difficulties of defining and
quantifying continuity of care within the context of evaluating PCMHs.
While all of these reviews stopped short of recommending best practices for
measuring continuity of care, there are four common themes in their recommendations.
First, there are few direct measures of continuity from the patient’s perspective and
emphasis should be placed on the development and testing of patient-reported measures.
Second, continuity measures that only reflect longitudinal continuity, such as those based
purely on patterns of health service use, should be used with caution until more is known
about how such measures relate to the duration and quality of the provider-patient
relationship (i.e., interpersonal continuity). Third, there is a need to identify those aspects
of continuity associated with improved health outcomes and health services utilization.
And finally, each of these needed avenues of inquiry should be pursued within the
population most likely to benefit from continuity of care, namely those with multiple
chronic illnesses.
The Patient Experience of Continuity
Continuity in the provider-patient relationship is especially important for older
adults with multiple chronic illnesses, and there is a clear need for indices that reflect this
(Jee & Cabana, 2006). It is not clear, however, whether patient-reported measures can or
should be used to identify such a relationship, although there is also debate about whether
claims-based measures are appropriate (Stanek & Takach, 2010). The strengths of using
administrative (claims-based) data include the ability to longitudinally assess continuity
in a variety of ways and, at the same time, access all billable patient care in order to
identify health service utilization and expenditures. The limitations of using claims-based data include potential errors in linking the patient to the “true” usual care provider from either the patient or physician perspective, and the lack of patients’ perspectives on the relationships that develop over time (Stanek & Takach, 2010). Extant claims-based longitudinal measures of continuity may be capable of identifying a usual
provider but cannot provide information about the nature of the patient relationship with
the usual provider that derives from the interactions during the visits (Nutting, Goodwin,
Flocke, Zyzanski, & Stange, 2003). Those interactions are where a mutually respectful
and trusting relationship (the foundation for good interpersonal continuity of care) can develop. Thus, only through patient and/or provider reports about the visits
can interpersonal continuity be directly assessed.
To date, few reliable and valid patient-reported measures of interpersonal
continuity have been reported (Gulliford, Naithani, & Morgan, 2006; Uijen et al., 2011).
In the second chapter of this dissertation, I use a national survey of Medicare
beneficiaries, the 2004 National Health and Health Services Use Questionnaire
(NHHSUQ), to develop a patient-reported continuity of care model. The NHHSUQ
contains extensive patient-reported data on usual primary provider and usual place of care
as well as data on the quality and duration of the patient’s relationship with their usual
primary provider (Morgan et al., 2008). This work thereby uses Medicare beneficiaries’
responses to the NHHSUQ to identify the components of the patient-provider relationship
that are important in defining interpersonal (relationship) continuity from their
perspective as older adults.
Comparing Claims-Based to Patient-Reported Measures
In the continuity of care literature, only one other study has examined how the
most commonly used claims-based continuity measures related to the patient experience
(Rodriguez, Marshall, Rogers, & Safran, 2008). Moreover, the majority of studies
evaluating the effect of continuity of care on health care outcomes have used claims-
based measures of longitudinal continuity as a proxy for provider-perceived continuity.
This approach assumes that repeated contact with a particular provider equates to having
a strong patient-provider relationship. Adequate testing of that assumption has not
occurred (Reid et al., 2002; Saultz, 2003). Furthermore, it is not known whether these
proxy measures of the provider perspective are good indicators of either the patient’s
perspective of the duration of the patient-provider relationship (longitudinal continuity)
or the quality of the patient-provider interaction (interpersonal continuity). This has
important implications for health services research and policy. If claims-based
assessments of continuity adequately capture the experience of patients, then current
standards that rely on administrative claims data may be appropriate. If they do not,
however, then the inclusion of patient-reported assessments of continuity of care will be
needed to provide a more complete picture of care provision.
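To make the claims-based longitudinal indices discussed above concrete, the following is a purely illustrative sketch (not code from this dissertation) of two dispersion-style measures that are commonly cited in this literature: the Bice-Boxerman Continuity of Care (CoC) index and the Usual Provider Continuity (UPC) index, each computed from a patient's sequence of visit provider identifiers. The function names and the toy visit histories are hypothetical.

```python
from collections import Counter

def bice_boxerman_coc(provider_ids):
    """Bice-Boxerman CoC index: ranges from 0 (every visit with a
    different provider) to 1 (all visits with one provider).
    Undefined for fewer than two visits."""
    n = len(provider_ids)
    if n < 2:
        return None  # index is undefined with < 2 visits
    counts = Counter(provider_ids)
    return (sum(c * c for c in counts.values()) - n) / (n * (n - 1))

def usual_provider_continuity(provider_ids):
    """UPC index: share of all visits made to the most-seen provider."""
    n = len(provider_ids)
    if n == 0:
        return None
    return Counter(provider_ids).most_common(1)[0][1] / n

# Toy visit histories (hypothetical provider IDs, not NHHSUQ data)
print(bice_boxerman_coc(["A", "A", "A", "A"]))     # 1.0 (perfect continuity)
print(bice_boxerman_coc(["A", "B", "C", "D"]))     # 0.0 (no continuity)
print(usual_provider_continuity(["A", "A", "B"]))  # ~0.667
```

As the text emphasizes, indices of this kind capture only longitudinal continuity inferred from billing patterns; they say nothing about the interpersonal quality of the visits themselves.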
In the third chapter of this dissertation, I expand upon the work of Saultz (2003)
and Jee & Cabana (2006) to explore the question of how patient-reported interpersonal
continuity relates to provider-proxy longitudinal continuity obtained from administrative
claims. Respondent information from the NHHSUQ was linked to the respondents’ Medicare claims for 2002–2009 for this purpose. The multidimensional patient-reported continuity of care scale derived from the NHHSUQ respondents and evaluated in Chapter 2 is compared in Chapter 3 to sixteen separate claims-based continuity of care indices, computed from the respondents’ Medicare administrative claims and identified by Saultz (2003), Jee & Cabana (2006), and a more recent review of the continuity of care literature (Wolinsky et al., 2007), to provide a comprehensive picture of how claims-based continuity of care measures relate to the patient experience.
Continuity of Care and Health Care Outcomes
Regardless of the approach to measuring and evaluating continuity of care, the
expected benefits include improved patient-provider relationships, enhanced physician
knowledge of the patient, greater degrees of loyalty and trust, increased compliance with
directives and adherence to treatment plans, reduced hospitalization rates, increased
patient and provider satisfaction, reductions in disability levels, costs, and missed
appointments, and improved problem recognition and management (Nutting et al., 2003;
Rogers & Curtis, 1980; De Maeseneer, De Prins, Gosset, & Heyerick, 2003). The
evidence on the relationship between continuity of care and patient satisfaction, use of
preventive services, reduced hospitalization and emergency room visits, and health
outcomes, however, has generally suffered from persistent methodologic problems
resulting in inconsistent findings that are difficult to interpret (Jee & Cabana, 2006;
Carrier et al., 2009; Saultz & Lochner, 2005; Saultz & Albedaiwi, 2004; van Walraven et
al., 2010). In addition to those issues, it is also unclear whether the distinction between
longitudinal continuity and interpersonal continuity matters when it comes to
understanding the specific aspects of continuity associated with improved health
outcomes and service utilization (Burton, Devers, & Berenson, 2011; Stanek & Takach,
2010; Rodriguez et al., 2008; Haggerty et al., 2003; Saultz, 2003).
Thus, the fourth chapter of this dissertation enhances the evidence base
surrounding whether longitudinal and interpersonal continuity with a health care provider
is associated with improved health and health services utilization for Medicare
beneficiaries. I again use the NHHSUQ patient-reported information linked to
Medicare claims to understand the relationship that both patient-reported and provider-
proxy (claims-based) continuity of care indices have with three important health and
utilization outcomes for older adults: emergency department use, hospitalization, and
mortality. Factors derived from Andersen’s behavioral model of health services
utilization (Andersen, 1968; Andersen, 1995), which categorizes the use of health
services as a function of individuals’ predisposing (sociodemographic), enabling
(socioeconomic), need (health status), and prior health service utilization characteristics,
were used to adjust for potential confounding effects on the association of continuity of
care with subsequent health outcomes.
The Structure of this Dissertation
The overarching purpose of this dissertation is to provide additional insight into
defining and quantifying the concept of continuity of care for older adults in three
particular ways. First, I evaluate a theoretically derived, multidimensional, and patient-
reported measure of continuity that incorporates both the longitudinal and interpersonal
experiences of the patient with a particular provider or site of care with whom the
continuous relationship exists. Second, I evaluate how well several commonly cited
claims-based (provider proxy) measures of continuity relate to a multidimensional,
patient-reported continuity measure. And, third, I use both claims-based (provider-
proxy) and patient-reported continuity measures to evaluate the association of
longitudinal and interpersonal continuity of care with emergency department visits,
hospitalizations, and mortality among Medicare beneficiaries.
Thus, three distinct studies follow that are presented as independent manuscripts
in Chapters 2, 3, and 4. The papers in Chapters 2 and 3 have been peer-reviewed and
accepted for publication in respected health services research journals. The paper
described in Chapter 4 will be submitted to a major medical journal for publication
consideration. Unlike traditional dissertations, this three-paper format is concise in form
and presentation due to the requirements imposed by particular journals during the course
of manuscript preparation, submission, review, and acceptance. In the Department of
Health Management and Policy, the requirements for the three-paper model in lieu of the
traditional dissertation are that the three papers must be deemed by the dissertation
committee to be of publishable quality for respected scholarly journals and have been
written while the student is enrolled in the Department’s Ph.D. program. The three
papers must be distinctively different, but still form a coherent body of research.
CHAPTER 2
EVALUATION OF A PATIENT-REPORTED CONTINUITY OF CARE MODEL FOR
OLDER ADULTS
Introduction
Over the past two decades, the Institute of Medicine has consistently highlighted
the importance of continuity of care (CoC) for obtaining a high-quality health care system
in the United States (IOM, 1996; IOM, 2003). Thus, CoC has become a cornerstone of
many health policies, including the primary care-based model of health care delivery known as the patient-centered medical home (PCMH) (Ginsburg, Maxfield, O’Malley,
Peikes, & Pham, 2008). CoC within the context of the PCMH and other health policies,
however, is difficult to define and measure (Carrier et al., 2009). Many CoC assessments
used in evaluations derive from administrative claims data (O’Malley et al., 2008; Stanek
& Takach, 2010). However, the importance of patient reports is gaining recognition
among those evaluating health policies such as the PCMH (Scholle, Torda, Peikes, Han,
& Genevro, 2010; Bitton, Martin, & Landon, 2010; National Partnership for Women and
Families, 2009; National Committee for Quality Assurance, 2012). Yet, there are few
theoretically-driven, patient-reported models of CoC, especially ones that incorporate the constructs of knowledge, trust, and respect within the enduring patient-provider relationship, and fewer still that are specific to older adults (O’Malley et al., 2008;
Stanek & Takach, 2010; Gray et al., 2012).
Theoretical dimensions of CoC have been proposed previously. In 2003, John
Saultz published a conceptual hierarchy for CoC that included informational (medical
record knowledge), longitudinal (ongoing healthcare interactions), and interpersonal
(patient-provider relationship) dimensions of continuity. The underlying implication of
this hierarchy is that at least some informational CoC is required to establish longitudinal
CoC, and that interpersonal CoC can exist only in the presence of longitudinal CoC
(Saultz, 2003). And, within the interpersonal CoC dimension, there are affective (mode
of provider behavior toward the patient) and instrumental (content of provider knowledge
about the patient) subcomponents of the patient-provider relationship (Ben-Sira, 1976;
Ben-Sira, 1980). In practice, medical records, billing claims, or patient reports could be
used to measure informational and longitudinal CoC, but only patient and/or provider
reports could adequately measure the interpersonal CoC dimensions.
The main objective of this research was to evaluate a theoretically-derived,
patient-reported CoC model for older adults, who are most likely to benefit from CoC
given their propensity to have multiple chronic conditions needing management
(Anderson, 2010; Jee & Cabana, 2006). To do this, we used patient reports from 2,620
Medicare beneficiaries who completed all of the necessary components of the 2004
National Health and Health Services Use Questionnaire (NHHSUQ) (Wei, Virnig, John,
& Morgan, 2006; Morgan et al., 2008). The NHHSUQ collected self-reported data on
usual primary provider and place of care, as well as data on the quality and duration of
the patients’ relationship with their provider. These data enabled us to empirically
evaluate a multidimensional model of CoC that incorporates two of the theoretically key
patient-reported aspects of continuity – longitudinal (with site and provider) and
interpersonal (of both the affective and instrumental relationship).
Methods
Study Design
The NHHSUQ survey was designed to identify factors affecting enrollment in
Medicare managed care plans (Morgan et al., 2008). It was mailed to a
disproportionately stratified random sample of 6,060 community-residing Medicare
beneficiaries 65 years old or older in the fall of 2004 to obtain equal numbers of
participants with regard to race/ethnicity (white, black, Hispanic), Medicare plan type
(Medicare fee-for-service (FFS) or Medicare managed care (MMC)), sex, and population
density (metropolitan or nonmetropolitan). The sampling frame included six urban areas
(Los Angeles, Phoenix, Chicago, Houston, New York City, and Tampa) and
nonmetropolitan counties in three broad regions—the southwest (California, Nevada, and
Arizona), mid-south (Texas and Louisiana), and southeast (Florida). These regions
provided wide geographic diversity and comparable numbers of MMC and FFS enrollees
in each of the race/ethnicity and sex groups. After adjusting for the 363 survey recipients
who were ineligible (e.g., non-community residing, moved out of the geographic area, or
died before the survey was mailed), the overall response rate was 53% (2,997/5,697)
(Morgan et al., 2008).
Measures
We hypothesized and evaluated a model of self-reported continuity using the
NHHSUQ data. This theoretically-derived CoC model has four dimensions: longitudinal
continuity of the care site and provider, and instrumental and affective interpersonal
continuity. Figure 2.1 depicts this a priori conceptualization. In the figure, boxes
indicate observed variables, circles indicate latent variables, solid arrows indicate
directional and causal pathways, and dashed arrows indicate covariation. The NHHSUQ
instrument can be found in the Appendix.
Longitudinal continuity – Care Site
The NHHSUQ asked two questions about the usual place of care. The first was,
“Of the places you go for medical care, where do you go most often for care if you are
sick or need advice about your health?” Responses included a doctor’s office or clinic,
walk-in urgent care center or emergency room, Veterans Affairs Medical Center
(VAMC), other, or “no specific place I visit most often for care.” We created a binary
marker for any usual place of care, and an ordinal variable for the type of care site that
ranked the responses from the least to most conducive setting to promote continuity of
care (0=no specific place, 1=other, nonspecific, 2=urgent care/emergency room,
3=VAMC, 4=doctor’s office). The second question was “Approximately how long have
you been receiving your care at this place,” and had categorical responses of “less than 6
months”, “6 months to 1 year”, “1 year to 2 years”, “2 years to 5 years”, and “5 years or
more,” along with an option to choose no specific care place. We created a continuous
variable using category midpoints truncated at 5 years, with those not indicating a
specific care site coded as zeroes.

Figure 2.1: A Theoretically-Derived Model of Patient-Reported Continuity of Care
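A minimal sketch of this midpoint recoding; the exact midpoint values (0.25, 0.75, 1.5, 3.5) are our assumption derived from the stated response categories, not values reported in the text:

```python
# Illustrative recoding of site-duration responses to category midpoints,
# truncated at 5 years; the specific midpoint values are our assumption.
SITE_DURATION_YEARS = {
    "no specific place": 0.0,      # coded as zero per the text
    "less than 6 months": 0.25,
    "6 months to 1 year": 0.75,
    "1 year to 2 years": 1.5,
    "2 years to 5 years": 3.5,
    "5 years or more": 5.0,        # truncated at 5 years
}

def site_duration_years(response):
    """Map one survey response to its continuous duration value."""
    return SITE_DURATION_YEARS[response]
```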
Longitudinal Continuity – Provider Duration
Three questions tapped provider durational continuity. The first asked “When
you go for regular medical care, is there a particular doctor that you usually visit?” with a
yes/no response set. A “doctor” could mean a variety of practitioners who provide
primary care services (e.g., general doctor, nurse practitioner, or physician’s assistant).
The second question asked about the long-term duration of care with this provider. These
questions were combined into a continuous measure indicating the number of years of
care (truncated at 10 or more) with this usual provider, with those not identifying a usual
provider coded as zeroes. The third question asked whether this relationship had changed
during the past 12 months and was used to construct a measure of the one-year duration
with the provider. This short-term duration variable was truncated at 1 year (for those
who indicated at least 1 year duration with a provider and no change in the past year)
with 0 indicating no usual provider, 0.25 for those indicating no usual provider because
of a provider change within the past 6 months, 0.50 for those indicating no change in
usual provider but the relationship duration was < 6 months, and 0.75 for those indicating
no change in usual provider with the length of the relationship from 6 months to 1 year.
The correlation between the long-term and short-term measures was 0.55.
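One plausible reading of this coding scheme, sketched in Python (the function and argument names are ours; cases the text does not specify, such as a provider change for someone who still reports a usual provider, are left unhandled):

```python
def short_term_provider_duration(has_usual_provider, changed_past_6_months,
                                 years_with_provider):
    """One-year provider durational continuity, truncated at 1 year.

    has_usual_provider: respondent reports a particular usual doctor.
    changed_past_6_months: no usual provider because of a provider change
        within the past 6 months.
    years_with_provider: duration of the current relationship, in years
        (ignored when there is no usual provider).
    """
    if not has_usual_provider:
        return 0.25 if changed_past_6_months else 0.0
    if years_with_provider < 0.5:
        return 0.50   # no change, but relationship < 6 months
    if years_with_provider < 1.0:
        return 0.75   # no change, relationship 6 months to 1 year
    return 1.0        # at least 1 year with the same provider (truncated)
```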
Interpersonal continuity - Instrumental
The instrumental component of interpersonal continuity involves physician
competence in performing the technical aspects of care (e.g., performing diagnostic tests,
physical examinations, or prescribing treatments) and, from the patient perspective,
assesses the content of the providers’ behavior. Four items tap instrumental continuity.
The first three asked respondents to rate the “thoroughness of your primary doctor’s
examinations”, “accuracy of your primary doctor’s diagnoses”, and “the explanations you
are given of medical procedures and tests” on a scale from 5 to 1 for excellent, very good,
good, fair, or poor. The fourth question asked respondents “How knowledgeable about
your health and health care is your primary doctor or the providers at your usual place of
care?” The response set was very knowledgeable, somewhat knowledgeable, unsure, and
not knowledgeable.
Interpersonal continuity - Affective
The affective component of interpersonal continuity reflects the “people skills”
portion of the interaction, such as warmth, empathy, and how the physician approaches
the patient (Ben-Sira, 1976; Ben-Sira, 1980). This component assesses providers’
interaction style and reflects communication, trust, and respect in the enduring patient-
provider relationship. Four questions were used to tap the affective component. Two
asked participants to rate “your primary doctor’s interest in you” and “your primary
doctor’s interest in your medical problems” on a scale from 5 to 1 for excellent, very
15
good, good, fair, or poor. The third question asked participants “how satisfied are you
with your health care,” on a scale from 4 to 1 for very satisfied, somewhat satisfied,
somewhat dissatisfied, and very dissatisfied, with “not sure” responses coded as 2.5. The
fourth question asked “How comfortable are you with your primary doctor or with the
providers at your usual place of care” on a scale from 5 to 1 for very comfortable,
somewhat comfortable, not sure, somewhat uncomfortable, very uncomfortable.
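As a concrete illustration, the coding of the satisfaction item, including the 2.5 midpoint assigned to “not sure,” can be written as a lookup table (the dictionary form is ours):

```python
# Satisfaction item: 4-to-1 scale with "not sure" assigned the scale midpoint.
SATISFACTION_CODES = {
    "very satisfied": 4,
    "somewhat satisfied": 3,
    "not sure": 2.5,
    "somewhat dissatisfied": 2,
    "very dissatisfied": 1,
}
```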
Statistical Analyses
As a first step, sensitivity analyses were conducted after alternately assigning the
lowest level of continuity to “not sure” responses (assuming those stating uncertainty
perhaps had limited continuity); because these results were essentially equivalent, the
original coding was retained. We used confirmatory factor analysis (CFA) to formally
evaluate the conceptual model shown in Figure 2.1, and to evaluate alternative models
(based on modification and fit indices) that imposed additional constraints to identify the
best configural model representing the data. For the initial model, items were allowed to
load on a single latent factor only, errors were uncorrelated, and the factors were allowed
to co-vary. We also evaluated two alternative higher-order models to account for the
potential hierarchical nature of CoC and compared them to the four-factor model. The
first included Interpersonal Continuity (from the instrumental and affective factors) and
Longitudinal Continuity (from the care site and provider duration factors) as second-
order factors, and the second included one higher-order construct, Continuity.
We evaluated the CFA models using a range of fit measures. Because the overall
chi-square goodness-of-fit statistic is sensitive to large sample sizes (Bentler &
Bonett, 1980), we expected inflated chi-square statistics. Therefore, we also selected and
reviewed several other fit indices that are less influenced by sample size, including the
Goodness of Fit Index (GFI), the Normed Fit Index (NFI), the Comparative Fit Index
(CFI), and the Tucker-Lewis Index (TLI). Values of these indices range from 0 to 1, with
values of ≥ 0.90 indicating a good fit and values ≥ 0.95 indicating an excellent fit (Hu &
Bentler, 1999). We also evaluated the Root Mean Square Error of Approximation
(RMSEA) statistic which is sensitive to model complexity (Byrne, 2001). RMSEA
values also range from 0 to 1, with values ≤ 0.05 indicative of a good fit and values up to
0.10 suggesting adequate fit. Cronbach’s alpha (Cronbach, 1951) was calculated to
assess the internal consistency of the final four-factor model, with values greater than 0.70
considered acceptable. To evaluate whether the complex, stratified sampling design had
any effect on our final model, we re-estimated the model after applying the sampling
weights by using a weighted correlation matrix as the input data file.
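For reference, a minimal stdlib-only sketch of Cronbach's alpha (a standard formula; this is not the authors' code):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each row = item scores)."""
    k = len(items[0])
    columns = list(zip(*items))                       # per-item score columns
    item_var_sum = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

A value above 0.70 would meet the acceptability criterion stated above.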
The final model was evaluated for factorial equivalence across sex, race/ethnicity,
Medicare plan type, as well as a median split on general health status. Sex and
race/ethnicity were self-reported, and factorial equivalence was expected. Medicare plan
type was defined as FFS or MMC at the time of the survey. We hypothesized that
differences might exist between FFS and MMC respondents because options for health
care might be dictated by health plan restrictions. We created two health status groups
based on responses to the self-rated health question from the SF-8 Health Survey (Ware,
Kosinski, Dewey, Gandek, 2001) —good general health (responses of “excellent”, “very
good”, or “good”) and not good general health (responses of “fair”, “poor”, or “very
poor”)—because CoC perceptions might vary based on health status.
Because our objective was to evaluate the consistency of the final model across
the various groups, our multigroup analyses fitted a model that imposed constraints by
forcing the factor loadings to be equal across groups, and compared this to the baseline
configural model without constraints. Measurement invariance holds if the constraints
do not significantly worsen the model fit. One way to assess this is to examine the Δχ2
between the two models. Failure to
observe statistical differences between the baseline configural model and the constrained
models when examining the Δχ2 is one indicator of factorial invariance across groups.
However, because Δχ2 is a function of sample size and our sample is relatively large,
using the change in the fit indices (noted above) to determine whether factorial
invariance holds is recommended (Keith, 1997; Robles, 1995; Millsap, 2011; Raju,
Laffitte, & Byrne, 2002).
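The Δχ2 comparison described above can be sketched as follows; the critical values are standard χ2 table entries at α = .05, and the function names are ours:

```python
# Standard chi-square critical values at alpha = .05 for the delta-df values used here.
CHI2_CRIT_05 = {1: 3.84, 2: 5.99, 9: 16.92, 18: 28.87}

def delta_chi2(chi2_constrained, df_constrained, chi2_configural, df_configural):
    """Chi-square difference between an equal-loadings model and the configural model."""
    return chi2_constrained - chi2_configural, df_constrained - df_configural

def invariance_supported(chi2_constrained, df_constrained,
                         chi2_configural, df_configural):
    """True when the added constraints do not significantly worsen fit (alpha = .05)."""
    d, ddf = delta_chi2(chi2_constrained, df_constrained,
                        chi2_configural, df_configural)
    return d <= CHI2_CRIT_05[ddf]
```

With the values reported in Table 2.2, the sex comparison (Δχ2 ≈ 12.0, Δdf = 9) supports invariance, while the race/ethnicity comparison (Δχ2 = 40.5, Δdf = 18) does not.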
Results
Respondent Characteristics
Of the 2,997 respondents in the NHHSUQ survey, 2,620 (87.4%) had complete
responses to all items used in the CFA models. Table 2.1 displays the characteristics of
these respondents. Age ranged from 65 to 100 years old (mean age = 74.3; SD = 6.5).
Most respondents had at least a high school education (65%), 49% reported an annual
income < $20,000, and most (61%) reported good to excellent health. By design, the sex,
race/ethnicity, and care plan distributions were nearly equivalent. Fifty-one percent were
men; 38% were white, 30% black, and 30% Hispanic; and about half (53%) were in
Medicare managed care plans. Also by design, most respondents were from urban areas
(62%).
Confirmatory Factor Analyses
CFA was initially conducted on the model shown in Figure 2.1, which assumes
that each of the error terms are independent and the four factors are correlated. With the
exception of ‘Site duration’, the items generally had strong loadings as hypothesized
(ranging from 0.55 to 0.97). The chi-square goodness-of-fit was 2,828.7 with 59 degrees
of freedom (df) and was statistically significant (p<.001). The other fit indices indicated
that the model did not fit the data adequately (GFI = 0.86, CFI = 0.89, NFI = 0.89, TLI =
0.86, and RMSEA = 0.13).
In our conceptual model, the ‘Site duration’ item was included with the Care Site
construct because the focus of the construct was continuity at a care site. However, based
on the modification indices from the initial CFA, it was apparent that the ‘Site duration’
item contributed far more to the Provider Duration construct than the Care Site construct.
Table 2.1: Characteristics of the 2,620 Respondents to the NHHSUQ Survey

Characteristic                              Number    Percentage
Age
  65 – 74                                     1479       56.5%
  ≥ 75                                        1127       43.0%
  Missing                                       14        0.5%
Sex
  Female                                      1276       49%
  Male                                        1344       51%
Race/Ethnicity
  White                                       1002       38%
  Black                                        794       30%
  Hispanic                                     775       30%
  Other                                         49        2%
Medicare plan type
  Traditional fee-for-service                 1219       47%
  Medicare managed care                       1401       53%
Population density
  Metropolitan                                1624       62%
  Nonmetropolitan                              996       38%
Income
  ≥ $35,000                                    542       21%
  $20,000 – $34,999                            594       23%
  < $20,000                                   1271       49%
  Not reported                                 213        8%
Education
  ≥ High School                               1698       65%
  < High School                                820       31%
  Not reported                                 102        4%
SF-8 a General Health Status
  Good, Very Good, Excellent                  1592       61%
  Fair, Poor, Very Poor                       1000       38%
  Missing                                       28        1%

a The SF-8™ Health Survey
In hindsight, this is intuitively plausible because providers are nested within care sites.
Therefore, it is reasonable that duration with a care site might also contribute to a
Provider Duration construct. In effect, this created a Care Site construct specific to
identification of a usual care site/provider of care, and a Provider Duration construct
specific to the notion of duration of continuity, with both constructs theoretically
contributing to longitudinality. Upon additional review of the modification indices, the
error terms between ‘Satisfaction’ and ‘Comfort’, and between ‘Site duration’ and
‘Provider duration (long-term)’ were allowed to correlate (i.e., were freely estimated).
Standardized factor loadings were all above 0.50 except for the ‘Site duration’ item,
which had a factor loading of 0.42. These changes drastically reduced the chi-square
(1091.8, df = 57) although it remained statistically significant. The other fit indices,
however, indicated that this revised model had an adequate to good fit (GFI = 0.94, CFI =
0.96, NFI = 0.96, TLI = 0.95, RMSEA = 0.08).
Cronbach’s alpha coefficients for the longitudinal continuity scales of Care Site
(two items) and Provider Duration (three items) were 0.88 and 0.75, respectively for this
second model. The Instrumental and Affective relationship continuity scales (each with
four items) had Cronbach’s alpha coefficients of 0.88 and 0.87, respectively. Thus, all
four scale constructs had acceptable internal consistency. The correlations between the
four factors ranged from 0.11 (between Care Site and both Affective and Instrumental) to
0.89 (between Affective and Instrumental). Based on the modification indices for Model
2, we allowed the error term for ‘Knowledge of health’ to be correlated with the error
term for ‘Comfort’ in Model 3. This cross-factor correlation (factorial complexity)
improved the fit (chi-square = 752.4, df = 56; p< .001); however the GFI, CFI, NFI, TLI,
and RMSEA values remained virtually unchanged. Because the fit indices did not
markedly improve with the addition of the cross-factor correlation, Model 2 was retained
as the four-factor configural model to compare to the higher-order alternative models.
The first alternative hierarchical model included the two second-order latent
constructs of Interpersonal and Longitudinal continuity, and the model chi-square was
1093.59 (df= 58). The addition of the two second-order constructs did not significantly
improve the model fit (χ2diff = 1.8; df=1; p>.05). The second alternative hierarchical
model included one second-order latent construct (Continuity); its chi-square was
1180.74 (df = 59), and the chi-square difference test (χ2diff = 88.9; df = 2; p < .001)
indicated that this specification fit significantly worse than the four-factor model. Applying the
sampling weights to these models did not appreciably alter the findings. Factor loadings
differed primarily at the second decimal and goodness-of-fit criteria differed at the third
decimal. Given these findings and due to software limitations, the unweighted, first-
order model in Figure 2.2 was retained for evaluating factorial invariance.
Factorial Invariance
Table 2.2 presents the multiple group CFA fit indices for the four-factor
configural model across sex, race/ethnicity, Medicare plan type, and general health status.
For each analysis (with the exception of sex), the model comparison chi-square values
were statistically significant (p < .01), suggesting potential model differences across the
groups. All factor loading differences between Medicare plan types, health status, or
among race/ethnic groups were less than 0.10 with the following exceptions. Factor
loadings for two items, ‘Usual site’ and ‘Site duration’, were higher for whites (+0.16 and
+0.24, respectively) and Hispanics (+0.18 and +0.12, respectively) compared to blacks.
For the ‘Provider duration (long term)’ item, factor loadings were higher for whites than
either Hispanics (+0.17) or blacks (+0.24) but for the ‘Provider duration (short term)’
item, factor loadings were lower for whites compared to either Hispanics (-0.20) or
blacks (-0.21). However, given these minimal differences along with good overall model
fit (RMSEA < 0.08 and CFI, GFI, NFI, and TLI > 0.95), the first-order, four-factor model
(Model 2) is sufficiently consistent across sex, race/ethnicity, Medicare plan, and general
health for use among older adults.
Figure 2.2: CFA Model of Continuity. Boxes indicate observed factors, circles indicate latent factors, single-headed arrows indicate direct causal pathways, and double-headed arrows represent covariation.
[Path diagram omitted; standardized estimates appear in the original figure. Model fit: χ2 = 1091.8, df = 57, p < .001; RMSEA = 0.08; CFI = 0.96; GFI = 0.94; NFI = 0.96; TLI = 0.95.]
Table 2.2: Multiple Group Analysis Fit Indices by Sex, Race/Ethnicity, Medicare Type,
and Health Status for the Four-Factor Final Model

Group / Constraints    RMSEAa  CFIb  GFIc  NFId  TLIe  χ2 (df)          Δχ2 (Δdf)
Sex
  Model 1f              .06     .96   .94   .95   .94   1164.9 (114)†
  Model 2g              .06     .96   .94   .95   .95   1176.9 (123)†    12.1 (9)
Race/Ethnicity
  Model 1f              .05     .96   .94   .95   .94   1251.3 (171)†
  Model 2g              .05     .96   .93   .95   .95   1291.8 (189)†    40.5 (18)*
Medicare Type
  Model 1f              .06     .96   .94   .95   .94   1194.3 (114)†
  Model 2g              .06     .96   .94   .95   .95   1224.0 (123)†    29.7 (9)†
General Health
  Model 1f              .06     .96   .94   .95   .94   1237.3 (114)†
  Model 2g              .06     .96   .94   .95   .94   1259.5 (123)†    22.1 (9)*

* p < .01; † p < .001
a Root Mean Square Error of Approximation
b Comparative Fit Index
c Goodness-of-fit Index
d Normed Fit Index
e Tucker-Lewis Index
f Model 1 is the baseline configural model
g Model 2 is the model with equal factor loadings
Discussion
CoC should have a significant positive impact on the health status of older adults.
Yet, there is debate about whether the best way to define and measure CoC includes only
the longitudinal aspect, or whether it should also include assessment of the provider-
patient interpersonal relationship. If the latter is to be included, then the use of
administrative data alone may not be sufficient, and patient- (or provider-) reports would
need to be incorporated into CoC measurement (Saultz, 2003; Jee & Cabana, 2006; Reid
et al., 2002; Haggerty et al., 2003; Ridd, Lewis, Peters, & Salisbury, 2011). We used
CFA to evaluate a theoretically-derived model of CoC in older adults and found that both
longitudinal and interpersonal dimensions of CoC can be evaluated by Medicare
beneficiaries.
Our multidimensional patient-reported CoC model consists of 13 items tapping
longitudinal continuity with a site and a provider, and interpersonal continuity through
assessment of the patient’s experience with the provider’s instrumental knowledge and
affective demeanor. All subscales had good internal reliability with Cronbach’s alpha
ranging from 0.75 for the Provider Duration subscale to 0.88 for the Instrumental and
Care Site subscales. Longitudinal assessments are most commonly used to measure
continuity and yet, when we evaluated an alternative model with the two second-order
constructs of Longitudinal and Interpersonal continuity included (which was statistically
equivalent to the final model in Figure 2.2), the factor loadings for the two longitudinal
constructs of Care Site (0.31) and Provider Duration (0.81) were not as large as those for
the interpersonal constructs of Instrumental (0.91) and Affective (0.99) continuity. Given
this finding, we might suggest added emphasis on the interpersonal domains.
A strength of this work is that the sample included almost equal numbers of men
and women, white, black, and Hispanic older adults, and respondents in FFS and MMC
plans. Thus, we were able to determine whether older adults’ perceptions of CoC were
factorially invariant across these groups and perceived health. Specifically, our results
suggest that the ‘Usual site’ item contributes more to the Care Site construct and the ‘Site
duration’ item contributes more to the Provider Duration construct for whites and
Hispanics. In contrast, the ‘Provider duration (short term)’ item contributes more to the
Provider Duration construct for minorities, whereas the ‘Provider duration (long term)’ item
contributes more to Provider Duration for whites. Overall, our results supported factorial
invariance for males and females and indicated somewhat weaker factorial invariance
across race, health plan, and perceived health. The weaker factorial invariance for these
groups is not surprising. In the early 2000s, several studies showed discrepancies in
continuity of care based on race/ethnicity, insurance type, and health status (Doescher,
Saver, Fiscella, & Franks, 2001; Phillips, Mayer, & Aday, 2000; Flocke, Stange, &
Zyzanski, 1997; Cabana & Jee, 2004; Nutting et al., 2003). It is well documented that
minorities have access barriers to health care (Doescher et al., 2001; Phillips et al., 2000),
individuals in managed care plans may have more discontinuities in care (Phillips et al.,
2000; Flocke et al., 1997), and individuals with health problems have varying degrees of
continuity (Cabana & Jee, 2004; Nutting et al., 2003). It is therefore important to account
for these potential differences when assessing the implications of CoC.
There are some limitations to this work. One is that we were not able to design
the content or format of the survey questions. Although the longitudinal and
interpersonal continuity questions were designed to map well to the Consumer
Assessment of Healthcare Providers and Systems (Agency for Health Care Research and
Quality, 2011) and the Medicare Current Beneficiary Survey (US Department of Health
and Human Services Health Care Financing Administration, 2012), we could not fine
tune the questions nor control the number of items used to assess each construct. That
being said, our final model did have one subscale with only two items (one of which had
a dichotomous response), both derived from a single survey question. This may limit the
validity of Cronbach’s alpha as a test for the internal consistency of this subscale and
increase the likelihood of measurement error. Another limitation is that we did not have
access to information about non-respondents for assessing the potential impact of
differential response rates. Finally, the perceptions of CoC held by these older Medicare
beneficiaries may not generalize to younger people.
The limitations imposed by our data and the fact that we were not able to include
all known continuity of care domains (e.g., informational continuity from the Saultz
hierarchy) limit our ability to recommend this 13-item scale as a definitive measure of
care continuity. However, our results strongly suggest that both the longitudinal and
interpersonal domains, as experienced by the patient, should routinely be included in the
assessment of continuity. These findings support the work of Gulliford and colleagues
(Gulliford et al., 2006) who developed and tested an experience-based measure of
continuity of care for diabetic patients. By evaluating the patient experience of continuity
across a more heterogeneous group of older adult patients, we expand upon the relevance
of this earlier work in highlighting the importance of the patient experience when
measuring CoC.
Our results are important for two reasons. First, the most commonly used
measures of CoC are those that only identify longitudinal care. Our findings show that
longitudinal continuity is only part of the concept. Second, the longitudinal measures are
most commonly used because they are easily calculated using administrative claims. Yet,
there is no way to measure interpersonal continuity using claims data. Interpersonal
continuity can only be measured through assessment of the patient experience. This
finding supports the interests advanced by organizations such as the Patient-Centered
Outcomes Research Institute and the National Committee for Quality Assurance who
advocate for the importance of using the patient perspective in the evaluation of health
care quality. In Chapter 3, we will link the NHHSUQ data to each beneficiary’s
Medicare claims to expand upon this work by evaluating how well this patient-reported
CoC measure relates to extant claim-based CoC measures and subsequently, in Chapter 4,
validating these CoC measures by relating them to health outcomes and service use in
older adults.
CHAPTER 3
DO CLAIMS-BASED CONTINUITY OF CARE MEASURES REFLECT THE
PATIENT PERSPECTIVE?
Introduction
Continuity in the provision of health services has been identified as a key element
of good primary care (IOM, 1996; Starfield et al., 2005), is highly valued by patients
(Rodriguez, Rogers, Marshall, & Safran, 2007a; Wasson et al., 1984), and has the
potential for improving patient outcomes (Saultz & Lochner, 2005; Wasson et al., 1984).
In 2003, the IOM recommended continuity of care (CoC) as a primary aim for improving
health care quality (IOM, 2003), especially for older adults with multiple chronic
conditions that require comprehensive medical management. Recent health reform
initiatives and policies, including the IOM report about achieving the best care at the
lowest cost (IOM, 2012) further highlight the importance of CoC. Moreover, CoC is a
cornerstone of the Patient Protection and Affordable Care Act’s patient-centered medical
home (PCMH) model of health care delivery (Center for Policy Studies in Family
Medicine and Primary Care, 2007).
Despite its centrality for health policy and decades of study, CoC remains difficult
to define and quantify (Starfield, 1980; Reid et al., 2002; Haggerty, et al., 2003; Saultz,
2005). Thus, rigorous research that carefully defines and operationalizes the critical
components of CoC is sorely needed for evaluating health system reforms and their
effects on patient outcomes. The purpose of this study is to address that need by
comprehensively evaluating multiple claims-based measures of CoC and examining their
associations with a recently developed CoC measure based on patients’ experiences.
New Contribution
We are unaware of any study to date that has looked at the interrelations between
several CoC measures to answer the key question: how do traditional, claims-based
measures of longitudinal continuity relate to the duration and quality of the patient-
provider relationship from the perspective of older adult patients? Because CoC is a
fundamental component of the PCMH and other initiatives promoted by the Affordable
Care Act, the ability to rigorously evaluate it using both claims-based and patient-
reported experiences is a crucial step for developing a reliable and valid measure. Only
with such a measure can the PCMH and other health reforms and initiatives be
meaningfully evaluated. If claims-based assessments of CoC adequately capture the
experience of patients, then current standards that rely on claims data may be appropriate.
If they do not, then the inclusion of patient-reported assessments of CoC will be needed
to provide a more complete picture of care provision. This is especially important for
understanding CoC from the perspective of older adults with multiple chronic conditions
(e.g., hypertension, diabetes, heart failure) and aging-related issues (e.g., falls,
incontinence, functional and cognitive decline), which require the kind of coordinated and
comprehensive health care that is a hallmark of CoC (Cabana & Jee, 2004; Wolff et
al., 2002; Weiss & Blustein, 1996).
Conceptual Framework
Over the last several decades, hundreds of articles have been published using
more than 40 different empirical measures of the CoC concept. Relatively few studies,
however, have provided assessments of the reliability or validity of these indices. In
2003, John Saultz provided a comprehensive review of CoC measures, resulting in a
long-overdue conceptual definition of CoC. Saultz (2003) defined CoC using a
hierarchical framework based on providers having enough information about the patient
(informational continuity), that facilitates patients having a familiar care setting over time
(longitudinal continuity), which culminates in a relationship between the patient and
provider characterized by mutual trust and accountability (interpersonal continuity).
Several studies have suggested that interpersonal CoC leads to less intensive care (less
hospitalization and emergency department use and better preventive care) and lower costs
(Wolff et al., 2002; Weiss & Blustein, 1996). The underlying assumption is that CoC
allows for interaction and better communication between patients and providers which
fosters a relationship of mutual trust, comfort, and shared information resulting in more
accurate provider diagnoses, shared decisions regarding treatments, and increased patient
compliance and adherence to treatment plans. Thus, interpersonal CoC is the essence of
quality primary care (IOM, 1996; Freeman, Olesen, Hjortdahl, 2003; Starfield et al.,
2005). It follows, then, that understanding interpersonal continuity is important and
requires input from both the patient and the provider.
Most studies, however, have used claims-based measures of longitudinal
continuity as a proxy for provider-perceived CoC. This approach assumes that repeated
contact with a particular provider is tantamount to having a strong patient-provider
relationship. Adequate testing of that assumption has not yet occurred (Reid et al., 2002;
Saultz, 2003). Moreover, it is not known whether these proxy measures of the provider
perspective are good indicators of either the patient perspective of the duration of the
patient-provider relationship (longitudinal continuity) or the quality of the patient-
provider interaction (interpersonal continuity).
In 2006, Jee & Cabana expanded Saultz’ work by reviewing claims-based CoC
indices used in outpatient, primary care settings under the premise that interpersonal
continuity is most likely to develop in those settings. They qualitatively assessed the
strengths and weaknesses of using CoC indices to measure provider-patient relationships,
and in the process, developed a categorization of the various types of claims-based CoC
indices. Thirty-two different indices used to measure CoC were identified and classified
into five categories: density of visits, dispersion of providers, sequence of provider visits,
duration of relationship, and subjective patient estimates. Density indices were the most
easily calculated, widely used, and commonly cited (Jee & Cabana, 2006). They require
the identification of a particular provider (e.g., most recently seen, most frequently seen)
to serve as the index for quantifying patient visit patterns. Dispersion indices expand
density indices by accounting for the variety of providers seen by patients, while
sequential indices further expand them by accounting for the order in which different
providers are seen. In contrast, duration indices measure the total length of the
relationship with one provider and are infrequently used in CoC studies.
Subjective CoC measures typically require the patient to identify a particular site
or provider of care and, by definition, cannot be calculated using claims. In part because
of the additional data collection requirements (surveying patients), few CoC measures
reflecting the patient experience were developed until recently (Bentler, Morgan, Virnig,
& Wolinsky, 2013a; Uijen et al., 2011; Gulliford et al., 2006). One of these, the patient-
reported measure developed by Bentler (2013a), has recently been shown to be reliable
and factorially valid. It taps the patient’s experience of longitudinal continuity with their
provider as well as the quality of their interaction during visits.
We used Bentler’s (2013a) measure of patient experiences in this research
because it was developed using data from the 2,620 Medicare beneficiaries who
completed the 2004 National Health and Health Services Use Questionnaire (NHHSUQ)
(Wei et al., 2006; Morgan et al., 2008). The NHHSUQ survey data were then linked to
Medicare claims so that Bentler's measure could be compared with a variety of
claims-based CoC indices, informing the debate about how closely those claims-based
indices reflect patient experiences. We used Saultz's (2003) hierarchy and Jee & Cabana's (2006) CoC
categorizations to frame this inquiry. Carrier and Outpatient Medicare claims allowed us
to create 15 of the CoC indices mentioned in the Saultz and Jee and Cabana reviews, as
well as one additional CoC index published afterwards.
In this study, we first evaluated the claims-based indices to confirm the
categorizations posited by Jee & Cabana (2006). Second, we examined whether the
categories of claims-based measures correlated with our patient-reported measure.
Finally, we used Andersen's behavioral model of health services use (Andersen, 1968;
Andersen, 1995), which categorizes the use of health services as a function of the
individual's predisposing, enabling, and need characteristics, to adjust for case-mix.
Two hypotheses guided our study. We expected that the patient experience of
longitudinal CoC would relate most strongly to the claims-based density measures
because both types of indices are visit-based measures of the duration of care with a
specific provider; one derived from patient recall and the other from provider billing.
And, we expected little or no relationship between the patient-reported experience of
interpersonal CoC and the claims-based, provider proxy measures because it is unlikely
that provider billing adequately captures the quality of the patient-provider interaction
experienced by the patient at each visit.
Methods
Sample and Data Sources
The 2004 NHHSUQ was designed to identify factors affecting Medicare managed
care plan enrollment. It was mailed to a disproportionately stratified random sample of
6,060 community-residing Medicare beneficiaries 65 years old or older in the fall of 2004
to obtain equal numbers of participants with regard to race/ethnicity (white, black,
Hispanic), Medicare plan type (Medicare fee-for-service (FFS) or Medicare managed
care (MMC)), sex, and population density (Morgan et al., 2008). The response rate after
adjusting for ineligible survey recipients (e.g., non-community residing, moved out of
geographic area, or deceased) was 53% (2,997/5,697). Construction of the claims-based
CoC measures focused on the 2,620 Medicare beneficiaries who had complete data for
the 13 continuity-related NHHSUQ items and who were likely to have complete
Medicare claims for the period 2002-2004. There was an average of 3 missing items per
person with incomplete data. Among the 2,620 who completed all items, there were
slightly more males (51% v. 45%), whites (38% v. 30%), and people who had at least a
high school education (65% v. 44%) and slightly fewer blacks (30% v. 38%) compared to
the 377 who did not complete all items. By design, half of the NHHSUQ respondents
were enrolled in MMC plans; these respondents were excluded because MMC plans have
different billing and reporting requirements for Part B (non-institutional) claims
(Asper, 2007). Thus, the analytic
sample was reduced to the 1,219 people with complete survey responses who had both
Part A and Part B coverage and were not enrolled in managed care. Medicare claims
were restricted to the 13,896 unique Carrier and Outpatient claims for an Evaluation and
Management (E&M) visit in the year prior to the survey. In sensitivity analyses all
Outpatient and Carrier claims (26,046 claims) were used, as were two years of claims
(48,334 claims, of which 25,899 were E&M claims). These sensitivity analyses yielded
comparable results.
Claims-Based Continuity of Care Measures
Using the Medicare claims, we created nine density indices: six of the eight
identified by Jee & Cabana (Current Provider of Care, Current Provider of Care -
discounted, Usual Provider of Care, Clinician Index, Site Index, and Herfindahl Index)
plus three measures (Continuity Index, Wolinsky Continuity, and Known Provider) from
additional literature reviews (Smedby, Eklund, Anders Eriksson, & Smedby, 1986;
Breslau & Reeb, 1975; Mainous & Gill, 1998; Eriksson & Mattsson, 1983; Wolinsky et
al., 2007). Six dispersion indices were created: five were identified by Jee & Cabana
(Bice-Boxerman CoC, Ejlertsson’s K Index, the Modified Continuity Index, Personal
Provider Continuity, and the Modified, Modified Continuity Index) plus one (Inverse
Number of Providers) from additional literature reviews (Ejlertsson & Berg, 1984; Bice
& Boxerman, 1977; Sturmberg & Schattner, 2001; Sturmberg, 2002; Magill & Senf,
1987; Parchman, Pugh, Hitchcock Noel, & Larme, 2002). We also created one sequential
continuity index (Steinwachs, 1979) identified by Jee & Cabana (2006). We could not
create any duration indices using these claims data. All indices were calculated so that
higher values represented high levels of continuity. Table 3.1 provides the categories,
definitions, examples, and references for the claims-based CoC measures created.
Table 3.1: Summary of Claims-Based Continuity of Care Measures Obtained for the NHHSUQ Survey Respondents. Each entry gives the measure, its category(a), definition, formula(b), and an example value(c).

Current Provider of Care: CPC (Smedby, 1986). Category: Density. Definition: the total number of visits to the most recently seen provider divided by the total number of visits. Formula: CPC = v_c / w. Example: CPC = 0.25.

CPC-discounted: CPCd (Smedby, 1986). Category: Density. Definition: the CPC after weighting visits by their proximity to the index visit; the value of visits to the provider decreases as time goes on. Formula: CPCd = v_c(τ) / w(τ), where τ is the half-life (e.g., if τ = 0.25 year, visit values are 0.5, 0.25, 0.125, ...). Example: CPCd = 0.38.

Usual Provider of Care: UPC (Breslau & Reeb, 1975). Category: Density. Definition: the fraction of visits to the usual (most frequently visited) provider. Formula: UPC = v_u / w. Example: UPC = 0.50.

High Clinician Continuity: HCC (Mainous & Gill, 1998). Category: Density. Definition: whether the fraction of visits to the usual provider was at least 50%. Formula: HCC = 1 if UPC ≥ 0.5, 0 otherwise. Example: HCC = 1.

High Site Continuity: HSC (Mainous & Gill, 1998). Category: Density. Definition: whether the fraction of visits to the most frequently used type of care site was at least 50%. Formula: HSC = 1 if v_s / w ≥ 0.5, 0 otherwise. Example: HSC = 0 (if Providers A and C are from the same practice, v_s / w = 0.38).

Continuity Index: CI (Mainous & Gill, 1998). Category: Density. Definition: a score of continuity based on HCC and HSC. Formula: CI = 0 if HCC = 0 and HSC = 0; 0.5 if HCC = 0 and HSC = 1; 1 if HCC = 1. Example: CI = 1.

Herfindahl Index: HI (Eriksson, 1983; Smedby, 1986). Category: Density. Definition: the sum of the squared fractions of visits by all providers. Formula: HI = Σ_{i=1..n} (v_i / w)^2. Example: HI = 0.34.

Wolinsky Continuity: WolC (Wolinsky, 2007). Category: Density. Definition: whether there was at least one visit to the same provider every 8 months over the previous 2-year period. Formula: WolC = 1 if at least one visit to the same provider in every 8-month window, 0 otherwise. Example: requires a two-year observation period to calculate.

Known Provider: KP (Smedby, 1986). Category: Density/Sequential. Definition: whether the current (most recently visited) provider was seen at least one other time. Formula: KP = 1 if v_c > 1, 0 otherwise. Example: KP = 1.

Sequential Continuity: SECON (Steinwachs, 1979). Category: Sequential. Definition: the fraction of sequential visit pairs at which the same provider is seen. Formula: SECON = (Σ_{i=1..w-1} q_i) / (w - 1), where w - 1 is the number of sequential pairs and q_i = 1 if visits i and i+1 are to the same provider, 0 otherwise. Example: SECON = 1/7 = 0.14.

Inverse Number of Providers: INOP (Eriksson, 1983). Category: Dispersion. Definition: the inverse of the total number of different providers seen. Formula: INOP = 1/n. Example: INOP = 0.125.

Bice-Boxerman CoC: BBC (Bice & Boxerman, 1977). Category: Dispersion. Definition: applies the Herfindahl Index to one patient's visit pattern; the score increases with the number of visits made. Formula: BBC = (HI - 1/w) / (1 - 1/w). Example: BBC = 0.25.

Ejlertsson's Index K: EK (Ejlertsson & Berg, 1984). Category: Dispersion. Definition: similar to Bice-Boxerman, but modifies the number of providers to the number of providers divided by the number of visits. Formula: EK = (w - n) / (w - 1). Example: EK = 0.57.

Modified Continuity Index: MCI (Sturmberg, 2001; Sturmberg, 2002). Category: Dispersion. Definition: 1 minus the number of providers divided by the number of visits; adjusted for utilization by ascribing a higher value to those who have more frequent visits to the same provider. Formula: MCI = 1 - n / (w + 0.1). Example: MCI = 0.51.

Personal Provider Continuity: PPC (Sturmberg, 2001; Sturmberg, 2002). Category: Dispersion. Definition: a dichotomous version of the MCI. Formula: PPC = 1 if MCI ≥ 0.66, 0 otherwise. Example: PPC = 0.

Modified, Modified Continuity Index: MMCI (Magill, 1987; Parchman, 2002). Category: Dispersion. Definition: the MCI divided by 1 minus the inverse number of visits. Formula: MMCI = (1 - n / (w + 0.1)) / (1 - 1 / (w + 0.1)). Example: MMCI = 0.58.

(a) From the Jee & Cabana (2006) categorization of continuity of care measures.
(b) The observation period is one year, unless indicated otherwise. Formulas use the following variables: i = the ith provider; c = the most recently seen provider; u = the most frequently seen provider; s = the most frequently seen site of care; v = the number of visits with a particular provider; n = the total number of different providers visited; w = the total number of visits.
(c) Consider the following visit pattern in a one-year period, where each letter represents a different provider: ABABBCBD. In this example, w = 8 visits, n = 4 providers seen, i = 1 to 8, v1 = 2, v2 = 4, v3 = 1, v4 = 1.
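Most of the visit-based indices in Table 3.1 can be computed directly from a patient's ordered provider sequence. The following Python sketch (illustrative only, not the original analysis code) reproduces several of the worked-example values from the table footnote for the pattern ABABBCBD:

```python
from collections import Counter

def coc_indices(visits):
    """Compute several claims-based CoC indices from one patient's
    ordered provider visit sequence, e.g. "ABABBCBD"."""
    w = len(visits)                    # w = total number of visits
    counts = Counter(visits)
    n = len(counts)                    # n = number of different providers
    upc = max(counts.values()) / w     # Usual Provider of Care
    hi = sum((v / w) ** 2 for v in counts.values())  # Herfindahl Index
    bbc = (hi - 1 / w) / (1 - 1 / w)   # Bice-Boxerman CoC
    ek = (w - n) / (w - 1)             # Ejlertsson's Index K
    mci = 1 - n / (w + 0.1)            # Modified Continuity Index
    mmci = mci / (1 - 1 / (w + 0.1))   # Modified, Modified Continuity Index
    # SECON: fraction of consecutive visit pairs with the same provider
    secon = sum(a == b for a, b in zip(visits, visits[1:])) / (w - 1)
    return {"UPC": upc, "HI": hi, "BBC": bbc, "EK": ek,
            "MCI": mci, "MMCI": mmci, "SECON": secon}

idx = coc_indices("ABABBCBD")
# UPC = 0.50, HI ≈ 0.34, BBC = 0.25, EK ≈ 0.57,
# MCI ≈ 0.51, MMCI ≈ 0.58, SECON = 1/7 ≈ 0.14
```

These match the example column of Table 3.1 for the indices that depend only on visit counts; the indices referencing a specific index visit (CPC, CPCd) and the two-year Wolinsky measure additionally require visit dates.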
Patient-Reported Continuity of Care Measures
Our patient-reported CoC measure is a 13-item scale derived from patient
responses to a subset of NHHSUQ questions (Bentler et al., 2013a). It has four
subscales: two tap longitudinal continuity (Care Site and Provider Duration), and two tap
interpersonal continuity (Instrumental and Affective). The Care Site subscale had two
items asking the respondent if they had a site they visited most often for medical care
and, if so, to identify the site (doctor’s office, Veterans’ Affairs Medical Center,
emergency room, or other). The Provider Duration subscale had three items: long-term
duration of care at the named site, long-term duration of care with a regular doctor, and
short-term duration of care (within the past year) with the regular doctor. Cronbach’s
alpha coefficients for the longitudinal continuity subscales of Care Site and Provider
Duration were 0.88 and 0.75, respectively.
The Instrumental subscale had four items which tapped the technical care aspects
of the patient-provider relationship. Three of the items (thoroughness of examinations,
accuracy of diagnoses, and explanations of medical procedures and tests) were rated on a
scale from excellent (5) to poor (1) while the fourth item (knowledge about your health
and health care) was rated on a scale from very knowledgeable (4) to not knowledgeable
(1). The Affective subscale also had four items tapping the “people skills” aspect of the
patient-provider interaction. Two items asked participants to rate “your primary doctor’s
interest in you” and “your primary doctor’s interest in your medical problems” on a scale
from excellent (5) to poor (1), one asked respondents to rate their satisfaction with their
health care on a scale from very satisfied (4) to very dissatisfied (1), and the fourth asked
respondents to rate “How comfortable are you with your primary doctor or with the
providers at your usual place of care” on a scale from very comfortable (5) to very
uncomfortable (1). The Instrumental and Affective relationship continuity scales had
Cronbach’s alpha coefficients of 0.88 and 0.87, respectively.
Table 3.2 provides the item counts, mean scores, and intercorrelation between the
four subscales and the overall patient-reported CoC measure. While all of the correlation
coefficients among the patient-reported scales were statistically significant, the highest
were among the overall CoC scale and both the Instrumental and Affective subscales.
The lowest correlations were between the Care Site subscale and each of the
Instrumental and Affective subscales, while the other scale relationships were of
moderate strength. Although the Instrumental and Affective subscales are highly
correlated, both theory (Ben-Sira, 1980) and previous empirical work (Bentler et al.,
2013a) demonstrate that they capture different aspects of interpersonal continuity. Thus, these
results indicate that while all subscales tap the overall CoC construct, they each capture a
different domain of the CoC concept.
Table 3.2: Descriptive Statistics for the Continuity of Care Measures obtained from the 1,219 NHHSUQ Survey Respondents in Medicare FFS. Each row gives the measure (number of items for the patient-reported scales), its mean (sd), and its Pearson correlations with the preceding measures.

Measure                              Mean (sd)    Correlations with measures 1-10
Patient-Reported (items)
 1. Care Site (2)                    4.8 (0.8)    --
 2. Duration (3)                     9.6 (4.9)    .22
 3. Instrumental (4)                 14.8 (3.0)   .11 .26
 4. Affective (4)                    15.8 (2.8)   .09 .27 .84
 5. Continuity (13)                  45.0 (8.5)   .38 .59 .87 .87
Claims-Based
 6. Current Provider of Care (CPC)   0.41 (0.32)  .06 .05 .02 .04 .05
 7. CPC Discounted                   0.46 (0.32)  .08 .04 .02 .04 .06 .97
 8. Usual Provider of Care           0.50 (0.29)  .10 .08 .05 .06 .09 .87 .86
 9. High Clinician Continuity        0.52 (0.50)  .06 .04 .03 .05 .06 .69 .68 .80
10. High Site Continuity             0.89 (0.31)  .19 .13 .06 .06 .13 .43 .48 .58 .36
11. Continuity Index                 1.41 (0.67)  .13 .09 .05 .06 .10 .70 .72 .86 .90 .72
12. Herfindahl Index                 0.41 (0.29)  .07 .06 .02 .03 .06 .90 .88 .97 .76 .48
13. Wolinsky Continuity              0.45 (0.50)  .09 .25 .11 .09 .18 -.04 -.05 .05 .01 .29
14. Known Provider Continuity        0.62 (0.49)  .11 .09 .06 .06 .11 .33 .32 .21 .38 .41
15. Sequential Continuity            0.28 (0.29)  .07 .00 .03 .03 .04 .45 .45 .58 .48 .33
16. Inverse Number of Providers      0.33 (0.29)  .05 .04 -.02 -.01 .01 .86 .85 .87 .64 .38
17. Bice-Boxerman CoC                0.27 (0.26)  .08 .04 .07 .06 .08 .55 .52 .66 .56 .34
18. Ejlertsson's Index K             0.52 (0.32)  .13 .09 .10 .09 .14 .25 .23 .39 .29 .52
19. Modified Continuity Index (MCI)  0.46 (0.27)  .14 .10 .12 .12 .16 .18 .16 .34 .23 .55
20. Personal Provider Continuity     0.29 (0.45)  .05 .06 .11 .13 .13 .19 .15 .24 .18 .20
21. Modified MCI                     0.59 (0.30)  .12 .12 .07 .08 .13 .66 .62 .79 .52 .63

Measure                                           Correlations with measures 11-20
12. Herfindahl Index                 .78
13. Wolinsky Continuity              .14 -.05
14. Known Provider Continuity        .30 .12 .38
15. Sequential Continuity            .50 .54 .13 .43
16. Inverse Number of Providers      .65 .95 -.15 .02 .42
17. Bice-Boxerman CoC                .57 .61 .21 .49 .90 .47
18. Ejlertsson's Index K             .45 .28 .47 .70 .74 .11 .80
19. Modified Continuity Index (MCI)  .42 .20 .52 .68 .61 .01 .66 .96
20. Personal Provider Continuity     .23 .19 .33 .39 .40 .05 .47 .61 .67
21. Modified MCI                     .67 .73 .29 .45 .55 .62 .61 .69 .70 .49

Note: Correlations in bold-face are statistically significant at the p < .05 level.
Covariates
To obtain the independent effect of the claims-based CoC measures on the
patient-reported CoC measure, we adjusted for potential confounders available in the
NHHSUQ. Measures of predisposing characteristics included patient-reported age
(categorized as ≤ 70, 71 to 76, and > 76), sex, race/ethnicity (white, black, and Hispanic),
marital status (married or not), and education (high school education or greater vs. less).
We used an indicator of having supplemental insurance (in addition to Medicare) and
population density (living in a metropolitan area) as measures of enabling characteristics.
Finally, indicators of the need for health services included self-reported smoking status,
the SF-8 measure of health-related quality of life (Ware et al., 2001), and the number of
health conditions (e.g. hypertension, diabetes, and stroke).
Statistical Analyses
We used means, standard deviations, and Pearson correlation coefficients to
describe the CoC measures. Exploratory factor analysis (EFA) was used to characterize
the claims-based CoC indices. Factors were initially extracted using principal
components methods with an oblique factor rotation (oblimin). In sensitivity analyses,
we used a promax oblique rotation and the results were comparable. To determine the
appropriate number of factors to retain, we used Kaiser-Guttman eigenvalue criteria for
common factors (Guttman, 1954; Kaiser, 1960) and we required that at least 75% of the
total variation in the items be explained by the extracted factors. Final criteria for our
factors involved having at least two indices with primary loadings on each factor, with
indices loading on each common factor sharing conceptual meaning (Child, 1990;
Tabachnick & Fidell, 2001).
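The retention logic can be sketched as follows (using the three largest eigenvalues later reported in Table 3.3; the remaining eigenvalues, all below 1, are omitted here):

```python
def factors_to_retain(eigenvalues, n_items, min_cum_var=0.75):
    """Kaiser-Guttman rule: retain factors with eigenvalue > 1, then
    verify the retained factors explain at least min_cum_var of the
    total variance (each standardized item contributes 1 unit)."""
    retained = [e for e in eigenvalues if e > 1.0]
    cum_var = sum(retained) / n_items
    return len(retained), cum_var, cum_var >= min_cum_var

# First three eigenvalues from the 16-index claims-based analysis
k, cum_var, enough = factors_to_retain([8.58, 3.31, 1.09], n_items=16)
# k = 3 factors retained, together explaining roughly 81% of the variance
```

The conceptual-meaning and two-loadings-per-factor criteria described above are judgment calls made by the analyst and are not captured by this numerical rule.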
We used linear regression techniques to evaluate the association between the
patient-reported (full scale and each subscale) and claims-based CoC measures. Because
the items in the patient-reported CoC scales do not all have the same response set, we first
normalized each item by transforming it into a z-score. Then, we summed the z-scores
for each subscale and for the overall scale. This gave each item equal weight in the
analyses, but advantaged subscales with more items. In sensitivity analyses, we
normalized each subscale by taking z-scores of its sum (i.e., giving each subscale equal
weight) and found comparable results. Factor scores derived from the EFA of the claims-
based CoC indices were used as the focal independent variables because each factor score
represented a conceptually meaningful claims-based construct, and to avoid the
multicollinearity that would have occurred if the individual claims-based CoC indices
were used. Each dependent variable (patient-reported CoC, Care Site, Provider Duration,
Instrumental, and Affective) was modeled with only the EFA factor scores first, and with
the addition of the covariates second.
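The item-level normalization described above can be sketched as follows (the respondent values are hypothetical, chosen only to show items with different response sets being placed on a common scale):

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standardize one item across respondents (mean 0, SD 1)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Hypothetical responses from four respondents to two items with
# different response sets (a 1-5 item and a 1-4 item).
item_a = [5, 4, 3, 5]   # excellent (5) .. poor (1)
item_b = [4, 4, 2, 3]   # very satisfied (4) .. very dissatisfied (1)

# Normalize each item, then sum the z-scores per respondent
# to form a subscale score.
za, zb = z_scores(item_a), z_scores(item_b)
subscale = [a + b for a, b in zip(za, zb)]
```

Because each item is standardized before summing, every item carries equal weight, but a subscale's total variance grows with its item count, which is why the sensitivity analysis re-standardizing each subscale sum was run.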
Results
Sample Characteristics
The average age was 74 (range = 65-99, SD = 7). About half were women, 42%
were white, 30% were black, 27% were Hispanic, and 57% were married. The majority
(60%) came from urban areas. Less than half (44%) had at least a high school education
and most (88%) reported having a supplemental health insurance plan. The majority
reported good to excellent health (60%) and were non-smokers (91%). Over
three-fourths (77%) had at least 3 health conditions, with hypertension (63%), arthritis
(59%), and hyperlipidemia (49%) being the most frequently reported.
Continuity of Care Measure Characteristics
Table 3.2 (above) provides the means and standard deviations of each CoC index
as well as the correlation coefficients between indices. The mean claims-based index
scores ranged from 0.27 (Bice-Boxerman CoC) to 0.89 (High Site Continuity). Aside from
the Site Index (which atypically uses site instead of provider type), the highest mean
claims-based continuity score was 0.62 for the Known Provider. All 16 of the claims-based CoC
measures correlated at the > 0.30 level with at least one other claims-based measure,
suggesting reasonable factorability (Tabachnick & Fidell, 2001).
Exploratory Factor Analysis
Table 3.3 provides the EFA results for the claims-based CoC measures. The
communalities for each measure were well above 0.30 indicating shared common
variance. The initial eigenvalues showed that the first factor explained almost 54% of the
variance, the second factor almost 21%, and the third factor almost 7% for a three factor
solution explaining 82% of the total item variance. All measures had primary factor
loadings > 0.69 although many also had cross-loadings > 0.35. Thus, the three-factor
solution was retained.
Table 3.3: Factor Loadings and Communalities based on a Principal Components Analysis for 16 Claims-Based CoC Measures

CoC Measure                              Concentration  Dispersion  Longitudinality  Communality
Herfindahl Index                              0.98          0.34        -0.06            0.96
Usual Provider of Care                        0.97          0.44         0.09            0.96
Current Provider of Care                      0.93          0.32        -0.01            0.86
Discounted Current Provider of Care           0.93          0.29         0.01            0.86
Inverse Number of Providers                   0.92          0.17        -0.16            0.89
Continuity Index                              0.87          0.44         0.31            0.84
High Clinician Continuity                     0.82          0.34        -0.01            0.68
Modified, Modified Continuity Index           0.73          0.67         0.42            0.80
Ejlertsson's Index K                          0.31          0.96         0.47            0.96
Modified Continuity Index                     0.23          0.92         0.60            0.95
Bice-Boxerman Continuity of Care              0.60          0.86         0.01            0.91
Sequential Continuity                         0.54          0.82        -0.07            0.85
Personal Provider Continuity                  0.16          0.72         0.26            0.52
Known Provider                                0.21          0.69         0.50            0.57
Wolinsky Continuity                          -0.05          0.42         0.73            0.60
High Site Continuity                          0.56          0.40         0.70            0.78

Factor Statistics
Eigenvalue                                    8.58          3.31         1.09
% of Total Variance Explained                 53.6          20.7          6.8

Factor Correlations                      Concentration  Dispersion  Longitudinality
Concentration                                 --            0.35         0.02
Dispersion                                    0.35          --           0.32
Longitudinality                               0.02          0.32         --
The categorizations proposed by Jee & Cabana (2006) fit the extracted factors
with some exceptions. The Modified, Modified Continuity Index (a dispersion measure)
loaded primarily with the density factor (although it had a high secondary loading with
the dispersion factor), the Known Provider measure (a density measure) loaded primarily
with the dispersion factor, the Sequential Continuity measure loaded with the dispersion
factor, and the Site Index (a density measure) loaded primarily on the third factor but
cross-loaded with both the density and dispersion factors. The Wolinsky Continuity
measure (not included in the Jee & Cabana review) loaded primarily on the third factor.
We labeled Factors 1, 2, and 3 as Concentration, Dispersion, and Longitudinality,
respectively, based on the measures driving each. The correlations between the
Dispersion factor and the Concentration and Longitudinality factors were 0.35 and 0.32,
respectively. The Concentration and Longitudinality factors were uncorrelated (0.02).
Patient-Reported and Claims-Based CoC Association
Table 3.4 shows the results from the linear regression models using the claims-
based factor scores to predict each of the patient-reported CoC scales. Panel A shows the
results without adjusting for the covariates. The claims-based Concentration factor was
only associated with one patient-reported subscale, Care Site (p = .02). The more
sophisticated claims-based Dispersion factor was not associated with either the Care Site
or the Provider Duration subscales but had some association with both the interpersonal
(p = .02) and the full patient-reported CoC scale (p = .04). The claims-based
Longitudinality factor, however, was associated with the full patient-reported CoC scale
and the Care Site and Duration subscales at p < .001, and with the Instrumental and
Affective subscales at p < .05. As shown in Panel B, adjusting for
patient characteristics did not appreciably alter these findings.
Discussion
Accurately determining a patient's care continuity is important for adequately
compensating providers (demonstrating CoC is required in most PCMH models) and for
evaluating how CoC affects patient health outcomes (as prioritized by the Patient-
Centered Outcomes Research Initiative).

Table 3.4: Linear Regression of each Patient-Reported Continuity Scale by the Claims-Based Factor Scores

                   Care Site      Duration       Instrumental   Affective      Patient-Reported
                   Subscale       Subscale       Subscale       Subscale       CoC
Panel A            b (sd)         b (sd)         b (sd)         b (sd)         b (sd)
Concentration      .13 (.05)*     .14 (.07)      -.03 (.10)     .03 (.10)      .26 (.23)
Dispersion         .05 (.06)      -.04 (.08)     .26 (.10)*     .23 (.10)*     .50 (.24)*
Longitudinality    .25 (.05)‡     .54 (.07)‡     .25 (.10)*     .20 (.09)*     1.24 (.23)‡
Panel B(a)         b (sd)         b (sd)         b (sd)         b (sd)         b (sd)
Concentration      .12 (.05)*     .13 (.08)      -.15 (.10)     -.07 (.09)     .03 (.23)
Dispersion         .06 (.06)      -.02 (.08)     .43 (.10)‡     .37 (.10)‡     .83 (.24)‡
Longitudinality    .21 (.06)‡     .43 (.08)‡     .30 (.10)†     .26 (.10)†     1.21 (.23)‡

* p < .05; † p < .01; ‡ p < .001
(a) These models were adjusted for age, sex, race/ethnicity, marital status, education, supplemental insurance status, residential population density, self-rated general health status, number of comorbidities, and smoking status.

The most commonly used CoC measures have
been claims-based proxies from the provider’s perspective (Stanek & Takach, 2010; Jee
& Cabana, 2006; Saultz, 2003). Indeed, to date little has been known about whether
these measures adequately assess the quality of the patient-provider interaction (the
essence of good continuity of care) because they do not incorporate patient perceptions.
Our results showed that most claims-based CoC measures were not reflective of patient
perceptions of the quality of their provider-patient interaction.
The most widely used claims-based measures are those that identify a particular
provider with CoC based on the concentration of visits with that provider. Health care
organizations trying to obtain PCMH recognition from accreditors like the National
Committee on Quality Assurance or the Joint Commission are more likely to use
concentration measures because they are the easiest to calculate (Stanek & Takach, 2010;
O’Malley et al., 2008). Even though we expected the concentration factor to be
somewhat associated with the patient experience of longitudinal CoC (which includes
identification of consistent visits to an identified provider/site), our findings showed that
the concentration factor related most strongly with patient perceptions of having a regular
site of care, and not at all with the consistency or duration of visits. As expected, the
concentration factor also had no association with the relationship aspects of continuity.
The more sophisticated claims-based dispersion CoC factor was more strongly related to
patient perceptions than the concentration factor, especially regarding the interpersonal
aspects of the patient experience of CoC. While not expected, this result may not be
surprising considering that people with chronic conditions (who are prevalent in this
sample of older adults) may establish care with several different providers to manage
their illnesses and, if effective, their patient-provider relationships thrive (Nutting et al.,
2003; Love, Mainous, Talbert, & Hager, 2004).
We were somewhat surprised that some claims-based measures were related to
the patient experience. The claims-based longitudinality factor was most strongly
associated with all components of patient-reported CoC. In post hoc regression analyses,
we estimated the relationship between the two individual claims-based indices from the
longitudinality factor (the Site Index and Wolinsky Continuity measures) and each of the
patient-reported measures. Not unexpectedly, the claims-based Site Index (b = 0.99, p <
.001) was most associated with the patient-reported Care Site subscale. However, the
Wolinsky Continuity measure had stronger independent associations with each of the
other patient-reported subscales (Duration b = 1.15, p < .001; Instrumental b = 0.63, p =
.001; Affective b = 0.47, p = .01) while both the claims-based Site Index (b = 2.19, p =
.003) and Wolinsky measure (b = 2.39, p < .001) were significantly associated with the
full patient-reported CoC scale. The association with the interpersonal subscales was
perhaps the most surprising given the prevailing thought that claims-based measures may
not adequately capture the patient-provider relationship (Saultz, 2003; Stanek & Takach,
2010). The Wolinsky measure, by definition, uses at least two years of E&M visit claims
to identify continuity and it may be that this operationalization best conceptualizes what
patients consider when visiting their regular provider.
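A minimal sketch of this check, assuming three consecutive 8-month calendar windows over the two years before the survey (the exact windowing used with the Medicare claims may differ in detail, and the dates and provider IDs below are hypothetical):

```python
from datetime import date

def wolinsky_continuity(visits, start, months=8, periods=3):
    """Return 1 if some single provider was seen at least once in every
    consecutive `months`-month window over `periods` windows, else 0.
    `visits` is a list of (visit_date, provider_id) pairs from E&M claims."""
    def window(d):
        # Index of the window a visit date falls into, counted from `start`.
        return ((d.year - start.year) * 12 + (d.month - start.month)) // months

    for provider in {p for _, p in visits}:
        seen = {window(d) for d, p in visits
                if p == provider and 0 <= window(d) < periods}
        if len(seen) == periods:
            return 1
    return 0

claims = [(date(2002, 11, 3), "A"), (date(2003, 6, 20), "A"),
          (date(2004, 2, 9), "A"), (date(2003, 1, 15), "B")]
flag = wolinsky_continuity(claims, start=date(2002, 10, 1))
# Provider A appears in all three 8-month windows, so flag = 1
```

Unlike the one-year density and dispersion indices, this measure rewards regularity of contact over a longer horizon rather than the concentration of visits within a single year.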
One limitation of this work is that we were not able to calculate every extant
claims-based CoC measure. We were, however, able to recreate the two most commonly
used measures (i.e. Usual Provider of Care and the Continuity of Care Index). Another
limitation is that we were only able to use the survey data and claims from respondents in
FFS Medicare due to differential reporting requirements in managed care Medicare plans.
In our previous work (Bentler et al., 2013a), we found that the patient-reported CoC
model was factorially invariant between those in FFS vs. MMC plans suggesting that, at
least for patient experiences of continuity, there is consistency between plan types.
Regardless, because choice of providers may be limited within some (especially closed
panel) MMC plans, these findings may not generalize to such managed care
arrangements. Finally, the CoC experienced by these older Medicare beneficiaries may
not generalize to younger people.
In general, there was a disjuncture between claims-based measures and patient
perceptions of continuity. Given the increased advocacy for patient-reported indicators
of quality and outcomes by organizations such as the National Committee for Quality
Assurance (National Committee for Quality Assurance, 2012) and the Patient-Centered
Outcomes Research Initiative (Patient-Centered Outcomes Research Initiative, 2012),
coupled with the traditional reliance on administrative claims (Stanek & Takach, 2010;
O'Malley et al., 2008), our results have important implications. Most importantly, using
a mixed-method approach (i.e., both patient reports and administrative claims) to assess
care continuity may be the most valid way to evaluate how CoC relates to patient
outcomes. While
the use of administrative claims may be adequate if the objective is only to evaluate visit
continuity, it is clear that only patient-reports provide adequate assessments of
interpersonal continuity.
That said, some have suggested that in certain settings patient-reports may
overestimate visit continuity (Rodriguez et al., 2008) or, due to resource constraints,
simply may not be obtainable (Stanek & Takach, 2010; O’Malley et al., 2008). Our
findings also have implications for assessing CoC in situations where using patient-
reports may not be feasible. In these instances, claims-based proxies such as the
Wolinsky Continuity measure (Wolinsky et al., 2007), which are derived from E&M
visits to particular providers over a longer time period, could be used as valid
approximations of the patient experience. In Chapter 4, we further validate these CoC
measures by using them to predict health outcomes and service use in older adults.
CHAPTER 4
THE ASSOCIATION OF LONGITUDINAL AND INTERPERSONAL CONTINUITY
OF CARE WITH EMERGENCY DEPARTMENT USE, HOSPITALIZATION, AND
MORTALITY AMONG MEDICARE BENEFICIARIES
Introduction
Continuity of care (CoC) is widely considered to be an essential component of
high-quality patient care (Starfield et al., 2005; IOM, 1996), especially for older adults
and those with multiple chronic conditions requiring consistent management and follow-
up. Indeed, over a decade ago the IOM declared that CoC was a primary aim in its
comprehensive call for national action to transform health care quality (IOM, 2003).
Consequently, new health care delivery models that have CoC as a core element
(Ginsburg et al., 2008), like the patient-centered medical home (PCMH), are an integral
component of health care reform today, and are under evaluation by the Center for
Medicare and Medicaid Innovation. At present, however, no standard assessment of
continuity of care exists. Moreover, there has not been a comprehensive evaluation of the
association of continuity of care with subsequent health outcomes and health services
use, especially for older adults, although a few studies focusing on single outcomes like
mortality and hospitalization for ambulatory care sensitive conditions have recently been
reported (Wolinsky et al., 2010; Nyweide et al., 2013).
Historically, CoC has been difficult to define and quantify, although most
definitions consist of at least three components: informational, longitudinal (or provider),
and interpersonal (or relational) continuity (Saultz, 2003; Haggerty et al., 2003).
Informational continuity results when providers have sufficient information about the
patient, typically maintained in a medical record. Having adequate patient information is
the first step toward establishing longitudinal continuity, which means that the patient has
a consistent relationship with a provider in a familiar setting over time. Conceptually,
when longitudinal continuity exists, interpersonal continuity has the opportunity to
develop. Interpersonal continuity means that knowledge, trust, and respect
have developed between the patient and provider over time, allowing for better interaction
and communication. Within interpersonal continuity, there are both instrumental
(provider knowledge about the patient) and affective (mode of provider behavior toward
the patient) dimensions that contribute to a good patient-provider relationship (Ben-Sira,
1980). Interpersonal CoC, as reflected in a strong patient-provider relationship, is viewed
as the essence of quality primary care (IOM, 1996; Freeman et al., 2003; Starfield et al.,
2005).
Most studies examining how CoC relates to patient outcomes have used measures
derived from administrative claims as a proxy for provider continuity, under the
assumption that repeated contact with a particular provider equates to having a strong
patient-provider relationship (Cabana & Jee, 2004; Stanek & Takach, 2010). Recent
studies, however, have shown that patient-reported and claims-based measures tap
different dimensions of CoC (Bentler, Morgan, Virnig, & Wolinsky, 2013b; Rodriguez et
al., 2008). Yet, few measures reflecting the patient experience of both longitudinal and
interpersonal CoC were developed until recently (Bentler et al., 2013a; Uijen et al., 2011;
Gulliford et al., 2006). Therefore, little is known about how patient CoC experiences
relate to patient outcomes and whether those relationships are consistent with those found
when claims-based CoC measures were used.
The purpose of this study was to examine the association of both patient-reported
and administrative data-based CoC measures with emergency department (ED) use,
hospitalization, and mortality among Medicare beneficiaries over a five-year period. We
used survey data from 1,219 fee-for-service (FFS) Medicare beneficiaries who
participated in the 2004 National Health and Health Services Use Questionnaire
(NHHSUQ) (Morgan et al., 2008) linked to their Medicare claims for 2002 through 2009.
Using both survey and claims data allowed us to assess two key aspects of continuity
(care provided over time and the patient-provider relationship) and to comprehensively
evaluate the association of continuity of care with the long-term health and health
services use of Medicare beneficiaries.
Methods
Study Design, Data Sources, and Sample
We used information from Medicare beneficiaries who completed the 2004
NHHSUQ merged with their Part A (Institutional) and Part B (non-Institutional)
Medicare claims from 2002-2009. The NHHSUQ was designed to identify factors
affecting Medicare managed care plan enrollment. It was mailed in the fall of 2004 to a
stratified random sample of 6,060 community-residing Medicare beneficiaries 65 years or
older. Sampling fractions were varied in order to obtain equal numbers of participants
with regard to race/ethnicity (white, black, Hispanic), Medicare plan type (Medicare fee-
for-service (FFS) or Medicare managed care (MMC)), sex, and population density
(Morgan et al., 2008). The response rate after adjusting for ineligible survey recipients
(e.g., non-community residing, moved out of geographic area, or deceased) was 53%
(2,997/5,697).
The analytic sample was identified as follows. We started with the 2,620
Medicare beneficiaries who had complete data for the 13 continuity-related NHHSUQ
items that were used to derive the patient-reported CoC measure. There were only
modest differences in a few variables between the 2,620 who completed all items and the
377 who did not (Bentler et al., 2013a). We then linked their survey data to their
Medicare claims (Part A Institutional and Part B non-Institutional) for 2002-2009, and
calculated the claims-based CoC measures using unique Physician Carrier and Outpatient
Facility claims for Evaluation and Management (E&M) visits in the years prior to the
survey (2002-2004). NHHSUQ respondents in MMC plans were excluded due to the
different billing and reporting requirements for Part B (non-Institutional) claims (Asper,
2007). Thus, the analytic sample included 1,219 people with complete survey responses
who had both Part A and Part B coverage and were not enrolled in managed care when
they completed the NHHSUQ.
Outcome Measures
We evaluated the association of CoC with ED use, hospitalization (unavoidable
and potentially avoidable episodes), and mortality. All outcome measures were derived
using the Denominator (Enrollment file), Part A, and Part B Medicare claims for the five-
year period (2005-2009) after the 2004 NHHSUQ.
Emergency Department Utilization
We used Current Procedural Terminology (CPT) codes to identify ED visits in the
Medicare claims (American Medical Association, 2003). Visits with CPT codes 99281-
99285 (evaluation and management in an ED setting) were considered ED visits. These
CPT codes account for 80% of all Medicare expenditures for ED services (IOM, 2006).
We evaluated time to first ED visit after completion of the survey. Follow-up time was
the number of days from the survey date to the first occurrence of any one of four events:
ED visit, managed Medicare entry, death, or December 31, 2009.
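The censoring rule described above can be sketched in a few lines of Python (the function and variable names here are hypothetical illustrations, not the study's actual code): follow-up ends at the earliest of the four possible events, and the event indicator is 1 only when that earliest event is an ED visit.

```python
from datetime import date

# Sketch of the censoring rule: follow-up ends at the earliest of
# ED visit, managed-care (MMC) entry, death, or the end of the claims data.
STUDY_END = date(2009, 12, 31)

def followup(survey_date, ed_visit=None, mmc_entry=None, death=None):
    """Return (follow-up days, 1 if the terminating event was an ED visit)."""
    candidates = [(d, name) for d, name in [(ed_visit, "ed"),
                                            (mmc_entry, "mmc"),
                                            (death, "death"),
                                            (STUDY_END, "end")] if d is not None]
    first_date, first_event = min(candidates)  # earliest event wins
    return (first_date - survey_date).days, int(first_event == "ed")

# Surveyed 2004-10-01 with a first ED visit on 2006-03-15
days, event = followup(date(2004, 10, 1), ed_visit=date(2006, 3, 15))
```

The same pattern applies to the hospitalization and mortality outcomes, with the qualifying event swapped in for the ED visit.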
Hospitalization
Two types of hospitalization events from the Medicare Part A claims were
considered within the five-year surveillance period: potentially preventable
hospitalizations and any hospitalizations not categorized as preventable. Potentially
preventable hospitalizations were defined as any admission for an ambulatory care
sensitive condition (ACSC). ACSCs reflect conditions for which continuity of primary
care could reduce the hospitalization risk (Bindman et al., 1995) and include diabetes
with short-term or long-term complications, uncontrolled diabetes, lower-extremity
amputation among patients with diabetes, chronic obstructive pulmonary disease,
hypertension, heart failure, dehydration, bacterial pneumonia, urinary tract infection, and
angina. ACSCs were defined using the ICD-9-CM codes specified by the Agency for
Healthcare Research and Quality’s Prevention Quality Indicators (Prevention Quality
Indicators, 2011). Separate models were run for each of two outcomes: time to first
hospitalization that was not preventable, and time to first preventable hospitalization.
Follow-up time was the number of days from the survey date to the first occurrence of
one of four events: hospitalization (not preventable or preventable, depending on the
model), managed Medicare entry, death, or the end of the data period (December 31,
2009).
Mortality
Date of death was obtained from the Medicare denominator files. The main
outcome was time to death calculated by subtracting the survey date from the death date.
Continuity of Care Measures
Patient-Reported
We used a multidimensional, 13-item, patient-reported CoC measure derived
from patient responses to a subset of the NHHSUQ questions (Bentler et al., 2013a). We
considered the overall 13-item scale and each of its four subscales. Two of those
subscales tap longitudinal continuity (Care Site and Provider Duration) and two tap
interpersonal continuity (Instrumental and Affective). The Care Site subscale has two
items and identifies whether the Medicare beneficiary had a usual care site and the type
of that site (i.e., doctor’s office, Veterans’ Affairs Medical Center, emergency room, or
other). The Provider Duration subscale has three items and measures the long-term
duration of care with their usual site of care and their long-term and short-term (within
the past year) duration of care with their regular doctor. The Instrumental subscale has
four items which tap the technical care aspects (i.e., thoroughness of examinations,
accuracy of diagnoses, explanations of tests and procedures, and knowledge of health)
experienced by the patient from visits with the provider. The Affective subscale has four
items which tap the “people skills” aspects of the patient-provider interaction (i.e.,
provider’s interest in you, interest in your medical problems, your satisfaction with the
provider, and your comfort level with your provider). We normalized each scale item
using z-score transformations because they did not have the same response set. Z-scores
within each subscale and for the overall scale were summed, giving each item equal
weight in the calculation. Results of sensitivity analyses in which we normalized each
subscale using z-scores of its sum (giving each subscale equal weight) were comparable.
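The equal-weighting procedure can be illustrated with a small sketch (the data below are hypothetical; the NHHSUQ items themselves are not reproduced here). Each item is standardized to mean 0 and SD 1 before summation, so an item with a wider raw response set does not dominate the scale score.

```python
import statistics

def zscores(values):
    """Standardize one item's responses to mean 0, SD 1 (population SD)."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def scale_scores(items):
    """Sum each respondent's z-scores across items, giving items equal weight."""
    z_by_item = [zscores(item) for item in items]
    n_respondents = len(items[0])
    return [sum(z[i] for z in z_by_item) for i in range(n_respondents)]

# Hypothetical responses from 5 people to two items with different response sets
item_a = [1, 2, 3, 4, 5]       # a narrow-range item
item_b = [10, 20, 30, 40, 50]  # a wider raw range, same relative ordering
scores = scale_scores([item_a, item_b])
```

Because both items rank respondents identically, the summed z-scores are symmetric around zero; in the raw metric, item_b would have contributed ten times the weight of item_a.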
Claims-Based
Thirteen different claims-based CoC measures were used. In previous work
(Bentler et al., 2013b), these measures were categorized as concentration measures
(which identify continuity based on the density of visits to a particular provider),
dispersion measures (which identify continuity based on the number of different types of
providers seen), and longitudinal measures (which identify continuity based on seeing the
same provider at regular intervals over time). The six concentration indices were:
Current Provider of Care, Current Provider of Care – discounted (Smedby et al., 1986),
Usual Provider of Care (Breslau & Reeb, 1975), Inverse Number of Providers,
Herfindahl Index (Eriksson & Mattsson, 1983), and the Modified, Modified Continuity
Index (Magill & Senf, 1987; Parchman et al., 2002). The five dispersion indices were:
Bice-Boxerman CoC (Bice & Boxerman, 1977), Ejlertsson’s K Index (Ejlertsson & Berg,
1984), the Modified Continuity Index (Sturmberg & Schattner, 2001; Sturmberg, 2002),
Known Provider (Smedby et al., 1986), and Sequential Continuity (Steinwachs, 1979).
The two longitudinal indices were Wolinsky’s CoC (Wolinsky et al., 2007) and Site
Continuity (Mainous & Gill, 1998). More detail on the calculation of each of these
measures can be found in Table 3.1.
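As an illustration of how two of these indices operate (the formulas follow the cited sources; the visit data below are hypothetical), the Usual Provider of Care index is the share of E&M visits made to the most frequently seen provider, while the Bice-Boxerman CoC index measures the concentration of visits across all providers seen.

```python
from collections import Counter

def usual_provider_of_care(visits):
    """UPC: share of visits made to the single most frequently seen provider."""
    counts = Counter(visits)
    return max(counts.values()) / len(visits)

def bice_boxerman(visits):
    """Bice-Boxerman CoC: (sum of n_j^2 - N) / (N * (N - 1)), where n_j is
    the visit count for provider j and N the total number of visits."""
    n = len(visits)
    return (sum(c * c for c in Counter(visits).values()) - n) / (n * (n - 1))

visits = ["drA", "drA", "drA", "drB", "drC"]  # hypothetical provider IDs
upc = usual_provider_of_care(visits)  # 3 of 5 visits to drA -> 0.6
coc = bice_boxerman(visits)           # (9 + 1 + 1 - 5) / 20 -> 0.3
```

Both indices equal 1.0 when every visit goes to the same provider; the Bice-Boxerman index penalizes dispersion across many providers more sharply than UPC does.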
All CoC indices (patient-reported and claims-based) were calculated so that
higher values represented higher levels of continuity. The patient-reported measures
reflected the respondent’s experience in the year prior to the completion of the survey
(2003-2004). With the exception of the Wolinsky continuity measure (which, by
definition, required two years of claims), all of the claims-based measures were
calculated using claims from the same pre-survey period (2003-2004). Where the
distributions allowed, we used a set of indicators for each index contrasting the upper and
middle tertiles with the lowest tertile as the reference group. When distributions were
more condensed, we compared those with scores above the median to those with scores
below the median.
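A minimal sketch of the tertile-indicator coding (the cut-point and tie-handling logic here is illustrative; the study's exact rules are not specified in the text): each score maps to two 0/1 indicators, with the lowest tertile left as the omitted reference category.

```python
def tertile_indicators(scores):
    """Map each score to (middle, highest) 0/1 indicators; the lowest
    tertile is the omitted reference category."""
    ranked = sorted(scores)
    n = len(ranked)
    c1, c2 = ranked[n // 3], ranked[2 * n // 3]  # simple tertile cut points
    return [(int(c1 <= s < c2), int(s >= c2)) for s in scores]

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
flags = tertile_indicators(scores)  # three scores fall in each tertile
```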
Covariates
To obtain the independent effect of the patient-reported and claims-based CoC
measures on each outcome, we adjusted for potential confounders available in the
NHHSUQ and Medicare claims. Covariates were selected based on the conceptual
framework provided by Andersen’s behavioral model of health services use (Andersen,
1968; Andersen, 1995), which categorizes utilization as a function of the predisposing,
enabling, and need characteristics of the individual. Measures of predisposing
characteristics were obtained from the NHHSUQ and included patient-reported age (≤ 70,
71 to 76, and > 76), sex, race/ethnicity (white, black, and Hispanic), marital status
(married or not), and education (high school education or greater vs. less). Indicators for
supplemental insurance (in addition to Medicare), population density (living in a
metropolitan area), and a median split of income (≤ $20K or > $20K) were also taken
from the NHHSUQ, and an indicator of dual use of Medicare and Medicaid was taken
from the Medicare enrollment files as measures of enabling characteristics.
Indicators of the need for health services included smoking status, health-related
quality of life (Ware et al., 2001), history of selected serious medical conditions (cancer,
diabetes, heart failure, myocardial infarction, high blood pressure, cerebrovascular
disease, and lung disease), and a comorbidity indicator for having > 3 of the following
conditions: arthritis, fracture, vision problems, ulcer, arrhythmia, blood disorder,
depression, hypothyroidism, valve problems, high cholesterol, back pain, coronary artery
disease, hearing problems, peripheral vascular disease, and fluid/electrolyte disorders.
The serious condition and comorbidity indicators were obtained from both the NHHSUQ
and prior inpatient claims (Quan et al., 2005; Elixhauser, Steiner, Harris, & Coffey,
1998), while the other need indicators were taken just from the NHHSUQ. Finally,
several covariates were used to adjust for prior health services use. Two measures
derived from the Medicare claims were tertiles of the number of physician E&M visits
(0-5, 6-16, and 17+ visits) and the occurrence of any hospitalization in the year before the
survey. Three measures came from the NHHSUQ, including medication use (0-1, 2-4,
and 5+), receipt of a flu shot, and any ED visit reported in the year prior to the survey.
Physician E&M visits were summed for the 2002-2003 period to measure utilization
before the time period used for the continuity of care measures.
Statistical Analyses
Multivariable proportional hazards models were used for ED visits, hospitalization,
and time to death (Kalbfleisch & Prentice, 2002). Eighteen models were estimated for
each of the four outcomes (ED use, avoidable and unavoidable hospitalization, and
mortality), with each including only one of the individual CoC measures. All analyses
were weighted to adjust for the stratified random sampling survey design (Morgan et al.,
2008).
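Each of these models has the standard Cox proportional hazards form. As a sketch (the notation below is illustrative, not the study's exact variable coding), a model with tertile indicators for one CoC measure and a covariate vector z can be written as:

```latex
h_i(t) = h_0(t)\,\exp\!\big(\beta_1\,\mathrm{CoC}^{\text{mid}}_i
       + \beta_2\,\mathrm{CoC}^{\text{high}}_i
       + \boldsymbol{\gamma}'\mathbf{z}_i\big),
\qquad
\mathrm{AHR}_{\text{mid vs. low}} = e^{\beta_1}
```

An adjusted hazard ratio below 1 thus indicates a protective association with the outcome, and one above 1 an increased risk, holding the covariates fixed.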
Results
Respondent Characteristics
Table 4.1 displays the characteristics of the 1,219 respondents in the analytic
sample. Mean age was 74.7 years (standard deviation (SD) = 7.2), over half were women,
and a majority were white (83%) and married (61%); about half (52%) had at least a high
school education. Almost 60% had annual incomes over $20,000 and most (67%) lived in
an urban setting. The vast majority (89%) had insurance supplementing Medicare, but only
11% were dually enrolled in Medicare and Medicaid. The analytic sample was fairly
healthy, with two-thirds reporting good to excellent health. The most commonly reported
condition was hypertension (79%). Almost half (48%) had at least 3 comorbid conditions,
and less than 10% were smokers.
Table 4.1: Characteristics of the Sample (N = 1,219, weighted)

Predisposing Characteristics                              Percentage
  Age
    65-70                                                 36%
    71-76                                                 29%
    >= 77                                                 36%
  Female                                                  55%
  Race/Ethnicity
    White                                                 83%
    Black                                                  9%
    Hispanic                                               8%
  Education >= High School                                52%
  Married                                                 61%

Enabling Characteristics
  Income over $20,000                                     59%
  Enrolled in Medicare & Medicaid (Dual User)             11%
  Has Insurance Supplemental to Medicare                  89%
  Lives in Urban Setting                                  67%

Need Characteristics
  Self-Reported Good, Very Good, or Excellent Health [a]  66%
  Current Smoker                                           8%
  History of Cancer                                       26%
  History of Diabetes                                     31%
  History of Heart Failure                                19%
  History of Myocardial Infarction                        28%
  History of Hypertension                                 79%
  History of Cerebrovascular Disease                      26%
  History of Lung Disease                                 30%
  Comorbidity: > 3 (of 15 possible conditions) [b]        48%

Health Service Use in Year Prior to the Survey
  Any Hospitalization                                     29%
  Any Emergency Department Visit                          27%
  Physician Evaluation & Management Visits
    0-5                                                   30%
    6-16                                                  33%
    17+                                                   37%
  Prescription Medications
    0-1                                                   23%
    2-4                                                   43%
    5 or more                                             34%
  Received a Flu Shot                                     66%

[a] From the SF-8™ Health Survey.
[b] Includes arthritis, non-hip fracture, vision problems, ulcer, arrhythmia, blood disorder, depression, hypothyroidism, valve problems, high cholesterol, back pain, coronary artery disease, hearing problems, peripheral vascular disease, or fluid/electrolyte disorders.
Health service use in the year before the survey was typical with 29% having at
least one hospital stay, 27% visiting the ED at least once, and 43% reporting using 2-4
prescription medications. Two-thirds (66%) had reported receiving a flu shot in the year
prior to the survey. The average number of physician E&M visits during that year was
17 (SD = 12), or about 1.4 visits per month. The average number of years in FFS
Medicare after survey completion was 4.3 (SD = 1.4) out of a maximum of 5.
Continuity of Care Measures
The average CoC scores for both the patient-reported and claims-based measures
are provided in Table 4.2. For the patient-reported measures, the mean scores were well
into the upper half of the potential range of scores indicating fairly high levels of patient-
reported continuity. There was more variation in the mean values for the claims-based
measures (which could range from 0.0 to 1.0), with most average scores between 0.40
and 0.65. Three of the claims-based measures (Inverse Number of Providers, Bice-
Boxerman CoC, and Sequential Continuity) had fairly low average scores (0.33, 0.27,
and 0.28, respectively). Notably, respondents had very high average scores for both
patient-reported and claims-based assessments of site continuity.
Emergency Department Use
Nearly two-thirds (63%) had at least one ED visit during the five-year prospective
observation period. Among those who had at least one ED visit, the average number of
visits was 3.2 (SD = 3.3) and the average time to the first visit was 1.9 years (SD=1.4).
Table 4.3 provides the adjusted hazard ratios from the time to first ED visit analyses for
the 18 CoC models. The interpersonal patient-reported CoC indicators (Instrumental and
Affective) and the full patient-reported CoC measure, as well as three claims-based
measures, had protective associations with time to first ED visit. Those in both the middle and
Table 4.2: Average Continuity of Care Scores

Patient-Reported Continuity of Care
(Potential Range of Scores)                  Mean    Standard Deviation
  Care Site (0-5)                             4.8    0.8
  Provider Duration (0-16)                    9.6    4.9
  Instrumental (4-19)                        14.8    3.0
  Affective (4-19)                           15.8    2.8
  Patient-Reported CoC (8-59)                45.0    8.5

Claims-Based Continuity of Care
(Potential Range of Scores: 0-1)             Mean    Standard Deviation
  Current Provider of Care                    0.41   0.32
  Current Provider of Care (discounted)       0.46   0.32
  Usual Provider of Care                      0.50   0.29
  Herfindahl Index                            0.41   0.29
  Inverse Number of Providers                 0.33   0.29
  Modified, Modified Continuity Index         0.59   0.30
  Ejlertsson's Index K                        0.52   0.32
  Bice-Boxerman CoC                           0.27   0.26
  Modified Continuity Index                   0.46   0.27
  Sequential Continuity                       0.28   0.29
  Known Provider                              0.62   0.49
  High Site Continuity                        0.89   0.31
  Wolinsky Continuity                         0.45   0.50
highest tertiles on patient-reported Instrumental CoC (Adjusted Hazard Ratio
(AHR)=0.79; p<.05 and AHR=0.75; p<.01, respectively), Affective CoC (AHR=0.77;
p<.05; AHR=0.76; p<.01, respectively), and patient-reported CoC (AHR=0.77; p<.01;
AHR=0.68; P<.001, respectively) had reduced risks of ED visits compared to those in the
lowest tertile of scores. And, those in the middle tertile (compared to lowest tertile) of
scores on the Current Provider of Care Index (AHR=0.78; p<.01), the discounted Current
Provider of Care Index (AHR=0.79; p<.05), and Inverse Number of Providers
(AHR=0.73; p<.01) also had a reduced risk of ED visits.
Table 4.3: Eighteen Proportional Hazards Models of Time to First ED Visit

Continuity Measure in Model                          Adjusted[a] Hazard Ratio (95% CI)
Patient-Reported Care Site[b]                        1.06 (0.77,1.45)
Patient-Reported Provider Duration[c]
  Middle Tertile                                     1.11 (0.90,1.37)
  Highest Tertile                                    0.84 (0.70,1.02)
Patient-Reported Instrumental[c]
  Middle Tertile                                     0.79* (0.64,0.97)
  Highest Tertile                                    0.75† (0.62,0.91)
Patient-Reported Affective[c]
  Middle Tertile                                     0.77* (0.63,0.95)
  Highest Tertile                                    0.76† (0.63,0.90)
Patient-Reported[c]
  Middle Tertile                                     0.77† (0.64,0.93)
  Highest Tertile                                    0.68‡ (0.56,0.82)
Claims-Based Herfindahl Index[c]
  Middle Tertile                                     0.86 (0.73,1.03)
  Highest Tertile                                    0.91 (0.75,1.10)
Claims-Based Usual Provider of Care[c]
  Middle Tertile                                     0.86 (0.72,1.04)
  Highest Tertile                                    1.02 (0.84,1.22)
Claims-Based Current Provider of Care (CPC)[c]
  Middle Tertile                                     0.78† (0.65,0.94)
  Highest Tertile                                    0.87 (0.72,1.05)
Claims-Based CPC (discounted)[c]
  Middle Tertile                                     0.79* (0.66,0.95)
  Highest Tertile                                    0.89 (0.73,1.08)
Claims-Based Inverse Number of Providers[c]
  Middle Tertile                                     0.73† (0.60,0.89)
  Highest Tertile                                    0.84 (0.68,1.02)
Claims-Based Modified, Modified Continuity Index[c]
  Middle Tertile                                     0.90 (0.74,1.09)
  Highest Tertile                                    0.91 (0.76,1.10)
Claims-Based Ejlertsson's Index K[c]
  Middle Tertile                                     0.89 (0.74,1.09)
  Highest Tertile                                    0.95 (0.78,1.16)
Claims-Based Bice-Boxerman[c]
  Middle Tertile                                     0.89 (0.73,1.08)
  Highest Tertile                                    1.06 (0.87,1.29)
Claims-Based Modified Continuity Index[c]
  Middle Tertile                                     1.05 (0.86,1.28)
  Highest Tertile                                    0.90 (0.72,1.12)
Claims-Based Sequential Continuity[c]
  Middle Tertile                                     1.07 (0.88,1.31)
  Highest Tertile                                    1.11 (0.91,1.35)
Claims-Based Known Provider Continuity[b]            1.02 (0.86,1.21)
Claims-Based Wolinsky Continuity[b]                  1.04 (0.87,1.24)
Claims-Based Site Continuity[b]                      1.09 (0.79,1.51)

Note: Each row in the table is the result of a separate model that includes the named CoC measure and the covariates. Values in bold-face are statistically significant.
* p < .05; † p < .01; ‡ p < .001
[a] Adjusted for all predisposing, enabling, and need characteristics, as well as health service use.
[b] Reference category is low continuity, defined as less than the average score.
[c] Reference category is the lowest tertile of scores.
Hospitalization
Fifty-six percent had at least one non-preventable hospitalization during the five-
year prospective period and 19% had at least one preventable (ACSC) hospitalization.
Among those who had at least one non-preventable hospitalization, the average time to
first non-preventable hospitalization was 2.1 years (SD = 1.5) and the average number of
non-preventable admissions was 2.5 (SD = 2.2). Among those who had at least one
preventable hospitalization, the average time to first preventable hospitalization was 2.0
years (SD = 1.6) and the average number of preventable hospitalizations was 1.8 (SD =
1.5). Table 4.4 provides the adjusted hazard ratios from the time to first hospitalization
analyses for the 36 CoC models. None of the patient-reported CoC indicators was
significantly associated with non-preventable hospitalization. Only two claims-based
indicators of CoC had statistically significant associations with non-preventable
hospitalizations, but in opposite directions. Those experiencing moderate levels (middle
tertile) of discounted Current Provider of Care continuity (AHR = 0.79; p < .05) had a
reduced risk of non-preventable hospitalizations compared to those in the lowest tertile,
and those with high levels (highest tertile) of continuity on the Modified, Modified
Table 4.4: Thirty-six Proportional Hazards Models of Time to First Hospitalization

                                                     Adjusted[a] Hazard Ratios (95% Confidence Intervals)
Continuity Measure in Model                          Non-Preventable[b]      Preventable[b]
Patient-Reported Care Site[d]                        1.02 (0.71,1.47)        1.59 (0.71,3.59)
Patient-Reported Provider Duration[c]
  Middle Tertile                                     1.12 (0.90,1.41)        0.90 (0.58,1.38)
  Highest Tertile                                    0.89 (0.73,1.09)        1.04 (0.72,1.51)
Patient-Reported Instrumental[c]
  Middle Tertile                                     0.86 (0.68,1.08)        1.15 (0.78,1.71)
  Highest Tertile                                    0.99 (0.81,1.21)        0.86 (0.60,1.25)
Patient-Reported Affective[c]
  Middle Tertile                                     0.80 (0.63,1.00)        0.61* (0.41,0.91)
  Highest Tertile                                    0.95 (0.79,1.14)        0.67* (0.49,0.92)
Patient-Reported[c]
  Middle Tertile                                     0.86 (0.70,1.05)        0.97 (0.68,1.38)
  Highest Tertile                                    0.91 (0.75,1.11)        0.77 (0.54,1.11)
Claims-Based Herfindahl Index[c]
  Middle Tertile                                     0.87 (0.72,1.04)        1.52† (1.11,2.09)
  Highest Tertile                                    0.89 (0.72,1.10)        1.37 (0.93,2.02)
Claims-Based Usual Provider of Care[c]
  Middle Tertile                                     0.94 (0.78,1.15)        0.98 (0.70,1.36)
  Highest Tertile                                    1.16 (0.95,1.42)        1.28 (0.90,1.82)
Claims-Based Current Provider of Care (CPC)[c]
  Middle Tertile                                     0.83 (0.68,1.02)        1.28 (0.91,1.80)
  Highest Tertile                                    1.04 (0.85,1.26)        1.30 (0.91,1.87)
Claims-Based CPC (discounted)[c]
  Middle Tertile                                     0.79* (0.65,0.96)       1.26 (0.90,1.75)
  Highest Tertile                                    1.00 (0.82,1.23)        1.21 (0.84,1.75)
Claims-Based Inverse Number of Providers[c]
  Middle Tertile                                     0.82 (0.66,1.01)        1.17 (0.82,1.66)
  Highest Tertile                                    0.89 (0.71,1.11)        1.45 (0.97,2.18)
Claims-Based Modified, Modified Continuity Index[c]
  Middle Tertile                                     1.07 (0.87,1.32)        1.50* (1.01,2.21)
  Highest Tertile                                    1.25* (1.02,1.54)       1.64† (1.11,2.44)
Claims-Based Ejlertsson's Index K[c]
  Middle Tertile                                     0.91 (0.73,1.13)        1.48 (0.98,2.24)
  Highest Tertile                                    1.09 (0.88,1.35)        1.74† (1.15,2.61)
Claims-Based Bice-Boxerman[c]
  Middle Tertile                                     0.94 (0.77,1.16)        1.11 (0.77,1.61)
  Highest Tertile                                    1.07 (0.87,1.32)        1.40 (0.95,2.05)
Claims-Based Modified Continuity Index[c]
  Middle Tertile                                     1.03 (0.83,1.28)        1.67* (1.10,2.53)
  Highest Tertile                                    0.99 (0.78,1.25)        1.16 (0.74,1.80)
Claims-Based Sequential Continuity[c]
  Middle Tertile                                     1.02 (0.83,1.26)        0.94 (0.65,1.35)
  Highest Tertile                                    1.13 (0.91,1.40)        1.24 (0.84,1.83)
Claims-Based Known Provider Continuity[d]            1.05 (0.87,1.26)        1.19 (0.85,1.68)
Claims-Based Wolinsky Continuity[d]                  0.90 (0.74,1.09)        0.91 (0.65,1.27)
Claims-Based Site Continuity[d]                      1.03 (0.70,1.52)        2.99 (0.89,9.99)

Note: Each row in the table is the result of a separate model that includes the named CoC measure and the covariates. Values in bold-face are statistically significant.
* p < .05; † p < .01; ‡ p < .001
[a] Adjusted for all predisposing, enabling, and need characteristics, as well as health service use.
[b] A preventable hospitalization is defined as any hospitalization for an ACSC; hospitalizations that are not for an ACSC are considered not preventable.
[c] Reference category is the lowest tertile of scores.
[d] Reference category is low continuity, defined as less than the average score.
Continuity Index (AHR = 1.25; p < .05) had an increased risk of non-preventable hospital
stays compared to those with low levels.
The association of continuity of care with the risk of potentially preventable
(ACSC) hospitalizations was also somewhat mixed. Only patient-reported Affective
continuity had a protective association with preventable hospitalization for both those in
the middle (AHR = 0.61; p<.05) and the highest tertiles (AHR = 0.67; p<.05) compared
to those in the lowest tertile. Four claims-based CoC measures indicated that higher
levels of continuity increased the risk of a preventable hospitalization: moderate
(compared to low) levels of continuity on the Herfindahl Index (AHR = 1.52; p < .01)
and the Modified Continuity Index (AHR = 1.67; p < .05); high (compared to low) levels
on Ejlertsson's Index K (AHR = 1.74; p < .01); and both moderate and high levels on the
Modified, Modified Continuity Index (AHR = 1.50; p < .05 and AHR = 1.64; p < .05,
respectively).
Table 4.5: Eighteen Proportional Hazards Models of Time to Death

Continuity Measure in Model                          Adjusted[a] Hazard Ratio (95% CI)
Patient-Reported Care Site[b]                        2.25† (1.33,3.81)
Patient-Reported Provider Duration[c]
  Middle Tertile                                     0.37‡ (0.24,0.57)
  Highest Tertile                                    0.54† (0.37,0.80)
Patient-Reported Instrumental[c]
  Middle Tertile                                     0.78 (0.51,1.17)
  Highest Tertile                                    1.06 (0.70,1.59)
Patient-Reported Affective[c]
  Middle Tertile                                     1.17 (0.76,1.80)
  Highest Tertile                                    1.31 (0.88,1.94)
Patient-Reported[c]
  Middle Tertile                                     0.79 (0.54,1.16)
  Highest Tertile                                    1.04 (0.69,1.58)
Claims-Based Herfindahl Index[c]
  Middle Tertile                                     1.48 (0.99,2.23)
  Highest Tertile                                    1.47 (0.94,2.30)
Claims-Based Usual Provider of Care[c]
  Middle Tertile                                     1.49* (1.03,2.15)
  Highest Tertile                                    2.30‡ (1.56,3.38)
Claims-Based Current Provider of Care (CPC)[c]
  Middle Tertile                                     0.76 (0.51,1.12)
  Highest Tertile                                    1.26 (0.87,1.84)
Claims-Based CPC (discounted)[c]
  Middle Tertile                                     1.34 (0.90,1.99)
  Highest Tertile                                    1.41 (0.96,2.09)
Claims-Based Inverse Number of Providers[c]
  Middle Tertile                                     1.59* (1.03,2.46)
  Highest Tertile                                    1.80* (1.12,2.88)
Claims-Based Modified, Modified Continuity Index[c]
  Middle Tertile                                     1.32 (0.86,2.03)
  Highest Tertile                                    1.69* (1.13,2.52)
Claims-Based Ejlertsson's Index K[c]
  Middle Tertile                                     1.00 (0.64,1.56)
  Highest Tertile                                    1.70* (1.12,2.59)
Claims-Based Bice-Boxerman[c]
  Middle Tertile                                     1.05 (0.68,1.64)
  Highest Tertile                                    2.33‡ (1.56,3.49)
Claims-Based Modified Continuity Index[c]
  Middle Tertile                                     1.42 (0.91,2.22)
  Highest Tertile                                    1.98† (1.23,3.21)
Claims-Based Sequential Continuity[c]
  Middle Tertile                                     2.00‡ (1.36,2.96)
  Highest Tertile                                    2.35‡ (1.59,3.49)
Claims-Based Known Provider Continuity[b]            0.90 (0.65,1.26)
Claims-Based Wolinsky Continuity[b]                  0.71 (0.45,1.11)
Claims-Based Site Continuity[b]                      1.38 (0.79,2.39)

Note: Each row in the table is the result of a separate model that includes the named CoC measure and the covariates. Values in bold-face are statistically significant.
* p < .05; † p < .01; ‡ p < .001
[a] Adjusted for Medicare managed care entry during the follow-up period, all predisposing, enabling, and need characteristics, as well as health service use.
[b] Reference category is low continuity, defined as less than the average score.
[c] Reference category is the lowest tertile of scores.
Mortality
Twenty-two percent died during the five-year prospective observation period, with an
average time to death of 2.8 years (SD = 1.6). Table 4.5 provides the adjusted
hazard ratios from the time to death analyses for the 18 CoC models. Only one CoC
indicator, patient-reported Duration continuity, had a statistically significant and
protective association with time to death, after adjusting for the potential confounders
(AHR = 0.37, p-value < .001 for the middle tertile and AHR = 0.54, p-value < .01 for the
highest tertile, compared to the lowest tertile). Seven claims-based CoC indicators
(Usual Provider of Care, Inverse Number of Providers, Modified Modified Continuity
Index, Ejlertsson’s Index K, Bice-Boxerman CoC, Modified Continuity Index, and
Sequential Continuity) and one patient-reported measure (Site continuity) indicated an
increased death hazard associated with higher continuity levels. The three other patient-
reported indicators and six other claims-based indicators of continuity had no statistically
significant associations with mortality. In addition, because of the large number of
continuity of care measures (18) evaluated for each of the four outcomes (emergency
department use, non-preventable and preventable hospitalization, and mortality), Table
4.6 provides a summary of the results for each of the 72 models.
Table 4.6: Summary of Results for ED Use, Hospitalization, and Mortality for each of the Patient-Reported and Claims-Based Continuity of Care Indicators

Patient-Reported Care Site Continuity: -
Patient-Reported Duration Continuity: +
Patient-Reported Instrumental Continuity: +
Patient-Reported Affective Continuity: + +
Patient-Reported Continuity: +
Claims-Based Herfindahl Index: -
Claims-Based Usual Provider of Care: -
Claims-Based Current Provider of Care: +
Claims-Based Current Provider of Care (discounted): + +
Claims-Based Inverse Number of Providers: + -
Claims-Based Modified Modified Continuity Index: - - -
Claims-Based Ejlertsson's Index K: - -
Claims-Based Bice-Boxerman CoC: -
Claims-Based Modified Continuity Index: - -
Claims-Based Sequential Continuity: -
Claims-Based Known Provider Continuity: (none)
Claims-Based Wolinsky Continuity: (none)
Claims-Based Site Continuity: (none)
Each + indicates a statistically significant beneficial association, and each - a statistically significant harmful association, between higher continuity and one of the four outcomes (ED use, non-preventable hospitalization, preventable hospitalization, and mortality); (none) indicates no statistically significant associations.
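Several of the claims-based indicators in Table 4.6 are simple functions of how a patient's visits are distributed across providers. As an illustrative sketch (not the study's actual analysis code), three of the indices whose standard formulas are well established can be computed as follows:

```python
from collections import Counter

def usual_provider_of_care(visits):
    """UPC: share of visits made to the most frequently seen provider."""
    counts = Counter(visits)
    return max(counts.values()) / len(visits)

def bice_boxerman_coc(visits):
    """Bice-Boxerman Continuity of Care index:
    (sum(n_i^2) - N) / (N * (N - 1)), where n_i is the number of visits
    to provider i and N is the total number of visits (N must be >= 2)."""
    n = len(visits)
    counts = Counter(visits)
    return (sum(c * c for c in counts.values()) - n) / (n * (n - 1))

def sequential_continuity(visits):
    """SECON: fraction of consecutive visit pairs seeing the same provider
    (visits must be supplied in chronological order)."""
    pairs = list(zip(visits, visits[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

# Example: six visits across providers A and B, in chronological order
visits = ["A", "A", "B", "A", "A", "A"]
print(usual_provider_of_care(visits))   # 5/6, about 0.833
print(bice_boxerman_coc(visits))        # (26 - 6) / 30, about 0.667
print(sequential_continuity(visits))    # 3/5 = 0.6
```

In this toy sequence the usual provider accounts for five of six visits, while the Bice-Boxerman index penalizes the dispersion across two providers; all three indices equal 1.0 when every visit is to the same provider.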
Discussion
Continuity of care is considered a hallmark of high-performing primary care.
There are many different ways to assess whether CoC is provided, but there is little if any
consensus about best practices for CoC assessment. Nonetheless, CoC is a fundamental
component of current health reform, including the PCMH and Accountable Care
Organizations. The reason is that CoC has been associated with greater use of preventive and chronic care services, higher patient and provider satisfaction, reduced hospitalization and emergency department use, lower overall health care expenditures, and lower mortality in older adults (Saultz & Lochner, 2005; Nutting et al., 2003; Wolinsky et al., 2010; Leleu & Minvielle, 2013). In this study, we examined the relationships of two distinct aspects of CoC (longitudinal and interpersonal), assessed with both patient-reported and claims-based (provider-proxy) measures, with three health outcomes important in the care of older adults: emergency department use, hospitalization, and mortality.
Older adults are the most frequent users of emergency departments, and they often visit for conditions that are non-urgent (Gruneir, Silver, & Rochon, 2011). Theoretically,
high CoC should have its largest effect on reducing ED visits because the established
patient-provider relationship enables less severe health issues to be treated outside of the
ED setting, reserving ED use for truly emergent situations. Our results indicate that for
both provider-based (through claims) and patient-based CoC indicators, there is a
reduction in the risk of ED use associated with higher levels of CoC. Of particular note,
moderate and high levels of interpersonal CoC (Instrumental and Affective patient-
reported CoC), which are indicative of the patient perspective of the quality of the
provider continuity, were most effective at reducing the risk of ED visits, providing
evidence to support the value of a good patient-provider relationship (Saultz & Lochner,
2005; van Walraven et al., 2010).
Reducing hospitalizations, particularly those that are potentially preventable with
adequate outpatient care (ACSCs), is both a care-quality and expenditure-containment
goal (Medicare Payment Advisory Commission, 2005; Jencks, Williams, & Coleman,
2009). High CoC should be an effective tool for meeting this goal. In this analysis,
however, the results were mixed. For preventable hospitalizations, high levels of CoC on
several claims-based (provider continuity) measures increased the risk of hospitalization.
This may reflect confounding due to unmeasured comorbidity and disease severity. Or, it may reflect the fact that as individuals age, it is simply difficult for
continuity of care to have much of an effect on hospitalization. Patients are hospitalized
when they are too sick to be cared for at home and the likelihood of this occurring
increases dramatically with age. Another possible explanation is that patients who are non-compliant with treatment regimens become sicker, which results in both more visits to their physician and a higher likelihood of hospitalization.
Therefore, further research on the effects of CoC on hospitalization (both non-
preventable and preventable) is needed. This is especially important given a recent report
of a modest 2% reduction in the risk for preventable (ACSC) hospitalizations associated
with moving from no continuity on two claims-based CoC measures to complete
continuity (Nyweide et al., 2013). In sensitivity analyses, we were unable to replicate
those results using the same two CoC measures (Bice-Boxerman CoC and Usual Provider
of Care). However, as with ED use, high levels of interpersonal continuity, specifically
affective interpersonal continuity, had a protective association with ACSC
hospitalizations. It may be that establishing a caring, trusting bond as part of the patient-
provider relationship helps both the patient and provider understand when outpatient and
home care can substitute for hospitalization. This is a particularly salient finding because
ACSC hospitalizations are potentially preventable with quality care in the outpatient
setting, and interpersonal continuity is viewed as foundational to the provision of quality
care.
We expected CoC to have a protective association with the ultimate health status
indicator – time to death. Only one CoC measure, patient-reported Duration continuity,
however, had a significant protective association with mortality for Medicare
beneficiaries. In contrast, for eight other CoC measures, higher CoC increased the
likelihood of death. A previous study reported a protective association of CoC with
mortality among older adults (Wolinsky et al., 2010), but found that the magnitude of the
association diminished with increased cumulative exposure to CoC. Like that study, our
analysis may suffer from confounding due to unmeasured comorbidity and condition
severity. Or, it may simply be that our results reflect the likelihood that as older patients
become more seriously ill, they tend to see their physicians more regularly, resulting in
higher CoC levels. Given the severity of their illnesses, however, even high CoC levels
may not be sufficient to alter their life course. Thus, higher CoC levels may reflect the
combined illnesses that eventually lead to their death rather than the quality of their
continuity of care. Alternatively, a comfortable and satisfactory relationship with a
provider may encourage higher CoC levels in healthier patients because they are less
inhibited about scheduling a visit even for minor medical events (Pandhi, Bowers, &
Chen, 2007; Boyer & Lutfey, 2010).
Our study is not without limitations. One is that we were not able to calculate
every extant claims-based CoC measure. We were, however, able to recreate the two
measures most commonly used in assessments of health outcomes (i.e., Usual Provider of
Care and the Continuity of Care Index). Another limitation is that the cross-sectional
design of the survey made it impossible to use time-dependent patient-reported CoC
measures in our analyses, thereby limiting our ability to adequately tease out how adverse
health may affect continuity over time. Yet another limitation is that we were only able
to use the survey data and claims from respondents in FFS Medicare due to differential
reporting requirements in managed care Medicare plans. Because choice of providers
may be limited within some (especially closed panel) managed Medicare plans, these findings may not
generalize to beneficiaries in Medicare Advantage plans. Finally, these findings relate to
the experiences of older Medicare beneficiaries and may not generalize to younger
people.
Continuity of care has long been advocated as an integral part of the delivery of
quality primary care, and several studies have evaluated its effects on outcomes with mixed
results. In part, this heterogeneity stems from the measures used for assessing continuity.
Most studies have used administrative claims to tap provider continuity through visit-
based utilization. Few studies have included patient experiences of continuity when
assessing its relationship to health outcomes, despite the increased advocacy for including
the patient experience. The results of our study arm policymakers and health system
evaluators with the knowledge that the patient experience and the provider experience
(through billing claims) of continuity have different effects on several important health
and health service utilization outcomes for Medicare beneficiaries. In particular, the
association of high interpersonal continuity (which cannot be assessed from billing
claims) with reduced risk of ED use and preventable hospitalization suggests that health
care reform components embodied in the PCMH to enhance a strong patient-provider
relationship over time for older adults are meritorious.
CHAPTER 5
DISCUSSION AND CONCLUSION
Continuity of care has been studied for decades, but several aspects of the concept
warrant further consideration and study. On the one hand, there is general acceptance
that continuity of care is an essential principle underlying quality primary care, chronic
care, and geriatric medicine (IOM, 1996; Starfield et al., 2005). On the other hand, there
is little agreement about how best to assess continuity’s multidimensional components.
This leaves several crucial questions unanswered. Is the perspective of the patient
important when evaluating continuity? If so, what components of the patient experience
should be considered when evaluating continuity, especially among older adults? Can a
non-burdensome approach be developed for measuring those components using patient
self-reports? Are patient experiences adequately reflected when continuity of care
measures are derived from administrative claims data? And perhaps most important of
all, is the continuous long-term relationship between patient and provider, regardless of
how it is measured, related to improved health outcomes and health service utilization?
These questions are not new (Saultz, 2003; Jee & Cabana, 2006; Carrier et al., 2009), but
finding answers to them may help inform policymakers currently posing strategies and
targeting scarce resources towards interventions aimed at improving provider continuity
(Carrier et al., 2009; Gupta & Bodenheimer, 2013).
This dissertation explored several of these unanswered questions among Medicare
beneficiaries. In the three analytical chapters (Chapters 2, 3, and 4), I examined
continuity of care from the perspective of the patient, from the perspective of the provider (using administrative claims data on patient visits as a proxy), and in terms of whether continuity is associated with health outcomes and use of health care services. Given concerns about
how continuity is assessed and which of its components are important to patients,
providers, and health outcomes, my results make significant contributions to health
services research, and to informing policymakers as they implement the health reform
initiatives in which continuity plays a key role.
Overview of the Studies’ Findings
In Chapter 2, the main objective was to evaluate a theoretically-derived, patient-
reported continuity of care model for older adults. To do this, we used patient reports
from 2,620 Medicare beneficiaries who completed all of the continuity-related items on
the 2004 National Health and Health Services Use Questionnaire (NHHSUQ). With
those data we empirically evaluated a multidimensional model of the patient experience
of both longitudinal and interpersonal continuity.
Evidence from Chapter 2 confirmed that a thirteen-item scale successfully tapped
both the longitudinal and interpersonal dimensions of continuity as experienced by
Medicare beneficiaries. This patient-reported continuity measure had four subscales,
good internal reliability, and construct validity. Moreover, it was factorially invariant
across sex, race, Medicare plan type, and general health status. The findings from this
Chapter provide initial answers to two of the unresolved questions in the continuity
literature. First, the patient experience of continuity can be reliably and validly measured
in a non-burdensome, self-reported scale. Second, while longitudinal continuity may be
evaluated from either administrative billing claims or patient experiences, interpersonal
continuity (the assessment of the patient-provider relationship established at the care
visits) must be measured based on the patient’s experiences.
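The "good internal reliability" reported for the thirteen-item scale is conventionally summarized with Cronbach's alpha, which can be computed directly from item-level responses. A minimal, self-contained sketch using made-up data (these are not the NHHSUQ items or the dissertation's analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as a list of item-score columns
    (one list of respondent scores per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical responses from 4 respondents to 3 items
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # 0.82
```

Values near or above 0.8, as in this toy example, are the usual benchmark for "good" internal reliability of a subscale.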
In Chapter 3, the patient-reported measure of continuity developed in Chapter 2
was compared to several extant, claims-based measures of continuity. The purpose was
to see how closely claims-based indices reflect the patient experience. This analysis was
restricted to the 1,219 NHHSUQ survey participants enrolled in traditional, fee-for-
service Medicare. By examining the outpatient visits evidenced in their claims, we were
able to replicate fifteen claims-based continuity indices for comparison to the patient-
reported experience.
The findings from this Chapter showed that, in general, most claims-based
assessments of continuity of care did not relate to patient perceptions of their interactions
with their provider. This was the first comprehensive empirical comparison of whether
claims-based, provider-proxy measures of continuity could adequately assess the quality
of the patient-provider interaction (the essence of good continuity of care). Two findings
from this analysis filled important gaps in the continuity of care literature. First, the
importance of patient-reported interpersonal continuity and the disjuncture between
claims-based measures and patient perceptions made it clear that valid approaches for
assessing continuity of care must include both claims-based visit continuity indicators
and patient-reported measures about their visit experiences. Second, there will be
situations where scarce resources make using prospectively-gathered patient-reports
prohibitive, and secondary analyses must rely solely on administrative data. In these
cases, claims-based proxies could be calculated over at least a two-year period, to provide a more stable estimate of the patient experience, and restricted to claims with evaluation and management codes, to better reflect the types of visits patients consider when evaluating their care experiences with their provider.
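Operationally, that recommendation amounts to filtering claim lines to evaluation-and-management procedure codes within a two-year window before computing any continuity index. A hedged sketch of this preprocessing step (the record layout is an assumption, and only the office/outpatient subset of E&M CPT codes is shown):

```python
from datetime import date

# Office/outpatient E&M CPT codes 99201-99215 (illustrative subset;
# the full E&M section of CPT spans additional ranges).
EM_CODES = {f"992{n:02d}" for n in range(1, 16)}

def em_visits_in_window(claims, start, end):
    """Keep claim lines with an E&M procedure code whose service date falls
    in [start, end]; return the provider sequence in chronological order."""
    kept = [c for c in claims
            if c["cpt"] in EM_CODES and start <= c["date"] <= end]
    return [c["provider"] for c in sorted(kept, key=lambda c: c["date"])]

claims = [
    {"provider": "A", "cpt": "99213", "date": date(2005, 3, 1)},
    {"provider": "B", "cpt": "85025", "date": date(2005, 4, 2)},  # lab, dropped
    {"provider": "A", "cpt": "99214", "date": date(2006, 9, 5)},
]
print(em_visits_in_window(claims, date(2005, 1, 1), date(2006, 12, 31)))
# ['A', 'A']
```

The resulting provider sequence is what a visit-based continuity index would then be computed over.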
The purpose of Chapter 4 was to better understand which components of
continuity of care, if any, improved health outcomes for older adults. We examined the
relationships between longitudinal and interpersonal continuity of care using both patient-
reported and provider-based (proxy) measures with emergency department visits,
potentially preventable and non-preventable hospitalizations, and mortality. This
analysis was also restricted to the 1,219 NHHSUQ survey participants enrolled in
traditional, fee-for-service Medicare, but involved linkage to their Medicare claims for
2005 through 2009 to prospectively evaluate the effect of continuity of care over a five-
year period.
This was an innovative analysis because few measures of the patient experience
of longitudinal and interpersonal continuity of care existed until recently (Bentler et al.,
2013a; Uijen et al., 2011; Gulliford et al., 2006), and no one had previously compared the
associations of both patient-reported and claims-based continuity measures with health
outcomes in the same study. The results indicated that the patient and provider
experiences of continuity had different effects on several important health outcomes
among Medicare beneficiaries. Higher levels of interpersonal continuity, assessed using patient reports, were significantly and substantially associated with reduced risks of
emergency department use and preventable hospitalization, even after adjusting for
potential confounders. Claims-based indicators of continuity of care were not found to be
beneficially associated with reduced preventable hospitalization and were associated with
an increased likelihood of death.
These findings have profound implications for policymakers and others involved
in deciding on health care delivery system changes designed to promote continuity of
care. For example, based on the results of a recent study showing an association between
two different claims-based continuity measures and a reduction in preventable
hospitalizations (Nyweide et al., 2013), a companion commentary suggested primary care
practices should regularly measure continuity of care and provide feedback to clinicians,
increase the number of days each clinician sees patients, institute same-day or next-day
scheduling, and train front desk staff to enforce continuity through visit scheduling
(Gupta & Bodenheimer, 2013). While these changes may promote visit-based, longitudinal continuity with a provider, they may have little effect on what happens with the patient during the visit, which is what my findings indicate has the
greater impact on reductions in health service utilization. In fact, some proposed changes
could be damaging to the provider-patient relationship, especially if visit length must be
shortened to accommodate more flexible scheduling. Focusing solely on promoting
longitudinal continuity without considering the impact on the patient-provider experience
could therefore prove to be counterproductive to improving health care quality.
Study Limitations
There are several limitations of this research that warrant discussion. They can be
classified under three main categories: 1) the assessment of patient-reported continuity, 2)
the assessment of claims-based continuity, and 3) the generalizability of the analytic
sample. Each is discussed below.
Because this is a secondary analysis of existing data, I had no input into the
potential item-pool with which to develop the patient-reported measures of care
continuity. As such, I was unable to include questions that would tap into information
continuity or management continuity. Future work evaluating continuity should include
assessments of the experience that patients have with how their medical records are used
for their care (informational continuity) and how their care is managed among their
various providers (management continuity) in order to enhance the content validity of the
patient-reported continuity of care assessments.
The ability to fully evaluate the longitudinal and interpersonal domains of
continuity was also constrained by the data. Very few survey items were available for
each subscale. This was especially problematic for the two longitudinal subscales, with
only three items for Provider Duration and two items for Care Site (with both items
coming from the same survey question, and one of those having a dichotomous
response). This limitation increased the likelihood of measurement error when using this
patient-reported measure in the analyses. Furthermore, while the interpersonal continuity
questions map well to the patient-provider continuity and relationship questions from
other well-known surveys such as the Consumer Assessment of Health Plan Survey
(Agency for Healthcare Research and Quality, 2011) and the Medicare Current
Beneficiary Survey (US Department of Health and Human Services Health Care
Financing Administration, 2012), we were unable to fine-tune the questions or control the
number of items used to assess the Instrumental and Affective constructs of interpersonal
continuity.
The timeframe surrounding the collection of the survey data limited the patient-
reported measure for two reasons: 1) the survey information was collected at only a
single point in time (2004) and 2) the reference period for the continuity questions was
limited to experiences recalled in the past year. Asking the patient to remember their care
over a longer time frame (e.g., over the past two or three years), would have provided an
opportunity to enhance the assessment of longitudinal continuity. However, even with
the one-year timeframe, there is the possibility that survey respondents may not
accurately recall their experiences with their provider, thereby affecting the interpretation
of interpersonal continuity. And, that potential only increases when asking patients to
remember events even farther back in time. Furthermore, continuity data collected at two or more time points would have strengthened the findings by providing the
ability to calculate time-dependent continuity of care measures from the patient
perspective, while limiting the potential recall bias.
Although using administrative claims allowed the creation of proxy measures of
provider continuity, I was unable to assess continuity (particularly interpersonal
continuity) directly from the patient’s providers. Thus, interpreting the interpersonal
continuity of care findings is limited to how older adult patients perceived their
relationship with their care provider, which may be quite different from how their
provider perceived the visit experience. In addition, I was unable to replicate all claims-
based continuity of care measures from the extant literature. Nonetheless, I was able to
create the most frequently cited and commonly used claims-based measures which
supports the relevance of these findings to current practice.
The analytic sample for this work was limited by two study-specific factors: the original sampling frame and the potential for significant selection bias. Because of the timing of the original survey, continuity of care could only be assessed prior to 2004, which is when continuity of care was being heavily promoted in the literature (Reid et al., 2002; Haggerty et al., 2003; Saultz, 2003; Jee & Cabana, 2006). Thus, it is unclear how the use of more recent survey
data (after efforts to increase continuity were implemented) might have altered the
findings.
Arguably the most serious limitation in this work is the potential for selection bias
because of restrictions to the original sample. The original sample was reduced in four
ways: a) sampling issues at the time of the survey mailing, b) the relatively low response
rate to the survey, c) the exclusion of respondents who did not answer all of the items
when evaluating the patient-reported measure in Chapter 2, and d) the exclusion of
respondents in managed Medicare at the time of the survey from the analyses reported in
Chapters 3 & 4.
The NHHSUQ survey was initially mailed to a stratified random sample of 6,060
community-dwelling Medicare beneficiaries. Potential survey respondents were removed
from the sample as ineligible because they were found to be non-community residing,
had moved out of the geographic area targeted in the sampling frame, or had died. Thus,
the sample was reduced to 5,697 individuals eligible to respond to the survey. Of these
individuals, 53% responded to the survey which reduced the sample size to 2,997.
Unfortunately, we did not have access to information about the non-respondents for
assessing the potential impact of nonresponse bias, which limits the generalizability of
the overall findings.
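The attrition figures above can be checked arithmetically (the ineligible count is simply the difference implied by the text):

```python
mailed = 6060                        # stratified random sample initially mailed
eligible = 5697                      # after removing ineligible beneficiaries
ineligible = mailed - eligible       # 363 removed before fielding
respondents = 2997                   # completed surveys

response_rate = respondents / eligible
print(ineligible)                    # 363
print(f"{response_rate:.0%}")        # 53%
```

The 53% figure in the text is thus the response rate among eligible beneficiaries, not among all beneficiaries originally mailed the survey.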
In Chapter 2, the analytic sample was further reduced. We limited the evaluation
of patient-reported continuity to the 2,620 individuals who had complete data from the 13
continuity-related survey items. In this instance, we were able to compare those with
complete data to those with incomplete data and found that there were slightly more
males (51% v. 45%), whites (38% v. 30%), and educated individuals (65% v. 44%) and
slightly fewer blacks (30% v. 38%) in the analytic sample. While these differences in
demographic characteristics, perhaps with the exception of education level, are not
particularly striking, they do warrant consideration when interpreting the generalizability
of the patient experience of continuity of care.
Finally, the analytic sample was further limited in Chapters 3 and 4 by the
traditional and necessary restriction of the analyses to the 1,219 beneficiaries enrolled in
fee-for-service Medicare at the time of the survey. The NHHSUQ was designed so that
around half of those sampled would be enrolled in managed Medicare and half in fee-for-
service Medicare. The managed Medicare respondents had to be excluded from the
analyses in Chapters 3 & 4, however, because they would either have missing or
incomplete Part B (non-Institutional) Medicare claims due to the different billing
reporting requirements for managed Medicare plans (Asper, 2007).
The experience of continuity of care could be different for those in managed
Medicare where the choice of providers (especially in closed panel plans) may be limited,
and the primary gatekeeper may be more likely to be either a physician assistant or
nurse practitioner. On the one hand, a limited provider network could promote continuity
with in-network providers because of the financial disincentive for seeking care out-of-
network. On the other hand, for those who more recently switched into a managed
Medicare plan, a limited provider network could be the reason for discontinuity if their
usual provider is no longer on the plan's panel of providers. And, if a physician assistant or nurse practitioner is the primary gatekeeper, the meaning of the patient-reported items may differ (i.e., factorial invariance may not hold). The magnitude of these
limitations, however, is somewhat diminished by the fact that the patient-reported
continuity of care model was factorially invariant between those in fee-for-service and
those in managed Medicare. Nonetheless, the overall findings from this research may not
generalize to those enrolled in managed Medicare (Medicare Advantage) plans.
Policy Implications
The limitations discussed above notwithstanding, my dissertation research makes important contributions to the literature on continuity of care and has critical implications for health policy. Providing continuity of care is commonly accepted as a
pre-requisite for the provision of quality health care, particularly for primary care.
Indeed, meeting a standard for providing continuity of care is one of the quality metrics
for patient-centered medical home recognition (National Committee for Quality
Assurance, 2011). Moreover, most primary care organizations, including the American
Academy of Family Physicians, the American College of Physicians, and the Patient-
Centered Primary Care Collaborative are staunch advocates for continuity of care
because of the assumption that it might be the magic bullet that leads to achieving the
IOM’s triple aims of better care, better health, and lower costs (Wolff et al., 2002;
Nutting et al., 2003; Saultz & Lochner, 2005; Wolinsky et al., 2010).
The traditional method for evaluating whether continuity of care has been provided is to use administrative claims to identify the particular provider with whom the patient had a preponderance of visits (O'Malley et al., 2008; Stanek & Takach, 2010).
The results presented in my dissertation suggest that such claims-based assessments of
continuity do not provide an adequate representation of the patient-provider relationship
developed during physician visits (interpersonal continuity). This is especially
problematic because it is the interpersonal aspect of the continuous care relationship that
has been shown to have the strongest association with reduced risk of emergency
department use and preventable hospitalizations. These findings support the perspectives
of the Patient-Centered Outcomes Research Institute, the National Committee for Quality
Assurance, and the Patient-Centered Primary Care Collaborative in advocating for the
inclusion of patient experiences when evaluating health care quality and outcomes. In
fact, the evidence here indicates that, for Medicare beneficiaries, assessments of patient experiences must be included alongside the review of administrative claims in order to adequately evaluate whether an organization promotes continuity for its patients.
Providing continuity of care in the current health care reform climate may face the same promise and pitfalls that managed care faced when it was touted as the magic bullet
in the late 1980s. At that time managed care was thought to be the best method for
reducing high health care costs by negotiating lower fees with providers in exchange for
increased patient panels (such selective contracting led to limited provider networks for
patients), developing standard treatment protocols for particular diseases, using financial
incentives to encourage efficiency, and using disincentives to discourage waste in care
provision (Shi & Singh, 2004). In the managed care environment, primary care providers
acted as the “gatekeepers” for determining diagnostic and treatment plans (Shi & Singh,
2004).
The effects of primary care physicians as gatekeepers on the patient-provider
relationship varied. In one study, privately insured individuals enrolled in closed-panel
health maintenance organizations (the most integrated form of managed care) reported
poorer communication with and lower trust in their providers (Lake, 1999).
Discontinuity with providers was not uncommon when newly enrolled patients had
limited provider choice and had to switch to new, in-network, primary care providers
(Davis, Collins, Schoen, & Morris, 1995; Flocke et al., 1997; Mechanic & Schlesinger,
1996). Managed care plans with more provider choice and fewer gatekeeping
arrangements were more likely to actually enhance continuity in the patient-provider
relationship (Forrest, Shi, von Schrader, & Ng, 2002).
In the current environment of health care reform, similar initiatives are again at
the forefront of approaches to change health care delivery. Patient-centered medical
homes, with primary care as the hub for quality health care delivery, are being
implemented in over 40 states. Thus, the issues regarding the consequences of the
gatekeeper model on the patient-provider relationship and continuity of care will again be
in question. Accountable care organizations (ACOs) are another approach to health care
reform. ACOs are groups of health care providers who band together to coordinate care
for their patients. The goal is to provide efficient care in order to participate in the shared
savings that accrue. ACOs are promoted by the Affordable Care Act and have rapidly
been forming within Medicare, Medicaid, and private insurance programs (McClellan,
McKethan, Lewis, Roski, & Fisher, 2010). The possibility for discontinuity in the
patient-provider relationship as a result of ACOs being formed that may limit provider
choice is, therefore, a potential problem once again. And, with the widespread
implementation of Affordable Care Act-mandated Health Insurance Marketplaces, it is
possible that some insurers will offer health plans with limited provider networks as less
expensive options for consumers, which again may lead to forced discontinuity in care, at
least initially.
The findings and conclusions highlighted in my work provide a new knowledge
base to inform policymakers and researchers about aspects of continuity of care that are
important to maintain, not only for the health of older adults, but for the appropriate use
of health care resources. I found that the patient experience of interpersonal continuity
was clearly associated with reductions in health service utilization. At the same time, that
experience had little effect on mortality. In part, my results contrast with recently reported findings of minimal reductions in preventable hospitalizations associated with claims-based indicators of continuity of care (Nyweide et al., 2013). Indeed, using the same
claim-based measures I was not able to replicate those findings. Thus, the cumulative
evidence supporting continuity of care as a key component in the implementation of
potentially costly changes to the primary care delivery system (Gupta & Bodenheimer,
2013) in this rapidly changing and unpredictable health care environment may be less
than compelling.
My next steps in studying continuity of care will focus on providing a clearer
overall picture of its value within the context of our rapidly changing health care delivery
system. Going forward my research agenda will focus on five themes. The first is to
pursue a better understanding of what factors might affect continuity of care ratings. For
example, how does the reason for the visit (wellness, prevention, acute care, etc.), the
visit type (in-person, phone, email consult, etc.) and the stability of the patient’s health
status influence continuity of care? The second theme is to combine patient-reported and
claims-based assessments in the same model to evaluate outcomes. The purpose of this
avenue of research would be to better understand whether one type of measure is a stronger predictor of health service utilization than the other. The third theme is to evaluate the association of continuity of care with additional health outcomes, including physical and mental health and functional ability. The fourth involves taking a more granular approach to evaluating the association of continuity of care with health service utilization, using different types of hospitalizations (elective vs. mandatory, or all-cause vs. recurrent-episode readmissions), emergency department visits (potentially avoidable vs. non-discretionary), and preventive service use (vaccination and screening rates) as outcomes. The final theme in my research agenda will be to evaluate the association of increased continuity of care with Medicare payments.
REFERENCES

Agency for Healthcare Research and Quality. (2011). Expanded 12-month survey with CAHPS patient-centered medical home (PCMH) items. Rockville, MD: Agency for Healthcare Research and Quality.

American Medical Association. (2003). Current Procedural Terminology 2004: Professional Edition. Chicago, IL: American Medical Association.

Andersen, R. M. (1968). A Behavioral Model of Families' Use of Health Services. Chicago: Center for Health Administration Studies.

Andersen, R. M. (1995). Revisiting the behavioral model and access to medical care: Does it matter? Journal of Health and Social Behavior, 35(1), 1-10.

Anderson, G. (2010). Chronic Care: Making the Case for Ongoing Care. Princeton, NJ: Robert Wood Johnson Foundation.

Asper, F. (2007). Medicare Managed Care Enrollees and the Medicare Utilization Files. Technical Brief, ResDAC Pub. No. TN-009. Minneapolis, MN: Research Data Assistance Center, University of Minnesota.

Ben-Sira, Z. (1976). The function of the professional's affective behavior in client satisfaction: A revised approach to social interaction theory. Journal of Health and Social Behavior, 17(1), 3-11.

Ben-Sira, Z. (1980). Affective and instrumental components in the physician-patient relationship: An additional dimension of interaction theory. Journal of Health and Social Behavior, 21(2), 170-180.

Bentler, P. M., and Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588-606.

Bentler, S. E., Morgan, R. O., Virnig, B. A., and Wolinsky, F. D. (2013a). Evaluation of a patient-reported continuity of care measure for older adults. Quality of Life Research. Published online ahead of print on July 19, 2013. doi:10.1007/s11136-013-0472-z

Bentler, S. E., Morgan, R. O., Virnig, B. A., and Wolinsky, F. D. (2013b). Do claims-based continuity of care measures reflect the patient perspective? Medical Care Research and Review. Published online ahead of print on October 24, 2013. doi:10.1177/1077558713505909

Bice, T. E., and Boxerman, S. B. (1977). A quantitative measure of continuity of care. Medical Care, 15, 347-349.
Bindman, A. B., Grumbach, K., Osmond, D., Komaromy, M., Vranizan, K., Lurie, N., Billings, J., and Stewart, A. (1995). Preventable hospitalizations and access to health care. Journal of the American Medical Association, 274(4), 305-311.

Bitton, A., Martin, C., and Landon, B. E. (2010). A nationwide survey of patient centered medical home demonstration projects. Journal of General Internal Medicine, 25(6), 584-592.

Boyer, C. A., and Lutfey, K. E. (2010). Examining critical health policy issues within and beyond the clinical encounter: Patient-provider relationships and help-seeking behaviors. Journal of Health and Social Behavior, 51(S), S80-S93.

Breslau, N., and Reeb, K. G. (1975). Continuity of care in a university-based practice. Journal of Medical Education, 50, 965-969.

Burton, R. A., Devers, K. J., and Berenson, R. A. (2011). Patient-centered medical home recognition tools: A comparison of ten surveys' content and operational details. Washington, DC: The Urban Institute, Health Policy Center.

Byrne, B. M. (2001). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. Mahwah, NJ: Lawrence Erlbaum Associates.

Cabana, M. D., and Jee, S. H. (2004). Does continuity of care improve patient outcomes? The Journal of Family Practice, 53(12), 974-980.

Carrier, E., Gourevitch, M. N., and Shah, N. R. (2009). Medical homes: Challenges in translating theory into practice. Medical Care, 47(7), 714-722.

Center for Policy Studies in Family Medicine and Primary Care. (November, 2007). The Patient Centered Medical Home: History, seven core features, evidence and transformational change. American Academy of Family Physicians: The Robert Graham Center.

Child, D. (1990). The Essentials of Factor Analysis, second edition. London: Cassell Educational Limited.

Christakis, D. A., Kazak, A. E., Wright, J. A., Zimmerman, F. J., Bassett, A. L., and Connell, F. A. (2004). What factors are associated with achieving high continuity of care? Family Medicine, 36, 55-60.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Davis, K., Collins, K. S., Schoen, C., and Morris, C. (1995). Choice matters: Enrollees' views of their health plans. Health Affairs (Millwood), 14, 100-112.
De Maeseneer, J. M., De Prins, L., Gosset, C., and Heyerick, J. (2003). Provider continuity in family medicine: Does it make a difference for total health care costs? Annals of Family Medicine, 1, 144-148.

Doescher, M. P., Saver, B. G., Fiscella, K., and Franks, P. (2001). Racial/ethnic inequities in continuity and site of care: Location, location, location. HSR: Health Services Research, 36(6), 78-89.

Ejlertsson, G., and Berg, S. (1984). Continuity of care measures: An analytic and empirical comparison. Medical Care, 22, 231-239.

Elixhauser, A., Steiner, C., Harris, D. R., and Coffey, R. M. (1998). Comorbidity measures for use with administrative data. Medical Care, 36, 8-27.

Eriksson, E. A., and Mattsson, L. G. (1983). Quantitative measurement of continuity of care: Measures in use and an alternative approach. Medical Care, 21(9), 858-875.

Ettner, S. L. (1996). The relationship between continuity of care and the health behaviors of patients: Does having a usual physician make a difference? Medical Care, 37, 547-555.

Flocke, S. A., Stange, K. C., and Zyzanski, S. J. (1997). The impact of insurance type and forced discontinuity on the delivery of primary care. Journal of Family Practice, 45(2), 129-135.

Forrest, C. B., Shi, L., von Schrader, S., and Ng, J. (2002). Managed care, primary care, and the patient-practitioner relationship. Journal of General Internal Medicine, 17, 270-277.

Freeman, G. K., Olesen, F., and Hjortdahl, P. (2003). Continuity of care: An essential element of modern general practice? Family Practice, 20(6), 623-627.

Ginsburg, P. B., Maxfield, M., O'Malley, A. S., Peikes, D., and Pham, H. H. (2008). Making Medical Homes Work: Moving from Concept to Practice. Policy Perspective No. 1, December 2008. Washington, DC: Mathematica Policy Research, Inc.

Gray, B. M., Weng, W., and Holmboe, E. S. (2012). An assessment of patient-based and practice infrastructure-based measures of the Patient-Centered Medical Home: Do we need to ask the patient? Health Services Research, 47(1), 4-21.

Gruneir, A., Silver, M. J., and Rochon, P. A. (2011). Emergency department use by older adults: A literature review on trends, appropriateness, and consequences of unmet health care needs. Medical Care Research and Review, 68(2), 11-55.
Gulliford, M. C., Naithani, S., and Morgan, M. (2006). Measuring continuity of care in diabetes mellitus: An experience-based measure. The Annals of Family Medicine, 4, 548-555.

Gupta, R., and Bodenheimer, T. (2013). How primary care practices can improve continuity of care. Journal of the American Medical Association Internal Medicine. Published online September 16, 2013. doi:10.1001/jamainternmed.2013.7341

Guttman, L. (1954). Some necessary conditions for common factor analysis. Psychometrika, 19, 149-161.

Haggerty, J. L., Reid, R. H., Freeman, G. K., Starfield, B. H., Adair, C. E., and McKendry, R. (2003). Continuity of care: A multidisciplinary review. British Medical Journal, 331, 1219-1221.

Hu, L., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Institute of Medicine (IOM). (1996). Primary Care: America's Health in a New Era. Washington, DC: National Academy of Sciences.

Institute of Medicine (IOM). (2003). Priority Areas for National Action: Transforming Health Care Quality. Washington, DC: National Academy of Sciences.

Institute of Medicine (IOM). (2006). Hospital-Based Emergency Care: At the Breaking Point. Washington, DC: National Academy of Sciences.

Institute of Medicine (IOM). (2012). Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academy of Sciences.

Jee, S. H., and Cabana, M. D. (2006). Indices for continuity of care: A systematic review of the literature. Medical Care Research and Review, 63(2), 158-188.

Jencks, S. F., Williams, M. V., and Coleman, E. A. (2009). Rehospitalizations among patients in the Medicare Fee-for-Service program. New England Journal of Medicine, 360, 1418-1428.

Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.

Kalbfleisch, J., and Prentice, R. (2002). Statistical Analysis of Failure Time Data, 2nd ed. New York, NY: John Wiley & Sons, Inc.
Keith, T. Z. (1997). Using confirmatory factor analysis to aid in understanding the constructs measured by intelligence tests. In D. P. Flanagan, J. L. Genshaft, and P. L. Harrison (Eds.), Contemporary Intellectual Assessment: Theories, Tests, and Issues (pp. 373-402). New York: Guilford Press.

Lake, T. (1999). Do HMOs make a difference? Consumer assessments of health care. Inquiry, 36, 411-418.

Lambrew, J., DeFriese, G., Carey, T., Ricketts, T., and Biddle, A. (1996). The effects of having a regular doctor on access to primary care. Medical Care, 34, 138-146.

Leleu, H., and Minvielle, E. (2013). Relationship between longitudinal continuity of primary care and likelihood of death: Analysis of national insurance data. PLOS One, 8(8), e71669. doi:10.1371/journal.pone.0071669

Love, M. M., Mainous III, A. G., Talbert, J. C., and Hager, G. L. (2004). Continuity of care and the physician-patient relationship. The Journal of Family Practice, 49(11), 998-1004.

Magill, M., and Senf, J. (1987). A new method for measuring continuity of care in family practice residencies. The Journal of Family Practice, 24, 165-168.

Mainous III, A. G., Baker, R., Love, M. M., Pereira Gray, D. J., and Gill, J. M. (2001). Continuity of care and trust in one's physician: Evidence from primary care in the United States and the United Kingdom. Family Medicine, 33, 22-27.

Mainous III, A. G., and Gill, J. M. (1998). The importance of continuity of care in the likelihood of future hospitalization: Is site of care equivalent to a primary clinician? American Journal of Public Health, 88, 1539-1541.

McClellan, M., McKethan, A. N., Lewis, J. L., Roski, J., and Fisher, E. S. (2010). A national strategy to put accountable care into practice. Health Affairs, 29(5), 982-990.

Mechanic, D., and Schlesinger, M. (1996). The impact of managed care on patients' trust in medical care and their physicians. Journal of the American Medical Association, 275, 1693-1697.

Medicare Payment Advisory Commission (MedPAC). (2005). A path to bundled payment around a rehospitalization. In Report to the Congress: Reforming the Delivery System (pp. 83-103). Washington, DC: Medicare Payment Advisory Commission.

Millsap, R. E. (2011). Statistical Approaches to Measurement Invariance. New York: Routledge.
Morgan, R. O., Teal, C. R., Hasche, J. C., Petersen, L. A., Byrne, M. M., Paterniti, D. A., and Virnig, B. A. (2008). Does poorer familiarity with Medicare translate into worse access to health care? Journal of the American Geriatrics Society, 56, 2053-2060.

National Committee for Quality Assurance (NCQA). (2011). Standards for Patient-Centered Medical Home (PCMH) 2011. Washington, DC: NCQA.

National Committee for Quality Assurance (NCQA). (2012). PCMH: Distinction in Patient Experience Reporting. Washington, DC: NCQA.

National Partnership for Women and Families. (2009). Principles for Patient- and Family-Centered Care: The Medical Home from the Consumer Perspective. Washington, DC: National Partnership for Women and Families.

Nutting, P. A., Goodwin, M. A., Flocke, S. A., Zyzanski, S. J., and Stange, K. C. (2003). Continuity of primary care: To whom does it matter and when? Annals of Family Medicine, 1(3), 149-155.

Nyweide, D. J., Anthony, D. L., Bynum, J. P. W., Strawderman, R. L., Weeks, W. B., Casalino, L. P., and Fisher, E. S. (2013). Continuity of care and the risk of preventable hospitalization in older adults. Journal of the American Medical Association Internal Medicine. Published online September 16, 2013. doi:10.1001/jamainternmed.2013.10059

O'Malley, A. S., Peikes, D., and Ginsburg, P. B. (2008). Qualifying Practices as Medical Homes. Policy Perspective No. 1, December 2008. Washington, DC: Mathematica Policy Research, Inc.

Pandhi, N., Bowers, B., and Chen, F. (2007). A comfortable relationship: A patient-derived dimension of ongoing care. Family Medicine, 39(4), 266-273.

Parchman, M. L., Pugh, J. A., Hitchcock Noel, P., and Larme, A. C. (2002). Continuity of care, self-management behaviors, and glucose control in patients with Type 2 diabetes. Medical Care, 40, 137-144.

Patient-Centered Outcomes Research Initiative (PCORI). (2012). Patient-Centered Outcomes Research. Washington, DC: Patient-Centered Outcomes Research Initiative.

Phillips, K. A., Mayer, M. L., and Aday, L. (2000). Barriers to care among racial/ethnic groups under managed care. Health Affairs, 19(4), 65-75.

Prevention Quality Indicators #8 Technical Specifications. (2011). AHRQ Quality Indicators, Version 4.3, August 2011. Rockville, MD: Agency for Healthcare Research and Quality.
Quan, H., Sundararajan, V., Halfon, P., Fong, A., Burnand, B., Luthi, J.-C., … Ghali, W. A. (2005). Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Medical Care, 43, 1130-1139.

Raju, N. S., Laffitte, L. J., and Byrne, B. M. (2002). Measurement equivalence: A comparison of methods based on confirmatory factor analysis and item response theory. Journal of Applied Psychology, 87(3), 517-529.

Reid, R. J., Haggerty, J. L., and McKendry, R. (2002). Defusing the confusion: Concepts and measures of continuity of healthcare. Ottawa, ON: Canadian Health Services Research Foundation.

Ridd, M. J., Lewis, G., Peters, T. J., and Salisbury, C. (2011). Patient-Doctor Depth-of-Relationship Scale: Development and validation. Annals of Family Medicine, 9(6), 538-545.

Robles, J. (1995). Confirmation bias in structural equation modeling. Structural Equation Modeling, 3, 73-83.

Rodriguez, H. P., Marshall, R. E., Rogers, W. H., and Safran, D. G. (2008). Primary care physician visit continuity: A comparison of patient-reported and administratively derived measures. Journal of General Internal Medicine, 23(9), 1499-1502.

Rodriguez, H. P., Rogers, W. H., Marshall, R. E., and Safran, D. G. (2007a). The effects of primary care physician visit continuity on patients' experiences with care. Journal of General Internal Medicine, 22, 787-793.

Rogers, J., and Curtis, P. (1980). The concept and measurement of continuity of primary care. American Journal of Public Health, 70, 122-127.

Saultz, J. W. (2003). Defining and measuring interpersonal continuity of care. Annals of Family Medicine, 1(3), 134-143.

Saultz, J. W., and Albedaiwi, W. (2004). Interpersonal continuity of care and patient satisfaction: A critical review. Annals of Family Medicine, 2, 445-451.

Saultz, J. W., and Lochner, J. (2005). Interpersonal continuity of care and care outcomes: A critical review. Annals of Family Medicine, 3, 159-166.

Scholle, S. H., Torda, P., Peikes, D., Han, E., and Genevro, J. (2010). Engaging Patients and Families in the Medical Home. (Prepared by Mathematica Policy Research under Contract No. HHSA290200900019I TO2.) AHRQ Publication No. 10-0083-EF. Rockville, MD: Agency for Healthcare Research and Quality.
Shi, L., and Singh, D. A. (2004). Delivering Health Care in America: A Systems Approach, 3rd ed. Sudbury, MA: Jones and Bartlett Publishers.

Smedby, O., Eklund, G., Anders Eriksson, E., and Smedby, B. (1986). Measures of continuity of care: A register-based correlation study. Medical Care, 24, 511-518.

Stanek, M., and Takach, M. (2010). Evaluating the patient-centered medical home: Potential and limitations of claims-based data. Portland, ME: National Academy for State Health Policy.

Starfield, B. (1980). Continuous confusion? American Journal of Public Health, 70, 117-118.

Starfield, B., Shi, L., and Macinko, J. (2005). Contribution of primary care to health systems and health. The Milbank Quarterly, 83(3), 457-502.

Steinwachs, D. M. (1979). Measuring provider continuity in ambulatory care: An assessment of alternative approaches. Medical Care, 17, 551-565.

Sturmberg, J. P. (2002). General practice-specific care categories: A method to examine the impact of morbidity on general practice workload. Family Practice, 19, 85-92.

Sturmberg, J. P., and Schattner, P. (2001). Personal doctoring: Its impact on continuity of care as measured by the comprehensiveness of care score. Australian Family Physician, 30, 513-518.

Tabachnick, B. G., and Fidell, L. S. (2001). Principal components and factor analysis. In Using Multivariate Statistics. Boston: Allyn and Bacon.

U.S. Department of Health and Human Services, Health Care Financing Administration. (2012). Medicare Current Beneficiary Survey, Access to Care: [United States], 2nd ICPSR release. Baltimore, MD: U.S. Department of Health and Human Services, Health Care Financing Administration [producer]; Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor].

Uijen, A. A., Schellevis, F. G., van den Bosch, W. J. H. M., Mokkink, H. G. A., van Weel, C., and Schers, H. J. (2011). Nijmegen Continuity Questionnaire: Development and testing of a questionnaire that measures continuity of care. Journal of Clinical Epidemiology, 64, 1391-1399.

US Census Bureau. (2004). Table 2a: Projected population of the United States, by age and sex: 2000 to 2050. Washington, DC: US Census Bureau.

van Walraven, C., Oake, N., Jennings, A., and Forster, A. J. (2010). The association between continuity of care and outcomes: A systematic and critical review. Journal of Evaluation in Clinical Practice, 16, 947-956.
Wagner, E. H., Austin, B. T., Davis, C., Hindmarsh, M., Schaefer, J., and Bonomi, A. (2001). Improving chronic illness care: Translating evidence into action. Health Affairs, 20(6), 64-78.

Ware, J. E., Kosinski, M., Dewey, J. E., and Gandek, B. (2001). How to Score and Interpret the Single-Item Health Status Measures: A Manual for Users of the SF-8 Health Survey. Lincoln, RI: QualityMetric Incorporated.

Wasson, J. H., Sauvigne, A. E., Mogielnicki, R. P., Frey, W. G., Sox, C. H., Gaudette, C., and Rockwell, A. (1984). Continuity of outpatient medical care in elderly men: A randomized trial. Journal of the American Medical Association, 252, 2413-2417.

Wei, I. I., Virnig, B. A., John, D. A., and Morgan, R. O. (2006). Using a Spanish surname match to improve identification of Hispanic women in Medicare administrative data. Health Services Research, 41(4), 1469-1481.

Weiss, L., and Blustein, J. (1996). Faithful patients: The effect of long-term physician-patient relationships on the cost and use of health care by older Americans. American Journal of Public Health, 86, 1742-1751.

Wolff, J. L., Starfield, B., and Anderson, G. (2002). Prevalence, expenditures, and complications of multiple chronic conditions in the elderly. Archives of Internal Medicine, 162, 2269-2276.

Wolinsky, F. D., Bentler, S. E., Liu, L., Geweke, J. F., Cook, E. A., Obrizan, M., … Wallace, R. B. (2010). Continuity of care with a primary care physician and mortality in older adults. The Journals of Gerontology, Series A, Biological Sciences and Medical Sciences, 65A(4), 421-428.

Wolinsky, F. D., Miller, T. R., Geweke, J. F., Chrischilles, E. A., An, H., Wallace, R. B., … Rosenthal, G. E. (2007). An interpersonal continuity of care measure for Medicare Part B claims analyses. The Journals of Gerontology, Series B, Psychological Sciences and Social Sciences, 62(3), S160-S168.
APPENDIX
THE 2004 NATIONAL HEALTH AND HEALTH SERVICES USE QUESTIONNAIRE
1. Overall, how would you rate your health during the past 4 weeks?
Excellent □
Very good □
Good □
Fair □
Poor □
Very poor □
2. During the past 4 weeks, how much did physical health problems limit your usual
physical activities (such as walking or climbing stairs)?
Not at all □
Very little □
Somewhat □
Quite a lot □
Could not do physical activities □
3. During the past 4 weeks, how much difficulty did you have doing your daily work, both at home and away from home, because of your physical health?
None at all □
Very little □
Somewhat □
Quite a lot □
Could not do physical activities □
4. How much bodily pain have you had during the past 4 weeks?
None □
Very little □
Somewhat □
Quite a lot □
Could not do daily work □
5. During the past 4 weeks, how much energy did you have?
Very much □
Quite a lot □
Some □
A little □
None □
6. During the past 4 weeks, how much did your physical health or emotional problems
limit your usual social activities with family or friends?
Not at all □
Very little □
Somewhat □
Quite a lot □
Could not do social activities □
7. During the past 4 weeks, how much have you been bothered by emotional problems
(such as feeling anxious, depressed or irritable)?
Not at all □
Slightly □
Moderately □
Quite a lot □
Extremely □
8. During the past 4 weeks, how much did personal or emotional problems keep you
from doing your usual work, school or other daily activities?
Not at all □
Very little □
Somewhat □
Quite a lot □
Could not do daily activities □
9. Compared to one year ago, how would you rate your health in general now?
Much better now □
Somewhat better now □
About the same □
Somewhat worse now □
Much worse now □

Respond "YES" if you have been told you have the condition or if you believe you are being treated for it.

10. Have you ever had:
a. A heart attack (myocardial infarction) or other heart related issues? □Yes □No
b. A stroke or brain damage from a blood clot in the brain? □Yes □No
c. A broken hip? □Yes □No
d. Other disabling broken bone? □Yes □No
e. High blood pressure (hypertension)? □Yes □No
f. High cholesterol or triglycerides (hyperlipidemia)? □Yes □No
g. High blood sugar (diabetes)? □Yes □No
h. Cancer (other than simple skin cancer)? □Yes □No
11. Do you currently have:
a. Arthritis or rheumatism? □Yes □No
b. Asthma, emphysema, chronic bronchitis or COPD? □Yes □No
c. Chronic low back pain? □Yes □No
d. A stomach, peptic or gastric ulcer? □Yes □No
e. Chronic, severe heartburn (acid reflux or GERD)? □Yes □No
f. Angina, coronary artery disease, congestive heart failure or a weak heart? □Yes □No
g. Memory problems that interfere with your daily activities? □Yes □No
h. Depression that interferes with your daily activities? □Yes □No
i. Trouble seeing with one or both eyes, even when wearing glasses? □Yes □No
j. Trouble hearing with one or both ears? □Yes □No
k. Prostate disease? (for men only) □Yes □No

12. Do you smoke?
□ Yes □ No

13. In the past month, how many days per week or per month did you drink ANY alcoholic beverages on average?
□ None
□ 1-3 days per month
□ 1 day per week
□ 2 days per week
□ 3 days per week
□ 4 days per week
□ 5 days per week
□ 6 days per week
□ 7 days per week

14. On the days when you drank, about how many drinks did you drink on average?
□ Never
□ 1 drink per day
□ 2 drinks per day
□ 3 drinks per day
□ 4 drinks per day
□ 5 drinks per day
□ 6 or more drinks per day

15. How tall are you? ______ feet ______ inches

16. How much do you weigh? ______ pounds
17. Thinking about the health care you receive, how would you rate the following? Please respond to EACH statement. (Response options for each item: Excellent; Very Good; Good; Fair; Poor)
a. Your health care in general
b. Your access to life saving or urgent medical care
c. Your access to routine or non-urgent medical care
d. Your ability to get affordable prescription drugs
e. Your ability to get affordable medical care
f. Thoroughness of your primary doctor's examinations
g. Accuracy of your primary doctor's diagnoses
h. The explanations you are given of medical procedures and tests
i. Your primary doctor's interest in you
j. Your primary doctor's interest in your medical problems
18. Of the places you go for medical care, where do you go most often for care if you are sick or need advice about your health? Choose ONE that you use most often.
□ Doctor's office, clinic or health center (for example, neighborhood/hospital clinic)
□ Walk-in Urgent Care Center or Emergency Room
□ United States Veterans Affairs medical facility (also called the VA)
□ There is no specific place I visit most often for care.
□ Other (Please specify)_______________________________

19. Approximately how long have you been receiving your care at this place?
□ Less than 6 months
□ 6 months to 1 year
□ 1 year to 2 years
□ 2 years to 5 years
□ 5 years or more
□ There is no specific place I normally visit for care.
20. When you go for regular medical care, is there a particular doctor that you usually visit? This can be a general doctor, a specialist doctor, an herbal medicine doctor, a nurse practitioner or a physician assistant.
□ Yes
□ No (GO TO QUESTION 22)

21. Approximately how long have you been going to this primary doctor?
□ Less than 6 months
□ 6 months to 1 year
□ 1 year to 2 years
□ 2 years to 5 years
□ 5 years to 10 years
□ 10 or more years

22. Have you changed your primary doctor within the last 12 months?
□ No, I have not made a change.
□ Yes, I changed because of a change in my healthcare needs.
□ Yes, I made a change for some other reason.
□ Yes, my doctor made a change.
□ Yes, my health plan made a change.
□ I do not have a particular doctor I usually visit.

23. How satisfied are you with your overall health care?
□ Very satisfied
□ Somewhat satisfied
□ Somewhat dissatisfied
□ Very dissatisfied

24. How comfortable are you with your primary doctor or with the providers at your usual place of care?
□ Very comfortable
□ Somewhat comfortable
□ Somewhat uncomfortable
□ Very uncomfortable
□ Not sure
□ There is no specific place I normally visit for care.
25. How knowledgeable about your health and health care is your primary doctor or the providers at your usual place of care?
□ Very knowledgeable
□ Somewhat knowledgeable
□ Not knowledgeable
□ Not sure
□ There is no specific place I normally visit for care.

26. In the past 12 months, how many times have you visited a health care provider for your own medical care? Do NOT include visits to an Urgent Care Center, Emergency Room or overnight hospital stays.
□ None □ 1 □ 2-4 □ 5-9 □ 10 or more

27. In the past 12 months, have you delayed visiting or not visited a health care provider because of the cost?
□ Yes □ No

28. In the past 12 months, how many times have you gone to an Urgent Care Center or an Emergency Room for your own medical care?
□ None □ 1 □ 2-4 □ 5-9 □ 10 or more

29. In the past 12 months, have you delayed going or not gone to an Urgent Care Center or an Emergency Room because of the cost?
□ Yes □ No
30. During the past 12 months, how many times have you been a patient in a hospital overnight or longer?
□ None □ 1 □ 2-4 □ 5-9 □ 10 or more

31. In the past 12 months, have you delayed entering or not entered a hospital for care because of the cost?
□ Yes □ No

32. In the past 12 months, did you get:
a. A flu shot?
□ No
□ Yes, from my primary place of care.
□ Yes, from some other place.

For Women:
b. A mammogram?
□ No
□ Yes, from my primary place of care.
□ Yes, from some other place.

For Men:
c. A prostate exam?
□ No
□ Yes, from my primary place of care.
□ Yes, from some other place.

33. Approximately how many different prescription drugs do you take a day?
□ None □ 1 □ 2-4 □ 5-9 □ 10 or more
34. In the past 12 months, have you delayed filling or not filled/refilled a prescription because of the cost?
□ Yes □ No

35. Please check ALL sources of coverage that you used to get your prescription drugs over the past 12 months.
□ Medicaid
□ A state, county or local program
□ Medigap supplemental insurance (including (United)/AARP Medigap coverage)
□ Supplemental insurance through a union or employer (yours or your spouse's health plan)
□ A veteran or military service-related program (the VA, CHAMPVA, CHAMPUS/TRICARE)
□ I paid some or all of my prescription drug expenses out of my own pocket. (Please do not include copayments.)
□ Other (Please specify)____________________________________
□ I do not use prescription drugs.

36. Please check the ONE source of coverage that you used to get MOST of your prescription drugs over the past 12 months.
□ Medicaid
□ A state, county or local program
□ Medigap supplemental insurance (including (United)/AARP Medigap coverage)
□ Supplemental insurance through a union or employer (yours or your spouse's health plan)
□ A veteran or military service-related program (the VA, CHAMPVA, CHAMPUS/TRICARE)
□ I paid all or most of my prescription drug expenses out of my own pocket. (Please do not include copayments.)
□ Other (Please specify)____________________________________
□ I do not use prescription drugs.

37. If you have prescription coverage through any source (for example, supplemental insurance, an HMO or the VA), did you reach a limit ("cap") on your prescription drug benefit during the past 12 months?
□ I do NOT have prescription drug coverage.
□ Yes, I reached a cap on my prescription drug coverage.
□ No, I did not reach a cap on my prescription drug coverage.
□ There is no cap.
□ I am not sure.
38. Did you use any of the following to LOWER your prescription drug costs over the past 12 months? Please check ALL that apply.
□ Mail order
□ Samples from your doctor
□ Someone else's prescription drugs
□ Internet prescription services
□ Purchase of prescription drugs from another country (e.g., Mexico, Canada)
□ Other (Please specify)____________________________________
□ None of the above
□ I do not use prescription drugs.

39. How knowledgeable are you about the new Medicare prescription drug discount cards?
□ Very knowledgeable
□ Somewhat knowledgeable
□ Not knowledgeable
□ Not sure

40. Do you have one of the new Medicare prescription discount cards?
□ Yes
□ No, but I plan to get one.
□ No
□ I do not know if I have one.

41. In an average month, what is your "out-of-pocket" prescription drug cost, including copayments?
□ $20 or less
□ $21-$50
□ $51-$100
□ $101-$200
□ More than $200
□ I do not use prescription drugs.
42. How satisfied are you with your Medicare coverage?
□ Very Satisfied
□ Somewhat Satisfied
□ Somewhat Dissatisfied
□ Very Dissatisfied

43. In addition to your Medicare coverage, what other health care coverage do you have? Check ALL items that apply.
□ Medicaid
□ A state, county or local program
□ Medigap supplemental insurance (including (United)/AARP Medigap coverage)
□ Supplemental insurance through a union or employer (yours or your spouse's health plan)
□ A veteran or military service-related program (the VA, CHAMPVA, CHAMPUS/TRICARE)
□ Other (Please specify)____________________________________
□ I do not have any supplemental health insurance or program assistance.

44. How familiar are you with Original Medicare?
□ Very familiar □ Familiar □ Unfamiliar □ Very unfamiliar

45. How familiar are you with Medicare+Choice?
□ Very familiar □ Familiar □ Unfamiliar □ Very unfamiliar
The next question asks you to compare Original Medicare to Medicare+Choice. If you feel you do not have enough information about BOTH Medicare options to form an opinion, please check “I do not have enough information.” It is important that you respond to each item.
46. In your opinion:
(For each item, check one: □ Original Medicare  □ Medicare+Choice health plan  □ There’s no difference  □ I do not have enough information)
a. Which option is more likely to provide care when you need it?  □ □ □ □
b. Which provides more choices of physicians?  □ □ □ □
c. Which offers more generous benefits?  □ □ □ □
d. Which is more affordable?  □ □ □ □
e. Which offers higher quality of care?  □ □ □ □
47. As mentioned earlier, Medicare records show that in July of this year, you were enrolled in Original Medicare. Are you still enrolled?
□ Yes, I am still enrolled in Original Medicare.
□ No, I am now enrolled in a Medicare+Choice plan. GO TO QUESTION 50
48. Did you know you had the option to enroll in a Medicare+Choice plan?
□ Yes
□ No GO TO QUESTION 50
49. The following items ask about your reasons for not enrolling in a Medicare+Choice health plan. How important was each of the following:
(For each item, check one: □ Very important  □ Somewhat important  □ Not important)
a. I prefer to choose my own doctors and/or hospital(s).  □ □ □
b. I did not like the choice of doctors or had difficulty seeing the doctors I wanted to see.  □ □ □
c. I did not like the choice of hospital(s).  □ □ □
d. They did not have a prescription drug plan or the prescription drug plan was limited.  □ □ □
e. The plan’s premiums, deductibles and/or copayments were too expensive.  □ □ □
f. I do not know anything about the plan(s) offered where I live.  □ □ □
g. Other (Please specify)__________________  □ □ □
50. Which of the following best describes you?
□ White
□ African American or Black
□ Hispanic, Spanish or Latino
□ American Indian or Alaska Native
□ Asian, Asian American or Pacific Islander
   Are you Filipino?  □ Yes  □ No
□ Other (describe):______________________________________
51. Is your biological mother of Hispanic, Spanish or Latino origin or descent?
□ Yes □ No □ I am not sure.
52. Is your biological father of Hispanic, Spanish or Latino origin or descent?
□ Yes □ No □ I am not sure.
53. What is your current marital status?
□ Now married
□ Living in a marriage-like relationship
□ Widowed
□ Divorced or separated
□ Never married
54. What is the highest level of schooling you have completed? Please check ONE.
□ 11th grade or less
□ High school/graduate equivalency or GED
□ Some college/vocational or trade school training
□ College degree (BA, BS, etc.)
□ Post-graduate training
55. What was your total household income from all sources for the past year? Include income from jobs, Social Security, Railroad Retirement, other retirement income, Supplemental Security Income (SSI), pensions, interest and any other sources.
□ Less than $5,000
□ $5,001 - $10,000
□ $10,001 - $15,000
□ $15,001 - $20,000
□ $20,001 - $25,000
□ $25,001 - $35,000
□ $35,001 - $45,000
□ $45,001 - $55,000
□ More than $55,000
56. Are you a veteran? That is, have you served on active duty in the US Armed Forces, Military Reserves or National Guard? (Active duty does not include training for the Reserves or National Guard, but does include activation, for example, for the Persian Gulf War.) Please check ALL that apply.
□ No, I am not a veteran.
□ Persian Gulf War (August 1990 through present)
□ Vietnam Era (August 1964 – April 1975)
□ Korean Conflict (June 1950 – January 1955)
□ World War II (September 1940 – July 1947)
□ World War I (April 1917 – November 1918)
□ I served during some other period. (Please specify)______________________