OBJECTIVE—The purpose of this study was to determine whether multidisciplinary team-based care guided by the chronic care model can reduce medical payments and improve quality for Medicaid enrollees with diabetes.

RESEARCH DESIGN AND METHODS—This study was a difference-in-differences analysis comparing Medicaid patients with diabetes who received team-based care with those who did not. Team-based care was provided to patients treated at CareSouth, a multisite rural federally qualified community health center located in South Carolina. Control patients were matched to team care patients using propensity score techniques. Financial outcomes were Medicaid payments (plus Medicare payments for dually eligible patients) in the 1 year before and the 1 year after the start of the intervention. Trends over time in levels of A1C, BMI, and systolic blood pressure (SBP) were analyzed for intervention patients during the postintervention period.

RESULTS—Although average claims payments increased for both the CareSouth patients and control patients, there were no statistically significant differences in total payments between the two groups. In the intervention group, patients with A1C >9% at baseline experienced an average reduction of 0.75 percentage points per year (95% CI 0.50–0.99), patients with BMI >30 kg/m2 at baseline had an average reduction of 2.3 kg/m2 per year (95% CI 0.99–3.58), and patients with SBP >140 mmHg at baseline had an average reduction of 2.2 mmHg per year (95% CI 0.44–3.88).

CONCLUSIONS—Team-based care following the chronic care model has the potential to improve quality without increasing payments. Short-term savings were not evident and should not be assumed when designing programs.

The Institute of Medicine has cited the growing prevalence of individuals with chronic conditions and deficiencies in chronic care management as two of the biggest challenges facing the U.S. health care system (1). Individuals with chronic conditions account for disproportionately high health care costs, often experience losses in productivity, and, on average, receive only 56% of recommended care (2–4). Health care organizations increasingly have implemented quality improvement strategies targeted at better management of chronic conditions. The strategies generally fall into two categories: first, the chronic care model (CCM) advocates redesign of care delivery at the practitioner level using evidence-based guidelines, multidisciplinary treatment teams, decision support systems, and planned visits (5); and second, disease management, typically implemented through health plans (either directly or via disease management vendors), provides primarily telephonic interactions via remotely located nurses with the objective of improving patients' self-management skills (6). In contrast to the CCM approach, which advocates delivery system redesign, disease management is primarily a means of supplementing care provided in physicians' offices (7).

Multiple studies have documented the effectiveness of CCM- and disease management–based strategies in improving quality of care (8–10). Advocates of quality improvement strategies have hypothesized that improvements in chronic care management could also result in financial benefits through the prevention or delay of expensive complications (11). However, results from empirical studies examining cost savings from CCM and disease management programs have been inconclusive (12–16). The evidence suggests that although quality improvement initiatives have the potential to result in better quality of care, it is uncertain whether these strategies will lead to lower costs. Given the substantial financial burden imposed by chronic conditions on patients, payers, and employers, assessment of the financial impact of chronic care management strategies remains a key health policy issue.

In 1998 the Bureau of Primary Health Care began a 6-year effort, known as the Health Disparities Collaboratives, to improve the quality of care in federally qualified community health centers (FQHCs) (17). As part of this initiative, the federal government provided funds and technical assistance to FQHCs to implement team-based, patient-centered primary care for individuals with chronic conditions. The initiative included “Breakthrough Series” training programs for clinic personnel provided by the Institute for Healthcare Improvement, in conjunction with the Improving Chronic Illness Care organization (18,19). The Breakthrough Series targets redesign of a provider's delivery system to be consistent with the CCM through a rapid-cycle quality improvement process (Plan, Do, Study, Act). CareSouth, a private nonprofit FQHC providing primary health care services in 10 clinics in and around Hartsville, South Carolina, was an early participant in this program and began implementation in 1999. The objective of our study was to assess the impact of CareSouth's program on short-term Medicaid payments (and also Medicare payments for beneficiaries with dual eligibility) and on key clinical diabetes indicators.

Clinical intervention

CareSouth serves 35,000 patients in a rural area whose population is predominantly low income, African American, and elderly. The intervention used collaborative team-based treatment, with teams comprising a physician or nurse practitioner, a care manager (RN or LPN), a medical assistant, an information specialist, and a part-time social worker. To facilitate continuity of care, patients are assigned to specific teams. Guided by evidence-based treatment protocols, the teams provide care via planned visits that capitalize on the relevant expertise of each team member (20). In addition to providing medical management, the teams encourage and facilitate patient self-management. The staff also provides telephone support to answer patients' questions and check on their progress.

The intervention incorporates the Patient Evaluation and Care System (PECS), a patient registry provided by the Bureau of Primary Health Care that captures key clinical and administrative information at the patient level and is available to clinicians at the point of service. The system prompts practitioners about key guideline requirements (e.g., screening for A1C), provides up-to-date test results, and facilitates planned visits addressing multiple diseases, including asthma, cardiovascular disease, depression, and diabetes. CareSouth's program was pilot tested in one site in 1999 and gradually rolled out to the remaining clinics over 4 years.

Data

We obtained from the South Carolina Office of Research and Statistics all Medicaid claims between 1997 and 2005 for individuals with ICD-9-CM codes 250.x0 and 250.x2, indicating type 2 diabetes as a primary or secondary diagnosis. Of the CareSouth patients, 43% had dual eligibility for Medicaid and Medicare, and we also obtained Medicare payment data for the dually eligible patients from the same source. Patients in the CareSouth program were compared with control patients: Medicaid and dually eligible patients from similar FQHCs in South Carolina that had not converted to team-based care during the study period. Information about the mode of care delivery within FQHCs (team-based or conventional) was obtained from the South Carolina Primary Health Care Association. Serial measurements of A1C, SBP, and BMI were acquired from the CareSouth PECS registry. However, because PECS was implemented at the start of the intervention, clinical data were available only for CareSouth patients during the postintervention period; comparable information was not available from control sites.
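To make the claim selection concrete, the sketch below shows one way to flag type 2 diabetes claims using the 250.x0 and 250.x2 pattern. The file name, column names, and decimal-formatted codes are illustrative assumptions, not the actual extract provided by the South Carolina Office of Research and Statistics.

```python
import pandas as pd

# Hypothetical claims extract: one row per claim line with columns
# 'patient_id' and 'icd9'; codes assumed stored with a decimal point
# (e.g., "250.00"). Some claims systems omit the decimal ("25000"),
# in which case the pattern would need adjusting.
claims = pd.read_csv("medicaid_claims_1997_2005.csv", dtype={"icd9": str})

# ICD-9-CM 250.x0 and 250.x2: any fourth digit, fifth digit 0 or 2.
is_type2 = claims["icd9"].str.match(r"^250\.\d[02]$", na=False)

# Patients with at least one qualifying primary or secondary diagnosis.
type2_patients = claims.loc[is_type2, "patient_id"].drop_duplicates()
print(f"{len(type2_patients)} patients with a type 2 diabetes claim")
```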

Our primary outcome variable for the financial analysis was the difference between 1-year costs before and after the start of the intervention. Therefore, to be included in the analysis, individuals were required to be continuously enrolled in Medicaid during the entire pre- and postperiods and to have had a diagnosis of diabetes, defined as a claim with an ICD-9-CM code for diabetes, before the start of the preperiod. The intention of the latter criterion was to include only patients who were receiving treatment for diabetes in both the pre- and postperiods. Because entry of registry data only commences when a CareSouth patient begins participation in team care, we used each patient's initial date of registry data as the starting point of the intervention.

We identified 2,572 patients with type 2 diabetes in the CareSouth registry. Of these, 621 had a Medicaid claim at a CareSouth clinic. Limiting these to patients who were continuously enrolled in Medicaid during the pre- and postperiods reduced the sample of CareSouth patients to 399. Further restricting the sample to patients in whom diabetes was diagnosed >1 year before the start date of the intervention yielded 199 patients meeting all inclusion criteria. We identified 43,133 potential control patients with type 2 diabetes among Medicaid claims. Of these, 36,213 were eligible for Medicaid throughout the pre- and postperiods, and 8,179 had at least one clinic visit. Limiting eligibility to patients with a clinic visit to an FQHC without team-based care reduced the sample to 3,140, and further restriction to patients in whom diabetes was diagnosed >1 year before the start date of the intervention yielded a potential sample of 1,868 control patients.
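A minimal sketch of the cohort construction logic is shown below, assuming per-patient fields for enrollment dates, first diabetes claim date, and intervention start date; the field names and file name are hypothetical, and the actual filters were applied to the claims extracts described above.

```python
import pandas as pd

# Hypothetical patient-level table with one row per candidate patient.
pts = pd.read_csv("candidate_patients.csv", parse_dates=[
    "enroll_start", "enroll_end", "first_dm_claim", "intervention_start"])

one_year = pd.Timedelta(days=365)

# Continuous Medicaid enrollment from 1 year before to 1 year after the
# intervention start (the pre- and postperiods).
continuously_enrolled = (
    (pts["enroll_start"] <= pts["intervention_start"] - one_year)
    & (pts["enroll_end"] >= pts["intervention_start"] + one_year)
)

# Diabetes diagnosed more than 1 year before the intervention start, so the
# patient was under treatment for diabetes in both periods.
prior_diagnosis = pts["first_dm_claim"] < pts["intervention_start"] - one_year

cohort = pts[continuously_enrolled & prior_diagnosis]
print(f"{len(cohort)} patients meet the inclusion criteria")
```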

Propensity score matching

There were differences between the 199 patients treated at CareSouth clinics and the other 1,868 Medicaid patients used as control patients. Our initial analysis of patient characteristics showed that CareSouth patients were typically older than control patients (P < 0.0001), were less likely to be African American (P < 0.001), and had more severe comorbidities (P < 0.0007) as measured by the Charlson Comorbidity Index (21,22). To control for these differences, we further refined our selection of control patients using propensity score matching. Patients were matched on the following characteristics: age, sex, race, dual eligibility, use of antidepressants, and the Charlson Comorbidity Index.

The propensity score matching proceeded in two steps. In the first step, the likelihood of being a CareSouth patient was modeled in a logistic regression as a function of the characteristics described previously. From this regression, the predicted probability, or propensity score, was computed for each patient. Given the relatively small number of CareSouth patients, we selected control patients using a nearest neighbor matching method, which selects the control patient whose propensity score is closest to that of the CareSouth patient (23). Matching was done by clinic and without replacement, and we selected only one control patient per CareSouth patient, a conservative choice that also aids interpretability of the results. Once matched control patients were selected, we verified that the distribution of covariates matched that of the CareSouth patients. The final sample for the financial analysis consisted of 193 CareSouth patients and 193 control patients; we were unable to find matched control patients for 6 CareSouth patients.
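The sketch below illustrates the two steps: a logistic model of the probability of being a CareSouth patient, followed by 1:1 nearest-neighbor matching without replacement. The covariate names and data layout are assumptions for illustration, and the within-clinic matching used in the study is omitted here for brevity.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical patient-level data: 'caresouth' is 1 for intervention patients,
# 0 for candidate controls; covariates mirror those listed in the text.
covs = ["age", "female", "african_american", "dual_eligible",
        "antidepressant_use", "charlson_index"]

def match_one_to_one(df):
    """Estimate propensity scores, then 1:1 nearest-neighbor match without replacement."""
    model = LogisticRegression(max_iter=1000).fit(df[covs], df["caresouth"])
    df = df.assign(pscore=model.predict_proba(df[covs])[:, 1])

    treated = df[df["caresouth"] == 1]
    controls = df[df["caresouth"] == 0].copy()
    pairs = []
    for idx, score in treated["pscore"].items():
        if controls.empty:
            break  # treated patients left unmatched are dropped, as in the study
        j = (controls["pscore"] - score).abs().idxmin()  # nearest available control
        pairs.append((idx, j))
        controls = controls.drop(index=j)  # without replacement
    return pairs

patients = pd.read_csv("matching_input.csv")  # assumed file name
matched_pairs = match_one_to_one(patients)
print(f"{len(matched_pairs)} matched pairs")
```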

Analysis of financial data

Our statistical analysis was designed to compare total Medicaid payments for patients enrolled in the CareSouth program with those of the propensity score–matched Medicaid patients who did not receive team-based care. For patients who were dually eligible, payments included the sum of Medicaid and Medicare payments. Our analysis assessed both total payments and subcategories: inpatient hospital payments, outpatient hospital payments, nonhospital outpatient payments, and pharmacy payments. Payments were totaled for the 1 year before and the 1 year after the start of the intervention.

We then analyzed these payments for matched CareSouth and control patients using a “difference-in-differences” approach. This analysis compared the differences in payments between year 1 (preintervention) and year 2 (postintervention) in the CareSouth group with differences between year 1 and year 2 in the control group. Our hypothesis was that patients in the CareSouth and control groups had the same expected payments before the intervention, but that there were significant differences in payments between CareSouth and control patients after the intervention. The difference-in-differences model captures these effects. Our generalized linear regression model was

\[g(E[y_{it}]) = \beta_0 + \beta_1 t_t + \beta_2 \mathrm{CS}_{it} + \beta_3 (\mathrm{CS} \cdot t)_{it} \tag{1}\]

where i indexes patients and t indexes the time period. Thus, yit is total Medicaid payments (plus Medicare payments for dually eligible patients) or payments broken down by category (e.g., inpatient, pharmacy, and so on). As eq. 1 illustrates, t is a binary variable that takes the value 1 if the expenditure occurred in the postintervention period, CSit is a binary indicator variable for the CareSouth group, and the coefficient on the interaction of t and CS, β3, indicates whether the post/pre difference in payments for the CareSouth group differed significantly from that for the control group. The coefficient on the interaction term is therefore the quantity of interest. Finally, g is a link function for the generalized linear model. Because the expenditure data were highly skewed, as is often the case when health expenditures are examined, we fit a generalized linear model assuming a gamma distribution and a log link function. Note that the log link transforms the difference-in-differences into a ratio of ratios, so the results are expressed as multiplicative effects of team care. Because patients were matched on demographic and clinical characteristics, no additional covariates were included in the model (24).
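As a hedged illustration of this specification, the following sketch fits the gamma GLM with a log link to long-format payment data; the file name and column names are assumptions, and exp(β3) gives the ratio-of-ratios interpretation described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long format: one row per patient per period, with positive
# total payments, post = 1 for the postintervention year, and caresouth = 1
# for the intervention group.
pay = pd.read_csv("payments_long.csv")

did = smf.glm(
    "payments ~ post + caresouth + post:caresouth",
    data=pay,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

print(did.summary())

# With a log link, beta3 is a difference of log means, so exp(beta3) is the
# ratio of post/pre payment ratios for CareSouth relative to control
# (a value of 1 means no differential change).
print("ratio of ratios:", float(np.exp(did.params["post:caresouth"])))
```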

Analysis of clinical data

The objective of the statistical analysis was to determine whether patients enrolled in the CareSouth program experienced improvements in A1C, BMI, and SBP over time. We modeled trends in A1C, BMI, and SBP for the propensity score-matched CareSouth patients in the postintervention period only because comparable data were not available for the control group or for the CareSouth patients before the intervention. In addition, not all of the CareSouth patients had measures beyond baseline. Therefore, the number of patients studied in the SBP, A1C, and BMI analyses were 193, 171, and 67, respectively. The dependent variables were serial observations of the clinical measures. Because these are repeated measures, we controlled for clustering using random effects. The covariate of interest was time, measured in days since the first laboratory result in the registry. We also controlled for age, sex, and race. CareSouth clinic location was controlled for using clinic fixed effects. The final model was as follows:

\[m_{it} = \beta_0 + \beta_1 t + \beta_2 x_{it} + \beta_3 z_i + \delta_i + \varepsilon_{it} \tag{2}\]

where i indexes patients and t indexes time (measured in days from the first measurement in the registry), mit is a clinical measure for patient i at time t, xit is a vector of patient characteristics (age, sex, and race), zi is a vector of CareSouth clinic indicators, δi is a zero-mean, normally distributed random effect, and εit is a zero-mean, normally distributed error term. For this analysis we first studied all CareSouth patients with two or more measurements, as described above. We then repeated the analysis using only those patients whose baseline measures were particularly high, defined as A1C >9%, BMI >30 kg/m2, and SBP >140 mmHg.
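A minimal sketch of this random-intercept specification, using A1C as an example outcome, is shown below; the registry field names and file name are assumptions, and the actual analysis may have used different software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format registry extract: one row per A1C measurement with
# columns a1c, days (since first measurement), age, female, african_american,
# clinic, and patient_id.
reg = pd.read_csv("registry_a1c.csv")

# Random intercept per patient (the delta_i term); clinic enters as fixed effects.
trend = smf.mixedlm(
    "a1c ~ days + age + female + african_american + C(clinic)",
    data=reg,
    groups=reg["patient_id"],
).fit()

print(trend.summary())

# The coefficient on days, scaled to 365 days, approximates the annual change
# in A1C reported in the Results.
print("estimated change per year:", trend.params["days"] * 365)
```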

Financial data

Our analysis showed that average 1-year payments at baseline were significantly lower for CareSouth patients for nonhospital-based outpatient care ($2,096.60 vs. $2,940.80, P = 0.025) and significantly higher for hospital-based outpatient care ($445.70 vs. $260.50, P = 0.012) (Table 1). Differences in other subcategories of payments were not statistically significant, either at baseline or after the intervention (Table 1). For CareSouth patients, average 1-year payments rose in the postintervention period for all types of care except hospital-based outpatient care; for control patients, payments rose for all types of care except inpatient care (Table 1). Figure 1 presents the estimates and CIs of the parameters in the difference-in-differences regressions. None of the differences was statistically significant except that for hospital-based outpatient payments, which favored the CareSouth group. However, the apparent advantage of team care in this category resulted from differences in preintervention rather than postintervention costs.

Clinical data

Results in Fig. 2 suggest that among all CareSouth patients, A1C did not change significantly over time, but among patients with a baseline A1C >9%, A1C decreased significantly over time. For this subset, the average ± SD baseline A1C was 9.75 ± 2.25% and fell ∼0.75 percentage points per year (P < 0.0001). CareSouth patients had a starting BMI of 35.0 ± 10.4 kg/m2 and experienced an average decrease of 1.9 kg/m2 per year (P < 0.0001). Among CareSouth patients with a starting BMI >30 kg/m2 (40.1 ± 9.5), BMI fell ∼2.3 kg/m2 per year (P = 0.001). Similarly, among all CareSouth patients, SBP decreased significantly over time (−0.88 mmHg per year, P = 0.014), and patients with a baseline SBP >140 mmHg also had a significant drop in blood pressure over time (−2.2 mmHg per year, P = 0.035).

Our analysis suggests that patients enrolled in the CareSouth program did not experience significantly lower total Medicaid and Medicare expenditures than similar patients who did not receive team-based care. This was also true for specific cost categories, except for hospital-based outpatient visits, for which there was a small but statistically significant reduction for CareSouth patients. The results are consistent with findings from prior studies using less rigorous designs. Our analysis demonstrated clinically and statistically significant improvement over time in A1C, BMI, and SBP for the CareSouth patients, particularly for those starting with worse baseline levels. However, because of the data limitations described above, we were not able to measure clinical improvements relative to the control group. Nonetheless, the magnitude of our estimates for the clinical indicators is in the range of those found in the literature for similar interventions targeted at A1C and SBP in patients with diabetes (10,25). The drop in BMI, both across all patients and for patients with high baseline values, is both clinically significant and unusual. The fact that care improved without changes in drug costs is also noteworthy; coupled with the improvement in BMI, it makes it reasonable to hypothesize that better lifestyle management was a major driver of the clinical gains.

Although we were unable to measure the intensity of the implementation of their quality strategies, CareSouth's intervention incorporated key elements of the CCM including a patient registry, patient education, facilitation of self-management skills, and, perhaps most importantly, the use of multidisciplinary teams. A meta-analysis by Shojania et al. (10), comparing the most commonly used strategies targeted at reducing glycemic levels in individuals with diabetes, showed that interventions using multidisciplinary teams were the most effective.

As with all studies in this area, ours is subject to several important limitations. First, we did not have detailed clinical data for the CareSouth patients in the preperiod or for the control group in either the pre- or postperiods. This is not unusual, as these data typically are collected only once an intervention has begun. Second, our study may be underpowered to detect differences in the annual costs of medical care for patients with diabetes. This is an important limitation and raises the possibility that real savings from team-based care may not be detectable, owing to chance, given our small sample sizes. Unfortunately, the exclusion restrictions needed to identify a meaningful intervention sample reduce the overall size of the analytic sample. Third, our study only examined costs for 1 year after implementation of the intervention at the CareSouth clinics. Selby et al. (26) provided a conceptual discussion of the returns to managing diabetes care and concluded that if returns occur, they may not be realized until well into the future. A multiyear study by Wagner et al. (27), which compared patients with diabetes who achieved sustained decreases in A1C (i.e., a reduction of ≥1 percentage point) with patients who did not show improvement, found that savings in total health care costs did not appear until the second year of the study. Unfortunately, our study did not have the data to follow the sample beyond 1 year, so we cannot rule out the possibility that the CareSouth patients might be less costly over the long run.

Finally, our sampling inclusion criteria mandated continuous Medicaid enrollment for 1 year after the intervention. Unfortunately, we were not able to identify the reasons for patients dropping out of the Medicaid program and consequently may have excluded individuals who died during the intervention. This factor could have an impact on the results if the distribution of exclusions due to deaths was significantly different between CareSouth and control patients.

Despite these limitations, we believe that our study advances the scant literature on the financial and clinical impact of care management programs for diabetes. The strength of our study, relative to the existing literature, is the detailed control strategy, which allows for comparisons with the intervention group in the financial analysis. And although we cannot rule out the possibility of longer-term savings associated with these programs, the fact that immediate savings were not found is an important message for policy makers and purchasers (including those in the Medicaid and Medicare programs), who often believe that shorter-term savings are likely.

We should note that CareSouth's motivation for this initiative was quality improvement rather than cost reduction. Value in health care often is defined as the ratio of quality to cost. Under ideal circumstances, costs would decrease and quality would improve. However, greater value also can be achieved if either the numerator or the denominator alone changes in the appropriate direction. Pressure to constrain rising health care costs may cause purchasers and policy makers to place undue emphasis on the cost-saving potential of chronic care management strategies. Until definitive evidence regarding cost implications becomes available, a more realistic perspective is warranted. Our findings suggest that although short-term savings are unlikely, the proverbial glass may, in fact, be half full. Even if longer-term savings do not materialize, the findings suggest that payers, in this case Medicaid and Medicare, received greater value for their dollars in the CareSouth sites after the intervention. Nevertheless, future researchers should seek to follow control and treatment groups for extended periods to provide better evidence on whether there is a return on investment associated with these programs over time.

Figure 1—

Differences between CareSouth patients and control subjects in changes of annual payments from the year before the interventions to the year after. The black boxes represent the point estimates of the difference in differences, and the lines represent the CIs of the parameters.

Figure 2—

Summary of trends in clinical measures among CareSouth patients, adjusted for demographics, comorbidities, and patient-level clustering. *P < 0.05; **P < 0.01; ***P < 0.0001. The numbers of patients with two, three, or four or more observations of BMI were 28, 12, and 27, respectively. Comparable numbers for A1C were 35, 28, and 108 and for SBP were 11, 15, and 167.

Table 1—

Medicaid and Medicare payments in CareSouth patients versus matched control patients before and after implementation of team care by cost category

                              Before intervention                                   After intervention
Cost category                 CareSouth            Control              P value     CareSouth            Control               P value
Inpatient care                1,230.26 ± 3,623.82  1,611.06 ± 5,336.32  0.4126      1,470.85 ± 5,340.16  1,567.21 ± 5,320.33   0.8591
Nonhospital outpatient care   2,096.63 ± 3,104.13  2,940.78 ± 4,335.24  0.0254      3,022.65 ± 5,695.48  3,490.40 ± 6,034.75   0.4341
Hospital outpatient care        445.65 ±   915.46    260.50 ±   457.47  0.0124        280.40 ±   584.92    281.64 ±   477.19   0.9818
Pharmacy                      2,479.20 ± 2,091.83  2,499.67 ± 2,290.31  0.9270      2,709.67 ± 2,223.47  2,887.51 ± 2,742.31   0.4845
Total                         6,251.74 ± 6,648.94  7,312.01 ± 8,748.30  0.1809      7,483.57 ± 9,834.17  8,226.76 ± 10,512.23  0.4737

Data are means ± SD in USD.

This research was supported by a grant from the California HealthCare Foundation.

We are grateful to Sophia Chang, our project officer, for useful feedback and to the following individuals for providing the study data and background information about CareSouth and the South Carolina Medicaid program: Ann Lewis, Heather Kirby, Kevin Rogers, and Lathran Woodard.

1. Institute of Medicine: Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC, National Academy Press, 2001
2. Anderson G, Horvath J: The growing burden of chronic disease in America. Public Health Rep 119:263–270, 2004
3. American Diabetes Association: Economic costs of diabetes in the U.S. in 2007. Diabetes Care 31:596–615, 2008
4. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr A: The quality of health care delivered to adults in the United States. N Engl J Med 348:2635–2645, 2003
5. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A: Improving chronic illness care: translating evidence into action. Health Aff (Millwood) 20:64–78, 2001
6. Ellrodt G, Cook DJ, Lee J, Cho M, Hunt D, Weingarten S: Evidence-based disease management. JAMA 278:1687–1692, 1997
7. Leeman J, Mark B: The chronic care model versus disease management programs: a transaction cost analysis approach. Health Care Manage Rev 31:18–25, 2006
8. Knight K, Badamgarav E, Henning JM, Hasselblad V, Anacleto AD, Ofman JJ, Weingarten SR: A systematic review of diabetes disease management programs. Am J Manag Care 11:242–250, 2005
9. Ofman JJ, Badamgarav E, Henning JM, Knight K, Levan R, Gur-Arie S, Richards M, Hasselblad V, Weingarten S: Does disease management improve clinical and economic outcomes in patients with chronic diseases? A systematic review. Am J Med 117:182–192, 2004
10. Shojania KG, Ranji SR, McDonald KM, Grimshaw JM, Sundaram V, Rushakoff RJ, Owens DK: Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA 296:427–440, 2006
11. Villagra V: Strategies to control costs and quality: a focus on outcomes research for disease management. Med Care 42:III24–III30, 2004
12. Congressional Budget Office: An analysis of the literature on disease management programs [article online], 2004. Available from http://www.cbo.gov/showdoc.cfm?index=5909&sequence=0. Accessed 8 January 2007
13. Fireman B, Bartlett J, Selby J: Can disease management reduce health care costs by improving quality? Health Aff 23:63–75, 2004
14. Gilmer TP, Philis-Tsimikas A, Walker C: Outcomes of Project Dulce: a culturally specific diabetes management program. Ann Pharmacother 39:817–822, 2005
15. Villagra V, Ahmed T: Effectiveness of a disease management program for patients with diabetes. Health Aff 23:255–266, 2004
16. Brown R, Peikes D, Chen A, Ng J, Schore J, Soh C: The Evaluation of the Medicare Coordinated Care Demonstration Project: Findings for the First Two Years. Princeton, NJ, Mathematica Policy Research, Inc., Document No. PR07-08, 2007
17. Landon BE, Hicks LS, O'Malley AJ, Lieu TA, Keegan T, McNeil BJ, Guadagnoli E: Improving the management of chronic diseases at community health centers. N Engl J Med 356:921–934, 2007
18. Institute for Healthcare Improvement: Home page, 2006. Available from http://www.ihi.org/ihi/about. Accessed 7 May 2006
19. Improving Chronic Illness Care: Home page, 2006. Available from http://www.improvingchroniccare.org/index.html. Accessed 7 May 2006
20. Wagner EH: The role of patient care teams in chronic disease management. Br Med J 320:569–572, 2000
21. Charlson ME, Pompei P, Ales KL, MacKenzie CR: A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 40:373–383, 1987
22. Deyo RA, Cherkin DC, Ciol MA: Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol 45:613–619, 1992
23. Rosenbaum PR, Rubin DB: The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55, 1983
24. Mullahy J: Interaction effects and difference-in-difference estimation in loglinear models [article online], 1999. National Bureau of Economic Research Technical Working Paper Series No. 245. Available from http://www.nber.org/papers/t0245. Accessed 19 May 2008
25. Chodosh J, Morton SC, Mojica W, Maglione M, Suttorp MJ, Hilton L, Rhodes S, Shekelle P: Meta-analysis: chronic disease self-management programs for older adults. Ann Intern Med 143:427–438, 2005
26. Selby JV, Scanlon DP, Lafata JE, Villagra V, Beich J, Salber PR: Determining the value of disease management programs. Jt Comm J Qual Saf 29:491–499, 2003
27. Wagner EH, Sandhu N, Newton KM, McCulloch DK, Ramsey SD, Grothaus LC: Effect of improved glycemic control on health care costs and utilization. JAMA 285:182–189, 2001

Published ahead of print at http://care.diabetesjournals.org on 4 August 2008.

