Children’s Hospital creates system for safe patient handoffs | Boston Globe

Is that really what they’ve done at Boston Children’s?

This Boston Globe article is based on results from a JAMA study published today, titled “Rates of Medical Errors and Preventable Adverse Events Among Hospitalized Children Following Implementation of a Resident Handoff Bundle.” [1] Essentially, the authors studied rates of medical errors and preventable adverse events before and after residents were trained in proper handoffs and given new handoff structures and tools. The study is an example of an interrupted time-series analysis, a design that is increasingly popular among quality improvement professionals.
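
To make the design concrete, here is a minimal sketch of the segmented regression typically used to analyze an interrupted time series. The monthly error rates, the three-months-per-period layout, and the variable names are all my own illustration, not data or methods from the JAMA paper.

```python
# Minimal sketch of a segmented (interrupted time-series) regression.
# All numbers here are hypothetical; nothing is taken from the JAMA study.
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly error rates (errors per 100 admissions):
# three pre-intervention months, then three post-intervention months.
rates = np.array([33.8, 32.5, 31.9, 19.7, 18.9, 18.3])

month = np.arange(len(rates))                      # 0..5, captures the secular trend
post = (month >= 3).astype(int)                    # 1 once the handoff bundle is in place
months_since = np.where(post == 1, month - 3, 0)   # allows a slope change after the intervention

X = sm.add_constant(np.column_stack([month, post, months_since]))
fit = sm.OLS(rates, X).fit()
print(fit.params)  # [baseline level, secular trend, level change, trend change]
```

Note that with only three points on either side of the intervention, the secular-trend term is hard to distinguish from the intervention terms, which is why the choice of study window matters so much.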

Like any study design, it has many benefits and weaknesses. Most notable among its weaknesses is that “secular trends unrelated to the intervention may result in an improvement in outcome on the posttest.” For this reason, examining the study periods for potential confounding is essential when critically evaluating any time-series study. From the study:

Preintervention data were collected from July through September 2009. All components of the resident handoff bundle were implemented during October 2009, and postintervention data were collected from November 2009 through January 2010.

Something very important happens in July every year: new interns begin working as actual doctors for the first time. Significant evidence suggests that errors increase during this time and resolve as the interns become more experienced. This effect could account for at least part, if not all, of the improvement seen in this study. The authors dutifully acknowledge this fact:

…our study design has the potential for confounding because the preintervention data were collected during the summer and fall, and postintervention data were collected during the subsequent winter. Therefore, increased resident experience over time, differences in patient populations, or other ongoing patient safety interventions might have contributed to reductions in overall error rates.
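
To see how resident experience alone could produce an apparent improvement, consider a toy simulation. The error rates and admission counts below are invented; in this sketch the handoff bundle has no effect at all, yet the post-intervention period still looks better.

```python
# Toy simulation of the July-effect confound. All numbers are invented;
# the intervention has NO effect here, yet a simple pre/post comparison
# still shows "improvement."
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly error rates per 100 admissions, July through January,
# declining only because new interns gain experience over the academic year.
true_rate = np.array([34.0, 31.0, 28.0, 26.0, 24.0, 23.0, 22.0])
admissions_per_month = 200

errors = rng.poisson(true_rate / 100 * admissions_per_month)

pre = errors[0:3].sum() / (3 * admissions_per_month) * 100    # July-September
post = errors[4:7].sum() / (3 * admissions_per_month) * 100   # November-January

print(f"pre-intervention:  {pre:.1f} errors per 100 admissions")
print(f"post-intervention: {post:.1f} errors per 100 admissions")
# The post period looks better purely because of the secular trend.
```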

However, most studies of what is sometimes termed the July effect have found either that it does not exist or that it is small in magnitude. In a systematic review of prior studies of the July effect, Young et al. found that 55% of higher-quality studies showed no effect; of the 45% of higher-quality studies that did show a relationship between mortality and the July effect, effect sizes ranged from a 4.3% to a 12% increase.

The data from the Young study can be interpreted several ways. The Annals of Internal Medicine editors note in their ‘Implications’ statement, “Changeover that occurs when experienced housestaff are replaced with new trainees can adversely affect patient care and outcomes.” [2] More importantly, if the authors of this paper recognized this to be a serious limitation (so serious that they spend two paragraphs of their discussion attempting to explain it away), why did they not (1) conduct the study over a longer period of time, (2) start later in the year so that July and August would not fall in the preintervention period, or (3) attempt to include a control group?

Every study has limitations. It is important to recognize those limitations so that we can draw our own conclusions rather than simply swallowing the authors’ and the media’s explanations.


  1. ‘Bundle’ is a hot word. Be sure to include it in all of your quality improvement projects for easier publication.  ↩

  2. From the first line in the discussion of the Young paper: “Mortality and efficiency of care tend to worsen at the time of academic year–end changeovers, although the studies do not describe potential contributing causes or, as a result, provide specific guidance for solutions.”  ↩