Why Doctors Make Great Object-Oriented Software Designers | FastCo Labs

Ciara Byrne:

The pair decided to start a company to tackle the problem [i.e., the lack of modern software tools for doctors] and immediately hit an impasse: What doctors actually do all day can be exceedingly complex, making the process of spec’ing an app nearly impossible.

This is one of many reasons why current generation electronic medical records are so terrible.

The article goes on to detail a company called Modernizing Medicine that is solving this problem by using physicians to actually code new electronic tools for physicians. They also discuss how the company is aggregating their data to turn “every private practice in the United States [into] a lab.” Really fascinating.

Children’s Hospital creates system for safe patient handoffs | Boston Globe

Is that really what they’ve done at Boston Children’s?

This Boston Globe article is based on results from a JAMA study published today entitled, “Rates of Medical Errors and Preventable Adverse Events Among Hospitalized Children Following Implementation of a Resident Handoff Bundle.” [1] Essentially, they studied rates of medical errors and preventable adverse events before and after training residents in proper handoffs and providing new structures/tools for handoffs. This study is an example of an interrupted time-series analysis, a study design that is increasingly popular among quality improvement professionals.
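For readers unfamiliar with the design, here is a minimal sketch of a segmented regression, the kind of model commonly used to analyze an interrupted time series. It assumes Python with numpy and statsmodels; the monthly error rates and variable names are invented purely for illustration and are not drawn from the JAMA study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly error rates per 100 admissions: six months before and
# six months after a handoff intervention (invented numbers, not the study's data).
rates = np.array([33.8, 34.1, 33.5, 32.9, 33.0, 32.4,
                  25.1, 24.3, 23.8, 23.0, 22.9, 22.5])

months = np.arange(len(rates))                     # 0..11, overall time trend
post = (months >= 6).astype(int)                   # 1 once the intervention is in place
months_after = np.where(post == 1, months - 6, 0)  # time elapsed since the intervention

# Segmented regression: baseline level + baseline slope
# + level change at the intervention + slope change after it.
X = sm.add_constant(np.column_stack([months, post, months_after]))
fit = sm.OLS(rates, X).fit()
print(fit.params)  # [baseline level, baseline slope, level change, slope change]

# A secular trend unrelated to the intervention (e.g., interns gaining
# experience) shows up in the baseline slope; with only a few months of
# preintervention data, that slope is poorly estimated.
```

The level-change term is what a simple before/after comparison implicitly estimates; the baseline slope is where any underlying secular trend would appear, which is why the length and timing of the study periods matter so much.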

Like any study design, it has strengths and weaknesses. Most notable among its weaknesses is that “secular trends unrelated to the intervention may result in an improvement in outcome on the posttest.” For this reason, examining the study periods for potential confounding is essential when critically evaluating any time-series study. From the study:

Preintervention data were collected from July through September 2009. All components of the resident handoff bundle were implemented during October 2009, and postintervention data were collected from November 2009 through January 2010.

Something very important happens in July every year—new interns begin working as actual doctors for the first time. Some evidence suggests that errors increase during this period and resolve as the interns gain experience. This effect could account for at least part, if not all, of the improvement seen in this study. The authors dutifully acknowledge this fact:

…our study design has the potential for confounding because the preintervention data were collected during the summer and fall, and postintervention data were collected during the subsequent winter. Therefore, increased resident experience over time, differences in patient populations, or other ongoing patient safety interventions might have contributed to reductions in overall error rates.

However, most studies of what is sometimes termed the July effect have either found that it does not exist or that it is small in magnitude. In a systematic review of prior studies of the July effect, Young et al found that 55% of higher-quality studies showed no effect; of the 45% of higher-quality studies that showed a relationship between mortality and the July effect, effect size ranged from an increase of 4.3% to 12%.

The data from the Young study can be interpreted several ways. The Annals of Internal Medicine editors note in their ‘Implications’ statement, “Changeover that occurs when experienced housestaff are replaced with new trainees can adversely affect patient care and outcomes.” [2] More importantly, if the authors of this paper recognized this to be a serious limitation—so serious that they spend 2 paragraphs in their discussion attempting to explain it away—why did they not (1) conduct the study over a longer period of time, (2) start later in the year so July and August would not be included in the preintervention period, or (3) attempt to include a control group?

Every study has limitations. It is important to recognize those limitations so that we can draw our own conclusions and not simply swallow the authors’ and media’s explanations.


  1. ‘Bundle’ is a hot word. Be sure to include it in all of your quality improvement projects for easier publication.  ↩

  2. From the first line in the discussion of the Young paper: “Mortality and efficiency of care tend to worsen at the time of academic year–end changeovers, although the studies do not describe potential contributing causes or, as a result, provide specific guidance for solutions.”  ↩

Controlling Health-Care Costs | The New Yorker

James Surowiecki:

The slowdown in spending is due in part to the recession and the tepid recovery—but not as much as you’d think. A recent paper by the Harvard economists David Cutler and Nikhil Sahni estimated that the recession explained scarcely more than a third of the spending slowdown. Oddly enough, the public debate over Obamacare has also played a role. Bob Kocher, who was a special assistant for health care in the White House in 2009 and 2010, did a report for Lawrence Summers on the past sixty years of health-care legislation, and found that when Congress seriously considered enacting health-care reform the rate of health-care spending often slowed for a year or two.

Sooooo, we just need to have an endless healthcare debate. Not too excited about that prospect, and I actually like talking health policy.

The article goes on to talk about how other changes—such as insurers seeking out cheaper, high-quality care and the impact of ‘never events’—are also contributing to slower growth in health care spending. Even before Obamacare, there was a lot of talk in recent years about various ways to rein in costs. Some of that talk was put into action; now we get to see some of the results. Hopefully, some of the schemes will not only work well, but also scale.

Ten companies paid Happtique for health app certification | MobiHealthNews

The ‘paid’ part of that headline is what jumped out at me. I was curious exactly how much it would cost to get an app certified by Happtique. Unsurprisingly, the figure is not easy to find. It’s not on their developer page, but on the app submission portal (under ‘Fee Schedule’ in the sidebar). The baseline initial fee is $3,000 for a two-year certification…other fees apply.

As a former medical app developer, I don’t see much value, but apparently 10 developers have found some.

A Surgeon's Review of Google Glass in the Operating Room | Fast Company

A cardiothoracic surgeon at UCSF has been trialing Google Glass in the operating room for the past three months.

Kathy Chin Leong:

His conclusion so far: the technology is indeed useful in the operating room as an adjunct device in delivering necessary information, but it still has miles to go as a product…

He also found that Glass is weak in responding to voice commands. As a result, Theodore placed a technician in the operating room to keep things running smoothly. The technician would transmit up to a dozen X-rays to him during the procedure…The X-rays in Glass were sometimes hard to see–they look best in low ambient light, just as a movie is best viewed in a darkened movie theater. But he couldn’t dim the lights during surgery, so the tech would enlarge the images as needed.

For a product that relies almost exclusively on voice commands, that’s a bit of a problem. Miles to go indeed…

Computer science course fills seats, needs at Harvard | Boston Globe

Just as we expect core curricula to include math, basic science, literature, composition, history, and philosophy, so too should we now expect them to include computer science. Technology is changing our world, and the next generation of leaders needs an understanding of tech beyond how to send email and use Facebook. This is evident even in medicine, where basic biomedical science increasingly requires programming knowledge and clinical medicine needs knowledgeable leaders to usher in the next generation of digital tools for doctors.

The problem with peer review in scientific publishing | KevinMD

Great review of some of the problems with our current peer review system. This system came into being when reviews were conducted by mailing around paper copies. Today, we have incredible online tools that make publishing easy and provide interactive forums. Peer review needs to move into the public, online space. PeerJ Preprints are an example of this at work, but we really need the heavy hitters—the New England Journal of Medicine and JAMA—to get on board.

Collaboration is Sexy | Roni Zeiger

Roni Zeiger MD:

A new drug often makes waves in the headlines and the stock market when it’s shown to improve relevant outcomes. Somehow it’s not quite as newsworthy when remission rates increase from 55% to 68% in kids with Crohn’s disease… if it happens simply by improved communication among doctors.

In short, we need to invest more in quality improvement and we need to invite all players—doctors, nurses, pharmacists, therapists, administrators, tech people, and especially patients—to join in on the investment.

One Drug, Two Names, Many Problems | NY Times

One place where we can control this—academic medical centers. Medical educators should never use brand names in their lectures and should prohibit the use of brand names on rounds. In my experience, most attendings adhere to this, but not as stringently as we all should.

When Memorization Gets in the Way of Learning | The Atlantic

Ben Orlin:

Memorization is a frontage road: It runs parallel to the best parts of learning, never intersecting. It’s a detour around all the action, a way of knowing without learning, of answering without understanding.

Far too much of medical education relies on memorization (the bulk of the first two years is memorization) [1]. In medicine, a lack of understanding carries high stakes. Understanding is requisite to account for the variability inherent in practicing human medicine; it is impossible to teach or experience every permutation of disease presentation and management. Without understanding, doctors might miss a unique presentation, a new disease, or a novel treatment.

One solution–open-book tests. When practicing medicine in the ‘real world,’ physicians now have the world’s medical knowledge at their fingertips. Why handicap students and not allow them to practice the vital skills of finding, assessing, and applying knowledge [2]? Additionally, this would force professors to write higher-level questions that pass the Google Test [3]. Students would learn to focus on important concepts instead of furiously trying to memorize every word [4].

Though not mentioned by Mr Orlin, memorization has the further consequence of squashing creativity. If you only know the pieces but don’t understand how they work together, how can you think of new ways for them to work together? Now more than ever, we need creativity in medicine to solve our most vexing medical problems and to find ways to deliver those solutions efficiently. Memorization won’t get us there.

[via Ryan Madanick MD]


  1. The Krebs Cycle is the classic example. How many medical students are required to memorize the Krebs Cycle? How many practicing physicians can recite the Krebs Cycle from memory? How many use it in their daily practice? Need more evidence? Just look at the proliferation of flashcards and mnemonics in medical education.  ↩

  2. Javier Benitez MD argues in this piece that we should be teaching ‘information management’ skills (like finding, assessing, and applying knowledge) in addition to evidence-based medicine.  ↩

  3. Essentially, if a simple Google search can provide an answer within seconds, then it is not a very high-level question.  ↩

  4. Some students in my class would simply copy the text from a professor’s PowerPoint slide and paste it into an electronic flashcard, with the prompt for the flashcard being the title of that PowerPoint slide. Talk about simple memorization…  ↩

Embracing the Millennials’ Mind-Set at Work | NY Times

Tom Agan:

Social media permeate the personal, academic, political and professional lives of millennials, helping to foster the type of environment where innovation flourishes. So when compared with older generations, millennials learn quickly — and that’s the most important driver of innovation.

Medicine represents a unique environment for this generation to work in. Information is siloed. Common tools–like text messaging–run afoul of HIPAA and other institutional regulations. Innovation moves at a glacial pace owing to bureaucracy and regulation. And let’s not forget the hierarchical structure of medicine itself (something I underestimated until I was in it). We’ve seen some of these forces at work in recent events surrounding 23andMe.

Beyond the cultural and bureaucratic barriers in medicine, I believe we may be putting too much faith in millennials’ capacity to leverage technology for change. While this generation has grown up with these digital tools, most of them do not understand how to create them. Too many of my peers possess only a very basic technological literacy. Yes, they know how to use Facebook and Instagram, but not much beyond that. They do not appreciate that today’s smartphones are almost miraculous, or how complex something seemingly simple, like taking a picture and sharing it on Instagram, actually is. It won’t be enough to have millennials simply move up the ranks. They will do a good job adapting to things like EMR changes or new messaging systems because they’ve moved from MySpace to Facebook to Twitter/Instagram. But they will fail at designing the technological tools that could change the health care landscape.

Residency Program Visits: What do you ask the Residents? | Ken Iserson's Global Life

The number one piece of advice I've received during interview season is to talk with the residents. They are in the position you are applying for and can give you the best sense of what life is like for a resident at that particular institution. They are also more likely to give 'real' responses rather than sidestep embarrassing aspects of a program. But what do you ask them? These three questions from Ken Iserson are a good start.

Personally, I do not have set questions to ask the residents. I just try to talk with them to get a sense of who they are and what kind of residents make up the program. Are these people like me in a broad sense? Can I see myself working with them (especially the interns, since they will be your senior residents)? Do they appear happy (without directly asking them)? This requires careful listening, but it can be much more insightful than asking, 'Why did you choose this program?'

So much data-gathering, so little doctoring | LA Times

The refrain—‘I was such a better doctor when I wrote paper notes’—is, by now, trite and cliché. The current problems with EMRs stem from design flaws in the technology and from physicians’ insistence on maintaining traditional habits and note formats from the paper era [1]. Electronic medical records are part of the future, for many reasons. Instead of composing opinion pieces about the virtues of paper or, even worse, actually re-implementing paper charts, let’s agree to work together on creating next-generation EMRs.


  1. We need to re-imagine not only EMR designs and workflows, but also the very notion of the note itself. Think about the evolution of personal letters from handwritten notes delivered via the US Postal Service to email to text messages. At each step along the way, the typical format and composition changed. Our medical notes need to evolve in a similar fashion, and they are, albeit haphazardly.  ↩

Ohio Hospital Puts Docs On The Spot To Lower Costs | NPR

Lisa Chow:

It’s scary showing doctors data that’s saying they’re not doing as well as they could.

Medicine needs a cultural shift so that physicians welcome quality and cost data as part of their pursuit of better medicine. Incorporating quality improvement education—like the AAMC’s hot spotting initiative—into med school and resident curricula is crucial to bringing about such cultural change. We also need more biostatistics and epidemiology education so that doctors can effectively analyze and incorporate data into their practice.

When 23andMe gives results that no one knows how to manage | Dr Jen Gunter

Jen Gunter MD:

But now you have the result and you tested positive. Now you have the result that if you act upon it might cause more harm than good. You just don’t know. What do you do?

Testing of any sort in medicine, genetic or otherwise, is fraught with danger. Medical students and residents are constantly taught to order a test only when they know what to do with the result: have a plan for whether it comes back positive or negative. For many of its included tests, direct-to-consumer genetic testing has always lacked a plan for what to do with the result. This has always been known; it's good to see the FDA's action putting a spotlight on the issue once again.

If Inaccuracy Were Illegal, The Feds Would Have To Regulate Most Health Gadgets | TechCrunch

Gregory Ferenstein:

But, it’s hard to see a difference between genetics and exercise, especially if a calorie counter leads users to dramatically cut their food intake.

Equating genetic testing for life-threatening medical conditions with personal activity monitors is just plain wrong. Increasing one’s physical activity or eating healthier is not the same as subjecting someone to the stress of thinking they will develop a disease or, even worse, putting them through needless medical procedures. Those who change their habits due to a physical activity monitor have a feedback loop—as they lose weight, they can adjust their diet and activity accordingly. No such feedback loop exists for the most serious medical conditions tested by these personal genomics companies. Of course there is risk in dramatically changing one’s lifestyle based on an activity monitor, but those risks are smaller, less emotionally taxing, and can typically be avoided.

Note—It is not shocking that these personal activity devices are inaccurate. Many, many studies have shown wide variation in the ability of accelerometers to accurately capture such data.