Why the Creative Destruction of Healthcare May Not Be Such a Good Idea | The Health Care Blog

‘Disrupt’, ‘creatively destroy’, ‘flip’, ‘hack’, etc. are tech buzzwords now commonplace in public discourse about healthcare. But are they helpful, or even accurate?

As Dr Bill Crounse points out in this piece and Brian Palmer noted in a Slate article about medical hackathons, lots of smart people within the medical world have been working on many of these tough problems for years. ‘Destructive’ language minimizes their work and the scope of the problems.

Never mind that these words are so widely used they have virtually lost all meaning.

Health care is far from perfect and change is needed, but what we need is actual improvement, not overwrought rhetoric.

Are Hackathons the Future of Medical Innovation? | Slate

Brian Palmer:

There’s an element of hubris to medical hackathons that can’t be ignored. Medical experts around the world have been trying to solve most of these kinds of problems for years.

Excellent examination of the power and limitations of medical hackathons. It's very difficult to solve medicine's most vexing problems in a 24-hour binge, but you can chip away at the edges and generate many good, albeit nascent, ideas.

A Google Glass App That Would Be Hard for Even the Haters to Hate | recode

Or he could've just asked one of a dozen people in the room with him (nurses, technicians, therapists, residents, med students, etc.) to look at the record for him...

More importantly, anecdotal evidence, while compelling, is...anecdotal.

Also note that the Google Glass being used at BIDMC is not stock:

Wearable Intelligence strips and replaces the Google Glass software with a reformatted version of Android, so it can be locked down for specific uses and specific contexts. Doctors don’t have the option to tweet photos of patients, check their Facebook, or even take the device off the hospital Wi-Fi network. Google’s on-board speech recognition technology is replaced with a more specialized medical dictionary from Nuance.

More cost, more complexity to complete a rather inane task.

Big data: are we making a big mistake? | FT Magazine

With the shortcomings of Google Flu Trends exposed last month, many have jumped at the chance to critique ‘big data’. A recent NY Times article on the subject has been widely circulated.

Rather than spouting off a grocery list of issues, this FT Magazine article provides some insight into the core problems with ‘big data’.

Most notably, they draw a distinction between ‘big data’ and ‘found data’:

But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast…Such data sets can be even bigger than the [Large Hadron Collider] data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of datapoints collected for disparate purposes and they can be updated in real time.

The ease and inexpensiveness of ‘found data’ leads to “theory-free analysis of mere correlations,” which often breaks down thanks to those old statistical curmudgeons: sampling error and sampling bias.
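To make the sampling-bias point concrete, here is a toy simulation of my own (not from the FT piece, and the numbers are invented): even with hundreds of thousands of ‘found’ datapoints, a modest self-selection effect yields a confidently wrong answer.

```python
import random

random.seed(0)

# True population: 50% of people have some attribute of interest.
population = [random.random() < 0.5 for _ in range(1_000_000)]

# 'Found data': people with the attribute are twice as likely to end up in
# the dataset (self-selection), so the sample is enormous but biased.
found = [has_it for has_it in population
         if random.random() < (0.6 if has_it else 0.3)]

print(f"{len(found):,} datapoints collected")                   # ~450,000 records
print(f"estimated prevalence: {sum(found) / len(found):.2f}")   # ~0.67, not 0.50
```

No amount of additional volume fixes this; the error lives in who shows up in the data, not in how much of it you have.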

The whole article is well worth the time to gain some insight into ‘big data’.

A new game plan for concussions | Apple

Beautiful page dedicated to how athletic trainers can use an iPad with the C3 Logix app from the Cleveland Clinic to capture data about concussions on the field.

Is Maintenance of Certification Our Next Tuskegee? | Dr Wes

Dr Wes—a cardiac electrophysiologist and clinical teacher at the University of Chicago—takes the American Board of Internal Medicine to task over their newly mandated Maintenance of Certification (MOC) process. He argues that this new process violates the ethical standards promulgated in the 1979 Belmont Report.

This is a long, well-written critique of the ABIM’s MOC and well worth the time to read it. A few thoughts:

  • I find the comparison of the MOC to the Tuskegee Syphilis Study wholly inappropriate. Internal medicine physicians today are a far cry from poor African American sharecroppers in the rural South of the 1930s. Drawing parallels between the two is disingenuous. Those in the Tuskegee Study were never told they had a disease; they had no recourse, and many died of a treatable illness. Physicians have been told about the MOC process and can formally address their complaints through the ABIM or, as Dr Wes is doing, seek redress through public discussion and pressure on the ABIM. I think it is suitable to frame the discussion within the principles outlined in the Belmont Report, but grossly inappropriate to make a comparison to the Tuskegee Syphilis Study.
  • The charge that the MOC process is unproven could be leveled at virtually all board exams. Little to no evidence exists demonstrating the value of the USMLE Step exams and specialty board exams. We need to critically evaluate how we demonstrate competency in medicine.
  • The costs of all these unproven exams and certifications are staggering. The ABIM MOC program fee is $1,940 plus an additional $775 exam fee. For a subspecialist, the MOC fee is $2,560. Why do subspecialists have to pay $620 more?!? It seems like brazen profiteering off their colleagues.

✚ More information about Apple's Healthbook

I first wrote about Healthbook back in early February when vague, disparate rumors began to coalesce.

Over the past week, more information has been leaked about the forthcoming product. 9to5Mac continues to provide the most information, with detailed descriptions and screenshots of what Healthbook may eventually look like. In a piece later in the week, 9to5Mac profiled Vital Connect’s HealthPatch—a temporary patch worn on the chest to track various biologic parameters. As noted in the article, several Vital Connect employees have recently been hired by Apple.

After 9to5Mac’s piece last Monday, Wired weighed in on the subject, speculating that Apple’s move into this space could take the quantified self/mHealth movements mainstream, with far-reaching implications.

As I said back in February, I am excited to see what Apple can bring to health and fitness. But I also want to reiterate:

The big elephant in the mHealth/quantified self room is that no one has quite figured out what to do with all the data. Some highly motivated quantified selfers are using it to change their habits, but what impact will it have on the rest of the world?

To this point I also want to add that as consumer apps and devices move further into the medical world—through measurement of biologic data such as blood sugar levels or pulse oximetry—the need for evidence of safety and efficacy will grow stronger. Not only will doctors want to know they can rely on the data, but the FDA will also demand evidence of safety. 23andMe ran afoul of the medical regulatory culture. We are comfortable with novel devices counting steps without much research, but not so much when it comes to making therapeutic decisions based on data from untested devices. But maybe Apple has learned from 23andMe’s missteps.

This summer should be interesting.

Doctors and Tech: Who Serves Whom? | The Atlantic

Too many technological systems are built in ways that make sense to computer engineers but not to doctors…

This is the fundamental problem with current EMR systems. We can try to solve this problem by including more doctors in the design process, but ultimately we need more physicians with backgrounds in computer science and design. Medical schools should be actively recruiting computer science majors. And we need to find ways to incentivize developers who are currently working on weather and podcasting apps to develop the next great EMR.

“Every innovation should be tested not just to see if it increases revenue or cuts costs,” [Dr. Paul Weygandt] says, “but also to ensure that it enhances the doctor-patient relationship.”

Providing better electronic tools specifically tailored to facilitate the workflows of physicians will increase revenue and cut costs.

Why is EHR usability overlooked for the sake of innovation? | EHR Intelligence

“My mission is to relieve physician suffering by improving usability of the software they use,” [Jeff Belden, MD] explains. “The problem right now is that doctors have to think really hard and what we know is that a lot of this stuff could be offloaded.”

Providing context-sensitive, timely information to physicians should be the ultimate goal for an EMR. Doctors should not spend much time at all gathering patient information to make a decision. Some companies are working toward this by creating custom views for specific situations, like diabetes or patients requiring anticoagulation.

What we really need is either (1) the ability to easily create our own custom views or (2) APIs that give developers access to the nuts and bolts of an EMR so they can create their own apps for viewing/entering data and writing orders.

The second option is more difficult but far more appealing because it has the potential to support much more robust tools.
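As a rough sketch of what option (2) might look like, imagine a third-party app pulling the data for a custom anticoagulation view through an open, FHIR-style EMR API. Everything here is hypothetical: the endpoint, the token, and the lab code are placeholders, not any vendor’s actual interface.

```python
import requests

EMR_BASE = "https://emr.example-hospital.org/fhir"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <app-token>"}    # hypothetical credential

def latest_inr(patient_id: str):
    """Fetch a patient's most recent INR result for an anticoagulation view."""
    resp = requests.get(
        f"{EMR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": "<lab-code-for-INR>",  # placeholder laboratory code
            "_sort": "-date",
            "_count": 1,
        },
        headers=HEADERS,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    return entries[0]["resource"]["valueQuantity"]["value"] if entries else None

print(latest_inr("12345"))
```

The appeal of this approach is that the EMR vendor only has to get the data layer right; the apps for viewing, entering, and ordering can then compete on usability.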

Why Doctors Still Use Pen and Paper | The Atlantic

David Blumenthal, former National Coordinator for Health Information Technology:

The reason why the medical profession has been so slow to adopt technology at the point of contact with patients is that there is an asymmetry of benefits.

I disagree with Blumenthal's assessment that this is a marketplace problem. It's a usability problem.

Usability of current EMRs is so terrible that physicians do not see a benefit over pen and paper. Think about it in terms of email—would you use email if it required you to write out the message by hand, scan it into a computer, and then wait four days for delivery? Absolutely not. Email provides tremendous advantages in convenience and speed over traditional mail. Doctors are not seeing comparable advantages with EMRs over paper records in their day-to-day work.

With better usability, physicians would be able to do their jobs more easily and efficiently, with the hope of spending more time with patients and less time doing paperwork. Adoption will go through the roof with EMRs that are truly useful for doctors.

Electronic Health Records—Expensive, Disruptive And Here To Stay | Forbes

According to Dr. Handler the answer is simple, “Computers need to do work for physicians rather than making physicians do work for the computer. Technologies should make it faster and easier for the treating physician to view relevant information, to document a useful patient story, and to make the best care decisions."

Exactly.

Typing And Screening My Own Blood | RK.md

Rishi Kumar—an anesthesiology resident—talks about the experience of going to the lab and doing his own type and cross-match (the thing we do to make sure you get matched blood products).

I think we need more of this in medical education. There could be an entire class (fourth year would be a good spot) dedicated to laboratory medicine, in which students actually perform the tests we so often order flippantly, without a second thought to the labor required to produce the result. Not only would it instill respect for the tests, it would also teach us some of their shortcomings and the nuances of their execution.

Malcolm Gladwell: Tell People What It's Really Like To Be A Doctor | Forbes

Gladwell discussing the clerical side of being a physician:

You don’t train someone for all of those years of medical school and residency, particularly people who want to help others optimize their physical and psychological health, and then have them run a claims-processing operation for insurance companies.

Such a profound insight from someone who is not a doctor.

I think in the future, well-designed electronic medical records will help cut down on some of these clerical burdens. There’s absolutely no reason why the bulk of the data insurance companies want can’t be automatically generated from electronic records. Unfortunately, nobody seems to have done this yet (and current EMRs are poorly designed).

If we continue along our current trajectory, two things will happen:

  • More and more doctors will take salaried positions within large practices/hospitals in order to avoid the backbreaking work of running an independent small business in addition to practicing medicine.
  • Direct primary care will become more and more popular so that neither doctors nor patients will have to deal with insurance bureaucracies.

✚ Why bad research makes it into good medical journals—a critique of the Ontario surgical checklist study

This past week, a study in the New England Journal of Medicine called into question the effectiveness of surgical checklists for preventing harm. Atul Gawande—one of the original researchers demonstrating the effectiveness of such checklists and the author of a book on the subject—quickly wrote a rebuttal at The Incidental Economist. He writes, “I wish the Ontario study were better,” and I join him in that assessment, but want to take it a step further.

Gawande first criticizes the study for being underpowered. I had a hard time swallowing this argument given they looked at over 200,000 cases from 100 hospitals. I had to do the math. A quick calculation shows that given the rates of death in their sample, they only had about 40% power [1]. Then I became curious about Gawande’s original study. They achieved better than 80% power with just over 7,500 cases. How is this possible?!?

The most important thing I keep in mind when I think about statistical significance—other than the importance of clinical significance [2]—is that it depends not only on the sample size but also on the baseline prevalence and the magnitude of the difference you are looking for. In Gawande’s original study, the baseline prevalence of death was 1.5%. This is substantially higher than the 0.7% in the Ontario study. As your baseline prevalence approaches the extremes (i.e., 0% or 100%), events become rare and you have to pump up the sample size to detect a given relative difference.

So, Gawande’s study achieved adequate power because their baseline rate was higher and the difference they found was bigger. The Ontario study would have needed a little over twice as many cases to achieve 80% power.
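For anyone who wants to check the arithmetic, here is the kind of quick power calculation I ran, using approximate event rates and per-arm case counts for the two studies (the post-checklist rates and arm sizes below are my rough figures, so treat them as illustrative rather than exact).

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()

def power(p_before, p_after, n_per_arm):
    """Power to detect a drop from p_before to p_after at alpha = 0.05."""
    effect = proportion_effectsize(p_before, p_after)  # Cohen's h
    return analysis.power(effect_size=effect, nobs1=n_per_arm, alpha=0.05, ratio=1.0)

# Gawande's original study: ~1.5% -> ~0.8% mortality, roughly 3,750 cases per arm
print(f"{power(0.015, 0.008, 3_750):.2f}")      # about 0.8

# Ontario study: ~0.7% baseline with a much smaller drop, ~100,000 cases per arm
print(f"{power(0.0071, 0.0065, 100_000):.2f}")  # about 0.4
```

A higher baseline rate and a larger drop buy far more power than raw case counts do.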

This raises an important question: why didn’t the Ontario study look at more cases?

The number of cases in a study is dictated by limitations in data collection. Studies are generally limited by the manpower they can afford to hire and the realistic time constraints of conducting a study. However, studies that use existing databases are usually not subject to these constraints. While creating queries to extract data is often tricky, once you have set up your extraction methodology it simply dumps the data into your study database. You can extend or contract the time period for data collection simply by changing the parameters of your query. Modern computing power means there are few limitations on the sizes of these study databases or the statistical methodologies we can employ. Simply put, the Ontario study (which relied on ‘administrative health data,’ read: ‘existing data’) could easily have doubled its case count.
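As a sketch of how little it takes to widen that window when you are working from an existing database (the table, columns, and dates below are made up for illustration, not the Ontario dataset):

```python
import sqlite3

def extract_cohorts(conn, checklist_date, window_months):
    """Pull the before/after cohorts around a hospital's checklist start date."""
    query = """
        SELECT case_id, surgery_date, died_in_hospital
        FROM surgical_cases
        WHERE surgery_date >= date(?, ?) AND surgery_date < date(?, ?)
    """
    before = conn.execute(
        query, (checklist_date, f"-{window_months} months", checklist_date, "+0 months")
    ).fetchall()
    after = conn.execute(
        query, (checklist_date, "+0 months", checklist_date, f"+{window_months} months")
    ).fetchall()
    return before, after

conn = sqlite3.connect("admin_health_data.db")
cohorts_3mo = extract_cohorts(conn, "2010-07-01", 3)    # the published 3-month window
cohorts_12mo = extract_cohorts(conn, "2010-07-01", 12)  # same query, four times the span
```

Changing window_months from 3 to 12 is a one-parameter change; the marginal cost is essentially the query’s run time.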

Exactly how did they define their study group? As Gawande points out in his critique, the Ontario study relied on this bizarre 3-month window before and after checklist implementation at individual hospitals. Why 3 months? Why not 6 or 12 or 18? They even write in their methods:

We conducted sensitivity analyses using different periods for comparison. [3]

They never give the results of these sensitivity analyses or provide sound justification for the choice of a 3-month period. Three months not only keeps their power low but also fails to account for secular trends. Maybe something like influenza was particularly bad in the post-checklist period, leading to more deaths despite effective checklist use. Maybe a new surgical technique or tool was introduced, like da Vinci robots, or many new, inexperienced surgeons were hired, increasing mortality. In discussing their limitations, they address this:

Since surgical outcomes tend to improve over time, it is highly unlikely that confounding due to time-dependent factors prevented us from identifying a significant improvement after implementation of a surgical checklist.

I will leave it to you to decide if you think this is an adequate explanation. I’m not buying it.

Gawande concludes that this study reflects a failure of implementation of using checklists, rather than a failure of checklists themselves. I’m inclined to agree.

Ultimately, I don’t wonder why this study was published; bad studies are published all the time (hence the work of John Ioannidis). I wonder why this study was published in the New England Journal of Medicine. NEJM is supposed to be the gold standard for academic medical research. If they print it, you should be able to trust the results and conclusions. Their editors and peer reviewers are supposed to be the best in the world. The Ontario study seems to fall far below the standards I expect from NEJM.

I think their decision to accept the paper hinged on the fact that this was a large study that showed a negative finding on a subject that has been particularly hot over the past few years [4]. Nobody seemed to care that this was not a particularly well-conducted study; this is the sadness that plagues the medical research community. Be a critical reader.


  1. Remember, we conventionally aim for a power of 80% (or better).  ↩

  2. Clinical significance refers to the importance of a finding in terms of its impact on something clinically meaningful. To use data from the Ontario study as an example, they show a statistically significant drop in the length of hospital stays from 5.11 days to 5.07 days. Despite this finding’s statistical significance, who cares?! You’re still in the hospital for roughly five days either way.  ↩

  3. I am taking ‘sensitivity analysis’ to mean in this case that they actually looked at various time periods—maybe 6 or 12 or 18 months—to see how their results changed. Usually when people do this, they give some indication of the results of their sensitivity analyses and why they decided to stick with the original plan.  ↩

  4. Yes, checklists are hot. I mean, Atul Gawande wrote a best-selling book about them. Granted, he’s such a great writer that he could spend 300 pages expounding upon why the sky is blue and it would sell.  ↩

Ignorance is not bliss when it comes to health literacy | Healthcare Leadership Blog

Colin Hung:

As I looked at the screen filled with lab results, I realized that I didn’t have a clue what the information was telling me. I had no idea whether the Complete Blood Counts (CBC) or Electrolytes meant the fictitious patient needed an immediate trip to the Emergency Room or a high-five for being so healthy. 

Giving patients access to their medical data is not a complex problem. Developing strategies for helping them make sense of their medical data is incredibly complex. Patients should absolutely have easy access to their medical data, but keep in mind that access will not automatically produce meaningful action.