Genentech writes a check and PatientsLikeMe agrees to share “de-identified” data for 5 years | MedCity

Apparently, PatientsLikeMe’s ‘Openness is a good thing’ philosophy doesn’t apply to their own business—the amount of the payment from Genentech hasn’t been disclosed.

From their ‘Openness Philosophy’ page:

Furthermore, we believe data belongs to you the patient to share with other patients, caregivers, physicians, researchers, pharmaceutical and medical device companies, and anyone else that can help make patients’ lives better.

Shouldn’t it be up to the patients participating in the site to decide who they share the data with? Shouldn’t the patients themselves profit from that sharing?

The Quest To Predict Flu Outbreaks Moves From Google To Wikipedia | Fast Co Exist

Follow-up for my previous post on Google Flu Trends and big data.

You know what might be better than Google or Wikipedia searches? UpToDate data for both influenza-related searches and oseltamivir indications/dosing queries.

In my limited experience, doctors go to UpToDate when the patient in front of them is strongly suspected of having a specific condition and they want to verify either (1) the criteria for diagnosis or (2) treatment options and/or dosing. It seems like UpToDate query data would give a high signal-to-noise ratio for tracking influenza.
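For what it’s worth, here’s a minimal sketch of what that analysis could look like, assuming UpToDate ever released weekly query counts. The file names and columns below (uptodate_weekly_queries.csv, cdc_ilinet.csv, influenza_queries, oseltamivir_queries, pct_ili) are hypothetical placeholders, not real exports.

```python
# Hypothetical sketch: do clinicians' UpToDate queries track the flu season?
# Neither CSV nor its schema is real; both stand in for data that would have
# to come from UpToDate and from CDC ILINet.
import pandas as pd

# Assumed columns: week, influenza_queries, oseltamivir_queries
queries = pd.read_csv("uptodate_weekly_queries.csv")

# Assumed columns: week, pct_ili (percent of outpatient visits for influenza-like illness)
ili = pd.read_csv("cdc_ilinet.csv")

merged = queries.merge(ili, on="week")

# If doctors mostly look up oseltamivir dosing when a likely flu patient is in
# front of them, these correlations should be strong (high signal-to-noise).
print(merged[["influenza_queries", "oseltamivir_queries", "pct_ili"]].corr())
```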

The only hurdle is that pesky problem of getting a private company to turn over their data…

Google Glass is now ‘try before you buy’ | VentureBeat

Kia Kokalitcheva:

Since Glass is obviously not quite “socially acceptable” yet, this could be yet another small attempt at making it seem … normal.

Back in February, Google posted a friendly list of “dos and don’ts” for Glass Explorers, a first step in combating the anti-social aura of Glass.

[…]

Google is trying so hard to make “Glass” happen.

So are many physicians.

Quantified Self Public Health Symposium | Susannah Fox

Interesting discussion in the comments section of this post about the need for physicians who have a deep understanding of data.

Dr Bryan Vartabedian (of 33 charts fame) writes:

Just like we need physician scientists and physician creatives, we need physician data scientists. I suspect these are unicorns…

As usual, Dr V’s analysis is spot on. We do need physicians who understand data, and they do seem to be quite rare.

However, I object to adding more tech/media jargon to the discussion. [1] What we need are physician-biostatisticians.

These are people with an MD and a PhD in biostatistics. The field already exists, and programs to enter it are offered at schools of public health around the country. Unfortunately, they don’t seem to be very popular. Most MD-PhDs seem to pursue basic science fields like molecular biology, biochemistry, physiology, physics, etc., and we need them in those fields to advance the science of medicine. But, with more and more large data sets being created through the spread of electronic medical records, we are going to need some really good physician-biostatisticians.

I briefly considered this path. Ultimately, I decided it was too long a road when I was already starting my medical career a little late. It is a very difficult path. Of the 5 MD-PhD candidates in my original med school class, one dropped out of the PhD program and another is unhappy with their PhD work. We need some strong, smart people (and there are many in medical schools around the country) to take up this yoke.


  1. I don’t know what a ‘data scientist’ is. I know what a statistician and biostatistician are.  ↩

If you want a CT scan that costs $802 less, go to Canada | Vox

It’s no secret that many (most?) health care costs are lower in other developed countries. The real question is—why?

Is the $802 difference solely due to higher reimbursement paid to radiologists in the US to read the scan? [1] Or is it due to a relentless pursuit of advanced technology for a competitive edge in America’s capitalist health care system, meaning we use unnecessarily advanced CT scanners compared to Canada? Or is it pure profit motive? Or is some of the increased cost due to health insurance overhead and malpractice costs?

While it’s interesting to look at these 15 charts comparing costs in the US with those in other developed countries, they provide very little insight into the actual problems.


  1. It’s unclear from this data what the figures actually include (e.g., are physician fees included in these costs?) and whether the figures are even comparable across countries due to differential bundling of facility and physician fees.  ↩

The Mayo Clinic's New Doctor-in-an-iPhone | Fast Company

For approximately $50 a month, the Mayo Clinic is offering unlimited access to the famed hospital's nurses through a smartphone app.

If that's the opening line of your article, how can you possibly justify the use of 'doctor' in your title?!?

Terrible, hyperbolic headline.

How Information Overload Sabotages Our Observation Skills | 99u

Bob Hambly:

It's our job to be better observers. We've become lazy about it.

In this talk, Hambly is addressing designers, but I think his lessons are not only pertinent but prescient for physicians.

The basic job of a doctor is to take in information and make observations in order to generate creative ideas, then order those ideas by likelihood and again generate creative ideas for solutions. The medical colloquialisms are 'differential diagnosis' and 'treatment'. But don't be fooled: these are creative enterprises.

In the world of electronic medical records, molecular testing, advanced imaging, and genomic data, we face the threat of information overload. Reams of information are at our fingertips. No longer do we have to be exhaustive historians, staunch observers of signs and symptoms, or careful practitioners of the physical exam. We've become lazy observers.

Yet, valuable information not contained in the records lies within the history and physical. The subtle symptom overshadowed by more salient problems; the travel history or environmental exposure not previously asked about; the close family member with a similar problem. How do we improve our observational skills?

For designers, Hambly suggests documenting observations in various media. For physicians, I think the key is long-form narratives of the History of Present Illness and careful documentation of the physical exam (our version of documenting our observations). In our quest for efficiency (and more patients and more pay), we've reduced the HPI to a grocery list of signs and symptoms and copied templates of physical exams. While these may fulfill billing requirements, they are not useful later on when things don't turn out as we anticipated and we need to revisit the original patient presentation. Holding ourselves to a high standard also pushes us to gather enough information from the patient to form a thorough narrative. You'll see how much detail you've missed when you try to write one that is comprehensive and readable.

The next best tool for avoiding information overload and improving our observational skills: don't look at the chart before seeing the patient. Gather all the information yourself. Don't rely on the admitting HPI or previous hospitalization notes as the basis for your narrative. Take the patient's own words; then use the other records and clinical data to enrich the narrative.

Such a process takes time, a valuable commodity in medicine. But practice breeds proficiency and speed. We (especially those of us still in training) need to ensure we are practicing the correct process.

Why the Creative Destruction of Healthcare May Not Be Such a Good Idea | The Health Care Blog

‘Disrupt’, ‘creatively destroy’, ‘flip’, ‘hack’, etc. are tech buzzwords now commonplace in the healthcare public discourse. But are they helpful or even accurate?

As Dr Bill Crounse points out in this piece and Brian Palmer notes in a Slate article about medical hackathons, lots of smart people within the medical world have been working on many of these tough problems for years. ‘Destructive’ language minimizes their work and the scope of the problems.

Never mind that these words are so widely used they have virtually lost all meaning.

Health care is far from perfect and change is needed, but what we need is actual improvement, not overwrought rhetoric.

Are Hackathons the Future of Medical Innovation? | Slate

Brian Palmer:

There’s an element of hubris to medical hackathons that can’t be ignored. Medical experts around the world have been trying to solve most of these kinds of problems for years.

Excellent examination of medical hackathons' power and limitations. It's very difficult to solve medicine's most vexing problems in a 24-hour binge, but you can chip away at the edges and generate many good, albeit nascent, ideas.

A Google Glass App That Would Be Hard for Even the Haters to Hate | recode

Or he could've just asked one of the dozen people in the room with him (nurses, technicians, therapists, residents, med students, etc.) to look at the record for him...

More importantly, anecdotal evidence, while compelling, is...anecdotal.

Also note, the Google Glass being used at BIDMC is not stock:

Wearable Intelligence strips and replaces the Google Glass software with a reformatted version of Android, so it can be locked down for specific uses and specific contexts. Doctors don’t have the option to tweet photos of patients, check their Facebook, or even take the device off the hospital Wi-Fi network. Google’s on-board speech recognition technology is replaced with a more specialized medical dictionary from Nuance.

More cost, more complexity to complete a rather inane task.

Big data: are we making a big mistake? | FT Magazine

With the shortcomings of Google Flu Trends exposed last month, many have jumped at the chance to critique ‘big data’. A recent NY Times article on the subject has been widely circulated.

Rather than spouting off a grocery list of issues, this FT Magazine article provides some insight into the core problems with ‘big data’.

Most notably, they draw a distinction between ‘big data’ and ‘found data’:

But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast…Such data sets can be even bigger than the [Large Hadron Collider] data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of datapoints collected for disparate purposes and they can be updated in real time.

The ease and cheapness of ‘found data’ lead to “theory-free analysis of mere correlations”, which often break down thanks to those old statistical curmudgeons: sampling error and sampling bias.
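To see how sampling bias alone can manufacture a correlation, here’s a toy simulation of my own (not from the FT article): two traits that are completely independent in the population look strongly related once we only observe the people who happened to land in the ‘found’ dataset.

```python
# Toy demonstration of how selection into a "found" dataset manufactures a
# correlation that does not exist in the population (Berkson-style bias).
import numpy as np

rng = np.random.default_rng(0)

# Population: two genuinely independent traits.
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)
print("population correlation:", round(np.corrcoef(x, y)[0, 1], 3))  # ~0.0

# "Found data": we only see people who surfaced in the dataset at all, say
# because either trait made them more likely to post, search, or be recorded.
observed = (x + y) > 1.5
print("found-data correlation:", round(np.corrcoef(x[observed], y[observed])[0, 1], 3))  # clearly negative
```

A bigger biased sample doesn’t fix this; it just estimates the wrong number more precisely.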

The whole article is well worth the time to gain some insight into ‘big data’.

A new game plan for concussions | Apple

Beautiful page dedicated to how athletic trainers can use an iPad with the C3 Logix app from the Cleveland Clinic to capture data about concussions on the field.

Is Maintenance of Certification Our Next Tuskegee? | Dr Wes

Dr Wes—a cardiac electrophysiologist and clinical teacher at the University of Chicago—takes the American Board of Internal Medicine to task over its newly mandated Maintenance of Certification (MOC) process. He argues that this new process violates the ethical standards promulgated in the 1979 Belmont Report.

This is a long, well-written critique of the ABIM’s MOC and well worth the time to read it. A few thoughts:

  • I find the comparison of the MOC to the Tuskegee Syphilis Study wholly inappropriate. Internal medicine physicians today are a far cry from poor African American sharecroppers from the rural South in the 1930s. Drawing parallels between the two is disingenuous. Those in the Tuskegee Study were never told they had a disease; thus they had no recourse, and many died of a treatable disease. Physicians have been told about the MOC process and can formally address their complaints through the ABIM or, as Dr Wes is doing, seek redress through public discussion and pressure on the ABIM. I think it is suitable to frame the discussion within the principles outlined in the Belmont Report, but grossly inappropriate to make a comparison to the Tuskegee Syphilis Study.
  • The unproven nature of the MOC process extends to virtually all board exams. Little to no evidence exists demonstrating the value of USMLE Step Exams and specialty board exams. We need to critically evaluate how we demonstrate competency in medicine.
  • The costs of all these unproven exams and certifications are staggering. The ABIM MOC program fee is $1,940 plus an additional $775 exam fee. For a subspecialist, the MOC fee is $2,560. Why do subspecialists have to pay $620 more?!? Seems like brazen profiteering off their colleagues.