'Real Science' is so much more than controlled trials

During this past week’s #meded chat [1], I stumbled upon the following tweets from Joel Topf MD:

The thing about resident research is that real science is multi-center with hard outcomes. #MedEd 1/3 [original]

The time and $ limitations of residency prevent implementation of these types of research studies so… #MedEd 2/3 [original]

We really are teaching residents to think small and how to do poor research. This is not a good idea. #MedEd 3/3 [original]

The same day, I heard a prominent researcher giving Grand Rounds state, ‘We have to do real science and randomized-controlled trials are real science.’

The notion that ‘real science’ is solely the purview of large, multi-center randomized-controlled trials is a dangerous one. Yet it is a refrain I hear often.

All things being equal, a large, multi-center, randomized, placebo-controlled trial is the strongest method for answering a question [2]. But things are never equal. Research is constrained by the real world. Ethics, time, and money all participate in the process of systematically answering a clinical question.

Often, it is not ethically possible to randomize one group to treatment and one to a placebo. This is generally true when a known effective treatment exists—antibiotics are a good example—and withholding that treatment would harm patients. In such cases, non-inferiority or equivalence trials are conducted.

Time is also a major factor in answering clinical questions in which the clinical sequelae take years or decades to develop. Cancer studies are the classic example here [3]. It is nearly impossible to conduct a randomized trial of almost any exposure we believe leads to cancer (or is protective) because cancers can take decades to manifest themselves. Such a study would be plagued by loss to follow-up and high costs. In such situations, we use case-control studies.

Even though we don’t like to admit it, money frequently determines research priorities and design. One area where large, multi-center, randomized, placebo-controlled trials are routinely conducted is pharmacologic prevention of heart disease. Why? Because pharmaceutical companies stand to make billions of dollars on a single blockbuster drug, and those potential profits fund such research. They use very large trials because these are convincing to doctors and invariably demonstrate ‘statistical significance’ even for very small improvements. In most other areas—especially pediatrics—there is generally not such a free flow of money for expensive controlled trials.
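
To make that last point concrete, here is a small illustrative sketch (the 5% vs 4% event rates and sample sizes are made up, not drawn from any real trial) showing how the same one-percentage-point difference moves from ‘not significant’ to ‘highly significant’ purely by enlarging the sample:

```python
# Illustrative only: hypothetical event rates (5% vs 4%), not real trial data.
# A fixed, tiny absolute difference becomes 'statistically significant'
# once the number of patients per arm is large enough.
from statsmodels.stats.proportion import proportions_ztest

for n_per_arm in (500, 5_000, 20_000):
    events = [int(0.05 * n_per_arm), int(0.04 * n_per_arm)]  # placebo arm vs drug arm
    totals = [n_per_arm, n_per_arm]
    _, p = proportions_ztest(events, totals)  # two-proportion z-test
    print(f"n per arm = {n_per_arm:>6}: p = {p:.5f}")
```

With these made-up numbers, p is roughly 0.4 at 500 patients per arm and far below 0.001 at 20,000 per arm, even though the absolute benefit never changes.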

This past week, we were reminded about the Surgeon General’s 1964 report on the ill effects of smoking. This report relied on over 7,000 documents. None of that evidence came from randomized trials in humans. In fact, the lack of controlled trials is what the tobacco companies used for years as a rebuttal to the medical evidence against smoking. Are we any less convinced today of smoking’s ill effects because they are not supported by the ‘real science’ of controlled trials?

The BMJ illustrated the many limitations of randomized-controlled trials in their classic article, ‘Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials.’ Given such constraints, controlled trials are only possible for a fraction of our clinical questions, yet we answer meaningful questions with good research all the time. Our problem is not a dearth of controlled trials, but an overabundance of bad research of all types. A bad, poorly designed randomized trial never trumps a good cohort or case-control study. Our focus should not be on study type, but on the overall quality of the research.

To Dr Topf’s point about ‘teaching residents to think small and how to do poor research,’ I will grant that this happens frequently [4]. However, it’s not because time and money constraints preclude us from engaging residents in randomized trials, but because mentors seem content to involve them in bad research. Too often, resident research projects are poorly designed chart reviews. This is a recipe for turning residents off research: tedious data collection with little prospect of wide interest.

I am enthusiastic about research in large part because I have avoided projects that require lots of tedious data collection and because I’ve had some early success. The first paper I led was picked up by the mainstream media and has been cited dozens of times. This should never be one’s sole measure of success, but it certainly helps to know that some people think the work you’re doing is worthwhile.

How do we ensure similar success for residents? First and foremost, make sure we are asking clinically meaningful questions. Mentors are crucial for this. While a resident (or med student) may see a pattern in their clinical experience and want to explore that further, a mentor must guide the question with their knowledge of the research world’s context. This involves shaping the original question to be meaningful in light of existing research. Mentors have the unique perspective of ‘knowing the field’. I have proposed many research questions that were subsequently modified or outright turned down because my mentors correctly recognized they would be low-value in light of existing or planned research.

Second, use the power of electronic medical records to avoid tedious data collection. EMRs and large databases allow us to extract datasets in a fraction of a second to answer clinical questions. I truly believe if residents are given the opportunity to spend the bulk of their time manipulating data, instead of collecting it, they will enjoy the research process much more. Alternatively, give residents access to an existing dataset for subanalysis; anything to avoid tedious data collection.
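
As a rough sketch of what that workflow can look like (the file name and column names below are hypothetical, and a real project would query the institution’s data warehouse with appropriate IRB approval), pulling a cohort from a de-identified extract might be as simple as:

```python
# Minimal sketch: defining a study cohort from a de-identified EMR extract.
# 'emr_extract.csv' and the column names are hypothetical examples.
import pandas as pd

encounters = pd.read_csv("emr_extract.csv", parse_dates=["admit_date"])

# Example inclusion criteria: pediatric admissions during 2013 with an HbA1c result.
cohort = encounters[
    (encounters["age_years"] < 18)
    & (encounters["admit_date"].dt.year == 2013)
    & (encounters["hba1c"].notna())
]

print(f"{len(cohort)} encounters meet the inclusion criteria")
print(cohort["hba1c"].describe())  # time goes to analysis, not chart abstraction
```

The point is not the particular tools but that the extraction step takes minutes, leaving the resident’s time for analysis and interpretation.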

Finally, it is important to champion residents’ research. While it is great to support the dissemination of resident research through an institution’s own conference or research day, it is much more fruitful to present elsewhere. Here, mentors can make sure they target the proper conference or journal for dissemination. Such targeting should begin as early as the conceptualization and design phase (an ‘insider’s trick’ not often talked about).

I truly believe research should be an integral part of medical education, both at the undergraduate and graduate levels. Evidence-based medicine is pervasive throughout health care today. Trainees need an understanding of the utility of evidence and, more importantly, its shortcomings. The best way to gain this understanding is to participate in generating such evidence. It gives trainees a rare, behind-the-scenes look at the outwardly glossy, but inwardly messy research world.


  1. I was not able to participate in this #meded chat. If you also missed it, you can read up using the transcript.  ↩

  2. Arguably, a meta-analysis is the definitive research study. However, in terms of single studies, controlled trials still reign.  ↩

  3. To be clear, I’m talking about carcinogenic exposures, not oncologic therapeutics.  ↩

  4. Prior to medical school, while working full-time as a researcher, I had the opportunity to work with a few residents on research projects. I’m not coming to this topic completely cold.  ↩