This headline depresses me.
It comes from a research letter recently published in JAMA Internal Medicine. In this study, the researchers asked a convenience sample of 61 attendings, residents, and med students the following question:
If a test to detect a disease whose prevalence is 1 out of 1,000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease?
Why do I find these results depressing? Because understanding this concept is fundamental to the practice of medicine. We are talking about test characteristics, specifically positive predictive value (PPV). PPV depends on the prevalence of a disease: the higher the prevalence, the greater the PPV. This is why we take patients' histories and perform physical exams. By asking a few questions and examining a patient, we can identify risk factors that place the patient in a group with a higher prevalence of a condition, which lets us choose appropriate tests with a good chance that a positive result actually means the patient has the disease.
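A quick Bayes'-rule sketch makes the point concrete. The quiz question doesn't state a sensitivity, so, as in the classic version of this problem, this assumes a perfectly sensitive test (every diseased patient tests positive):

```python
def ppv(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# The question's numbers: prevalence 1/1000, 5% false positive rate.
print(f"PPV at prevalence 1/1000: {ppv(0.001, 1.0, 0.05):.1%}")  # ~2%, not 95%

# The same test in a higher-prevalence group (say 10% after history and exam):
print(f"PPV at prevalence 10%:    {ppv(0.10, 1.0, 0.05):.1%}")   # ~69%
```

Out of 1,000 people, roughly one has the disease and tests positive, while about 50 of the 999 healthy people also test positive, so a positive result means disease only about 2% of the time. Restrict testing to a higher-prevalence group and the same test becomes far more informative.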
I truly believe that if we had a better understanding of this concept, we would order fewer diagnostic tests and spare both some anguish and some money.
That is, if you count only exactly correct answers, just 14 respondents (23%) gave one. In their Methods, the researchers use a somewhat looser (but, in my opinion, acceptable) definition of a correct response. ↩
To be fair, this was not the most rigorously conducted study. The sample was neither random nor complete, nor was it very large. The question posed was not validated, and the researchers asked only a single question about the concept. But how rigorous a study do we need? The best data might come from biostats and epi questions on board exams, except that reviewing such concepts is a routine part of prep for those exams. ↩