As far as I can tell, there are 2 ways to read electrocardiograms. The first is to head to the library, grab a few textbooks, squint until your eyesight is blurry, and try desperately to avoid filleting your fingers while running them up and down those jagged lines — all with the hope of eventually internalizing the dozens of nuances implicit in interpreting those cryptic cardiac fingerprints. After what I’m sure will only feel like thousands of hours, you’ll be reasonably skilled at deciphering those mysterious strips, and you’ll probably also have developed some excellent coping mechanisms for managing the omnipresent specter of having missed something. Easy as pie.
If, on the other hand, all of that strikes you as needlessly old-fashioned and labor intensive, you always have the option of just sneaking a peek at the top right corner of the electrocardiogram (ECG) printout. There you will find a preliminary diagnosis as interpreted by an unseen, but apparently self-assured, computer. With the help of this machine’s invisible brain, an 8-year-old — or even an intern — can read an ECG, and the whole process is compressed into approximately 3 seconds.
The automated diagnosis suggestion at the top of the ECG strip is the output of what is known as a decision support system (DSS). These systems have crept into every crack and crevice of contemporary clinical medicine. They do everything from helping radiologists interpret mammograms to triaging potential intensive care unit admissions to guiding primary care doctors as they funnel patients to the right specialist.
Since these tools can dramatically decrease the time and manpower needed to perform a variety of tasks, hospital chief financial officers may be tempted to view these systems principally as drivers of efficiency gains and thus intrinsically valuable, regardless of actual diagnostic performance. That analysis may be as short-sighted as it is cynical. It should go almost without saying that from a medical (and, more to the point, patient-centric) standpoint, these adjuncts are only truly valuable if their presence pushes doctors toward higher-quality decisions or diagnoses.
That being said, many DSS results are actually pretty accurate. Take ECG interpretation, for example. A study of internal medicine residents showed that while they made the right call in 49% of cases with no automated help, their diagnostic accuracy jumped by more than 10% when they had the computer as a co-pilot.1 That’s reassuring, but not entirely unexpected. After all, these systems are in widespread use precisely because we have an intuitive (if not always empirical) understanding that DSSs have the power to improve our diagnostic accuracy. What’s more interesting is that in the cases in which the DSS made the correct diagnosis, the residents’ judgment improved by approximately 30%. However, when the computer was wrong, the residents’ accuracy fell roughly 15% below what it otherwise would have been. And this isn’t an isolated finding — a study of mammogram readers showed a 14% decrease in diagnostic sensitivity when the image was accompanied by the DSS’s interpretation.2 It seems that, at least in some cases, using DSSs as a crutch has atrophied our diagnostic muscles.
If the problem is just atrophy, then the solution is clear: exercise. The availability of limitless digital storage should make it simple to design training software to help keep our ECG or mammogram (or whatever else) diagnostic skills sharp, and periodic demonstration of diagnostic competence should be incorporated and emphasized in licensure requirements.
However, unless we truly believe that we’re on the cusp of a world in which DSSs are both entirely flawless and completely pervasive, it’s still critical that doctors be capable of performing these fundamental tasks — if only so that patients aren’t left in the lurch when the wireless internet is down or the power is out. Unfortunately, the idea of mandated practice as a way to maintain our competencies does nothing to address the potentially larger concern that we’re becoming increasingly reliant on computers and algorithms to do our doctoring for us.
Since the subjects in the studies I previously mentioned are human, they exhibit something called anchoring bias, which is to say that their eventual conclusions are substantially influenced by information presented at the outset of the analysis. Anchoring bias is a powerful thing — it’s in play every single time we make a judgment under uncertainty3 — and, for better or worse, has also crept into every crack and crevice of contemporary medicine.4,5
I’d say better, at least in this case. Even though the term “bias” is beset by obvious negative connotations, these studies and others demonstrate that DSSs are, by and large, better at these sorts of tasks than humans are. Following this data to its logical conclusion inevitably leads us to a paradox: the stronger the anchoring bias, the better the diagnostic accuracy.
Consequently, the only rational, patient-centric response (aside from rigorously identifying the particular situations where DSSs perform poorly) is to lean into the bias, and task these tools with handling as many diagnostic problems as we can. Besides, the opposite approach — paring back our use of DSSs — is quixotic to the point of being untenable. We can’t un-ring that bell, and nobody wants to go back to churning our own butter.
But don’t get me wrong. Doctors — the organic version — will always be indispensable: as holistic thinkers, as solvers of complex problems, and even as fail-safes. But computerized diagnostic aids are here to stay. I, for one, welcome our new DSS overlords.
References
- Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10(5):478-483.
- Povyakalo AA, Alberdi E, Strigini L, Ayton P. How to discriminate between computer-aided and computer-hindered decisions: a case study in mammography. Med Decis Making. 2013;33(1):98-107.
- Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. In: Utility, Probability, and Human Decision Making. Dordrecht, The Netherlands: Springer; 1975:141-162.
- Cain DM, Detsky AS. Everyone’s a little bit biased (even physicians). JAMA. 2008;299(24):2893-2895.
- Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9(11):1184-1204.