An inch deep and a mile wide

Recently, Dr. CT Lin, CMIO at University of Colorado Health, wrote about his fear of “automation complacency” when it comes to practicing medicine in the era of the large language model (LLM) and generative AI. Allow me to explain. Physicians are quite elated about the possibility of having an AI take over administrative tasks for which we are currently responsible, but about which we have minimal enthusiasm. I’m referring to jobs such as responding to routine patient messages, dealing with third parties such as insurance companies, and searching vast electronic records for pertinent clinical information. These are, relatively speaking, low-hanging fruit for ChatGPT-like tools.

One major burnout-inducing task all physicians take on is documentation. We have to write progress notes whenever we see a patient in the office or the hospital. In the not-too-distant past, this involved pen and paper. It was quick and relatively painless. To say we wrote concisely is putting it mildly. There were many cryptic abbreviations, but once you got past those, it didn’t take long to get up to speed with the important aspects of the patient’s care.

Today, precious few hospitals or offices still use paper for documentation. The electronic health record (EHR) is omnipresent, and with it, we’ve seen short notes become long notes (i.e., note bloat). It’s easy to pull discrete data such as lab results, imaging reports, and even other doctors’ notes into the daily progress note. To add to the mayhem, regulators and payers have spent the past few decades demanding more and more information before agreeing to pay claims, so clinicians have become accustomed to throwing the kitchen sink into the note, just in case.

Whether these notes are written in chicken scratch on a piece of paper or typed into a fancy EHR, Dr. Lin posits that the process of generating the progress note is important. It’s often the time that the physician uses to reconsider the history, physical exam findings, lab and imaging results, and other data points before committing to a plan of care. There’s a valid concern that if we hand off the writing of most of the progress note to an AI, we’ll miss out on that essential time to ponder.

The idea that ChatGPT or something like it could listen in on the office visit or hospital rounds and write our notes for us is not some futuristic fantasy. It’s happening now, but we haven’t seen widespread adoption because the tools are not yet ready for prime time. We’re getting there, but it’s taking time. Still, with the phenomenal improvement in generative AI over the last six months, physicians are asking why they can’t have these tools right now.

What about automation complacency? It has been seen in non-healthcare industries for years. Airline pilots must actively work to keep their skills sharp so that if (or when) an emergency happens, they can take over tasks that are largely handled by computers and their algorithms. It’s easy to think, “I don’t need to know how to do that because the system handles it.” And that’s true … until it isn’t.

We see automation complacency when physicians complain that the EHR’s clinical decision support (CDS) module didn’t remind them to order that lab or warn them away from that expensive medicine. While it would be great if CDS were always spot on, that’s not how it works. The computer “knows” only what we tell it, and, further, only what we tell it in a form it can “understand.” If the tobacco history isn’t documented in the tobacco history box, then our fancy tools won’t work as designed.
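To make that concrete, here is a minimal sketch of how a rule-based CDS check behaves when the discrete field is empty. The field names and thresholds below are hypothetical, invented purely for illustration (no vendor’s actual schema or criteria); the point is only that the rule fires on structured data, so a smoking history that lives in free text never triggers it.

```python
from typing import Optional

# Hypothetical sketch of a rule-based CDS reminder. The field names
# ("tobacco_history", "pack_years") and thresholds are invented for
# illustration, not any real EHR's schema or guideline logic.

def lung_screening_reminder(patient: dict) -> Optional[str]:
    """Return a screening reminder, or None if the rule doesn't fire."""
    tobacco = patient.get("tobacco_history")  # the structured "box"
    if tobacco is None:
        # A heavy smoking history buried in a free-text note is
        # invisible here, so the reminder silently never fires.
        return None
    if patient.get("age", 0) >= 50 and tobacco.get("pack_years", 0) >= 20:
        return "Consider low-dose CT lung cancer screening."
    return None

documented = {"age": 62, "tobacco_history": {"pack_years": 30}}
undocumented = {"age": 62}  # smoking mentioned only in narrative text

print(lung_screening_reminder(documented))    # reminder fires
print(lung_screening_reminder(undocumented))  # None -- no reminder
```

Both patients have the same history; only the one whose history landed in the structured field gets the reminder.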

How do we fight against the tendency to let the computer do the thinking? What steps can we take to make it easy for physicians to use the technology in the most responsible way? I think the answers involve educating physicians on how these AIs work while simultaneously configuring the technology to stay in its lane.

Doctors do not need to become software developers or data scientists, but they do need to learn the limits of the technology. Many years ago, I spent some time with medical records coders. These folks are not clinicians, but they have extensive training in medical terminology and in the rules for assigning billing codes to office visits and hospital stays. They would ask me to look at a doctor’s note and then answer their questions. For instance, did this surgeon perform an excisional or an incisional biopsy? Did I think the doctor meant anemia from blood loss or from iron deficiency?

As I interacted with these coders, I began to forget that they didn’t have any actual clinical experience. They never took care of patients; they just read the documentation. Still, they could go toe-to-toe in the world of “doctor speak.” I was reminded that their medical knowledge was a mile wide but only an inch deep whenever I engaged them in discussion as if they were a doctor or a nurse. That’s when I remembered that they weren’t, in fact, clinicians. They could throw the big words around (tachycardia, anyone?) but typically couldn’t discuss the physiology or anatomy behind them.

This is what I think physicians need to understand about AI. Right now, its “knowledge” is wide but not deep. The words are there, but the underlying reasoning and rationale aren’t, at least not yet. Hence, it’s imperative that doctors leverage the tool without overestimating its abilities. Of note, tests comparing GPT-3.5 to GPT-4 are showing dramatic improvements in “understanding,” so perhaps by the time physician training rolls around, I’ll have to seriously reconsider this position!

We should also consider not allowing an AI to autogenerate the assessment and plan part of the note. Or at the very least, we should ensure that doctors spend much of their time in the note on the assessment and plan. That’s where the value is, and to have an AI create it without significant oversight and editing by the clinician would be a mistake.

Dr. Lin and his colleagues are right to worry about physicians becoming overly reliant on technology, whether AI or something else. Yet I think that, through education and by setting limits in the EHR, we can find a sweet spot in how we use these tools to care for those entrusted to us.
