UNC Health CMIO: Clinicians and AI Must ‘Synergistically Work Together’

AI is meant to have a synergistic relationship with clinicians rather than replace them, said David McSwain, chief medical information officer at UNC Health, in an interview at HIMSS24. In his view, both humans and AI are fallible, but their errors are usually of different types, so working together they can reduce the overall error rate.

In the past decade or so, there has been a lot of debate about how to responsibly integrate AI into clinical care. When having these conversations, it’s essential to remember that AI is meant to have a synergistic relationship with clinicians rather than replace them, pointed out David McSwain, chief medical information officer at UNC Health, during an interview this month at the HIMSS conference in Orlando.

Healthcare leaders often promote the importance of keeping a human in the loop when it comes to using AI for clinical use cases, he noted. 

“We do this not because humans are infallible and algorithms are fallible — but rather because they are both fallible, but generally their errors are of a different type. So, they actually can synergistically work together to reduce the overall error rate,” McSwain explained.

At least for right now, it’s vital to have clinicians in the loop, he said. In his view, this not only protects patient safety but also gives clinicians an opportunity to learn from the new generative AI tools hitting the market.

McSwain thinks it is likely that AI tools will eventually be able to automate clinical tasks entirely. For now, he encourages clinical leaders to come together to figure out what drives generative AI hallucinations (outputs or conclusions that are false or misleading), determine what can be done to prevent them, and identify the highest-risk hallucination types.

He also said clinicians must collaborate to better understand the human factors that may lead to an AI hallucination going unnoticed.

“The human in the loop focus isn’t just about not trusting the AI. The human in the loop approach is about learning — as clinicians and as technology experts — how to handle [AI] most effectively, as well as have it be a safer, higher quality experience for our patients. If we go fully automated now, people aren’t going to understand the technology nearly as well, and there may be a misplaced sense of trust,” McSwain declared.

To illustrate the importance of clinicians learning about AI, he mentioned that UNC is testing an Epic generative AI feature that helps clinicians by drafting responses to patient messages in the EHR.

Initially, when Epic’s tool didn’t know how to respond to a message, it would draft a response telling the patient to schedule an appointment with their provider, McSwain said. When UNC noticed this, it was concerned that the tool appeared to be reducing clinicians’ efficiency by steering patients toward potentially unnecessary appointments.

By setting this as the default response for messages the tool can’t answer, Epic was “missing a golden opportunity,” McSwain remarked. He believes this would be the perfect place to show clinicians where their expertise is needed, and that their work is being augmented rather than replaced.

“The AI can insert a wildcard there instead of requesting an appointment. This would actually draw the clinician’s attention to that spot, with the understanding that this is the part that the AI can’t answer. This is the part where it needs the clinician to answer. That’s how humans and AI can actually really empower each other,” he stated.
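
As a rough sketch of the wildcard idea McSwain describes (this is not Epic’s actual implementation; the placeholder text, confidence scores, and threshold below are all illustrative assumptions), a drafting pipeline could swap low-confidence segments for an explicit marker that draws the clinician’s eye to the exact spot where human input is needed:

```python
from dataclasses import dataclass

# Hypothetical placeholder; any visually loud marker would serve the same purpose.
WILDCARD = "[CLINICIAN INPUT NEEDED]"


@dataclass
class DraftSegment:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0 (assumed)


def assemble_draft(segments: list[DraftSegment], threshold: float = 0.7) -> str:
    """Build a patient-message draft, replacing low-confidence segments with
    a wildcard instead of boilerplate like 'please schedule an appointment'."""
    parts = []
    for seg in segments:
        if seg.confidence < threshold:
            # Flag the gap for the clinician rather than papering over it.
            parts.append(WILDCARD)
        else:
            parts.append(seg.text)
    return " ".join(parts)


if __name__ == "__main__":
    draft = assemble_draft([
        DraftSegment("Your lab results came back within normal range.", 0.92),
        DraftSegment("You should adjust your medication dosage.", 0.35),
        DraftSegment("Feel free to message us with any other questions.", 0.88),
    ])
    print(draft)
    # Your lab results came back within normal range. [CLINICIAN INPUT NEEDED]
    # Feel free to message us with any other questions.
```

The design point is the one McSwain makes: the placeholder tells the clinician precisely which part the AI could not answer, rather than hiding that uncertainty behind a generic default.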

Photo: Natali_Mis, Getty Images