
How Can Healthcare Ensure Responsible AI Use?

At a recent conference, executives from across the industry shared their thoughts on how the healthcare sector can ensure its use of AI is ethical and responsible. They highlighted the need for collaboration, trust-building and pragmatic thinking.

Over the past decade or so, leaders across the globe have debated how to responsibly integrate AI into clinical care. Though there have been many discussions on the topic, the healthcare field still lacks a comprehensive, shared framework to govern the development and deployment of AI. Now that healthcare organizations have become entangled in the broader generative AI frenzy, the need for this shared framework is more urgent than ever.

At the HIMSS24 conference, held last month in Orlando, executives from across the industry weighed in on how the healthcare sector can make its use of AI ethical and responsible. Below are some of the most notable ideas they shared.

Collaboration is a must

While the healthcare industry lacks a shared definition of what responsible AI use looks like, plenty of health systems, startups and other healthcare organizations have their own sets of rules to guide their ethical AI strategies, Brian Anderson, CEO of the Coalition for Health AI (CHAI), pointed out in an interview.

Healthcare organizations from all corners of the industry must bring these frameworks to the table and forge a consensus for the industry as a whole, he explained.

In his view, healthcare leaders must work collaboratively to provide the industry with standard guidelines for things like how to measure a large language model’s accuracy, assess an AI tool’s bias, or evaluate an AI product’s training dataset.

Start with low-risk, high-reward use cases

There are still many unknowns when it comes to some of the new large language models hitting the market. That is why it is essential for healthcare organizations to begin deploying generative AI models in areas that carry low risk but offer high reward, noted Aashima Gupta, Google Cloud’s global director for healthcare strategy and solutions.

She highlighted nurse handoffs as an example of a low-risk use case. Using generative AI to produce a summary of a patient’s hospital stay and prior medical history isn’t very risky, but it can save nurses a lot of time, making it an important tool for combating burnout, Gupta explained.

Generative AI tools that help clinicians search through medical research are another example, she added.

Trust is key

Generative AI tools can only be successful in healthcare if their users have trust in them, declared Shez Partovi, chief innovation and strategy officer at Philips.

Because of this, AI developers should make sure that their tools offer explainability, he said. For example, if a tool generates patient summaries based on medical records and radiology data, the summaries should link back to the original documents and data sources. That way, users can see where the information came from, Partovi explained.

AI is not a silver bullet for healthcare’s problems

David Vawdrey, Geisinger’s chief data and informatics officer, pointed out that healthcare leaders “sometimes expect that the technology will do more than it is actually able to do.” 

To avoid this trap, he likes to think of AI as serving a supplementary, augmenting function. AI can be part of the solution to major problems like clinician burnout or revenue cycle challenges, but it’s unwise to expect AI to eliminate these issues on its own, Vawdrey remarked.

Photo: chombosan, Getty Images