Healthcare providers must start thinking about ways to build consumer trust in generative AI in order for the industry to fully harness the technology’s potential, a new report suggests.
The report, released Thursday by Deloitte, is based on a March survey of more than 2,000 U.S. adults. Its findings show that consumers’ use of generative AI for health reasons — to answer questions about symptoms, help them find a provider in their network or provide guidance on how to care for a loved one — did not increase from 2023 to 2024.
In fact, consumers’ generative AI usage for health reasons has decreased slightly. The report showed that 37% of consumers are using generative AI for health reasons in 2024, compared to 40% of consumers last year. A major reason for this stagnant adoption is consumers’ growing distrust of the information that generative AI provides, the report stated.
Hospitals can build greater patient trust in generative AI models through methods like having transparent conversations, asking for patients’ consent to use the tools and training models on internal data, according to an AI expert at Deloitte and clinical leaders at health systems.
What do hospitals need to know about consumer attitudes toward generative AI?
Compared to Deloitte’s 2023 survey on consumers’ attitudes toward generative AI in healthcare, distrust in the technology has increased across all age groups — with the sharpest jumps occurring among Millennials and Baby Boomers. Millennials’ distrust in the information provided by generative AI rose from 21% to 30%, while Baby Boomers’ distrust rose from 24% to 32%.
Consumers have free rein to experiment with generative AI and use it in their daily lives, thanks to the availability of public models like OpenAI’s ChatGPT and Google’s Gemini, pointed out Bill Fera, principal and head of AI at Deloitte. Many Americans have ended up receiving questionable or inaccurate information when using these models, and these experiences may be leading people to view the technology as unfit for use in healthcare settings, he explained in an interview.
“I think the take-home message for hospitals is that if they’re going to use these large language models — which we think they should and they will — they need to have them trained on more of a medical database to create better results,” Fera remarked.
Free, publicly available large language models aren’t trained on specific patient data and therefore aren’t always accurate when answering healthcare-related questions. A recent study found that ChatGPT misdiagnosed 83 out of 100 pediatric medical cases.
Ideally, hospitals should be training their generative AI models on their own patient data, using synthetic data or data from similar healthcare providers to fill in any gaps, Fera said.
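As a rough illustration of what training on internal data can look like in practice, here is a minimal fine-tuning sketch built on the open-source Hugging Face libraries. The base model, file name and hyperparameters are assumptions made purely for illustration; neither Fera nor the report prescribes a specific stack.

```python
# Minimal sketch of fine-tuning an open model on an organization's own
# (de-identified) clinical text. The base model, file name and hyperparameters
# are illustrative assumptions, not recommendations from the report.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in; a hospital would choose a suitable open model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical file: one de-identified internal Q&A example or note per line.
dataset = load_dataset("text", data_files={"train": "internal_clinical_text.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clinical-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False selects standard causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Synthetic or partner data would simply be appended to the training file here; the mechanics of the run stay the same.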
In addition to this, Fera recommended that hospitals educate their patients about how and why generative AI is being used at their organization — as well as pay attention to patients’ feedback. He noted that this type of transparency should be “non-negotiable.”
And it would go a long way toward building trust, given that patients are demanding this transparency as well. Deloitte’s report showed that 80% of consumers want to be informed about how their healthcare provider is using generative AI to influence care decisions and identify treatment options.
If hospitals take the time to walk patients through the generative AI models they’re applying to patient care and the benefits those models are designed to deliver, patients can gain a true understanding that the AI isn’t there to replace their doctor, but rather to augment the doctor’s ability to provide better quality care, Fera said.
For example, a clinical documentation tool is not designed to completely automate the generation of clinical notes. Rather, it’s there to collect data and create a draft of the note for the clinician to edit and ultimately approve. This doesn’t take the task of documentation away from the clinicians, but it greatly reduces the amount of time they spend on this menial process, allowing them to practice at the top of their license.
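To make that division of labor concrete, below is a minimal Python sketch of the draft-then-approve pattern. The draft_note function is a hypothetical stand-in for whatever vendor model actually generates the draft; the point is that nothing is filed until a clinician edits and signs off.

```python
from dataclasses import dataclass

@dataclass
class ClinicalNote:
    draft: str              # AI-generated starting point
    final: str = ""         # clinician-edited text
    approved: bool = False  # nothing is filed until a human signs off

def draft_note(visit_data: str) -> str:
    """Hypothetical stand-in for a vendor's documentation model."""
    return f"DRAFT (clinician review required):\n{visit_data}"

def finalize(note: ClinicalNote, clinician_edits: str) -> ClinicalNote:
    # The clinician, not the model, produces the record of truth.
    note.final = clinician_edits
    note.approved = True
    return note

# Usage: the model drafts, the clinician edits and approves.
note = ClinicalNote(draft=draft_note("Patient reports three days of dry cough, no fever."))
note = finalize(note, clinician_edits="HPI: 3 days of dry cough, afebrile. Plan: ...")
print(note.approved)  # True only after human sign-off
```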
In Fera’s view, being clear about the benefits that generative AI could provide will be a key way for healthcare providers to engender patient trust in the technology going forward.
Explaining the benefits
Americans’ understanding of technology differs greatly from person to person, and large portions of the population may not know exactly what the term AI refers to, pointed out Deb Muro, chief information officer at Bay Area-based El Camino Health. Because of this, some patients might feel frightened when they first hear that a non-human form of intelligence is being used in their care — but their feelings will most likely change once the technology is thoroughly explained to them, Muro noted.
“When I talk with other leaders in information technology, we all comment that we’ve been using AI for years. It’s just that generative AI has added an additional flavor. That’s exciting,” she declared.
In Muro’s view, generative AI can be thought of as a research partner. The technology uses data to produce content for clinicians, such as the draft of a clinical note, a summary of patient records or an overview of medical research. Clinicians always have the final say in care decisions, so generative AI is by no means replacing their expertise. Instead, it is reducing the number of mundane, data-oriented tasks clinicians have to complete so they can spend more time with patients.
When having conversations with patients, providers must make sure that they understand this, Muro remarked.
Providers should also be clear about the specific use cases to which they are applying generative AI, as explaining these use cases will give patients a better idea of how the technology might stand to benefit them, she added.
For example, a doctor treating a patient may want to check how patients with similar symptoms and profiles were cared for in the past. Instead of digging through records and filtering them manually, the clinician can ask a generative AI tool a simple question and get started on devising a care plan for their patient much sooner, Muro explained.
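One simple way to picture that lookup is as a nearest-neighbor search over de-identified past cases. The sketch below uses plain cosine similarity over small example vectors; the vectors and case IDs are invented for illustration, and a real system would generate them with an embedding model rather than by hand.

```python
import numpy as np

# Invented embeddings of de-identified past cases; a real system would
# produce these with an embedding model run over case summaries.
past_cases = {
    "case_001": np.array([0.90, 0.10, 0.30]),
    "case_002": np.array([0.20, 0.80, 0.50]),
    "case_003": np.array([0.88, 0.15, 0.25]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: np.ndarray, cases: dict, top_k: int = 2) -> list:
    """Rank stored cases by cosine similarity to the current patient's vector."""
    ranked = sorted(cases.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return ranked[:top_k]

# "Patients with similar symptoms and profiles" becomes a nearest-neighbor query.
query = np.array([0.92, 0.12, 0.28])  # embedding of the current patient's summary
print(most_similar(query, past_cases))  # case_003 and case_001 rank highest
```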
Relationships are at the heart of trust
Another health system executive — Patrick Runnels, chief medical officer at Cleveland-based University Hospitals — agreed that providers need to take the time to explain how AI is being used to enhance care. He thinks these conversations are most meaningful when they happen directly between a patient and their care team.
Strong provider-patient relationships are key to building trust in the healthcare world, Runnels pointed out. Patients are more likely to understand and accept the benefits of generative AI tools when they are explained by a provider who they know and are comfortable with, he explained.
“Patient-provider connectedness has to stay front and center,” Runnels declared. “You can say, ‘We’re your care team — generative AI is helping us sort out your care on the back end, but you will always have a connection with your nurse or social worker or doctor.’ You have to centralize that idea and demonstrate that is always the case. AI does not take away relationships, which are central to trust. And if you don’t have trust, then everybody’s paranoia is gonna go nuts.”
Ashis Barad, chief digital and information officer at Pittsburgh-based Allegheny Health Network (AHN), also stated it should be the care team’s responsibility to inform patients about generative AI use cases.
For instance, AHN is preparing to roll out an inpatient virtual nursing program that involves generative AI. When the program is launched, AHN’s nurses will be trained on how to carefully explain the new technology-enabled care model to patients.
The nurses’ training will prepare them to communicate that they are still present and active members of the patient’s care team, Barad explained. He said the central message of these conversations should let patients know that nurses aren’t being replaced, but rather given tools to help them better care for patients.
Emphasize data protections and ask for consent
Another important way to build consumers’ trust in generative AI is to be transparent about the data these models are trained on, Barad said.
AHN recently rolled out a new generative AI tool called Sidekick, which Barad said can be thought of as the health system’s own version of ChatGPT. The tool is available to all of AHN’s 22,000 employees, as well as the 44,000 people employed by its parent company, Highmark Health. It was trained solely on AHN’s and Highmark’s own data, Barad noted.
The fact that AHN and Highmark jointly developed their own tool using data specific to their patient populations should make people feel much more comfortable than if AHN were to use an AI tool trained on general data, he explained.
“It’s trained on our own data, and it’s closed, so there’s no leakage whatsoever. That allows us to then put anything we want into it. And we have firewalls between Highmark and AHN, so as far as PHI (personal health information) protection is concerned, it’s all been figured out,” Barad said.
He also noted that some generative AI use cases may require express consent from the patient before they are deployed. Ambient listening tools used during a physician-patient visit are a key example.
These tools — made by companies such as Nuance, DeepScribe and Abridge — listen to and record patient-provider interactions so they can automatically generate a draft of a clinical note. Like many other health systems across the country, AHN is using ambient documentation technology and asking patients for their consent before each visit, Barad said.
When talking to patients about these AI models, clinicians explain that the tools are designed to keep them from having to type throughout the entire visit, giving them more time to maintain eye contact with patients and be present.
Barad is a clinician himself — a pediatric gastroenterologist who still practices. He said that because he clearly explains the benefits of ambient listening technology to patients, he hasn’t had one patient withhold their consent.
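Mechanically, per-visit consent can be enforced as a hard gate in software: the recorder simply never starts without an affirmative answer. The sketch below is illustrative only; the function and field names are assumptions, not AHN’s or any vendor’s actual API.

```python
class ConsentRequired(Exception):
    """Raised when ambient documentation is attempted without patient consent."""

def start_ambient_session(visit_id: str, consent_given: bool) -> dict:
    # Consent is confirmed per visit, before any audio is ever captured.
    if not consent_given:
        raise ConsentRequired(f"No recording consent recorded for visit {visit_id}")
    return {"visit_id": visit_id, "recording": True}

# Usage: the clinician records the patient's verbal answer at the start of the visit.
session = start_ambient_session("visit-1234", consent_given=True)
print(session)
```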
The industry may need to collaborate to establish patient education standards
AHN’s neighbor health system, UPMC, is also using ambient documentation technology and requires verbal consent before the tool is deployed during an appointment. To Robert Bart, UPMC’s chief medical information officer, this is a use case that clearly necessitates patient consent, since patients are being recorded. But there is no industry standard, he declared.
“It remains to be seen as to what additional types of consents and/or documentation need to occur in the future,” Bart said.
For example, Deloitte’s report suggested that in coming years, hospitals may start putting disclaimers on clinical recommendations that were produced with the assistance of generative AI. There is no industry standard to let hospitals know when that is necessary and when it’s not, Bart pointed out.
But the healthcare industry may need to start establishing standardized protocols for patient education around generative AI use sooner rather than later — because utilization of this technology is only going to grow, he said.
“I inherently believe that the artificial intelligence-enabled physician will be better able to make the best decisions for patients than those who are naive to artificial intelligence in the future,” he declared.
Photo: steved_np3, Getty Images