Trust in AI remains one of healthcare's most significant challenges, and an essential puzzle to solve. With technological advancements like ChatGPT and other large language models (LLMs), the healthcare sector is brimming with possibilities, but the need to address and dispel mistrust and apprehension has never been more critical. To build a foundation of trust for AI in healthcare, we must first examine the core elements of the trust dilemma and explore strategies for implementing transparency and validation.
A recent McKinsey report brought a compelling finding to light: 71% of respondents consider a company's AI policy a crucial factor in their decision-making. This underscores the paramount importance of transparency and openness about AI applications in healthcare. Advocating for organizations to be forthcoming about their AI usage matters, but building trust starts with demonstrating that both the companies pioneering AI-driven solutions and the technologies themselves deserve that confidence.
The role of validation in healthcare AI
In healthcare, trust has traditionally been built on a well-established framework of rigorous regulation spanning the education, training, and certification of healthcare professionals; in the United States, a physician's training alone takes seven to 15 years. Expecting users to readily transfer that level of trust to AI systems developed by data scientists who may or may not have medical backgrounds is an unrealistic prospect.
Comparing the validation of AI systems to the testing of medical professionals through licensing exams, such as the USMLE, falls short of substantiating these systems' validity. Models like Med-PaLM 2 and GPT-4 consistently outperform human averages on such tests. However, surpassing these benchmarks is not enough to guarantee that AI systems are safe for everyday use, primarily because their responses lack explainability.
A call for new laws and regulations
Every industry revolution requires new laws and regulations, and the AI domain is no exception. AI systems should be subject to a dedicated validation process that ensures they consistently deliver explainable and accurate results.
Given that healthcare directly impacts people’s lives, AI-related healthcare systems must undergo rigorous regulatory scrutiny to mitigate risks such as biases or hallucinations affecting patients’ decision-making processes. Transparency in the development, maintenance, and validation of these systems is paramount.
Navigating trust incrementally
Nurturing trust is a step-by-step process, especially in the context of healthcare AI. Despite remarkable advancements, expecting individuals to wholeheartedly embrace AI-driven diagnostic systems overnight is unrealistic. Therefore, a phased and gradual implementation strategy is essential.
In this approach, medical professionals retain a firm grip on the reins, overseeing AI systems as they support physicians by automating administrative tasks, offering valuable insights into patient health conditions, and assisting with various responsibilities. Pivotal decisions, however, remain under the direct control of physicians, who meld their expertise, training, and experience with AI-generated insights to give patients optimal advice. Over time, as AI continues to evolve and trust in the technology grows, these systems can gradually assume more significant responsibilities, allowing physicians to focus on their patients.
Balancing innovation and compassion
The patient-physician relationship extends far beyond mere diagnoses and treatment plans. Patients highly value soft skills such as compassion, empathy, and active listening in their interactions with healthcare providers. A recent study revealed that 57% of respondents expressed concerns that AI in healthcare might erode the patient-provider relationship. Clearly, this is an area where trust-building is of utmost importance.
Trust is the bedrock upon which healthcare is built, and it has traditionally thrived on human connections, empathy, and expertise. AI's introduction into this realm presents a formidable challenge: how to seamlessly integrate advanced technology while preserving the trust and rapport between patients and healthcare providers. The journey to trust in healthcare AI is a complex, ongoing process. Through rigorous validation, unwavering transparency, gradual implementation, and a steadfast commitment to preserving the human touch, healthcare IT providers can demonstrate that AI is a reliable and trustworthy ally in the intricate and evolving landscape of healthcare.
Piotr Orzechowski is the founder and CEO of Infermedica, a digital health company he founded in 2012 that specializes in AI-powered solutions for symptom analysis and patient triage. He began his career as an engineer and software developer in the gaming industry and worked for several smaller businesses and startups before starting Infermedica with the goal of making healthcare more accessible and convenient for everyone.