Fulfilling the Quadruple Aim of Healthcare with AI

The future of healthcare holds unlimited possibility with technology, but the way we get there matters.

As a resident, I used to have to run and grab the MRI films from the basement of the building to visualize my patients’ brains. In fact, if you went down while the patient was still in the scanner, you could grab the films while they were still warm off the printer. When other teams wanted to see the imaging, they had to page you to find the films and a light box to review them. Today, I can see images from just about anywhere – my car, my home, a different state. Technology transformed the way we all cared for patients and made us more efficient, albeit with fewer steps running to the basement.

In the 1950s, medical knowledge doubled every 50 years; by the 1980s, it was every seven years. Now, it is estimated to double every 73 days. It has literally exceeded anyone’s hippocampal capacity to keep up. In the same vein, technology continues to reshape the healthcare landscape, and harnessing these applications across the care continuum is critical – but it must be done effectively and ethically.

Why the Quadruple Aim of healthcare is at risk

Physicians spend over 15 and a half hours per week on paperwork and administrative tasks according to new research, plus nearly an hour and a half of “pajama time,” or after-hours work, on those same tasks. Nurses are spending more than half their shift dealing with electronic health record (EHR) documentation. To say the least, healthcare teams are tasked with handling a lot of data, but managing EHR documentation and patient portal communications can take critical time away from patients and from clinicians’ own lives – and further compound burnout. In fact, over 38% of healthcare employees report being at risk of burnout and 39% are actively considering leaving their organization, leading to ongoing challenges with staffing, long patient wait times, and lack of satisfaction for patients and providers alike. The Quadruple Aim of healthcare – improving patient experience, advancing population health, reducing costs, and supporting clinician well-being – is at risk.

I often hear executives talk about building the plane while flying it when they reference innovation – we live in fee-for-service environments while others are seeking actual value, and people are being asked to do more with less. New buildings are going up, more slots are being added, and online booking is rolling out as healthcare works to grow – and yet some patients still can’t access care easily. Clinicians can now sign forms online, iPhones are replacing pagers, and records we once hired couriers to retrieve are now visible in the EHR – and yet we still spend too much time typing.

Utilizing technology to digitize processes can not only take administrative work off of overburdened and burned-out clinicians, but also streamline operations for the entire healthcare ecosystem, including for patients. For instance, artificial intelligence (AI) can review unstructured data from patient messages and calls in the context of a patient’s history and clinical diagnoses to help triage for priority and respond automatically to less critical messages. Alternatively, AI can suggest the next best action for a given patient both clinically and – even more interestingly – emotionally. Remote monitoring paired with AI can help the industry better predict and support life-saving clinical decisions, allowing timely interventions, tactical care plans, and better outcomes for patients.

The jobs to be done by AI

  • Clerical burden

If clinicians are exhausted by clerical burden – and they are – the first job to be done is to alleviate it. Some issues don’t need more study: documentation, insurance denials, and prior authorizations. Organizations have been trying with virtual scribes or phone-based dictation services, but many cannot afford these services for everyone. Imagine a first edition powered by AI wherein notes are summarized via ambient listening directly in the EHR. A next chapter would include notes that are not only summaries but are also created in the context of the patient and their existing data, leveraging generative AI to identify the most important clinical elements in their care and help focus attention. Another chapter would triage patient messages from portals and phone calls and resolve many without any human intervention, because the job to be done would be readily identifiable with AI. Prior authorizations and insurance denials — not only time-consuming, but also soul-sucking — are addressable with AI today. Prioritizing the administrative burden as a focus in transformation is especially important because all the empathy and well-being efforts on the planet can’t fix the inefficient and broken processes clinicians face every day.

  • Quality and safety

Healthcare systems everywhere are on journeys to deliver highly reliable care, yet this will forever be elusive if staff are burned out. A myriad of data points to higher complication rates, substantial malpractice costs, frequent medical errors, and near misses when staff are not at their best. Predictive algorithms for sepsis, readmissions, and deep vein thromboses (DVTs) have blossomed, and patient engagement tools have repeatedly demonstrated impact. AI will accelerate quality and safety efforts through analytics of large data sets and generation of early signals of risk and harm in places we just don’t have the bandwidth to see yet.

  • Experience

Often we think of experience as distinct from quality and safety efforts, and yet engaging the patient as part of the care team is a critical component of how we partner. Activated patients – those who are taking maximal advantage of the resources available to them to manage their health – have lower annualized costs than those who are less activated. AI will bring more effective ways to communicate with patients, deliver education, and recommend care paths that connect in ways that resonate. Several studies also continue to show that AI, when compared to human-generated responses, can consistently be rated as more empathic. That is not to say that we don’t need humans or that I can’t continue to deeply connect with the person in front of me, but rather that AI will help scale connection. Large language models are already detecting emotion more accurately, and understanding it over time and space, to personalize in ways that are meaningful. We will do a better job predicting when patients are going to leave or not get the care they might need, and how emotion changes over the employee lifecycle.

What’s ethical is not always legal and what’s legal isn’t always ethical

The future holds possibility with technology constantly evolving, but the way we get there matters. Using augmented intelligence, which bakes human oversight into artificial intelligence, is critical. Doing so ensures datasets are inclusive and representative, controlled for bias, and both high quality and protected – putting ethics at the forefront of care.

In the new era of augmented intelligence in healthcare, the four principles of medical ethics — beneficence, nonmaleficence, autonomy, and justice — need to be front and center. While advanced technology like AI and ChatGPT can aid in administrative tasks by directly engaging and triaging patients, it’s imperative that there is human oversight.

There are innumerable possibilities for how AI can be used. But rather than start there, organizations should take a look at what their articulated strategy already is. Depending on what’s most important to accomplish, AI likely already has a role. Standing up AI governance and guiding principles that are aligned to values is harder than it sounds, because it also requires alignment on who should be at the table. One important group to include at the start is ethicists, to help tackle how complex questions of autonomy, rights, and equity will be prioritized in any solution. To the extent possible, there needs to be clear and specific guidance for patients to understand how their data is being used now and in the future. And to carry it forward a bit: informed consent. We’re not talking about an “accept here” button with fifty pages of terms and conditions that only a small number of individuals will understand. Ensuring information is accessible to all patients – in their native languages and through the communication channel of their choice – builds trust between patients and their care teams, and, simply put, is the right thing – and the ethical way – to do it.

Putting the puzzle pieces together

Patients still want live humans for many circumstances in healthcare, and the ongoing staffing crisis has amplified both the opportunities and the critical need for non-human touchpoints and AI. To create more bandwidth for overburdened clinicians, AI tools can manage the mundane tasks we don’t need humans for – giving directions, asking about insurance coverage, or providing basic medical education – and the technology will be able to read non-verbals and respond with empathy. Yet it’s the situations that are more emotional and personal that we should preserve our human beings for – service recovery, communication in complex medical conditions, and breaking bad news, to name a few.

Putting all the pieces into place effectively requires clarity about what organizations want to accomplish – and for much of healthcare, the Quadruple Aim remains at the very top. Although how to get there will look different for each organization, clarity on the principles and values that will guide the development of what comes next is paramount to setting us up for years to come.

Photo: metamorworks, Getty Images

Dr. Adrienne Boissy is the Chief Medical Officer at Qualtrics as well as a practicing neurologist at the Cleveland Clinic, where she was formerly the Chief Experience Officer. A healthcare industry pioneer, Adrienne envisions an integrated patient and employee experience in healthcare powered by empathy, technology, co-design and evidence-based research. She publishes extensively on the future of experience design and metrics that matter in humane experiences of health.
