Could an artificial intelligence approach to prior authorization be more human? That’s the title and premise of a paper published last month in the Journal of the American Medical Informatics Association.
Authored by physicians Leslie Lenert, MD, MS, FACP, FACMI, Ramsey Wehbe, MD, MS, and Health Gorilla Chief Medical Officer Steven Lane, MD, MPH, the paper asserts that prior authorization has become an informatics issue.
As a primary care provider for more than three decades and the Director of Clinical Informatics at Sutter Health before joining Health Gorilla, Dr. Lane knows a thing or two about that.
“Prior auth is well known for its long delays, administrative burdens, and a lack of transparency,” he said. “We spend an inordinate amount of resources and time and create so much inconvenience for both patients and providers. We know we’re not doing a good job of it at all.”
The process, which is meant to ensure that patients receive appropriate care and to avoid unnecessary healthcare costs, has become a thorn in the side of most participants in the healthcare ecosystem.
“AI can do it faster and cheaper and potentially better than the current human-based system, and perhaps much more conveniently as well,” Dr. Lane said. “When you talk about healthcare payment and operations and the opportunities we have to drive waste and delays out of the system, I think this is a great example of where and how we could do that.”
Although augmented intelligence (AI) has been a hot topic recently, with ChatGPT and other AI tools flooding the market, the convergence of prior authorization and AI has been a few years in the making.
Prior authorization is “perhaps the least satisfying business process for both patients and providers,” the paper notes, with published statistics showing that it is a major source of burnout and job dissatisfaction for physicians and their staff. Furthermore, the process can delay patient care in ways that can be harmful.
The paper proposes three innovations:
- Provider submission of an appropriately detailed and standardized set of clinical data to support all requests for prior authorization, or authorization of a direct query of the electronic health record (EHR) for a standardized and appropriately limited set of information, using Fast Healthcare Interoperability Resources (FHIR) data standards (see the illustrative sketch after this list).
- Use of deep learning AI methods to analyze the submitted information, along with other data available to the payer, to inform the review and approval process, with the specific goal of simulating consensus expert human judgment of the appropriateness of the product or service for the patient. AI should include explainable methodologies to ensure transparency.
- Objective public review and certification of AI algorithms against a panel of clinical cases juried by national clinical leaders to ensure transparency and performance at the level of medical experts. The human panels used to train the algorithms should include patient representatives.
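To make the first proposal concrete, here is a minimal sketch of what a standardized, FHIR-based submission might look like. It is purely illustrative and not drawn from the paper: the resource contents, the Bundle structure, and the payer endpoint are assumptions, and a real exchange would follow whatever FHIR implementation guide the payer and provider agree on.

```python
import json
import urllib.request

# Hypothetical payer endpoint; a real integration would use the payer's
# published FHIR base URL and operation.
PAYER_ENDPOINT = "https://payer.example.com/fhir/Claim/$submit"

# Minimal, illustrative clinical data packaged as a FHIR Bundle.
bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": {"resourceType": "Patient", "id": "pat-1",
                      "name": [{"family": "Example", "given": ["Pat"]}]}},
        {"resource": {"resourceType": "Condition", "id": "cond-1",
                      "subject": {"reference": "Patient/pat-1"},
                      "code": {"text": "Chronic low back pain"}}},
        {"resource": {"resourceType": "ServiceRequest", "id": "sr-1",
                      "status": "active", "intent": "order",
                      "subject": {"reference": "Patient/pat-1"},
                      "code": {"text": "MRI lumbar spine without contrast"},
                      "reasonReference": [{"reference": "Condition/cond-1"}]}},
    ],
}

# Build the POST request; FHIR servers expect the application/fhir+json media type.
request = urllib.request.Request(
    PAYER_ENDPOINT,
    data=json.dumps(bundle).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)

print(json.dumps(bundle, indent=2))
# urllib.request.urlopen(request)  # uncomment to submit against a real endpoint
```

The same standardized payload also supports the paper's alternative path, in which the payer is authorized to query the EHR directly for the same limited data set rather than having the provider push it.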
Dr. Lane said the keys are transparency and human governance over the AI, to make sure it performs at least as well as human reviewers, if not better.
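As a toy illustration of the kind of transparency Dr. Lane describes (not the deep learning approach the paper actually proposes), the sketch below trains a simple, inspectable stand-in model on hypothetical expert-panel decisions and reports which features drove a score. All feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features extracted from the standardized clinical submission.
feature_names = [
    "diagnosis_matches_indication",
    "failed_conservative_therapy",
    "abnormal_imaging_finding",
]

# Toy training data: rows are past requests, labels are expert-panel decisions
# (1 = the panel judged the service appropriate). Entirely made up.
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
    [1, 0, 0],
])
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Score a new request and show each feature's contribution to the decision,
# so a human reviewer can see why the model leaned the way it did.
new_request = np.array([[1, 1, 0]])
probability = model.predict_proba(new_request)[0, 1]
contributions = model.coef_[0] * new_request[0]

print(f"Predicted appropriateness: {probability:.2f}")
for name, value in zip(feature_names, contributions):
    print(f"  {name}: {value:+.2f}")
```

A production system along the lines the paper sketches would pair a far richer model with formal attribution methods and the human governance and certification process described above.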
The paper describes “an alternative approach based on the evolution of artificial intelligence (AI) technologies that, paradoxically, might also offer a more ‘human’ approach, by allowing computer-based prior authorization to mirror assessment of the appropriateness of medical procedures performed by panels of human experts.”
This paradox was explored in a recent episode of InteropTalk, Health Gorilla’s monthly assemblage of interoperability experts – including Dr. Lane – who discuss the issues of the day.
Dave Cassel, Chief Customer Officer at Health Gorilla, pointed out that an expedited process can hasten what is often a very stressful period of waiting to find out whether a medication or procedure is going to be covered by insurance.
“But that’s a double-edged sword,” he said. “If it’s faster and I like the outcome, then wonderful. But if it’s faster to reject this treatment that I need, I think people would have in the back of their minds, ‘Is that algorithm going to do as good a job? Is it going to make a compassionate decision? Is it going to recognize the nuances of my case?’ ”
Jennifer Blumenthal, Product Director for OneRecord at Milliman IntelliScript and a regular on InteropTalk, pointed out that humans are not always the best at those things. “We have our own flaws,” she said.
Cassel agreed: “It goes both ways. There are some times when an unbiased algorithm may in fact provide a better outcome – and bias may not even be the issue. It could just be that the person had a bad day and missed that line in your medical record, whereas the algorithm is going to find it.”
Algorithm transparency is also addressed in the recent HTI-1 notice of proposed rulemaking (NPRM) from the Office of the National Coordinator for Health Information Technology (ONC). Dr. Lane co-chaired the agency’s task force that provided detailed recommendations on the proposed rule, which would make the bases of Decision Support Interventions (DSI) more transparent, especially when they rely on AI and other advanced computational tools.
Read the full text of “Could an artificial intelligence approach to prior authorization be more human?” in JAMIA here.
Find InteropTalk discussions, including this one, on our YouTube channel or subscribe to the audio on Spotify or Apple Podcasts.