‘Risks of supercharged flaws’ persist in AI-driven prior authorization, Stanford researchers say


A team of Stanford (Calif.) University researchers specializing in health law, AI, ethics and medicine identified “risks of supercharged flaws” in harnessing AI for prior authorization in a Jan. 6 Health Affairs report.

Prior authorization is a longstanding sore spot for both payers and providers. The authors acknowledged AI's potential to ease that friction, such as by automating prior authorization and claims approvals, filling information gaps in requests, and lowering barriers to appeal.

At the same time, the "AI arms race" driving further payer-provider tension is no secret: Insurers blame providers for clinical documentation tools that could prompt overcoding, while providers blame insurers for tools that could lead to unfair denials. Providers can then hit back with AI-drafted appeals; the authors listed a handful of vendor tools that can do just that.

Some states are also moving to rein in AI use, even as insurers defend their commitments to keeping "humans in the loop." Still, the researchers said one data point is missing to determine whether insurers with faster turnaround times have automated approvals or are simply rushing reviews.

“More relevant is the time that humans spend reviewing files that are ultimately denied — information that insurers have not shared,” they said.

Human reviewers may also develop an anchoring bias from scanning an AI-generated case summary first, or face organizational pressure to side with the tool's recommendations. Automation bias, or excessive trust in computerized decision support, could be another issue. The team also said AI hallucinations could be exacerbated when users lack clinical expertise.

“No studies have compared rates of denials or wrongful denials (those reversed on appeal) in reviews with and without AI, making it difficult to disentangle potential causes of rising denial rates or assess the impacts of AI use,” researchers said.

When Stanford Health Care itself carried out ethical assessments of AI tools, some staff were unaware of AI bias or other weaknesses, the researchers said.

Other concerns include how social determinants of health are accounted for in predictive models, provider-facing tools that could unfairly reinforce payers' denial patterns, and inconsistent governance practices.

The researchers called for stronger governance, monitoring for underperformance, staff training and “meaningful” human engagement in reviews.
