Humana used an artificial intelligence tool owned by UnitedHealth Group to wrongfully deny Medicare Advantage members' medical claims, according to a class-action complaint filed Dec. 12.
The lawsuit was filed in the U.S. District Court for the Western District of Kentucky and is the latest legal action against major insurers such as UnitedHealthcare and Cigna for allegedly using automated data tools to wrongfully deny members' claims.
The complaint against Humana, the country's second-largest Medicare Advantage insurer, accuses the company of using an AI tool called nH Predict to determine how long a patient will need to remain in post-acute care, overriding physicians' determinations for the patient. The plaintiffs claim Humana set a goal of keeping post-acute facility stay lengths for MA members within 1% of nH Predict's estimates. Employees who deviate from the algorithm's estimates are "disciplined and terminated, regardless of whether a patient requires more care," the lawsuit alleges. When the algorithm's decisions are appealed, they are allegedly overturned 90% of the time.
"Despite the high rate of wrongful denials, Humana continues to systemically use this flawed AI model to deny claims because they know that only a tiny minority of policyholders will appeal denied claims," the plaintiff's attorneys wrote.
The nH Predict tool was created by naviHealth, a care management company acquired by Optum in 2020. The tool is not used to make coverage determinations, an Optum spokesperson previously told Becker's.
"The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home," the spokesperson said. "Coverage decisions are based on CMS coverage criteria and the terms of the member's plan."
Humana told Becker's it does not comment on pending litigation, but a spokesperson confirmed the company uses "various tools, including augmented intelligence, to expedite and approve utilization management requests and ensure that patients receive high-quality, safe and efficient care. By definition, augmented intelligence maintains a 'human in the loop' decision-making whenever AI is utilized. Coverage decisions are made based on the healthcare needs of patients, medical judgment from doctors and clinicians, and guidelines put in place by CMS. It's important to note that adverse coverage decisions are only made by physician medical directors."
The lawsuits come amid ongoing scrutiny from policymakers of insurers' use of algorithms and artificial intelligence when processing claims or prior authorization requests.
States are ramping up scrutiny of how payers across industries are deploying AI for underwriting purposes, Bloomberg reported Nov. 30. At the federal level, lawmakers asked CMS in November to increase its oversight of AI and algorithms used in Medicare Advantage prior authorization decisions. In their letter, lawmakers pointed to advocacy group reports indicating that the use of AI in Medicare Advantage prior authorization decisions is resulting in care denials more restrictive than those under traditional Medicare. They asked CMS to require MA plans to report prior authorization data, including reasons for denials; compare guidance generated by AI tools to actual Medicare Advantage coverage decisions; and assess whether AI-powered algorithms used in prior authorization are self-correcting.