UnitedHealth, Cigna face lawsuits over alleged automated claims denials

UnitedHealthcare and Cigna Healthcare are facing lawsuits from members or their families alleging the insurers use automated data tools to wrongfully deny members' medical claims. The allegations come amid broader scrutiny from policymakers over insurers' use of algorithms and artificial intelligence to process claims and prior authorization requests.

A lawsuit was filed against UnitedHealthcare on Nov. 14 in a federal court in Minnesota by the families of two deceased Medicare Advantage members. The families allege their relatives were wrongfully denied coverage of medically necessary post-acute care by UnitedHealthcare through the use of an AI-powered algorithm called nH Predict. The algorithm was created by naviHealth, a care management company acquired by Optum in 2020.

The lawsuit alleges the algorithm predicts how long a patient will need to remain in skilled nursing care and overrides physicians' determinations for the patient. The plaintiffs claim UnitedHealth set a goal to keep skilled nursing facility stay lengths for MA members within 1% of nH Predict's estimations. Employees who deviate from the algorithm's estimates are "disciplined and terminated, regardless of whether a patient requires more care," the lawsuit alleges. When decisions made by the algorithm are appealed, they are allegedly overturned 90% of the time.

The naviHealth Predict tool is not used to make coverage determinations, an Optum spokesperson told Becker's. 

"The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home," the spokesperson said. "Coverage decisions are based on CMS coverage criteria and the terms of the member's plan. This lawsuit has no merit, and we will defend ourselves vigorously." 

The Cigna Group is also facing lawsuits from members and a shareholder following a ProPublica report alleging the company uses an algorithm to deny large batches of members' claims for certain services without individual review.

In March, ProPublica reported that Cigna may be violating state laws by allowing its medical directors to use an automated claims review process called PxDx to deny large batches of claims without reviewing individual members' files. The report said Cigna physicians denied more than 300,000 claims over two months in 2022 through the system, an average of 1.2 seconds of review per claim.

In California, two Cigna members filed a class-action complaint against the insurer in July over the alleged issues. Many states, including California, require physicians to review patient files and coverage policies before denying claims for medical reasons. The July complaint claims Cigna bypassed those steps using the PxDx tool.

Following the ProPublica report, state insurance commissioners and federal lawmakers publicly raised concerns and requested more information from Cigna about the process, with some calling for an investigation. In Pennsylvania, lawmakers have introduced legislation that would require payers to disclose how they use AI in claims review, citing the ProPublica report.

Cigna has said the ProPublica report is "riddled with factual errors and gross mischaracterizations." The company said its claims review process follows industry standards, including processes that have been used by CMS. It also noted that the technology behind PxDx is more than a decade old and does not involve algorithms, artificial intelligence or machine learning.

"PxDx allows us to automatically pay providers for claims that are submitted with the correct diagnosis codes and prioritizes our medical directors' time for more complex reviews," a Cigna spokesperson told Becker's in July. "It does not create any impediments to or denials of care because it takes place after a patient receives the service, and even a denial does not result in any additional out-of-pocket costs for patients using in-network providers."

At the federal level, lawmakers asked CMS in November to increase its oversight of artificial intelligence and algorithms used in Medicare Advantage prior authorization decisions. In their letter, lawmakers pointed to advocacy group reports indicating that the use of AI in Medicare Advantage prior authorization is producing coverage denials more restrictive than those under traditional Medicare. They asked CMS to require MA plans to report prior authorization data, including reasons for denials; to compare guidance generated by AI tools with actual Medicare Advantage coverage decisions; and to assess whether AI-powered algorithms used in prior authorization are self-correcting.

On Nov. 1, Vice President Kamala Harris spoke about the potential for harm that artificial intelligence poses within the health insurance industry.

"There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential," she said. "Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?"

The vice president's speech came after President Joe Biden issued an executive order Oct. 30 directing federal health agencies to devise a strategy for overseeing AI, with provisions specific to the healthcare sector. Under the order, HHS is tasked with developing a safety program to gather information on unsafe or potentially harmful AI-related practices.

