States are ramping up scrutiny of how insurers across industries deploy artificial intelligence for underwriting, Bloomberg reported Nov. 30.
The state-level actions come as federal lawmakers increasingly question health insurers' use of AI and automated tools for internal processes such as claims review and prior authorization requests. In addition, UnitedHealthcare and Cigna are facing lawsuits from members or their families alleging the organizations use automated data tools to wrongfully deny members' medical claims.
According to Bloomberg, Colorado is the first state to adopt regulations focused on transparency around insurance algorithms, specifically for life insurers. Pennsylvania and New Jersey have both introduced legislation that would require payers to disclose how they use AI to review claims or would ban discriminatory practices stemming from non-human decision systems. New York, California, Connecticut and Washington, D.C., have issued warnings and notices to insurers about avoiding discriminatory practices.
Colorado "is just the first mover," Bryan Simms, president of Mammoth Life & Reinsurance Co., told Bloomberg. "It's wise for us to consider this as the beginning of state-by-state rulings."
The National Association of Insurance Commissioners published a bulletin in October asking insurers to develop internal guidelines around responsible and nondiscriminatory AI use.
At the federal level, lawmakers asked CMS in November to increase its oversight of artificial intelligence and algorithms used in Medicare Advantage prior authorization decisions. In their letter, lawmakers pointed to advocacy group reports indicating that the use of AI in Medicare Advantage prior authorization decisions is resulting in care denials more restrictive than those under traditional Medicare. They asked CMS to require Medicare Advantage plans to report prior authorization data, including reasons for denials; to compare guidance generated by AI tools with actual Medicare Advantage coverage decisions; and to assess whether AI-powered algorithms used in prior authorization are self-correcting.
On Nov. 1, Vice President Kamala Harris spoke about the potential for harm that artificial intelligence poses within the health insurance industry.
"There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential," she said. "Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?"
The vice president's speech came after President Joe Biden issued an executive order Oct. 30 directing federal health agencies to devise a strategy for overseeing AI, with provisions specific to the healthcare sector. Under the order, HHS is tasked with developing a safety initiative dedicated to gathering information on AI-related practices that are unsafe or pose potential harm.