
New York Proceeds Cautiously in Regulating Insurers’ AI Risks


New York’s financial regulator is seeking to work cooperatively with insurers to eliminate unfair discrimination caused by the use of artificial intelligence models in underwriting.

The Empire State’s Department of Financial Services issued a proposal last week asking all insurers in New York to show how they oversee their use of AI tools. The guidance—together with a newly adopted regulation in Colorado and a bulletin from the National Association of Insurance Commissioners—offers a model for states across the country to scrutinize insurers’ AI practices, industry attorneys say.

State regulators are dialing up their demands for insurers to justify their use of consumer data and predictive models to market and price policies and handle claims. The added regulatory pressure comes as insurance giants including State Farm Mutual Automobile Insurance Co., Cigna Corp., and UnitedHealth Group Inc. have been hit with proposed class actions in recent months alleging AI-related discrimination. Policyholder attorneys expect more consumer litigation in the coming year and have started probing insurers about their AI usage.

New York’s proposal—issued as a Jan. 17 circular letter—is less stringent than Colorado’s, avoiding step-by-step rules for insurers to show how they oversee AI. The fact that DFS is even seeking feedback on the guidance shows it wants to cooperate with the roughly $270 billion insurance industry in the state to rein in risks stemming from AI and big data, agency watchers say.

The state regulator issues one to two dozen circular letters a year, but Clifford Chance attorney and former DFS counsel Gene Benger said he’s “never seen one that’s just been published as a proposal and asking for comments.”

“I haven’t seen a circular letter being issued publicly, but not being enforced,” he added.

The regulator is “working a little more gingerly” when it comes to new technology, Benger said. “They want to make sure they’re not making it so difficult that no one’s going to be able to comply.”

Ice Cream Test

New York’s guidance is “more realistic than the Colorado approach” and offers more flexibility on how insurers can build out their AI governance programs, said Myriah Jaworski, a data privacy attorney at Clark Hill PLC.

Colorado, for instance, outlined specific steps for insurers to test for bias, while New York suggested goals without saying exactly how insurers should achieve them.

In addition, New York’s proposal focuses only on underwriting and pricing—unlike the Colorado regulation and others dealing with AI-related bias across all insurance practices.


Insurers appreciate DFS recognizing there is no one-size-fits-all approach to managing data, said David Snyder, a vice president at the American Property Casualty Insurance Association.

DFS is, however, asking all covered insurers—including those selling health, life, auto, and home policies—to justify the data they feed into algorithmic models and the models themselves.

Requiring all insurers to explain how data inputs and predictive AI models lead to underwriting decisions is new for New York, insurance attorneys say.

Insurers in the state would have to spell out in a qualitative assessment why they use a particular data source on consumer behaviors to determine insurance rates, and whether it makes “intuitive” sense to do so.

A model may happen to show, for instance, that it rains on days when a certain person eats ice cream, but it wouldn’t make sense for an insurer to predict rain based on ice cream consumption, Benger said. The insurer would need to show “what intuitive or logical impact eating ice cream could have on causing the rain. If there isn’t one, then despite what the model shows, it’s not going to work,” he said.
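The underlying point is statistical: a model can surface correlations that have no causal basis, often because some third factor drives both variables. A minimal Python sketch of that effect, using entirely hypothetical synthetic data in which warm days drive both ice cream purchases and afternoon showers, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical confounder: warm days make both ice cream purchases and
# afternoon showers more likely, so the two correlate without either
# causing the other.
warm = rng.random(n) < 0.5
ice_cream = (rng.random(n) < np.where(warm, 0.8, 0.2)).astype(float)
rain = (rng.random(n) < np.where(warm, 0.6, 0.3)).astype(float)

# A model trained on this data would "see" a genuine correlation...
print(f"correlation: {np.corrcoef(ice_cream, rain)[0, 1]:.2f}")
# ...but there is no intuitive or causal path from ice cream to rain,
# which is what the qualitative assessment is meant to catch.
```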

Insurers will have to find the true cause of insurance losses instead of relying on statistical correlations yielded by predictive models.

“It’s not enough for insurers to have all of these objective tests which can be manipulated to show whatever they want,” Benger said. DFS is asking insurers to show “whether using a person’s shopping habits, their address, or their color of eyes is a fair way to determine risk” regardless of what the models say, he added.

Data Quality

New York’s financial regulator is also asking insurers for the first time to show consumers which information the AI models used to deny insurance applications or suggest a higher rate.

The “black box” problem of AI—reaching conclusions without showing the underlying data and processes—can make it harder for insurers to explain their underwriting decisions. Insurers also lack control over how external data vendors, largely unregulated, gather information on consumer behaviors.
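For simpler, interpretable models, one way an insurer could surface the information behind an adverse decision is to report each input’s contribution to the score. The following is a hypothetical sketch, not anything prescribed by the circular letter, using a hand-set linear scoring model with made-up feature names and weights:

```python
import numpy as np

# Hypothetical, hand-set scoring model -- illustrative names and weights
# only, not a real underwriting model.
feature_names = ["prior_claims", "years_licensed", "vehicle_age"]
weights = np.array([0.9, -0.4, 0.3])
intercept = -1.5

applicant = np.array([2.0, 1.0, 6.0])  # one applicant's (already scaled) inputs

# Per-feature contributions to the denial score: the pieces an insurer
# could surface to explain which inputs drove the outcome.
contributions = weights * applicant
score = intercept + contributions.sum()
prob_denial = 1.0 / (1.0 + np.exp(-score))

for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f}")
print(f"estimated denial probability: {prob_denial:.2f}")
```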

Nevertheless, the circular letter makes clear that “the ultimate responsibility rests with the insurer to comply with antidiscrimination laws,” said John R. Ewell, a Cozen O’Connor attorney who represents insurers.

Relying on a vendor’s assessment that its data is unbiased won’t be enough; insurers should do their own testing and auditing to determine the quality of the consumer data they purchase, attorneys cautioned.
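In practice, that kind of audit can start with something as simple as comparing outcome rates across groups in the purchased data. Here is a hypothetical sketch of such a check on synthetic data; the group labels, rates, and the 0.8 flag threshold are illustrative assumptions, not DFS requirements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic stand-in for purchased third-party data: two hypothetical
# consumer groups and a favorable/unfavorable scoring outcome.
group = rng.choice(["A", "B"], size=n)
favorable = np.where(group == "A",
                     rng.random(n) < 0.70,
                     rng.random(n) < 0.52)

rate_a = favorable[group == "A"].mean()
rate_b = favorable[group == "B"].mean()
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"favorable rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {impact_ratio:.2f}")
# A ratio well below parity (0.8 is a common rule-of-thumb flag) would be a
# reason to dig into the vendor's data rather than accept its assurances.
```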

“New York was saying, insurers could work with data brokers all you want, but it is you, the insurer, who will be regulated and punished,” Clark Hill’s Jaworski said.
