Public Policy Analysis & Opinion
By Kevin P. Hennosy
2020: A RISK ODYSSEY
Have producers reached a John Henry moment, and does the NAIC care?
Late this summer, the always timely, credible, and unimpeachably disciplined National Association of Insurance Commissioners (NAIC) unanimously adopted “guiding principles” for the use of artificial intelligence (AI) by the business of insurance.
What will this historic proclamation mean to the insurance sector and its varied markets? Not much.
That is not to say that AI is not an important issue of insurance public policy. As with most commercial sectors, AI is changing the business of insurance.
The nation’s insurance regulators through their NAIC fig leaf simply seem to lack the stamina necessary to integrate AI into a real regulatory framework.
The Center for Insurance Policy and Research (an NAIC shadow group) describes the change as follows: “In the insurance industry, AI is transforming areas such as underwriting, customer service, claims, marketing and fraud detection.”
AI relies on the existence, storage, and analysis of “big data.” This means uncounted bits of personal information generated by individuals, which make it easier for commercial actors to understand the individual’s motivations, behaviors, and decision-making processes. This understanding facilitates sales.
The discussion on the NAIC website continues, “Insurers are sitting on a treasure trove of big data, the main ingredient AI requires to be successful. The abundance (of) unstructured data can be leveraged through AI to increase customer engagement, create more personalized service and more meaningful marketing messages, sell the right product to customers, and target the right customer.”
The vastness of the data retained by insurers and available from other sources is changing insurance markets and mechanisms. New insurers benefit from constructing AI operations from the bottom up and appear best suited to implement AI, according to the regulators.
Nevertheless, the NAIC observes: “[W]hile AI provides opportunities for traditional insurers to modernize themselves, implementing AI is not straightforward. Traditional insurers could face challenges integrating AI into their existing technology due to issues such as data quality, privacy, and infrastructure compatibility.”
Certainly, there is a danger of offending someone when a magazine column labeled Analysis & Opinion offers a colorful observation; however, we should recognize that AI gives carriers an alternative to licensed agents for attracting and servicing insurance business.
This may be the “John Henry moment” for many insurance producers.
The NAIC may want to create the pretense that AI does not replace licensed agents and brokers, but the association’s recommendations include no prohibitions against that kind of transition.
Furthermore, AI operates tirelessly 24 hours a day, 7 days a week, 365 days a year, and it never asks for compensation. The desire for profit is a powerful thing.
Task force
The NAIC convened a task force on AI issues in 2019. The association describes it as follows: “The Task Force provides a forum for the discussion of innovation and technology developments in order to educate state insurance regulators on how these developments will affect consumer protection, insurer and producer oversight, and the state insurance regulatory framework.”
The effort to develop general guidance on AI appears to respond to a statement on AI adopted by 42 nations through the Organisation for Economic Co-operation and Development, a body that grew out of the Marshall Plan following World War II.
The vague and imprecise language of the NAIC guiding principles seems designed to avoid prohibiting any use of AI technology in the business of insurance. The guiding principles do not even reach the level of a “pretense of regulation.”
For example, the NAIC did not develop policy recommendations in the form of a model law, model regulation, or even its usual neutered fallback work product called “guidelines.” The term “guiding principles” carries with it no legally definable authority or state action.
NAIC statement
The NAIC explains the purpose of the guiding principles in an introductory statement: “These principles were established to inform and articulate general expectations for businesses, professionals, and stakeholders across the insurance industry as they implement AI tools to facilitate operations.”
The phrase “articulate general expectations” catches the eye of an informed observer because it does not define whose expectations the guiding principles express. By default, a reader may assume that the statement refers to the NAIC’s expectations, but what standing does the Delaware-chartered corporation possess? The NAIC holds no regulatory or policymaking authority under the federal law that governs insurance regulation.
The statement could read “articulate the general expectations of insurance regulators,” which would still be vague to the point of being meaningless; however, the NAIC did not even go that far. Even use of “The Royal We” would seem to express more standing than what the NAIC adopted.
Five key tenets
After the introductory statement, the NAIC presents five key tenets. At best, these tenets offer aspirational catchphrases. The tenets do nothing to recommend an actual regulatory framework.
The NAIC seems pleased with its creativity in using the first letter of each key tenet to form the acronym “FACTS.” In short, the NAIC key tenets appear to be policy recommendations derived from the old board game Scrabble.
- “Fair and Ethical: respecting the rule of law and implementing trustworthy solutions” comprises the NAIC’s first tenet.
Now who can be against concepts like “Fair and Ethical”? No one; however, what does this “tenet” mean in the real world?
From a policy perspective, the Fair and Ethical tenet presents more problems than proposals. After all, if the public and market participants could rely on a Fair and Ethical standard, there would be no reason for insurance regulation.
Of course, a problem arises with the “rule of law” part of the NAIC’s dog and pony show: very few, if any, insurance statutes address AI. So how does one apply the rule of law? Do insurance laws extend to AI activity?
An aggressive insurance regulator, if one still exists, could apply any statutory provision to AI activity in the business of insurance, which would lead to litigation. Without the benefit of statutory guidance, or at least case law, woe betide that regulator in a courtroom.
Conversely, a self-loathing state official who does not want to regulate the business of insurance could avoid action because the use of AI is never specifically defined in the insurance code.
Regarding the reference to “implementing trustworthy solutions,” the NAIC provides no definition of that phrase. When we have no definition of the problems or the solutions or who will implement those solutions, extending trust is difficult if not foolhardy.
- “Accountable: responsibility for the creation, implementation and impacts of any AI system.” In adopting these guiding principles, the NAIC recommends that insurance regulators cede the ability to prevent some catastrophic AI widget from running amok through a book of business or economic sector. We can suppose that the regulators need only an address for a business agent so they can post a sternly worded letter after the fact.
- “Compliant: have knowledge and resources in place to comply with all applicable insurance laws and regulations.” Once again, AI may or may not fall under the jurisdiction of any insurance code. So this tenet may or may not carry any meaning in the real world. I guess they needed a letter “C”?
- “Transparent: commitment to responsible disclosures regarding AI systems to relevant stakeholders as well as ability to inquire about and review AI-driven insurance decisions.” The phrase “responsible disclosures” brings to mind those notifications of “changes to the end user agreement,” which arrive with software updates. No, that is not a positive reference.
- “Secure/Safe/Robust: ensure reasonable level of traceability of datasets, processes and decisions made and implementation of a systematic risk management process to detect and correct risks associated with privacy, digital security, and unfair discrimination.”
How could the NAIC represent its general guidance on AI without that collection of technical-sounding terms? No, that is not a serious observation.
Jurisdiction
In short, the NAIC made a pretense of issuing public policy guidance.
The opening statement of the NAIC guiding principles for AI also refers to its purpose as “to facilitate operations” of numerous “businesses, professionals, and stakeholders.” Once again, the reader must assume whom the NAIC guiding principles address. When assessing the validity of that assumption, one must consider jurisdiction.
With regard to insurance regulation, the test of jurisdiction does not concern terms like “businesses, professionals, and stakeholders.”
Even a layman familiar with the jurisprudence of insurance law knows that the Supreme Court defined the jurisdiction of state insurance regulation in its decision in SEC v. National Securities, Inc., 393 U.S. 453 (1969). In that opinion, the court overruled the action of an overreaching Arizona insurance regulator who acted beyond his jurisdiction.
Writing for the court majority, Associate Justice Thurgood Marshall defined state insurance regulation’s jurisdiction as limited to and focused on “the business of insurance.” Terms like insurer, business, agent, and other titles do not automatically fall under the limited and contingent jurisdiction granted to the states by the McCarran-Ferguson Act (Public Law 15 of 1945).
Justice Marshall opined that the McCarran-Ferguson Act focused on the broad expanse of “the relationship between the insurance company and the policyholder.” The justice provided a definition of that relationship by writing: “The relationship between insurer and insured, the type of policy which could be issued, its reliability, interpretation, and enforcement—these were the core of the ‘business of insurance.’”
Therefore, if the NAIC general guidance on the use of AI were a serious attempt to serve the public interest protections embedded in insurance regulation, the statement would have addressed “the use of AI in the business of insurance.”
Again, this NAIC recommendation does not establish regulatory activity. It does not call on the states to expand their insurance codes to reach AI activity executed as part of the business of insurance. The NAIC simply asks the source of the AI operations to be nice.
The expansion of AI is not something humankind should embark on with a completely trusting mindset.
For example, some well-meaning engineering work group decided to install the HAL 9000 computer into the United States spacecraft Discovery One bound for Jupiter in 2001: A Space Odyssey. All seemed well until “HAL” said in a calm and soothing voice: “I’m sorry, Dave, I’m afraid I can’t do that.”
The author
Kevin P. Hennosy is an insurance writer who specializes in the history and politics of insurance regulation. He began his insurance career in the regulatory compliance office of Nationwide and then served as public affairs manager for the National Association of Insurance Commissioners (NAIC). Since leaving the NAIC staff, he has written extensively on insurance regulation and testified before the NAIC as a consumer advocate.