One of the most significant trends in insurance regulation involves regulators requiring insurers to demonstrate that their use of alternative data (“Big Data”) and artificial intelligence (“AI”) is not discriminatory. On April 20, 2022, the Connecticut Insurance Department (the “Department”) released a notice titled “The Usage of Big Data and Avoidance of Discriminatory Practices” (the “Notice”) addressed to all entities and persons licensed by the Department (“Licensees”). In the Notice, the Department raises concerns about the expanding role of Big Data in the insurance process and the potential that its use could result in unfair discrimination. In light of those concerns, the Notice reminds all Licensees of their obligation to ensure that their use of Big Data and AI complies with applicable anti-discrimination laws and requires all Connecticut domestic insurers to complete an annual data certification by September 1, 2022.

We have previously discussed and predicted regulatory developments and trends regarding insurance and AI, including in our webcasts here and here. The Notice potentially positions Connecticut at the forefront of AI insurance regulation in the United States.

The Notice

The Notice, which updates the Department’s previous guidance issued in April 2021, has several interesting components:

  • Support for Innovation. The Notice asserts the Department’s support of the insurance industry’s use of AI and recognizes the opportunities it creates to provide innovative products and services to consumers and to operate more efficiently.
  • Concerns over Discrimination. The Notice reminds all Licensees of their obligation to use AI and Big Data in compliance with applicable federal and state anti-discrimination laws.
  • Mandatory Certification. Connecticut domestic insurers are expected to complete a data certification by September 1, 2022 (and annually thereafter), affirming that:
    • they have reviewed the provisions set forth in the Notice;
    • they have in place an established process concerning the use of data received from third-party developers or vendors that is substantially consistent with the guidance set forth in the Notice;
    • they will make available any and all data used to build models or algorithms included in all rate, form and underwriting filings upon the request of the Department; and
    • they will maintain the records supporting the certification for a commercially reasonable period of time.
  • Expansive Definition of “Big Data.” Big Data, according to the Notice, refers to a complex volume of data and the set of technologies that analyze and manage it. Big Data also covers a wide variety of sources, including consumer intelligence, social media, credit and alternative credit information, retail purchase history, geographic location tracking and telematics, mobile, satellite, behavioral monitoring, psychographic/biographic/demographic/firmographic data, sensors, wearable devices and radio frequency identification (RFID) devices. The inclusion of credit and alternative credit information also reflects broader concerns among insurance regulators, typified by the New York State Department of Financial Services’ investigation into the use of credit scores by certain property and casualty insurers (which we describe here) and the Washington Insurance Commissioner’s recent order temporarily prohibiting the use of credit information to underwrite or rate personal insurance policies (which has been temporarily stayed by a Washington court).
  • Certification Covers AI Uses Beyond Underwriting. The Notice lists the uses of Big Data that are covered, including underwriting, rating, marketing, claim settlement practices, fraud detection, data gathering, product design, distribution and management.
  • Application to AI Vendors. The Notice recognizes that many AI applications that insurers use are provided by third-party vendors, stating that insurers continue to be responsible and accountable for ensuring that the utilization of Big Data, either internally or with vendors, is in compliance with federal and state anti-discrimination laws.
  • Access to Data. The Notice emphasizes the Department’s authority to require insurers and third-party data vendors and model developers to provide the Department with access to data used to build models or algorithms included in the insurers’ rate, form and underwriting filings.
  • Questions from the Department. In an appendix, the Notice provides a list of several questions and instructions that may be part of Department examinations, including the following:
    • Who oversees all data-related questions?
    • Provide the names of all data sources, vendors, brokers, aggregators, bureaus, etc., utilized as part of your services, products or offerings, indicating whether the sources are public or private.
    • Are all the data sources documented and checked for reliability, accuracy, consistency and completeness?
    • Is any data collected that is regulated in its use, such as age, gender, race, income and marital status?
    • Provide the privacy protections used and/or followed when storing data, including the methods used.
    • How many iterations of the raw data are performed before it is shared with the user? Provide a list of the number of iterations and describe the general process of preparing the raw data for sale and consumption.
    • What data validation methods are used once the data is transformed? Provide a list of the names of methods and the general process of data validation and accuracy determination.
    • What standards are used for validation? List such validation standards used and the general process for following the standards and fixing any problems or issues that occur.
  • Data Governance and Accuracy. The Notice and the list of questions indicate that the Department has concerns regarding how Big Data is governed, where it resides and how it is used within the insurance industry, as well as how it subsequently moves into industry archives, bureaus, data monetization mechanisms or additional processes within or beyond the insurance ecosystem. The Department is also concerned about Big Data accuracy, context, completeness, consistency, timeliness and relevancy.
  • Risk Management. The Department also stresses in the Notice the importance of being able to understand how Big Data algorithms, predictive models and various processes are inventoried, risk assessed, risk managed, validated for technical quality and governed throughout their life cycle (an illustrative sketch of such an inventory follows this list).
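For illustration only, the following is a minimal Python sketch of the kind of model inventory record that could support this sort of life-cycle governance. Every field, value and label shown is a hypothetical example of ours, not a format prescribed by the Notice or the Department.

```python
# Illustrative sketch of a model inventory record supporting life-cycle
# governance. All fields and values are hypothetical examples; the Notice
# does not prescribe any particular format.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    name: str                          # model or algorithm identifier
    use_case: str                      # e.g., underwriting, rating, claims, fraud
    data_sources: list[str]            # vendors, bureaus, internal data, etc.
    third_party_vendor: Optional[str]  # vendor name if externally developed
    risk_tier: str                     # outcome of internal risk assessment
    last_validated: date               # most recent technical validation
    bias_tested: bool                  # whether discrimination testing was run

inventory = [
    ModelRecord(
        name="auto_rating_model_v3",
        use_case="rating",
        data_sources=["internal_claims", "telematics_vendor_x"],
        third_party_vendor="Vendor X",
        risk_tier="high",
        last_validated=date(2022, 6, 1),
        bias_tested=False,
    ),
]

# Simple governance screen: surface high-risk models that lack bias testing.
for m in inventory:
    if m.risk_tier == "high" and not m.bias_tested:
        print(f"ATTENTION: {m.name} is high-risk and has not been bias tested")
```

A record along these lines could also help a Connecticut domestic insurer assemble the information needed to answer the Department’s examination questions and support the annual certification.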

Key Takeaways

The Notice is part of a larger trend we have previously discussed whereby regulators are shifting the burden to insurers to show, and in some circumstances prove, that they are not unfairly discriminating through their use of Big Data or AI. This trend began in 2019 with Circular Letter No. 1, issued by the New York State Department of Financial Services, which placed an affirmative burden on insurers to determine whether their use of external data results in unfair discrimination in providing life insurance. Circular Letter No. 1 provides that an insurer should not use Big Data or AI in underwriting or rating life insurance, unless the insurer has determined: (1) that it does not collect or utilize prohibited criteria; and (2) that the use is not unfairly discriminatory, a determination that should include an analysis of whether the use is supported by generally accepted actuarial principles. Circular Letter No. 1 expressly states that “[t]he burden remains with the insurer at all times.”

In 2021, Colorado passed a law to protect consumers from unfair discrimination in insurance practices that prohibits the use of external consumer data that results in unfair discrimination. The Colorado law also mandates the adoption of rules, still forthcoming, that will require insurers to attest to their adoption of risk mitigation measures for AI, to provide detailed disclosures concerning their use of consumer data and to promptly remedy any unfair discrimination. As with Connecticut’s Notice and New York’s Circular Letter No. 1, these requirements shift the burden onto insurers to demonstrate a lack of unfair discrimination. Oklahoma and Rhode Island both have pending bills patterned on the Colorado law.

To prepare for the regulatory scrutiny discussed in the Notice, and this regulatory trend more broadly, many insurers that are investing heavily in AI are enhancing their policies, procedures, training and governance so that they are better able to demonstrate their efforts to use AI in a non-discriminatory manner. Those efforts include the following:

  • Reviewing the Certification and Questions. Connecticut domestic insurers that are using Big Data and AI should consider: (1) how they would respond to the questions set forth in the appendix to the Notice, and (2) whether they meet the requirements necessary for certification by September 1, 2022. If gaps exist, insurers should consider what additional efforts are needed.
  • Internal Risk Management. Licensees using Big Data and AI should consider implementing risk management programs that address the concerns raised in the Notice, particularly as they relate to data accuracy. As part of those efforts, Licensees should consider identifying their highest-risk models and uses and testing those for potential bias (a simple illustration of one such test follows this list).
  • Vendor Risk Management. The Notice is clear that the Department views insurers as responsible for compliance with anti-discrimination laws even when the AI or Big Data is provided by third parties, and that the Department has the authority to require access to third-party data and models. Accordingly, Licensees should consider whether their third-party diligence and risk management are sufficient and whether those processes provide them with the information they need to respond to the listed questions and otherwise comply with the requirements of the Notice. Connecticut domestic insurers should also consider whether those processes are sufficient to support their certifications.
  • AI Governance. Licensees that have invested heavily in AI should also consider enhancing their AI governance. In the Notice, the Department identifies the importance of: governance for the entire life cycle of Big Data; how AI models are inventoried, risk assessed and managed; and whether appropriate mitigation and validation processes are in place.
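As an illustration of what such bias testing might look like in practice, the following is a minimal Python sketch of an adverse impact ratio screen, borrowing the four-fifths rule of thumb from employment law. The data, group labels and 0.8 threshold are hypothetical assumptions of ours; the Notice does not mandate any particular testing methodology, and actual testing should be designed with actuarial, data science and legal input.

```python
# Minimal, illustrative sketch of an adverse impact screen on model outcomes
# across groups defined by a protected characteristic. All data, labels and
# the 0.8 threshold below are hypothetical, not regulatory requirements.

def adverse_impact_ratios(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """For each group, divide its favorable-outcome rate by the highest
    group's rate. Ratios below ~0.8 are a common screening flag."""
    rates = {g: sum(o) / len(o) for g, o in outcomes.items() if o}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical underwriting decisions (1 = approved, 0 = declined),
# grouped by a protected characteristic.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],
}

for group, ratio in adverse_impact_ratios(decisions).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: adverse impact ratio = {ratio:.2f} [{flag}]")
```

A screen of this kind is only a first-pass flag; models flagged for review would typically warrant deeper statistical and actuarial analysis before any conclusions about unfair discrimination are drawn.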

As we continue to monitor developments in the regulation of AI and third-party data, we invite insurers and other parties to contact us for further guidance.

The authors would like to thank Debevoise law clerk Eli Goldman for his contribution to this article.


Author

Eric R. Dinallo is Chair of the Debevoise insurance regulatory practice and a member of its Financial Institutions and White Collar & Regulatory Defense Groups in New York. He can be reached at edinallo@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Marshal Bozzo is a regulatory counsel based in the New York office and a member of the Debevoise Insurance Regulatory practice. He can be reached at mlbozzo@debevoise.com.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.