On June 30, 2022, the California Department of Insurance (the “Department”) released Bulletin 2022-5 (the “Bulletin”), which places several limitations on the use of Artificial Intelligence (“AI”) and alternative data sets (“Big Data”) by the insurance industry. The Bulletin states that the Department is aware of recent allegations of racial discrimination in marketing, rating, underwriting and claims practices by insurance companies and reminds all insurance companies of their obligations to conduct their businesses “in a manner that treats all similarly-situated persons alike.” The Bulletin goes on to describe recent examples of alleged unfair discrimination being investigated by the Department, including (1) subjecting claims from certain inner-city ZIP Codes to special scrutiny, (2) using facial recognition in claims decisions, and (3) collecting personal information that is unrelated to the risk being underwritten.

The six most significant aspects of the Bulletin are:

  1. Restrictions on Both AI and Big Data: The Bulletin provides that insurance companies must avoid discrimination that can result from the use of artificial intelligence, as well as Big Data, which is described as “extremely large data sets analyzed to reveal patterns and trends.”
  2. Restrictions Beyond Underwriting: Insurance companies must avoid discrimination when using AI or Big Data for underwriting, as well as marketing, rating, processing claims, and investigating suspected fraud relating to any insurance transaction that impacts California residents.
  3. A Focus on Proxy Discrimination: The Department notes growing concern about the use of purportedly neutral individual characteristics as proxies for prohibited characteristics, which include sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, sexual orientation, citizenship, primary language, or immigration status. Potential proxies listed in the Bulletin include ZIP Codes, biometrics, facial recognition, geographic data, homeownership data, credit information, education level, civil judgments, court records, consumer retail purchase history, social media, internet use, the condition or type of an applicant’s electronic device, and how a consumer appears in a photograph. This list overlaps substantially with the inputs the New York Department of Financial Services described as potentially suspect in life insurance underwriting in its Circular Letter No. 1 (2019).
  4. Concerns Over Lack of Actuarial Nexus: The Department likely views the above list of suspect characteristics as non-exhaustive, because the Bulletin goes on to state that any input used for AI models and Big Data that lacks a sufficient actuarial nexus to the risk of loss has the potential to unfairly discriminate.
  5. Transparency and Explainability Requirements: The Bulletin provides that when an insurer uses complex algorithms to support a declination, limitation, premium increase, or other adverse action, it must provide the consumer with the specific reason or reasons for that decision (a minimal illustrative sketch follows this list).
  6. Due Diligence Requirements: The Bulletin states that before utilizing any data collection method, fraud algorithm or rating/underwriting or marketing tool, insurers “must conduct their own due diligence to ensure full compliance with all applicable laws.”
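
To make the transparency requirement in item 5 concrete, the sketch below shows one way an insurer might derive consumer-facing reasons from a scoring model. It is purely illustrative: the weights, feature names, threshold, and the `adverse_action_reasons` helper are all hypothetical, and real rating or underwriting models (and the reason-code logic regulators expect) will be considerably more complex.

```python
# Purely illustrative sketch: deriving specific, consumer-facing reasons for an
# adverse action from a simple linear scoring model. All names, weights, and
# thresholds are hypothetical; actual models and reason codes will differ.

APPROVAL_THRESHOLD = 0.0

# Hypothetical model weights: positive values raise the score, negative lower it.
WEIGHTS = {
    "years_insured": 0.4,
    "prior_claims": -0.8,
    "miles_driven_per_year": -0.3,
}

# Plain-language descriptions suitable for an adverse action notice.
REASON_TEXT = {
    "prior_claims": "Number of prior claims",
    "miles_driven_per_year": "Annual mileage",
    "years_insured": "Length of insurance history",
}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the inputs that contributed most negatively to the decision."""
    contributions = {
        feature: WEIGHTS[feature] * value
        for feature, value in applicant.items()
        if feature in WEIGHTS
    }
    score = sum(contributions.values())
    if score >= APPROVAL_THRESHOLD:
        return []  # No adverse action, so no reasons are required.
    # Rank features by how strongly they pushed the score downward.
    negative = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASON_TEXT[f] for _, f in negative[:top_n]]

if __name__ == "__main__":
    applicant = {"years_insured": 1, "prior_claims": 3, "miles_driven_per_year": 4}
    print(adverse_action_reasons(applicant))
    # ['Number of prior claims', 'Annual mileage']
```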

Takeaways. The Bulletin is part of an emerging patchwork of state laws and regulatory pronouncements imposing significant obligations on insurers’ use of AI applications and Big Data, which includes recent developments in Connecticut, Colorado, and New York, as well as guidance from the NAIC and NCOIL. Insurance companies seeking to comply with these new developments should consider taking some of the following steps:

  • AI/Big Data Inventory: Assembling a list of AI models and Big Data uses that could be subject to these regulations will help insurers prioritize which applications may require immediate attention.
  • Risk Rating: Creating an AI risk-management framework that includes a list of high-risk factors for AI and Big Data uses (e.g., use of potentially suspect inputs in underwriting algorithms). Those criteria can then be used to identify the highest-risk AI and Big Data applications for review and possible mitigation (see the illustrative sketch following these takeaways).
  • Mitigation: Identifying mitigation options for high-risk AI applications, including testing suspect inputs, additional transparency, human oversight of decisions, ensuring data quality, and increasing the diversity of the teams designing and operating the AI/Big Data applications.
  • Training: Conducting trainings on AI and data compliance, governance, and risk management for employees and contractors involved in designing and operating the AI/Big Data applications, as well as certain members of senior management, legal, compliance, risk, and business functions.
  • Governance: Creating a cross-functional AI Oversight Committee to review certain high-risk AI/Big Data applications and recommend mitigation. The Committee can also implement AI/Big Data Polices, including Guiding Principles and Codes of Conduct.
  • Vendor Risk Management: Many AI/Big Data applications are at least partially developed by third parties. Insurers should consider whether their diligence and contractual procedures are sufficient for vendors that are providing services that may be covered by the Bulletin.
  • Documentation: As the risk of regulatory exams and civil litigation over insurers’ use of AI and Big Data increases, so does the need for robust documentation of efforts to meet regulatory compliance obligations. This may include review of data inputs, results of model testing, assessment of actuarial risk, implementation of mitigations, who received training, and how concerns about models or particular decisions were resolved.
  • Examinations of Business Practices, Algorithms, and Models: The Bulletin also states that the Department may use market conduct examinations or Special Investigative Unit examinations to audit and examine all insurer business practices, including their marketing, rating, claim, and underwriting criteria, programs, algorithms, and models.
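
The first two steps above lend themselves to a simple data structure. The sketch below, which is purely illustrative, shows one way to represent an AI/Big Data inventory with basic risk tagging; the `AIApplication` class, risk factors, and scoring are hypothetical placeholders for the criteria an insurer would develop with counsel and actuaries.

```python
# Purely illustrative sketch of an AI/Big Data inventory with simple risk
# tagging, as described in the "AI/Big Data Inventory" and "Risk Rating" steps
# above. The fields, risk factors, and scoring are hypothetical.

from dataclasses import dataclass, field

# Hypothetical high-risk factors, loosely drawn from the Bulletin's list of
# potential proxies and the insurance functions it covers.
SUSPECT_INPUTS = {"zip_code", "credit_information", "education_level", "social_media"}
COVERED_USES = {"marketing", "rating", "underwriting", "claims", "fraud_investigation"}

@dataclass
class AIApplication:
    name: str
    use_case: str                      # e.g., "underwriting"
    inputs: set[str] = field(default_factory=set)
    vendor_developed: bool = False
    produces_adverse_actions: bool = False

    def risk_score(self) -> int:
        """Count the hypothetical high-risk factors present in this application."""
        score = len(self.inputs & SUSPECT_INPUTS)   # suspect proxy inputs
        score += self.use_case in COVERED_USES      # covered insurance function
        score += self.vendor_developed              # third-party development
        score += self.produces_adverse_actions      # triggers explainability duties
        return score

inventory = [
    AIApplication("Quote model", "underwriting",
                  {"zip_code", "credit_information"},
                  vendor_developed=True, produces_adverse_actions=True),
    AIApplication("Chatbot", "customer_service", {"chat_text"}),
]

# Review the highest-risk applications first.
for app in sorted(inventory, key=AIApplication.risk_score, reverse=True):
    print(app.name, app.risk_score())
```

Even a rough score like this can help triage which applications are reviewed first; the substance of that review remains a legal and actuarial question, not a computational one.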

Debevoise has developed the Debevoise AI Regulatory Tracker (“DART”), an online tool to help our clients keep track of AI regulatory developments across the globe. For a demonstration of DART, please contact us at agesser@debevoise.com and argressel@debevoise.com.

To subscribe to the Data Blog, please click here. Please do not hesitate to contact us with any questions.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Eric R. Dinallo is Chair of the Debevoise insurance regulatory practice and a member of its Financial Institutions and White Collar & Regulatory Defense Groups in New York. He can be reached at edinallo@debevoise.com.

Author

Marshal Bozzo is a regulatory counsel based in the New York office and a member of the Debevoise Insurance Regulatory practice. He can be reached at mlbozzo@debevoise.com.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.

Author

Scott M. Caravello is an associate in the litigation department. He can be reached at smcaravello@debevoise.com.