On October 4, 2022, the White House released the Blueprint for an AI Bill of Rights (the “Blueprint”), which provides non-binding “principles” for organizations in both the public and private sectors to use when developing or deploying artificial intelligence (“AI”) or other automated systems.

The Blueprint does not include many new ideas for AI compliance. Instead, it collects principles that have already appeared in laws and guidance published by governments and organizations around the world. But unlike many of those guidelines, it takes a rights-based approach focused on AI’s potential for harm, rather than a risk-based approach focused on the likelihood of harm, which means that the Blueprint’s recommendations apply to all covered automated systems, largely regardless of their risk.

This approach significantly undermines the likely value of the Blueprint as a model for future AI regulation in the United States. Many organizations that have adopted AI are currently running hundreds, if not thousands, of models that make decisions that range from consequential to relatively insignificant. Requiring those organizations to put each model through a complicated and time-consuming compliance process is not an effective way to reduce the risks associated with automated systems. Instead, it will result in a misallocation of resources, with too much effort spent on low-risk AI (e.g., spam filters, graphics generation for games, inventory management, cybersecurity monitoring, etc.) and not enough effort spent on high-risk AI (e.g., hiring, lending, insurance underwriting, law enforcement, education admissions, etc.).

In response to this critique, the drafters will likely point out that the Blueprint only applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” and therefore does not apply to low-risk AI at all. They would also likely note that the Blueprint includes an appendix of examples of covered automated systems, most of which are AI applications that would be considered high-risk under other regulatory frameworks. But this scope limitation is not a prominent feature, and nowhere in the Blueprint can you find examples of low-risk automated systems that are expressly out of scope. In addition, the phrase “the potential to meaningfully impact” is likely to sweep in a great deal of low-risk AI that could, in theory, impact Americans but, as a practical matter, is unlikely to do so. As a result, there will be many low-risk automated systems for which the application of the Blueprint is unclear, and regulators and compliance professionals will press to bring those systems under the compliance regime as a matter of prudence because of their theoretical potential for causing harm, even if that harm is unlikely to occur.

If the Blueprint were only applicable to an identified list of high-risk AI (which is the approach that the EU has adopted with the draft AI Act), it would be a more valuable policy tool for promoting organizations’ responsible use of automated systems. As discussed below, this is especially true because the Blueprint does make effective use of “AI Storytelling” by providing concrete examples to demonstrate the risks of certain AI use cases and the ways those risks can be mitigated.

One additional drawback of the Blueprint’s rights-based approach is that it focuses almost exclusively on the risks of AI and therefore leaves little room to balance the potential benefits of automated systems against their possible drawbacks. Although the Blueprint’s Foreword does mention the “extraordinary benefits” of AI and its potential to “make life better for everyone,” the document fails to acknowledge that many of the risks it associates with automated systems can be equally applicable to human decision-making, which can also be flawed, opaque, and biased.

The Blueprint’s Five Principles

Below is a summary of the Blueprint’s five principles, along with a checklist of actions that the White House believes will advance each principle.

  1. Safe and Effective Systems (You should be protected from unsafe or ineffective systems).
    • Protect the public from harm in a proactive and ongoing manner
      1. Public consultation
      2. Pre-deployment testing
      3. Risk identification and mitigation
      4. Ongoing monitoring
      5. Clear organizational oversight
    • Avoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its reuse
      1. Use relevant and high-quality data
      2. Derived data sources tracked and reviewed carefully
      3. Data reuse limits in sensitive domains
    • Demonstrate the safety and effectiveness of the system
      1. Independent evaluation
      2. Clear and regular reporting
  2. Algorithmic Discrimination Protections (You should not face discrimination by algorithms and systems should be used and designed in an equitable way).
    • Protect the public from algorithmic discrimination in a proactive and ongoing manner
      1. Conduct equity assessments, including input data
      2. Use robust representative data
      3. Remove proxies
      4. Ensure accessibility to people with disabilities
      5. Conduct disparity assessments and mitigate disparities identified
      6. Engage in ongoing monitoring and mitigation
    • Demonstrate that the system protects against algorithmic discrimination
      1. Independent evaluation
      2. Clear and regular reporting
  3. Data Privacy (You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used).
    • Data privacy should be protected by design and by default
      1. Privacy risks include risks to third parties
      2. Collect and retain data only as needed to meet specific, narrow goals
      3. Identify harms and mitigate risks arising from the use, sharing, or storage of data
      4. Follow industry-standard best practices for privacy and security
    • Protect the public from unchecked surveillance and monitoring
      1. Heightened oversight of surveillance and monitoring systems, including risk assessments
      2. Avoid surveillance unless necessary and use least invasive means
      3. Limit surveillance and monitoring to prevent infringement of civil rights or liberties
    • Create appropriate and meaningful mechanisms for consent, access, and control
      1. Seek consent for narrow, specific use cases for a specific duration
      2. Consent requests should be plain, brief, direct, and understandable by laypeople
      3. Provide people whose data is collected with the ability to access their data and metadata
      4. Provide an ability to correct the data and metadata as necessary
      5. Allow people to withdraw consent, resulting in deletion of their data
      6. Individuals should be able to use automated systems for consent, access, and control decisions
    • Demonstrate that data privacy and user control are protected
      1. Independent evaluation
      2. Clear and regular reporting
    • Data related to sensitive domains should carry additional protections
      1. Only use sensitive data for strictly necessary functions
      2. Consent for non-necessary functions should be optional
      3. Conduct periodic ethical reviews of any use of sensitive data
      4. Conduct regular audits of data quality
      5. Sensitive data should not be sold, transferred, or made public
      6. Publicly report lapses or breaches that result in sensitive data leaks, wherever appropriate
  4. Notice and Explanation (You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you).
    • Provide clear, timely, understandable, and accessible notice of use and explanations
      1. Make system documentation public, in plain language, and include impact assessments
      2. Identify who is responsible for design of the automated system and who is utilizing it
      3. Users should receive notice of an automated system before or when the system is impacting them
      4. Notices and explanations should be improved through user testing to ensure clarity
    • Provide explanations as to how and why an automated decision was made or an action was taken
      1. Tailor explanations to a specific purpose and make the explanation useful for users
      2. Tailor explanations to relevant audiences, and assess explanation through research
      3. Tailor explanations to risk and give users advance explanation for high-risk systems
      4. Ensure that explanations reflect the factors and influences that led to particular decisions
    • Demonstrate protections for notice and explanation
      1. Clear and regular reporting
  5. Human Alternatives, Consideration, and Fallbacks (You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter).
    • Provide for opting out of automated systems in favor of a human alternative, as appropriate
      1. Give brief, clear notice of opt-out rights, along with information about how to opt out
      2. Provide human alternatives when there is a reasonable expectation of human involvement
    • Institute fallback and escalation systems to address appeals, and system failure or errors
      1. Availability of human involvement proportional to system’s impact on rights and opportunities
      2. Mechanisms for human involvement should be easy to use, tested to confirm that they are, and available on a timely basis proportionate to the time-critical nature of the decisions at stake
    • Training, assessment, and oversight to combat automation bias
      1. Provide training for everyone interacting with the system, with regular assessments
      2. Incorporate lessons learned from assessments to mitigate system bias into governance
    • Additional human oversight capabilities and safeguards for sensitive domains
      1. Institute human oversight to ensure automated systems in sensitive domains are narrowly scoped and tailored to specific goals, and safe and effective for that specific situation
      2. Ensure human oversight in any high-risk decision, such as sentencing decisions or medical care
      3. Establish meaningful oversight of the system, including possible limited waivers of confidentiality for designers and developers of automated systems
    • Demonstrate access to human alternatives, consideration, and fallbacks
      1. Clear and regular reporting

The Need to Treat AI Like We Treat Employees

As discussed above, the Blueprint’s decision to take a rights-based approach (which focuses on the potential for impact), rather than a risk-based approach (which focuses on the likelihood of impact), means that there may be pressure to apply this substantial list of requirements to all automated systems, even those that pose a relatively low risk of harm. This will likely stifle innovation and lead to a misallocation of compliance resources, especially for organizations with a large number of models. To illustrate why, it is helpful to think of automated systems in the same way we think about human resources.

Nearly every employee has the potential to cause a significant amount of harm to, and thus meaningfully impact, an organization. They can steal sensitive information, alienate clients, destroy valuable property, and undermine core company objectives. Even the most junior employee has the potential for significant damage. But organizations cannot function without their employees, and most organizations that are investing heavily in automated systems need hundreds, if not thousands, of employees. It would be unworkable for these organizations to require a lengthy and robust vetting process before each employee at every level of the company is allowed to do their job. Instead, the hiring process for most employees is limited to a resume review, one or two interviews, and a background check.

All companies, however, have some employees who hold sensitive or higher-profile positions, whose mistakes or malfeasance would be likely to cause significant financial harm, reputational damage, or legal liability to the company. The vetting process for these jobs is therefore more involved, and often includes a detailed submission from the candidate, informal and formal reference checks, and multiple rounds of interviews, which can take several months. But for most companies, this only applies to a relatively small number of positions. It would be a waste of time and resources to subject candidates for a mailroom opening to the same vetting process as the new CEO, even though mailroom employees have the potential to do an enormous amount of damage to an organization by not delivering important packages or by leaking sensitive documents to the press. For similar reasons, the Blueprint’s principles should be focused primarily on the small number of automated systems that are most likely to significantly impact Americans in a negative way and, therefore, pose the highest risk, rather than the much broader category of automated systems that merely have the potential to do so.
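
To make the analogy concrete, below is a minimal sketch, in Python, of how an organization might triage its model inventory so that only the highest-risk systems go through the full review process. The tier names, intake fields, and thresholds are hypothetical illustrations of our own, not anything prescribed by the Blueprint.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., spam filters, inventory management
    MEDIUM = "medium"  # e.g., internal tools that touch customer data
    HIGH = "high"      # e.g., hiring, lending, insurance underwriting


@dataclass
class ModelRecord:
    name: str
    decides_about_people: bool   # does the model make decisions about individuals?
    consequential_domain: bool   # hiring, lending, healthcare, housing, etc.
    likelihood_of_harm: float    # rough 0-1 estimate from an intake questionnaire


def assign_tier(m: ModelRecord) -> RiskTier:
    """Hypothetical triage: reserve the full review process for models that are
    likely to cause significant harm; apply lighter-touch checks to the rest."""
    if m.decides_about_people and m.consequential_domain and m.likelihood_of_harm >= 0.5:
        return RiskTier.HIGH
    if m.decides_about_people or m.likelihood_of_harm >= 0.2:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Review steps scale with the tier, rather than applying the full checklist to everything.
REVIEW_STEPS = {
    RiskTier.LOW: ["inventory entry"],
    RiskTier.MEDIUM: ["inventory entry", "pre-deployment testing"],
    RiskTier.HIGH: ["inventory entry", "pre-deployment testing", "disparity assessment",
                    "independent evaluation", "ongoing monitoring"],
}

inventory = [
    ModelRecord("spam filter", decides_about_people=False,
                consequential_domain=False, likelihood_of_harm=0.05),
    ModelRecord("resume screener", decides_about_people=True,
                consequential_domain=True, likelihood_of_harm=0.7),
]

for m in inventory:
    tier = assign_tier(m)
    print(f"{m.name}: {tier.value} risk -> {', '.join(REVIEW_STEPS[tier])}")
```

The particular thresholds matter less than the design choice they illustrate: the depth of review scales with the likelihood and severity of harm, rather than with the mere possibility of it.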

The Value of AI Storytelling

One area where the Blueprint does excel, however, is in illustrating the value of “AI Storytelling,” which plays an important role in building an effective compliance culture around AI and other emerging technologies. A persistent difficulty in AI governance and compliance is a lack of shared understanding: concrete, practical concerns about particular AI applications are rarely conveyed effectively to the key audiences. Regulators, academics, developers, and AI users often talk past each other about how the AI is actually being used and how it might cause harm. AI Storytelling helps address this problem by using plain language and concrete examples to illustrate the value of the AI tool at issue, the specific risks associated with that use case, and concrete ways that an organization can avoid or mitigate those risks.

For example, concerns have been raised about bias in automated tools that are used to screen the resumes of job applicants. But regulators have struggled to come up with concrete examples that frame the issue in a way that everyone (1) understands the problem and (2) agrees that it is a problem. Such examples include:

  • Including “travel” or other similar hobbies as an input in the resume screening tool, when doing so favors affluent candidates but does not meaningfully improve the quality of the applicant pool for the particular job; and
  • Penalizing candidates who have a gap of one year or more on their resume, when doing so negatively impacts women candidates who took time off to raise children.

AI Storytelling uses these kinds of concrete examples of AI risks to allow policymakers and AI developers to engage more effectively on the automated systems that are most in need of attention, rather than talking about bias in the abstract, which means one thing to civil rights lawyers but may mean something very different to data scientists.
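
To show how this kind of concrete framing translates into something a data scientist can actually test, below is a minimal sketch, in Python, of a disparity check along the lines of the “four-fifths” rule of thumb used in employment-selection analysis, together with a crude check of whether a feature such as “lists travel as a hobby” tracks group membership and may therefore act as a proxy. The candidate data, group labels, and screening tool are entirely hypothetical.

```python
from collections import Counter

# Hypothetical screening outcomes: (group, passed_screen, lists_travel_hobby)
candidates = [
    ("group_a", True,  True),  ("group_a", True,  True),
    ("group_a", True,  False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, True),
]


def selection_rate(group: str) -> float:
    """Share of candidates in the group who passed the automated screen."""
    rows = [c for c in candidates if c[0] == group]
    return sum(1 for c in rows if c[1]) / len(rows)


rate_a, rate_b = selection_rate("group_a"), selection_rate("group_b")
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}; impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:  # the four-fifths rule of thumb
    print("Potential adverse impact: investigate which features drive the gap.")

# Crude proxy check: does the "travel hobby" feature track group membership?
by_group = Counter((c[0], c[2]) for c in candidates)
for g in ("group_a", "group_b"):
    total = sum(v for (grp, _), v in by_group.items() if grp == g)
    with_travel = by_group[(g, True)]
    print(f"{g}: {with_travel}/{total} list travel as a hobby")
```

A check like this does not answer whether a disparity is legally problematic, but it gives lawyers, regulators, and developers the same concrete starting point that AI Storytelling aims to provide.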

The Blueprint includes many examples of effective AI Storytelling, some of which are drawn from groundbreaking studies and reporting on AI risks. These include:

  1. Principle: Protection from Unsafe and Ineffective Systems
    • Concrete Example: A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model’s predictions underperformed relative to the designer’s claims, while the model also caused “alert fatigue” by generating a large number of false sepsis alerts (the brief worked sketch after this list illustrates how that can happen).
  2. Principle: Protection from Discrimination by Algorithms
    • Concrete Example: A search for “beautiful girls” using some search engines returns pictures mostly of white people, while searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly sexualized content. Some search engines have been working to reduce the prevalence of these kinds of results, but the problem remains.
  3. Principle: Protection from Abusive Data Privacy Practices
    • Concrete Example: A data broker gathered millions of personal records of Americans, without their knowledge or consent, by scraping data from public social media profiles, and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.
  4. Principle: The Right to Know that an Automated System Is Being Used
    • Concrete Example: A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home healthcare assistance couldn’t determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility. The lack of a timely explanation made it harder to understand and contest the decision.
  5. Principle: The Right to Opt-Out or Have Human Review of Automated Decisions
    • Concrete Example: A large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal, or other form of recourse.
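
As a brief worked illustration of the “alert fatigue” point in the first example above, the following sketch, in Python, uses invented figures (not the numbers reported in the study the Blueprint cites) to show why a model that predicts a relatively rare condition can generate mostly false alarms even when its headline accuracy sounds reasonable.

```python
# Hypothetical, illustrative figures only -- not drawn from the cited study.
prevalence = 0.06      # share of hospitalized patients who develop sepsis
sensitivity = 0.63     # share of true sepsis cases the model flags
specificity = 0.87     # share of non-sepsis patients the model does not flag

# Expected alert mix per patient screened.
true_alerts = prevalence * sensitivity
false_alerts = (1 - prevalence) * (1 - specificity)
precision = true_alerts / (true_alerts + false_alerts)

print(f"Of every 100 alerts, roughly {precision * 100:.0f} involve actual sepsis;")
print(f"the other {100 - precision * 100:.0f} are false alarms clinicians must still review.")
```

With these assumed figures, only about a quarter of alerts correspond to actual sepsis cases, which is why independent evaluation and ongoing monitoring of alert quality matter so much for systems of this kind.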

Although these examples are effective at illustrating the risks posed by AI, the Blueprint missed an opportunity to provide a more useful template for AI regulation in the United States by not also focusing on the benefits of automated systems and by not limiting its application to identified high-risk use cases.


The Debevoise Artificial Intelligence Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Jehan Patterson is a litigation counsel based in the firm’s Washington, D.C. office and a member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on advising the firm’s financial institution clients on matters related to consumer finance law and enforcement.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.

Author

Scott M. Caravello is an associate in the litigation department. He can be reached at smcaravello@debevoise.com.