On November 16, 2023, the Committee on Professional Responsibility and Conduct of the State Bar of California (“COPRAC”) provided initial recommendations regarding the use of generative AI by lawyers (the “Guidance”). The Guidance uses the existing Rules of Professional Conduct as a framework, but recognizes that generative AI is a rapidly evolving technology that might necessitate new regulation and rules in the future. Many of the recommendations in the Guidance were also made in a recently released proposed advisory opinion by the Florida Bar Board of Governors’ Review Committee on Professional Ethics on the same issue. Because of the clear, concise, and practical nature of the COPRAC Guidance, it is likely to become a useful set of guidelines that lawyers can follow when using generative AI for work. It is also likely to have an impact beyond the legal profession, as general guidance on the responsible use of generative AI for professional tasks.
- Framing the Guidance
The Guidance begins by noting that “[l]ike any technology, generative AI must be used in a manner that conforms to a lawyer’s professional responsibility obligations . . . and [a] lawyer should understand the risks and benefits of the technology used in connection with providing legal services. How these obligations apply will depend on a host of factors, including the client, the matter, the practice area, the firm size, and the tools themselves.” The Guidance also notes that, at least at this initial stage, it should be read as guiding principles rather than as “best practices.”
- The Eight Parts of the Guidance
The Guidance is divided into eight parts: Duty of Confidentiality, Duty of Competence and Diligence, Duty to Comply with the Law, Duty to Supervise and Training, Communications, Fees for Legal Services, Duty of Candor, and Prohibition on Discrimination. Again, many of these guidelines could be applied to the professional use of generative AI beyond the legal profession.
- Duty of Confidentiality
Recognizing that use of generative AI poses risks to the confidentiality of client information, the Guidance provides that:
- A lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections.
- A lawyer must anonymize client information and avoid entering details that can be used to identify the client.
- A lawyer should consult with cybersecurity experts to ensure that any AI system in which a lawyer would input confidential client information adheres to stringent security, confidentiality, and data retention protocols.
- A lawyer who intends to use confidential information in a generative AI product should ensure that the provider does not share inputted information with third parties or utilize the information for its own use in any manner, including to train or improve its product.
- Duty of Competence and Diligence
Recognizing that AI outputs can include information that is false or inaccurate, the Guidance provides that:
- AI-generated outputs can be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias, supplemented, and improved, if necessary.
- A lawyer must critically review, validate, and correct both the input and the output of generative AI to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.
- A lawyer’s professional judgment cannot be delegated to generative AI and remains the lawyer’s responsibility at all times. A lawyer should take steps to avoid over-reliance on generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.
- Duty to Comply with the Law
Recognizing that generative AI can be used in a way that is inconsistent with existing laws, such as those relating to hiring, lending, investment advice, privacy, and cybersecurity, the Guidance provides that:
- A lawyer should analyze the relevant laws and regulations applicable to the attorney or the client, and must not counsel a client to engage, or assist a client in engaging, in conduct that the lawyer knows violates any law, rule, or ruling of a tribunal when using generative AI tools.
- Duty to Supervise and Training
Recognizing the need to provide supervision in the use of generative AI by junior lawyers and non-lawyers, the Guidance provides that:
- Managerial and supervisory lawyers should establish clear policies regarding the permissible uses of generative AI and make reasonable efforts to ensure that the firm adopts measures giving reasonable assurance that the conduct of the firm’s lawyers and non-lawyers complies with their professional obligations when using generative AI. This includes providing training on the ethical and practical aspects, and pitfalls, of any generative AI use.
- Communications
Recognizing that there may be circumstances in which it would be appropriate to notify clients that generative AI is being used for their matters, the Guidance provides that:
- A lawyer should evaluate their communication obligations throughout the representation based on the facts and circumstances, including the novelty of the technology, risks associated with generative AI use, scope of the representation, and sophistication of the client.
- The lawyer should consider disclosing to their clients that they intend to use generative AI in the representation, including how the technology will be used, and the benefits and risks of such use.
- A lawyer should review any applicable client instructions or guidelines that may restrict or limit the use of generative AI.
- Fees for Legal Services
Recognizing that generative AI presents novel issues relating to how clients are charged for legal work, the Guidance provides that:
- A lawyer may use generative AI to more efficiently create work product and may charge for actual time spent (e.g., crafting or refining generative AI inputs and prompts, or reviewing and editing generative AI outputs).
- A lawyer must not charge hourly fees for the time saved by using generative AI.
- Costs associated with generative AI may be charged to the clients in compliance with applicable law.
- A fee agreement should explain the basis for all fees and costs, including those associated with the use of generative AI.
- Duty of Candor
Recognizing the risk that the use of generative AI for drafting legal briefs can result in inaccurate documents being submitted to courts, the Guidance provides that:
- A lawyer must review all generative AI outputs, including analysis and citations to authority, for accuracy before submission to the court, and correct any errors or misleading statements made to the court.
- A lawyer should also check for any rules, orders, or other requirements in the relevant jurisdiction that may necessitate the disclosure of the use of generative AI.
- Prohibition on Discrimination
Recognizing that many generative AI tools are trained on data sets that are not representative, and therefore can result in outputs that are biased, the Guidance provides that:
- A lawyer should be aware of possible biases and the risks they may create when using generative AI (e.g., to screen potential clients or employees).
- Lawyers should engage in continuous learning about AI biases and their implications in legal practice, and firms should establish policies and mechanisms to identify, report, and address potential AI biases.
The Guidance recognizes that generative AI has the potential to “facilitate efficiency and expanded access to justice” for the legal profession, but also cautions that the technology may carry unacceptable risks. To unlock the full potential of generative AI and avoid the pitfalls associated with its use, legal and other professional firms should consider adopting a sensible governance framework that includes:
- Generative AI Policies. To operationalize governance, consider the risks associated with the use of generative AI for the applicable industry and create policies that outline expectations and any prohibitions or limitations on the use of such technologies. These policies may also address specific risks associated with generative AI, including provider and vendor management concerns, monitoring for quality control, and data governance.
- Training on Policies and Ethical Obligations. To socialize policies and expectations, consider providing training for individuals involved in developing, overseeing, testing, or using generative AI technologies. For individuals subject to industry professional codes or fiduciary duties, consider providing additional guidance on how to satisfy those obligations in light of generative AI.
- Mitigation Options. To reduce the risks associated with certain uses of generative AI, consider identifying a list of measures that can be implemented, including enhanced transparency, additional human oversight, and bias assessments, as appropriate.
- Updating Engagement Letters or Other Contracts. Consider if it is appropriate to update attorney engagement letters or other contracts governing the provision of professional services to disclose, restrict, or limit use of generative AI and outline expectations regarding fees for services associated with the use of generative AI.
- Tracking Existing Rules and Regulations, as Well as New Developments. Limitations on the ability of companies to use generative AI are not coming only from new laws or regulations. The Guidance, for example, is the application of the existing Rules of Professional Conduct to new circumstances (i.e., the use of generative AI for legal work). Other legal, regulatory, and governance bodies are likely to take a similar approach: rather than waiting for new regulations to pass, they will leverage their existing authority to regulate and provide guidance on the use of generative AI. Therefore, in addition to tracking new AI regulatory developments, companies using generative AI for work-related tasks should also consider reviewing existing rules and regulations to see how they might be applied to their use of generative AI. Any gaps between existing legal obligations (and obligations likely to arise in the near future) and current practices with respect to generative AI should be identified, and a plan should be put in place to close those gaps.
The Debevoise Data Portal is an online suite of tools that help our clients quickly assess their federal, state and international breach notification and substantive cybersecurity obligations. Please contact us at firstname.lastname@example.org for more information.
The cover art used in this blog post was generated by DALL-E.