Avi Gesser, Co-Chair of the Data Strategy & Security Group, spoke with host Tony Lee on SHRM’s All Things Work podcast about where AI tools excel and where they falter when used by workers.

Listen to the podcast HERE.

SHRM published an accompanying article, entitled “The Promise and Peril of Artificial Intelligence,” quoting Avi on how employers and employees have been quick to adopt ChatGPT and other generative AI tools without considering the potential consequences of their use.

Avi opines on what a ChatGPT/generative AI policy should include to minimize legal risks such as copyright infringement, privacy violations and plagiarism.

“But before devising any new protocols, it pays to think about how the company will be using the technology,” he says, adding that “[o]rganizations…are still learning what the technology can do,” what people are using it for, and what they want to use it for.

He notes that using the technology may create new legal issues, such as whether to charge a client for materials developed with generative AI without disclosing the use of those tools.

Avi goes on to recommend the following steps for formulating a policy:

  1. Outline prohibited uses. This would include anything that violates existing company policies or laws. Don’t allow anyone to put confidential or highly personal information into the tool unless the tool is a closed, proprietary company system that no one outside the organization can access.
  2. Determine what is allowed. A company may be comfortable with employees using the technology for drafting letters or summarizing public documents, for example.
  3. Develop a system to evaluate employee requests to use the technology in ways that are not expressly forbidden or allowed. Appoint a person or group to make such decisions and establish a way for employees to explain their project and receive an answer.

The full article can be viewed HERE.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.