As AI finds its way into nearly every facet of life, companies are striving to increase AI adoption among their employees to enhance efficiency and optimize results. Yet heightened use of these tools also introduces risks, including loss of confidentiality over sensitive data, production of inaccurate documents, poor decision-making, and violations of legal and contractual obligations.

Clear internal communication of AI policies can mitigate these risks – just as poor communication can heighten them. When employees aren’t sure which tools are approved, what data is safe to input, or what use cases are allowed, organizations risk increased exposure to cybersecurity threats, compliance failures, disappointing adoption rates, and a lack of return on investment. Unclear guidance can also lead to employees using confidential client or customer information with AI in violation of privacy policies or contractual requirements.

To avoid these pitfalls and the reputational damage they can cause, organizations should consider building out employee communications programs that include these three best practices:

  1. Define a strategic framework

A clear strategy should be the foundation for effective communication. Companies should know: What are the goals of our AI adoption? Is AI intended to accelerate growth initiatives, drive efficiencies, improve customer service, or push other key objectives? Will the use of AI be governed by a centralized function across the enterprise, or is its application – and enforcement of use restrictions – dispersed and flexible? What tools will employees have access to, and what are acceptable and unacceptable uses, illustrated with concrete examples?

The answers will help the company frame communications in a way that supports its goals. For example, if a core goal of AI implementation is to boost productivity, it may be prudent to spend more time assuaging employee fears around AI, building trust in its reliability, and directly speaking to potential objections than listing technical details of the platform.

  2. Craft effective messaging

Messaging should feel authentic to the company. If a core company value is fostering growth and advancement for employees, explain how using cutting-edge technology is a chance for upskilling, or how it can free up time for additional learning and development. This helps to build trust in leadership and in the tools themselves.

Demystifying AI also goes a long way in improving employee perceptions. Distilling technical details into clear, actionable language helps employees understand what AI can offer them and sets expectations for secure use. Avoid acronyms and, whenever possible, provide examples of practical use cases that connect to employees' day-to-day work.

Most importantly, communications should be two-way. Anonymous surveys, dedicated Q&A sessions, and middle managers equipped with strategies for talking to their teams together create feedback loops and a collaborative environment – one that keeps leadership ahead of issues, ensures gaps in the policy are rapidly identified and remedied, and sees that employee needs are met.

  3. Develop multiple touchpoints

An internal responsible use policy buried on the company intranet probably won’t cut it, especially given the risks posed by AI. A multi-phase, multi-channel approach that leverages commonly used channels like Slack, employee newsletters, and town hall meetings can help to ensure that employees don’t accidentally miss essential requirements of the AI policies and procedures. It also gives employees a choice of where and how they’d like to engage. Finally, different channels lend themselves to different types and styles of information. Blending formats gives the company the chance to share all the relevant information (including things like monthly tips and FAQs) in bite-size chunks without bogging employees down so much that they overlook it altogether.

When AI is not properly introduced and explained to a workforce, companies risk confusion and operational inefficiencies, and in some cases, legal liability and reputational harm. To ensure use of AI is compliant, responsible, strategically aligned, and organizationally successful, leaders should carefully communicate to, and engage with, their internal stakeholders — creating one in-sync team that works together toward a common goal.

* * *


The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Scott is a Partner and Head of U.S. Litigation Support at FGS Global, advising boards and executives on government investigations and enforcement actions; antitrust, IP, employment and commercial litigation; and cybersecurity incidents. FGS Global is the world’s leading stakeholder strategy firm, with approximately 1,400 professionals around the world, advising clients in navigating critical issues and reputational challenges – including the evolving landscape of artificial intelligence.

Author

Sarah Ashton is a Partner based in FGS Global's Los Angeles office with expertise in strategic communications, public affairs and government relations spanning sectors and industries – including specialties in emerging technology and crisis communications.

Author

Kelly Langmesser is a Managing Director in the FGS New York Office who advises and supports clients in a variety of crisis and special situations, including cybersecurity preparedness and incident response, litigation matters, and executive leadership transitions.