Crafting an AI policy

The use of generative artificial intelligence platforms can increase productivity and foster creativity. At the click of a button, AI can quickly produce content in response to user prompts, including blog posts, video scripts, articles, social media posts, images and much more, drawing on patterns learned from the millions of pieces of content on which it was trained.

At the same time, using AI tools can pose risks to an association and its members:

  • Content created solely by AI is generally not copyrightable, which means that you may not be able to prohibit others from using it.
  • AI output may infringe a copyright owner’s exclusive rights if portions of that owner’s copyrighted work are included in the output.
  • Flaws in an AI model or its training data mean output is not always accurate and may be biased. Failure to review the AI’s output for accuracy can lead to embarrassment and even legal liability, such as for deceptive or misleading advertising content.
  • Information a user enters may be used to train the AI and could be incorporated into output generated for others, leading to potential confidentiality and security breaches.

For these reasons and more, an association must put guidelines in place to ensure employees use AI tools responsibly. Creating a policy will help mitigate the legal and reputational risks AI presents. Consider these five issues when creating an AI use policy.

Acceptable uses. An AI use policy should clearly explain the risks of using AI tools. Include a list of both permitted and prohibited uses and require any other uses to be authorized in advance and in writing. Clearly state which platforms and tools employees are allowed to use. Be sure to train staff on those platforms and tools and provide best practices.
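For associations with technical staff, the approved-tool list can also be backed by a software check. The following Python sketch is purely illustrative: the tool names and functions are invented, not drawn from any real product, and it simply tests a requested platform against an association-maintained allowlist before access is granted.

```python
# Hypothetical sketch: checking a requested AI tool against an
# association-maintained allowlist. All names here are invented
# examples, not references to any real product or API.

APPROVED_TOOLS = {
    "example-chat-tool": {"opt_out_of_training": True},
    "example-image-tool": {"opt_out_of_training": True},
}

def is_tool_approved(tool_name: str) -> bool:
    """Return True only if the tool appears on the approved list."""
    return tool_name.lower() in APPROVED_TOOLS

def request_access(tool_name: str) -> str:
    if is_tool_approved(tool_name):
        return f"'{tool_name}' is approved; access may be provisioned."
    # Any tool not on the list requires advance written authorization,
    # per the policy's acceptable-uses provision.
    return f"'{tool_name}' is not approved; written authorization is required."

print(request_access("example-chat-tool"))
print(request_access("unvetted-tool"))
```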

Review and disclosure. Require employees to review all AI output thoroughly for accuracy and to confirm that it does not contain biased, offensive or discriminatory content or disclose personal or confidential information. Consider requiring employees to disclose when they use AI and to document their use of AI tools for work purposes.
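If the association wants the documentation requirement to be more than an honor system, a shared usage log is one simple approach. The sketch below is hypothetical; the field names are invented for illustration and would need to be adapted to the policy’s actual disclosure requirements.

```python
# Hypothetical sketch: appending a disclosure record to a shared CSV
# log each time an AI tool is used for work. Field names are invented
# for illustration; adapt them to the association's own policy.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")
FIELDS = ["date", "employee", "tool", "purpose", "output_reviewed"]

def log_ai_use(employee: str, tool: str, purpose: str, reviewed: bool) -> None:
    """Append one disclosure record, writing a header on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "employee": employee,
            "tool": tool,
            "purpose": purpose,
            "output_reviewed": "yes" if reviewed else "no",
        })

log_ai_use("J. Smith", "example-chat-tool", "first draft of blog post", True)
```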

Data privacy. Data privacy should be a focus of any AI policy. The policy should prohibit employees from entering any confidential association, member or vendor information, trade secrets or personal information into an AI tool. Limit employee access to sensitive information and require explicit permission before any association data is entered into an AI tool. Some AI platforms allow users to opt out of having their input used as training material for the platform. Consider limiting employees to platforms that offer this opt-out.
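Technical controls can supplement these rules. As a minimal sketch, assuming the association routes prompts through its own tooling, the following Python example screens text for a few obviously sensitive patterns before it is sent to an external AI platform; the patterns are illustrative only and far from exhaustive.

```python
# Hypothetical sketch: screening a prompt for obviously sensitive
# strings before it is sent to an external AI tool. These patterns are
# illustrative only; a real deployment would need patterns tuned to the
# association's own data, and a clean result is not a guarantee.
import re

SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Draft a renewal letter for member jane@example.org, SSN 123-45-6789."
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("No obvious sensitive data found; manual review still advised.")
```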

Consequences for misuse. Provide a mechanism for reporting any suspected violations of the AI use policy. Clearly state disciplinary actions, up to and including termination, that employees may face if they violate the policy or misuse AI tools.

Legal requirements. Work with legal counsel to ensure the policy: (1) adequately protects the association; (2) aligns with other association procedures and policies, such as anti-harassment or information security policies; (3) aligns with confidentiality agreements; and (4) complies with any state or local requirements, such as data privacy laws. Finally, track legal and regulatory developments and adjust the AI usage policy as needed.

Once finalized, create awareness of the AI policy. Train staff on the policy requirements and ensure a clear understanding of why compliance with the policy is critical to protecting the association’s interests.

Implementing an AI policy will help the association avoid reputational and legal issues. Even more importantly, providing staff with clear guidance and training on how to properly incorporate AI tools will allow employees to responsibly use these cutting-edge tools to enhance their work on behalf of the association’s mission.
