Estimated reading time: 8 minutes
Given some of the challenges artificial intelligence (AI) faces right now, it might be tempting to say that AI isn’t the panacea everyone expected it to be. Personally, I think we’re still very early in the AI adoption curve, so organizations need to continue to pay attention to what’s developing and conduct experiments to see how it works.
In the past, we’ve talked about the need for organizations to develop an AI strategy. Today, I want to talk about developing an internal AI policy. I had the opportunity to hear our friend Carrie Cherveny speak at SHRM’s 2024 Annual Conference on “Getting Smart About AI,” which was very informative. So, I asked Carrie if we could talk about developing an AI policy, and thankfully, she said yes.
Carrie Cherveny is chief compliance officer and senior vice president of strategic solutions at HUB International. In her role, Carrie works with clients to develop strategies that ensure compliance and risk mitigation when it comes to benefits and employment practices. As always, please remember that her comments should not be construed as legal advice or as pertaining to any specific factual situations. If you have detailed questions, they should be addressed directly with your friendly neighborhood employment attorney.
Carrie, thanks for being here. Why do organizations need to consider having an internal AI policy (in addition to an AI strategy)?
[Cherveny] Today, AI is everywhere. Did you catch any of the Olympic Games? It seemed like more than half the ads were for AI platforms. In fact, on June 10, 2024, Apple announced the upcoming launch of Apple Intelligence, its new artificial intelligence technology that will be integrated into iOS 18. According to the Apple press release, ‘It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks’. Ready or not, AI is here. Having an AI policy is a fundamental step to be ‘ready’ for AI in your workplace. An AI policy is now just as essential as, for example, your anti-harassment or Family and Medical Leave Act (FMLA) policies.
Employers have some decisions to make. They have to decide whether they will allow the use of AI in the workplace and whether AI will be limited to specific platforms. Likewise, employers will have to identify which departments and roles are permitted and/or prohibited from using AI. Well-crafted policies are designed to specifically address these questions and more.
When it comes to drafting policies, human resources departments often take the lead. Who should be involved in helping to develop an AI policy?
[Cherveny] AI has the potential to impact every corner of your organization. This means that your organization’s AI policy should be multifaceted and include various subject matter disciplines. Organizations should establish an AI committee and include, at a minimum:
- Legal/in-house counsel
- Human Resources
- Finance/Accounting
- Operations
Other subject matter expert (SME) committee members will be dependent on the nature of the business. For example, a healthcare organization would likely include its Health Insurance Portability and Accountability Act (HIPAA) Privacy Officer. A financial services firm may include its compliance department along with a data privacy officer. Employers with union employees may want to include a union representative.
Once we determine who should be involved in helping to develop an AI policy, is there a framework they can follow to identify key areas of consideration?
[Cherveny] Not only should the AI committee work together to develop a comprehensive policy, but the committee should also be charged with vetting AI tools. For example, the committee should develop a robust discovery process to better understand each vendor’s reputation, how it handles the information entered into its system, and its data security and cybersecurity measures.
The organization should draft comprehensive, clear, and unambiguous ‘rules of the road’ for the use of AI in the workplace including, for example:
- Prohibited uses of AI. Consider the kinds of data that employees may never put into an AI platform, such as Personally Identifiable Information (PII), Protected Health Information (PHI), and company confidential information (financials, methodologies, trade secrets, attorney-client privileged information, etc.). A minimal sketch of how a rule like this might be enforced technically appears after this list.
- Permitted uses of AI. When may an employee use AI in the performance of their job? For example, AI may create efficiencies for general research, creating or identifying sample documents, wordsmithing a written document, or job aids (such as skill development, or learning a new system or a tool within a system, such as Excel pivot tables).
- Required safeguards. Will employees be required to ‘fact check’ data or findings sourced through AI? We’ve all read about the attorneys who submitted briefs to the courts that were filled with fictitious cases and citations. Employees should be required to check reliable sources to ensure that AI findings are accurate and credible. Some AI platforms, such as Microsoft Copilot, will also provide the citations and sources for their findings. However, even when the AI provides its sources, the end user should check sources outside of the AI references to ensure complete, thorough, and accurate work.
- Required notices and disclosures. Will you require your employees to disclose when they use AI? For example, under a new New York state law, users are required to disclose when they use AI. Notice and disclosure requirements are quickly becoming a best practice in AI policies. Employers may want to consider requiring employees to disclose the purpose or reason for using AI, identify the platform(s) used, and provide a summary of the results that were included in the work product.
- Required attribution and citations. Will you require employees to identify the specific AI tools upon which they relied when developing their work product? This is a little like a notice or disclosure policy, but it requires the employee to specifically identify the AI tool or platform, not simply disclose that AI was used.
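To make rules like these operational, some organizations pair the written policy with lightweight technical guardrails. Below is a minimal sketch, in Python, of what a prompt pre-screen and disclosure log might look like. The pattern names, the screen_prompt and log_disclosure functions, and the regular expressions are all hypothetical illustrations rather than a production control; a real deployment would rely on a vetted PII/PHI detection tool instead of hand-rolled patterns.

```python
import re
from datetime import datetime, timezone

# Hypothetical patterns for illustration only; a real deployment
# would use a vetted PII/PHI detection library, not these regexes.
PROHIBITED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any prohibited data types found in the prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

def log_disclosure(user: str, platform: str, purpose: str) -> dict:
    """Record the notice-and-disclosure details a policy might require."""
    return {
        "user": user,
        "platform": platform,   # which AI tool was used
        "purpose": purpose,     # why the employee used it
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    prompt = "Summarize the claim for patient 123-45-6789."
    violations = screen_prompt(prompt)
    if violations:
        print(f"Blocked: prompt appears to contain {', '.join(violations)}")
    else:
        entry = log_disclosure("jdoe", "Microsoft Copilot", "general research")
        print("Prompt allowed; disclosure logged:", entry)
```

In practice, a screen like this would sit in front of whatever AI platform the committee approves, and the disclosure log gives HR a running record that maps directly to the notice-and-disclosure requirement above.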
Since we’re having an AI conversation, I asked Microsoft Copilot for a ‘sample employer AI use policy for the workplace’. Here’s what it produced:
AI Tool Usage Policy [template created by Microsoft Copilot]
Policy Brief & Purpose: Our company’s AI tool usage policy aims to outline the responsible and secure use of artificial intelligence (AI) tools in the workplace. This is particularly important when handling sensitive data and proprietary company and customer information.
Scope: AI tools are revolutionizing our work by automating tasks, enhancing decision-making, and providing insights. However, they also introduce challenges regarding information security and data protection.
Purpose: The purpose of this policy is to ensure that all employees use AI tools in a secure, responsible, and confidential manner, adhering to the requirements for evaluating security risks and protecting confidential data.
Policy Statement: We recognize the potential risks AI tools pose to our operations and customers. We are committed to maintaining the confidentiality, integrity, and availability of all company and customer data.
Security Best Practices: Employees are expected to follow these best practices when using AI tools:
- Evaluation of AI Tools: Before using any AI tool, evaluate its security features, terms of service, and privacy policy. Check the tool developer’s reputation and any third-party services involved.
- Protection of Confidential Data: Do not upload or share confidential, proprietary, or regulated data without prior approval from the appropriate department.
- Access Control: Do not grant access to AI tools outside the company without proper approval and compliance with security requirements.
- Use of Reputable AI Tools: Only use AI tools that are reputable and meet our security and data protection standards.
Compliance: All employees must comply with this policy as part of their employment terms. Any violation may result in disciplinary action up to and including termination of employment.
Consider this template a starting point; modify it based on your specific needs and legal requirements. It’s also advisable to consult with legal counsel to ensure compliance with all applicable laws and regulations. Remember, an effective policy is one that is clear, comprehensive, and enforceable.
I want to thank Carrie for sharing her knowledge with us. And I love that she included the sample AI policy template to get our thinking started! If you want to learn more, check out this archived webinar from HUB International on “Humanizing HR in the Age of AI: Embracing the Technology Revolution”.
Once organizations decide that they need an AI policy, the challenge becomes determining what to include in it. Carrie mentioned some initial considerations here, but in our next article, we’re going to do a deeper dive into the components of an artificial intelligence policy. Stay tuned!
Image created by DALL-E demonstrating the importance of human oversight in AI