Estimated reading time: 6 minutes
It’s one thing to write a great policy. It’s something different to implement it.
We’ve been talking about artificial intelligence (AI) policies in this series. The first article focused on why organizations need to have both AI strategies and policies. In the second article, we discussed some things to consider when writing an Artificial Intelligence policy. Today, I want to wrap up this series with a conversation about implementing policy.
To offer some insights, I’ve been chatting with our friend and attorney Carrie Cherveny, chief compliance officer and senior vice president of strategic solutions at HUB International. In her role at HUB International, Carrie works with clients to develop strategies that ensure compliance and risk mitigation when it comes to benefits and employment practices.
Just a reminder, because we are talking about human resources policy, please keep in mind that Carrie’s comments should not be construed as legal advice or as pertaining to any specific factual situations. If you have detailed questions, they should be addressed directly with your friendly neighborhood employment law attorney.
Carrie, thanks for discussing this important topic with us. One of the things that occurred to me during our conversation is that not only do organizations need to have policies in place for the use of AI but so do human resources departments. Is there a process you could suggest for HR departments needing to create an Artificial Intelligence policy for themselves?
[Cherveny] There are a number of ways HR is using AI today. We’ve already talked about recruiting, but there are many other opportunities for HR to create efficiencies and improve the employee experience.
New Hire Onboarding: For example, imagine a gamified onboarding experience for a cohort of new hires. The games can be designed to orient employees to the company, build work relationships, and develop a sense of belonging among the new hires. The game might be a scavenger hunt or another contest or exercise where employees engage in friendly competition. Employees could be challenged to find three fun facts about employees in other departments and teams, learn about other job functions in the organization, and identify various processes or handbook policies. These activities can be built in a gamified system and tracked on leaderboards, with prizes for reaching new ‘levels’ or milestones.
Learning and Development: Today, AI can learn a person’s voice and apply it to training content. Likewise, AI can take content, add avatars, and create a presentation. AI can even suggest content for a topic and build the entire training program. These tools and resources can create significant efficiencies as HR addresses employee learning opportunities. HR can use AI to upskill workers, teaching them new subject matter to grow their careers, and they can do so using their own voices.
Time, Attendance, and Payroll: Reliance on automated timekeeping and monitoring systems without proper human oversight can create compliance challenges when determining hours worked for purposes of federal wage and hour laws. AI may incorrectly categorize time as non-compensable based on measures such as:
- Worker activity
- Productivity/performance
- Keystrokes
- Eye movements
- Internet browsing
AI should not determine whether an employee is performing ‘hours worked’ under the Fair Labor Standards Act (FLSA). AI will not be able to determine whether, in substance, the employee was ‘suffered or permitted to work’ and thus performed ‘hours worked’ under the FLSA.
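To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of activity-based classification that creates this risk. The names, signals, and thresholds are invented for illustration and are not drawn from any real product; the point is that “low activity” signals can misclassify genuinely compensable time, such as a phone call, which is why human oversight matters.

```python
# Illustrative only: a naive activity-based classifier like this is exactly the
# risky behavior described above. Low keystroke or screen activity does NOT mean
# the employee was not "suffered or permitted to work" under the FLSA.
from dataclasses import dataclass

@dataclass
class TimeBlock:
    minutes: int
    keystrokes: int
    active_window_pct: float  # share of the block with a work app in focus

def naive_compensable(block: TimeBlock) -> bool:
    # Hypothetical rule: treat "idle-looking" time as non-compensable.
    # This illustrates the risk; it is not a recommendation.
    return block.keystrokes > 50 or block.active_window_pct > 0.5

blocks = [
    TimeBlock(minutes=30, keystrokes=400, active_window_pct=0.9),  # drafting a report
    TimeBlock(minutes=30, keystrokes=12,  active_window_pct=0.2),  # phone call with a client
]

auto_paid = sum(b.minutes for b in blocks if naive_compensable(b))
print(f"Auto-classified paid minutes: {auto_paid} of {sum(b.minutes for b in blocks)}")
# The second block is real work (a client call) but gets flagged non-compensable,
# which is why a human must review these determinations.
```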
While AI provides various opportunities for HR efficiencies, it also poses several risks. The Department of Labor has issued a Field Assistance Bulletin detailing the risks described below:
Unpaid Time: AI also poses a risk when it makes determinations regarding unpaid meal breaks. Employers must ensure that employees are completely relieved of duty for time to be counted as unpaid break time. Automatic meal break deductions and deductions for other longer break periods (including nursing mother breaks) may result in an FLSA violation. Likewise, AI poses the same risks and challenges with regard to ‘waiting time’: AI may improperly auto-deduct the waiting time.
Geofencing: These systems use GPS technology from an employee’s phone or other wearable device to determine the worker’s location relative to a job site. AI-backed geofencing systems track employees’ locations and automate the clock-in and clock-out process based on the employee’s location. While geofencing has been available for some time, the addition of AI compounds the risk that the payroll system will improperly clock out an employee based simply on location and what it deems to be hours worked.
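Here is a small, hypothetical sketch of the failure mode described above: an auto clock-out rule driven purely by a geofence. The coordinates, radius, and rule are invented for illustration; location alone cannot tell the system whether work is still being performed.

```python
import math

# Hypothetical job-site geofence: the center point and radius below are invented
# purely for illustration; no real vendor's logic is being described.
SITE_CENTER = (27.9506, -82.4572)   # latitude, longitude
RADIUS_M = 150                      # meters

def distance_m(a, b):
    # Equirectangular approximation; adequate for distances of a few kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # Earth radius in meters

def naive_on_the_clock(ping):
    """Risky rule: treat the employee as working only while inside the fence."""
    return distance_m(ping, SITE_CENTER) <= RADIUS_M

# The employee leaves the site to pick up materials for the same job. They are
# still working, but the naive rule clocks them out the moment they exit the radius.
pings = [(27.9506, -82.4572), (27.9580, -82.4620), (27.9506, -82.4572)]
print([naive_on_the_clock(p) for p in pings])   # [True, False, True]
```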
Family and Medical Leave Act – Certifications and Eligibility: The improper computation of hours worked not only creates an FLSA risk, but it also creates a risk with respect to FMLA eligibility. If an AI program does not capture all actual hours worked, then an otherwise FMLA-eligible employee may be denied leave based on a failure to meet the 1,250 hours worked requirement.
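The 1,250-hour threshold makes this risk easy to quantify. Below is a purely hypothetical back-of-the-envelope calculation (the threshold comes from the FMLA; the schedule and auto-deduction figures are invented) showing how routine auto-deductions for meal breaks an employee actually worked through could push a recorded total below the eligibility line.

```python
# Hypothetical illustration of how small, systematic under-counting can change
# FMLA eligibility. The 1,250-hour threshold is the actual FMLA requirement;
# the schedule and deduction numbers below are invented.
ELIGIBILITY_THRESHOLD = 1_250          # hours worked in the preceding 12 months

scheduled_hours = 5.5 * 5 * 48         # part-time: 5.5 hours/day, 5 days/week, 48 weeks
auto_deducted   = 0.5 * 5 * 48         # 30-minute meal break auto-deducted every day,
                                       # even though the employee worked through it

print(scheduled_hours)                                           # 1320.0 actually worked
print(scheduled_hours - auto_deducted)                           # 1200.0 as recorded
print(scheduled_hours >= ELIGIBILITY_THRESHOLD)                  # True  -> eligible
print(scheduled_hours - auto_deducted >= ELIGIBILITY_THRESHOLD)  # False -> wrongly denied
```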
Once an organization creates their Artificial Intelligence policy, they need to communicate it. Can you name 2-3 things that organizations should consider when communicating their new AI policy?
[Cherveny] It seems more and more difficult to get information into the hands of employees. Employees have become accustomed to TikTok, YouTube, and other video formats that deliver information in quick packages. One way to reach employees is to use AI to create a video (or video series) explaining the company’s new AI policy. A video series could consist of short (i.e., 5-8 minute) pre-recorded videos explaining various components of the AI policy.
Employers should also publish the written policy in its entirety. We’d recommend putting the policy in at least three places:
- Inside any stand-alone IT policy;
- Inside the employee handbook; and
- As a stand-alone document.
Policies can also be socialized and distributed through Teams channels, posted on company intranets, and delivered in team meetings, manager one-on-one meetings, and quarterly or annual town-hall meetings. The more the company communicates about the AI policy, the more it will become socialized into the organization’s culture and way of thinking.
Again, a huge thanks to Carrie for sharing this information with us. If you want to learn more about the risks and rewards of artificial intelligence, check out this HUB International webinar on “Humanizing HR in the Age of AI: Embracing the Technology Revolution”.
Artificial intelligence isn’t going away. Sure, it’s going to have some hiccups and setbacks. But it’s not going away. Now is the perfect time for organizations to discuss their strategy and create an AI policy. The organizations that help employees get comfortable with AI are the ones that will be successful – because employees will know the ethical and compliant way to use it.
Image created by DALL-E for Sharlyn Lauby