Estimated reading time: 8 minutes
While artificial intelligence (AI) is very much in the news, the technology is still new. It can be challenging to draft a policy about something we're still learning about.
In the first article of this series on artificial intelligence policies, we discussed the reasons that organizations need to consider drafting a policy. In this article, let’s take a closer look at what organizations should consider when it comes to creating policy.
To help us learn more, I’ve been chatting with our friend Carrie Cherveny, chief compliance officer and senior vice president of strategic solutions at HUB International. In her role at HUB International, Carrie works with clients to develop strategies that ensure compliance and risk mitigation when it comes to benefits and employment practices.
Because we are talking about human resources policy, please remember that Carrie’s comments should not be construed as legal advice or as pertaining to any specific factual situations. If you have detailed questions, they should be addressed directly with your friendly neighborhood labor attorney.
Carrie, thanks again for helping us understand this topic. When it comes to policy development, organizations have an opportunity to state their positions about something. For example, organizations might talk about their commitment to ethical conduct and compliance when introducing their code of conduct. Are there some things that organizations might want to confirm their position on when introducing an AI policy?
[Cherveny] AI is revolutionary. With such dramatic change comes fear, uncertainty, and doubt. Compounding the concerns about AI is the lack of transparency and visibility into AI programming. There's really no way to 'look under the hood' and inspect the AI engine. Consequently, there's no way to know if the system was developed with any inherent bias. Moreover, because many AI tools rely on machine learning (meaning they learn from the end user), there's no way to know if the AI is adopting an unconscious bias of that end user.
For example, let's say a recruiter is using AI to sort through candidate resumes and first interviews. The recruiter who selects candidates to move forward in the process has an unconscious bias and favors younger women for the role. Is it possible that the AI will learn from the recruiter and likewise highlight younger women as 'top candidates' for the role?
To control for these possibilities, employers must always 'pressure test' the AI results. In the recruiting example, the recruiter should regularly review the resumes and candidate profiles of rejected candidates and those at the bottom of the list.
Be your own end user. In other words, use the AI and be the candidate. Change your name to names that may reflect various ethnicities. Change your resume to reflect various years of service (i.e., age). Change your address to reflect various geographic locations. Do you get the same result from the applicant tracking system?
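That side-by-side comparison is easy to make systematic. Here is a minimal sketch in Python of the perturbation test Carrie describes, varying one attribute at a time. The `score_resume` function is a hypothetical stand-in for whatever scoring or ranking call your ATS vendor actually exposes, and the names and numbers are invented for illustration:

```python
# Minimal sketch of a "be your own end-user" perturbation test.
from itertools import product

BASE_RESUME = {
    "name": "Jordan Smith",
    "years_experience": 10,
    "city": "Chicago, IL",
    "skills": ["recruiting", "HRIS", "onboarding"],
}

# Vary one attribute at a time that should be irrelevant to the job.
NAMES = ["Jordan Smith", "Lakisha Washington", "Jamal Hernandez", "Mei Chen"]
YEARS = [3, 10, 25, 40]  # proxy for age
CITIES = ["Chicago, IL", "Laredo, TX", "Detroit, MI", "Helena, MT"]

def score_resume(resume: dict) -> float:
    """Stand-in scorer so the sketch runs end to end.
    Replace with your ATS vendor's actual scoring/ranking call
    (hypothetical here; not a real API)."""
    return float(len(resume["skills"]))

results = []
for name, years, city in product(NAMES, YEARS, CITIES):
    variant = {**BASE_RESUME, "name": name, "years_experience": years, "city": city}
    results.append((name, years, city, score_resume(variant)))

# If scores swing on name, years, or location alone, that's a red flag
# worth escalating to HR and the vendor before the tool stays in use.
for row in sorted(results, key=lambda r: r[-1]):
    print(row)
```

If identical qualifications produce different scores once the name, years of service, or address changes, you have exactly the kind of finding the audit process in your AI policy should capture and escalate.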
The risk of a discriminatory result or process from using AI also creates fear, uncertainty, and doubt for your employees. Employers have made great strides in the field of diversity, equity, and inclusion; if not managed properly, AI could create a significant setback.
The AI policy provides the opportunity for the employer to reassure employees that:
- A human will always be the decision-maker
- The employer is committed to an environment free from discrimination
- The employer is committed to its diversity, equity, and inclusion (DEI) initiatives and programs
- The organization has robust AI audit processes to prevent a discriminatory result or disparate impact
For example, since we are talking about AI, I asked Microsoft Copilot for 'sample language for an AI policy reassuring employees that AI will never make a final decision and HR will provide oversight to avoid discrimination or a disparate impact'. Here's what it provided:
AI Decision-Making and Oversight Policy [created by Microsoft Copilot]
Commitment to Non-Discrimination: Our company is committed to maintaining a workplace free of discrimination. We ensure that all employment decisions are based on merit, qualifications, and abilities. We do not tolerate discrimination in any form.
Role of AI in Decision-Making: While we leverage artificial intelligence (AI) to assist in processing and analyzing data, we affirm that AI will never make final decisions on matters that affect employee status or progression within the company. AI serves as a tool to support, not replace, human judgment and decision-making.
Human Oversight: To ensure fairness and avoid any disparate impact, our Human Resources (HR) department will provide oversight and review of all AI-assisted decisions. HR will:
- Regularly audit AI tools for any potential biases.
- Review AI-generated recommendations before any action is taken.
- Ensure that AI tools are used in compliance with our equal opportunity policy.
Employee Rights: Employees have the right to:
- Request a review of any AI-assisted decision that they believe may have a discriminatory effect.
- Be informed about the AI tools used and the data processed.
- Receive an explanation of any AI-assisted decision upon request.
While we can’t tell organizations how to use AI – that will be driven by industry, jobs, etc. – it seems to me that any AI policy needs to give employees a way to ask questions or express concerns. Do organizations need to include some sort of reporting mechanism in their policy? And if so, should this be reported to HR or maybe some sort of “AI Committee”?
[Cherveny] Have you ever had a pleasant customer service chatbot experience? When I ask conference attendees this question, I usually receive a resounding, unanimous 'No!' or 'Never!'. It's one thing to be a frustrated customer; it's another to be an employee being denied their rights under various federal laws.
An employee's difficult chatbot experience can be a violation of various federal laws. For example, some AI tools may require verbal or video interactions. There are AI chatbots that can conduct a candidate interview or assist an existing employee with benefits or handbook questions. Likewise, employers may use AI video tools to conduct a candidate interview or new hire orientation. The use of these tools is not illegal and can often create significant efficiencies.
But what if your candidate or employee has an impairment that makes it difficult for the individual to communicate with the AI? For example, a video AI tool may not provide a positive rating for a candidate with a speech impediment, strong accent, or facial tic. Likewise, an AI chatbot may not provide a high score for a candidate who has dyslexia. How can that candidate or employee get past your AI tool and reach a live person?
These are just a few of the examples that make it necessary for employers to create an 'easy button' for candidates and employees to reach a live person. At least two federal laws may be applicable here.
The Americans with Disabilities Act (ADA): An employer may violate the ADA if it relies on an algorithmic decision-making tool that intentionally or unintentionally 'screens out' an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. 'Screen out' occurs when a disability prevents a job applicant or employee from meeting a selection criterion (or lowers their performance on it), and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether.
The ADA requires the employer to give a candidate or employee with a disabling condition easy access to an avenue for requesting an accommodation. For example, a candidate with a speech impediment will need an easy way to ask the company to provide a live human for the interview instead of the AI.
Title VII of the Civil Rights Act: As in the previous example, Title VII may apply if the candidate or employee does not speak English as a first language and/or has an accent. Failure to give candidates and employees the same opportunities regardless of their national origin (including their accent) may run afoul of Title VII if English proficiency is not a legitimate requirement of the position. The employer must ensure that candidates and employees do not suffer a disparate impact on the basis of national origin.
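Disparate impact is usually evaluated with simple arithmetic. One common screening heuristic, from the EEOC's Uniform Guidelines on Employee Selection Procedures, is the 'four-fifths rule': a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that check, using invented counts, might look like this:

```python
# Four-fifths (80%) rule check from the EEOC's Uniform Guidelines on
# Employee Selection Procedures. The counts below are invented for
# illustration; real audits would use your tool's actual pass/fail data.

# applicants and selections per group, e.g. from AI-screened interview rounds
groups = {
    "Group A": {"applicants": 200, "selected": 60},
    "Group B": {"applicants": 150, "selected": 27},
}

rates = {g: d["selected"] / d["applicants"] for g, d in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio vs. the highest-rate group
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

Running the same math over an AI screening tool's outcomes, broken out by national origin, gender, or age band, is one concrete way to 'pressure test' the tool on a regular schedule.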
The Equal Employment Opportunity Commission (EEOC) has been ahead of the AI curve and has provided useful and informative guidance on these topics.
My thanks to Carrie for sharing her knowledge with us. Organizations have a lot to consider when drafting an artificial intelligence policy. There are considerations based on your industry and jobs. We talked about some of those aspects in the first article. And then there's existing legislation, which is changing to meet the needs of the modern workplace.
In addition to the guidance being provided by the EEOC, be sure to check out the checklist created by HUB International on how HR departments can seamlessly integrate AI into their workflows. And that's going to lead us to our third (and final) article in this series on artificial intelligence: how can human resources departments effectively implement an AI policy in their company?
Image created by DALL-E for Sharlyn Lauby