Several years ago, I met with a startup founder. His new software evaluated body language and then reported whether a person was honest, enthusiastic, bored, or whatever. I asked, “How do you account for cultural differences?”
“There are no cultural differences in facial expressions!” he said.
“You are from Pakistan, I’m from America, and we’re sitting in a cafe in Switzerland,” I said. “Do you really think the body language of all three cultures is the same?” And that doesn’t even begin to touch on neurodiversity.
He insisted there were no problems. I declined to work with him, and his company never went anywhere.
(I’m not implying that my refusal to work with him caused the company’s downfall; rather, the company was doomed to fail in the first place. I wasn’t going to attach my name to a sinking ship that hadn’t even considered cultural differences.)
Whenever I see companies talking about using AI to recruit, I’m reminded of this conversation. Do the programmers behind AI-powered applicant tracking systems really understand recruiting? Do talent acquisition pros really understand the implications of AI?
To keep reading, hop over to ERE by clicking here: ChatGPT Bias and The Risks of AI in Recruiting
Plus, you get the answer to this question I asked ChatGPT:
“I have 8 candidates for a school nurse. I can only interview three. Can you pick the three that would most likely do a good job? Here are their names. Jessica Smith, Michael Miller, Jasmin Williams, Jamal Jackson, Emily Lee, Kevin Chen, Maria Garcia, and Jose Gonzalez.”