HireVue Uses a Face-Scanning Algorithm to Decide Who to Hire



It has been said that the robots are coming to take your job. Not as much has been said about artificial intelligence being used to sort through job applicants and determine who to hire. A recruiting-technology firm called HireVue does just that. Your next job interview might require you to impress an algorithm instead of an actual human.

The Washington Post has a lengthy, detailed article about HireVue and the ethical implications of its use. According to the Post, more than 100 employers now use the HireVue system, including Hilton, Unilever, and Goldman Sachs, and more than a million job seekers have been analyzed. The use of HireVue has become so pervasive in the hospitality and finance industries that universities are training students on how to look and speak for the best results.

But some AI researchers argue the system is digital snake oil – an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact. Analyzing a human being this way, they argue, could end up penalizing nonnative speakers, visibly nervous interviewees, or anyone else who doesn’t fit the model’s idea of how a candidate should look and speak.

According to The Washington Post, HireVue’s AI records a job candidate and analyzes their responses to questions created by the employer. The system focuses on the candidate’s facial movements to gauge how excited someone seems about a certain work task or how they would handle angry customers. Those “Facial Action Units” can make up 29 percent of a person’s score; the words they say and “audio features” of their voice make up the rest.
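To make that weighting concrete, here is a minimal sketch of how a composite score like the one the Post describes might be combined. The 29 percent facial share comes from the reporting; the component names, the split of the remaining weight, and the normalization are my assumptions, not HireVue’s actual method.

```python
# Hypothetical sketch of a weighted composite interview score.
# The 29% facial-analysis share is the figure reported by the Washington Post;
# how the remaining 71% is split between word content and audio features
# is an assumption for illustration only.

WEIGHTS = {
    "facial_action_units": 0.29,  # reported facial-analysis share
    "word_content": 0.45,         # assumed
    "audio_features": 0.26,       # assumed
}

def composite_score(subscores: dict[str, float]) -> float:
    """Combine normalized (0-1) component scores into one weighted score."""
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)

# Example: a candidate scored on each hypothetical component.
candidate = {
    "facial_action_units": 0.62,
    "word_content": 0.80,
    "audio_features": 0.71,
}
print(round(composite_score(candidate), 3))  # 0.724
```

The point of the sketch is only that a fixed weighting like this bakes the facial analysis directly into the final number a recruiter sees.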

This situation makes me think of ImageNet Roulette, an AI that was trained on the ImageNet database. People posted selfies to ImageNet Roulette, and the AI gave them problematic classifications. You may have seen people sharing their selfies on social media, and noticed the racist, misogynistic, and cruel labels that the AI added.

The purpose of ImageNet Roulette was to make it abundantly clear that AI can be biased (and cruel) when it is trained on a dataset that includes very negative classifications of people. From this, it seems entirely possible that hiring decisions made by an AI such as HireVue’s could be heavily biased for or against certain types of people. I would like to see research done to determine who the HireVue AI favors – and who it systematically excludes.
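One way such research could start is with a simple disparity check: compare how an automated scorer rates candidates across groups and flag large gaps for closer study. This is a generic audit sketch with made-up group labels and scores, not an analysis of HireVue’s actual system or data.

```python
# Hypothetical audit sketch: compare average algorithmic scores across groups.
# Group names and scores are illustrative placeholders, not real data.
from collections import defaultdict
from statistics import mean

candidates = [
    {"group": "native_speaker", "score": 0.81},
    {"group": "native_speaker", "score": 0.77},
    {"group": "nonnative_speaker", "score": 0.64},
    {"group": "nonnative_speaker", "score": 0.69},
]

scores_by_group = defaultdict(list)
for c in candidates:
    scores_by_group[c["group"]].append(c["score"])

averages = {group: mean(scores) for group, scores in scores_by_group.items()}
baseline = max(averages.values())

for group, avg in averages.items():
    # Compare each group's mean score to the best-scoring group;
    # a ratio well below 1.0 would flag that group for closer investigation.
    print(f"{group}: mean={avg:.2f}, ratio vs. top group={avg / baseline:.2f}")
```

A real audit would of course need far more data and careful controls, but even a crude comparison like this would be more transparency than job seekers currently get.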