Robotic Hiring Systems and Discrimination

Companies using machine hiring systems might screen out job applicants in violation of federal laws prohibiting bias based on race, disability, age and other factors. Human interviewers must honor protected classes; AI vendors, meanwhile, guard their proprietary algorithms from scrutiny.

In the above video, Wall Street Journal senior correspondent Jason Bellini covers the pros and cons of robotic hiring systems. He interviews Kevin Parker, CEO of HireVue, who says his platform is more objective than traditional interviews because it removes bias from the hiring process. However, Bellini also interviews Ifeoma Ajunwa, a legal scholar and labor law professor at Cornell University, who challenges that view.

First, some legal background:

Employers who interview job applicants must adhere to the tenets of Title VII of the Civil Rights Act of 1964, which forbids discrimination based on race, color, religion, sex and national origin; companion statutes extend those protections to age and disability. Interview questions must be free of bias. For instance, an interviewer may not inquire about a candidate’s height, weight or marital status.

No doubt AI programmers have taken Title VII into account when phrasing interview questions, such as those found in this tip sheet by the University of New England. But that is not where algorithmic discrimination is most likely to occur.

That bias might be subtle, programmed into an algorithm tuned to the hiring company’s idea of an “ideal” job candidate. Applicants might be excluded without anyone knowing whether the robot is measuring facial features for age, weight, symmetry, voice tone or other distinguishing human features. There is no real way to know without examining the proprietary program.
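To make that mechanism concrete, here is a minimal, hypothetical sketch in Python; the data and every variable name are invented for illustration only. It shows how a model trained on historically biased hiring decisions, with the protected attribute excluded but a correlated proxy (a stand-in for something like zip code) left in, can still reproduce the disparity:

```python
# Hypothetical sketch: a model trained on past hiring decisions can learn
# proxies for a protected attribute even when that attribute is "removed."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: 'group' is a protected class the model never sees directly.
group = rng.integers(0, 2, n)              # 0 or 1
skill = rng.normal(0, 1, n)                # a legitimate qualification
proxy = group + rng.normal(0, 0.3, n)      # correlated stand-in, e.g. zip code

# Historical labels reflect past bias against group 1, not just skill.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

# Train only on 'skill' and 'proxy' -- the protected attribute is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still selects group 0 at a far higher rate, via the proxy.
picks = model.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {picks[group == g].mean():.2f}")
```

Dropping the protected column does not drop the pattern; the model recovers it through the correlated feature, and nothing in the output reveals why.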

Dr. Ajunwa addresses this concern in an NPR interview:

So that’s where it gets more complicated – right? – because a job applicant could suspect that the reason they were refused a job was based on characteristics such as race or gender, and this is certainly prohibited by law. But the problem is how to prove this. So the law requires that you prove either intent to discriminate or you show a pattern of discrimination. Automated hiring platforms actually make it much harder to do either of those.

And a lot of times, the algorithms that are part of the hiring system, they are considered proprietary, meaning that they’re a trade secret. So you may not actually be able to be privy to exactly how the algorithms were programmed and also to exactly what attributes were considered. So that actually makes it quite difficult for a job applicant.

Benetech, a nonprofit whose mission is “to empower communities with software for social good,” is concerned about AI hiring systems discriminating against people with disabilities. The organization discusses key findings of a 2018 study titled “Expanding Employment Success for People with Disabilities”:

  • Artificial intelligence tools are increasingly widespread and vendors of these products have little understanding of their negative impact on the employment of people with disabilities.
  • The level of data collection about all of the relevant issues remains rudimentary, limiting many opportunities for improvements.
  • It is clear that employers see people with disabilities primarily through a compliance lens, and not through a business opportunity frame.

As AI hiring systems become more popular with such companies as Goldman Sachs, Unilever and Vodafone, attorneys and legislators are investigating ways to ensure algorithms are compliant with federal law.

Illinois is among the first states in the nation to take on robotic hiring programs with its “Artificial Intelligence Video Interview Act,” which requires transparency and applicant consent from any company using these algorithms.

In a post about the new law, Bloomberg Law states:

Employers increasingly are using AI-powered platforms such as Gecko, Mya, AutoView, and HireVue to streamline their recruitment processes, control costs, and recruit the best workers. Providers claim their technologies analyze facial expressions, gestures, and word choice to evaluate qualities such as honesty, reliability, and professionalism.

But the technology is also controversial. Privacy advocates contend AI interview systems may inject algorithmic bias into recruitment processes, and that AI systems could generate unfounded conclusions about applicants based on race, ethnicity, gender, disability, and other factors.

Interpersonal Divide in the Age of the Machine contains chapters that address the inherent biases of algorithmic programming. Institutional racism, tied subliminally to an organization’s target audience or bottom line, may be encoded into sophisticated robotic systems.

For instance, the Washington Post reports that a popular algorithm that identifies patients who need extra medical care “dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine.”

When it comes to robotic HR systems, hiring is only the beginning of what awaits those the algorithm selects for employment. If technology is used to select a person for a job, one can anticipate it will also be used to monitor performance on that job.

Here’s an excerpt from Interpersonal Divide:

Machines not only monitor how employees are using devices and applications but also may be programmed to detect moods and behaviors of those employees. Machines monitor employees to an alarming degree in some companies, often under the pretext of improving performance. Stress is measured, too, although usually in a negative light. Examples include tracking a worker’s Internet and social media use; tapping their phones, emails and texts; measuring keystroke speed and accuracy; deploying video surveillance; and embedding chips in company badges to evaluate whereabouts, posture and voice tone.

Cyberlaw needs to catch up with federal labor law, especially when AI is used in hiring and firing decisions. As Bloomberg Law notes in its report, some labor law attorneys believe algorithmic systems could unintentionally screen out protected classes. One attorney cited in the above post suggests employers test robotic systems against a pool of candidates to check for potential bias.
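One concrete form such a test could take is the EEOC’s long-standing “four-fifths” (80 percent) rule for disparate impact: if any group’s selection rate falls below 80 percent of the highest group’s rate, the system warrants scrutiny. Below is a minimal sketch in Python; the group names, decisions and helper function are hypothetical, not the output of any real hiring platform:

```python
# Sketch of the kind of audit an attorney might suggest: apply the EEOC
# "four-fifths" (80%) rule to an algorithm's decisions on a test pool.

def adverse_impact_ratios(decisions):
    """decisions: dict mapping group name -> list of 0/1 hire decisions."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}, rates

# Placeholder test pool: two groups of candidates run through the system.
test_pool = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75.0% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
}

ratios, rates = adverse_impact_ratios(test_pool)
for g, ratio in ratios.items():
    flag = "FAILS" if ratio < 0.8 else "passes"
    print(f"{g}: rate {rates[g]:.2f}, impact ratio {ratio:.2f} ({flag} four-fifths rule)")
```

A check like this will not reveal how a proprietary algorithm works, but it can surface the pattern of discrimination the law asks plaintiffs to prove.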
