Employers are increasingly automating the hiring process through the use of artificial intelligence (AI) tools. While these tools may be appealing to hiring teams for their potential to streamline and speed up the hiring process, and to identify superior and more diverse candidates, they can also perpetuate existing biases. Now, local governments are responding with proposed regulations—for companies using such technologies and those developing them—to guard against discriminatory practices.
More and more, employers are using artificial intelligence (AI) tools to automate key parts of the hiring process. Hiring algorithms have gained popularity in recent years—across all industries and job types—driven by employers’ desire to increase hiring efficiency and improve the quality and, in some cases, diversity of candidates.
Automation in hiring has become even more widely adopted as a result of COVID-19 and the pivot to remote work, which has made businesses increasingly dependent on virtual systems for previously in-person interactions such as job applications and interviews. The use of hiring AI will likely continue to proliferate as the economy recovers and companies enter a phase of new hiring.
However, this deployment of AI may prove to be problematic as algorithms can reproduce human bias in surprising and insidious ways.
Automated delivery of online job ads has been shown to reproduce gender and racial bias. One tech company’s internal candidate-screening software, trained on the resumes of existing employees, systematically discriminated against women. In the interview process, use of facial recognition technologies can also lead to discrimination.
In all of these cases, the issue lies in how the AI was developed. When trained on the resumes of a company’s current employees, the algorithm reproduces the human bias that led to the company’s workforce being predominantly white and male—further entrenching systemic inequities. Because the algorithms are not trained on diverse data, they perpetuate biases against candidates from underrepresented backgrounds.
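The mechanism is easy to demonstrate. The sketch below uses entirely synthetic data and a deliberately naive scorer (both hypothetical—no real vendor works exactly this way): a "historical" hiring process favored group A independently of skill, and a model that learns from those records simply memorizes that preference.

```python
import random

random.seed(0)

# Synthetic "historical" hiring records: (group, skill, hired).
# Group is a hypothetical protected attribute; the simulated
# historical process favored group "A" independently of skill.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    skill = random.random()
    hired = skill > 0.5 and (group == "A" or random.random() < 0.3)
    history.append((group, skill, hired))

def hire_rate(group):
    # Naive "learned" score: the hire rate observed for the
    # candidate's group among similarly skilled past applicants.
    outcomes = [h for g, s, h in history if g == group and s > 0.5]
    return sum(outcomes) / len(outcomes)

score_a = hire_rate("A")
score_b = hire_rate("B")
print(f"Model score, group A: {score_a:.2f}")
print(f"Model score, group B: {score_b:.2f}")
# Two equally skilled candidates receive very different scores:
# the model has reproduced the historical bias, not candidate quality.
```

Real screening models are far more sophisticated, but the failure mode is the same: if group membership (or a proxy for it) correlates with past hiring decisions, a model optimized to predict those decisions will encode the correlation.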
Lawmakers are seeing this potential for discrimination and acting on it.
A bill proposed by the New York City Council would require companies to disclose their use of technology in the hiring process. In addition, the bill would require vendors of the hiring software to conduct audits to ensure their tools do not discriminate. At a national level, in December 2020, 10 U.S. senators sent a letter to the Chair of the Equal Employment Opportunity Commission (EEOC) requesting the commission’s oversight on hiring technologies, noting its potential to “reproduce and deepen systemic patterns of discrimination.”
While it may seem that shifting away from human-led decision-making would reduce prejudice, a comprehensive review of employment algorithms found that “predictive hiring tools are prone to be biased by default.” This is due to inherent weaknesses in the workforce data on which the algorithms are trained: the data reflects the biases of the people who screened, hired, and promoted that workforce.
This creates a conundrum for companies, which are simultaneously seeking to improve hiring efficiency and to meet goals related to diversity, equity, and inclusion—particularly as they face renewed calls to diversify their workforces following the unequal impacts of COVID-19 and the momentum of the global Black Lives Matter movement.
Employers may see potential for AI to be trained to value candidates from protected classes (in the U.S., this includes race, color, religion, sex, national origin, disability, and genetic information). Indeed, some companies are developing hiring tech with the specific goal of eliminating bias.
However, automating the process at all involves training AI to seek specific qualifications and attributes—the sum of which defines a standard expectation of what jobseekers should be and how they should present themselves in order to be hired. As a result, even with government intervention, this may lead to discrimination against anyone who doesn’t—or can’t—meet that expectation.
Signals of Change
HireVue, a prominent video interview and assessment vendor that has conducted over 19 million video interviews for more than 700 customers worldwide, has removed the facial analysis component from its screening assessments in response to growing concerns over the transparent and appropriate use of AI in hiring.
A new report from the U.K.’s Trades Union Congress calls for “urgent legislative changes” to regulate the use of artificial intelligence, warning of widespread discrimination if left unchecked. The report sets out “red lines” that should not be crossed if AI systems are to exist in harmony with, rather than undermine, the basis for the modern employment relationship.
A working paper finds that some hiring algorithms may be able to increase the diversity and quality of job candidates. Researchers tested different designs of hiring algorithms and found that adding an “exploration bonus” increased the selection of Black and Hispanic candidates, compared to “standardized learning” models.
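An exploration bonus of this kind is typically implemented with an upper-confidence-bound (UCB) style rule: a candidate’s score is their predicted quality plus a bonus that grows when the model has seen few similar candidates. The sketch below is an illustration under invented numbers—the group labels, counts, predicted-quality values, and the bonus weight `c` are all hypothetical, and the working paper’s actual design differs in detail.

```python
import math

# Hypothetical counts of past interviewed candidates per group, and the
# model's mean predicted quality for each group (illustrative numbers).
observed = {"majority": 900, "underrepresented": 50}
predicted = {"majority": 0.62, "underrepresented": 0.58}

total = sum(observed.values())

def ucb_score(group, c=0.3):
    # The exploration bonus grows when a group has few past observations,
    # nudging the selector toward candidates it knows less about.
    bonus = c * math.sqrt(math.log(total) / observed[group])
    return predicted[group] + bonus

for group in observed:
    print(group, round(ucb_score(group), 3))
```

With these numbers, the underrepresented group’s larger uncertainty bonus outweighs its slightly lower point estimate, so the selector interviews more of those candidates—and in doing so gathers the data needed to correct its own estimates over time.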
BSR Sustainable Futures Lab
Implications for Sustainable Business
Regardless of industry, most major corporations are already using algorithm-driven technologies at some point in the hiring process. While hiring may seem like an issue solely in the realm of Human Resources, the widespread use of these technologies and their potential for perpetuating bias and discrimination make it a concern for many other functions of the business, from Legal to Diversity, Equity, and Inclusion all the way to the C-suite.
To ensure respect for human rights and avoid potential legal risks, companies need to be aware of all the types of AI hiring tools they are using. This can include software developed within the company or from different vendors for different stages of the hiring process.
Companies should conduct ethics and human rights reviews on each of their AI tools to understand the potential risks and adverse impacts that the technologies may bring. This will require understanding the inherent issues with the technology design and the potential for bias and discrimination in its intended use by the company.
If a company understands the potential risks and still chooses to deploy AI hiring tools, it will need to ensure that the people using those tools are adequately trained. Automating hiring with AI may reduce the time and staff needed, but it cannot fully replace humans: there must be ongoing human oversight and regular review, both of the candidates the technology surfaces and of jobseekers’ experience of the hiring process, to ensure rights are respected throughout.
Human oversight of AI tools is a must. Even when technologies are developed with the specific aim of promoting diversity and reducing discrimination, it is vital to remain vigilant.
In addition, companies will need to notify job candidates when they are being evaluated by AI tools and consider providing an option to opt-out of such AI screening, without penalizing the candidate for doing so.
Using hiring technologies will also lead to a wealth of data collected from job candidates and, eventually, the workers who are hired. Companies will be faced with questions on how to protect data to ensure candidate and worker privacy.
Lastly, the technology companies that develop these algorithms and AI tools need to build ethical review and human rights assessment into the product development process. Considerations of potential bias and discrimination should be integrated throughout development, and the teams building AI tools should themselves be diverse in gender, race, and background.