Leveraging AI in talent acquisition was considered taboo as recently as eight months ago. Fears of algorithmic bias and of ‘automating away my job’ meant that, for many, the risk of these technologies wasn’t worth the potential reward.

But lately, there’s been a shift. Talent leaders are starting to take artificial intelligence more seriously, especially as shrinking headcounts and budgets force them to adopt technology that helps them scale with less. Those same leaders are also seeing the impact AI has had in other functions like sales, marketing, and R&D, where it has dramatically improved the ability to work at scale.

And yet, new regulations like New York City’s Local Law 144 seek to limit how recruiting teams use AI. For the unfamiliar: Local Law 144 requires employers to conduct bias audits on automated employment decision tools (AEDTs), including those that use AI and similar technologies, and to provide specific notices about such tools to employees and job candidates who reside in New York City. Others have pointed out that this particular legislation might not actually have much impact, but its passage illustrates how we’re still thinking about AI in recruiting all wrong. Ultimately, this law and others like it could materially slow the adoption of software that can benefit recruiting teams serious about scaling their diversity recruiting efforts.

Bias for Good

Recruiting is a special use case, and concerns about AI’s potential to negatively impact candidates from underrepresented groups aren’t unfounded, especially as these tools touch ever more of the recruiting and hiring pipeline: how candidates find roles, how résumés are screened, and the facial and voice recognition software used in the interview process.

But there’s a flip side to that coin: tools that use AI specifically to support the hiring of underrepresented groups. While AI can unintentionally filter out candidates from certain groups based on keywords and NLP, it can also, when applied intentionally, do the exact opposite and surface candidates from underrepresented groups to help diversify pipelines. Regulation that doesn’t acknowledge this overlooks tools and applications designed specifically to support underrepresented groups by quite literally filtering candidates from different backgrounds into candidate pools. Thinking only of the ways AI and automation can produce negative outcomes reflects too narrow a view of these tools, and rules built on that view could do more harm than good in their attempts to prevent bias in hiring.
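
To make that contrast concrete, here’s a minimal, hypothetical sketch of both directions. Everything in it is an illustrative assumption rather than any vendor’s actual product: the candidate fields, the voluntary self-identification flag, and the function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set[str]
    self_identified_group: str | None  # voluntary self-ID; may be None

def exclusionary_filter(candidates, required_keywords):
    """The failure mode: drop anyone missing a keyword, which can
    disproportionately remove candidates with nontraditional backgrounds."""
    return [c for c in candidates if required_keywords <= c.skills]

def inclusive_surface(candidates, required_skills, underrepresented_groups):
    """The flip side: among candidates who meet the same skill bar,
    actively surface those from underrepresented groups into the pool."""
    qualified = [c for c in candidates if required_skills <= c.skills]
    surfaced = [c for c in qualified
                if c.self_identified_group in underrepresented_groups]
    # Surfaced candidates are added to the pipeline; no one is removed.
    return surfaced, qualified
```

The mechanics are nearly identical in both functions; the difference is intent, which is exactly what one-size-fits-all regulation struggles to capture.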

Optimizing for Candidate Control

The ethics of applying AI to talent acquisition and hiring depend heavily on where the technology is applied: it might be a problem if it automates the pipeline for recruiters and weeds out qualified candidates before a human ever reviews them, but not if it’s designed to help candidates find and apply for relevant roles. Consider algorithms that match candidates with skill-relevant, open opportunities so that qualified candidates don’t miss out, or better yet, algorithms that uncover ‘inferred skills’ to match candidates with roles they wouldn’t otherwise have applied for.
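
Here is a toy sketch of what inferred-skills matching could look like. The hand-written adjacency map and the scoring function are assumptions standing in for the NLP models a real product would train on large volumes of career data.

```python
# Hand-written stand-in for a learned skill-adjacency model.
SKILL_ADJACENCY = {
    "excel": {"data analysis", "sql"},
    "customer support": {"account management", "sales operations"},
    "journalism": {"content marketing", "technical writing"},
}

def inferred_skills(explicit_skills):
    """Expand a candidate's explicit skills with plausibly related ones."""
    inferred = set(explicit_skills)
    for skill in explicit_skills:
        inferred |= SKILL_ADJACENCY.get(skill, set())
    return inferred

def match_score(candidate_skills, role_requirements):
    """Fraction of a role's requirements covered by explicit + inferred skills."""
    expanded = inferred_skills(candidate_skills)
    return len(expanded & role_requirements) / len(role_requirements)

# A candidate who lists only "excel" and "customer support" still scores
# perfectly against a sales-operations role they may never have considered.
print(match_score({"excel", "customer support"},
                  {"sql", "sales operations", "account management"}))  # 1.0
```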

AI tools can also analyze a candidate’s behavior on a given platform to understand and surface roles they’re actively interested in; this is a fantastic way to optimize the process for job seekers.
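
As a rough illustration, the sketch below turns on-platform behavior into a ranking of open roles. The signal weights and event shapes are invented for the example; a real platform would learn them from engagement data rather than hard-code them.

```python
from collections import Counter

# Hypothetical weights: applying signals more interest than viewing.
SIGNAL_WEIGHTS = {"viewed": 1, "saved": 3, "applied": 5}

def interest_profile(events):
    """Aggregate (role_category, signal) events into per-category interest."""
    profile = Counter()
    for category, signal in events:
        profile[category] += SIGNAL_WEIGHTS.get(signal, 0)
    return profile

def surface_roles(open_roles, events, top_n=3):
    """Rank open roles by how strongly the candidate's own behavior
    signals interest in each role's category."""
    profile = interest_profile(events)
    return sorted(open_roles,
                  key=lambda role: profile[role["category"]],
                  reverse=True)[:top_n]
```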

AI vs Referrals

Many companies, from startups to Fortune 500s, rely on referrals to fill open roles, and data shows that can lead to a homogeneous workforce. That’s partly because referrals help narrow vast quantities of applicants down to vetted candidates. But AI done well can do the same, and when it’s designed to surface candidates who meet predetermined D&I thresholds, it can support the hiring of talented, qualified candidates who might otherwise have been overlooked, or who, because of their background, would never have come in via referral.
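
A hypothetical slate check makes the ‘predetermined D&I threshold’ idea concrete. The 30% default, the voluntary `underrepresented` flag, and the top-up logic are all assumptions for illustration; the key design choice is that surfaced candidates are added to the slate, never used to remove anyone from it.

```python
def meets_slate_threshold(slate, min_underrepresented_share=0.3):
    """Check a shortlist against a predetermined D&I threshold,
    analogous to a Rooney-rule style slate requirement."""
    if not slate:
        return False
    share = sum(1 for c in slate if c.get("underrepresented")) / len(slate)
    return share >= min_underrepresented_share

def build_slate(pipeline_candidates, surfaced_candidates, size=6):
    """Assemble a slate from the standard pipeline, topping it up with
    surfaced candidates until the threshold is met."""
    slate = pipeline_candidates[:size]
    extras = iter(surfaced_candidates)
    while not meets_slate_threshold(slate):
        candidate = next(extras, None)
        if candidate is None:
            break  # not enough surfaced candidates; flag for human review
        if candidate not in slate:
            slate.append(candidate)
    return slate
```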

Prioritizing DE&I is not only the fair and right thing to do and an important part of a forward-thinking, equitable society; it’s also simply good for business. Companies that prioritize these efforts are more productive and successful, and their employees are happier and stay longer. And as Gen Z enters the workforce, the candidate pool is becoming objectively more diverse, so preparing to attract underrepresented candidates helps future-proof any business.

It’s the People Team’s Turn

Sales, marketing, and R&D have had all the fun with AI; it’s far less prone to amplifying systemic issues in those contexts. But now it’s the people team’s turn to embrace this technological shift, giving feedback on all the ways it can disenfranchise the very people we serve and finding ways to use these advances to support positive outcomes, rather than regulating them out of our processes point-blank in service of “less bias.”

The opportunity to adopt technology now to support diversity efforts at scale is massive, especially as companies hire at a slower pace than during the frenzy of the last few years. Companies that aren’t laying the foundation for attracting underrepresented people in new, tech-enabled ways risk losing out on exceptional talent, and on the competitive edge that AI can provide.


Author
Ilit Raz

Ilit Raz is the founder & CEO of Joonko, a platform that leverages technology to match underrepresented candidates with employment opportunities at great companies. She has 20+ years of experience working in tech startups, spent seven years in the highly regarded IDF intelligence unit 8200, has deep experience in NLP, and holds a Computer Science degree and an Executive MBA. She founded Joonko in early 2016 to focus on building solutions for underrepresented people, after experiencing bias in the industry first-hand.

