Blindly trusting anything is always a mistake, but even more so when human resources (HR) relies too heavily on artificial intelligence (AI) to hire people. With the global explosion of AI, I decided to find out how organizations can address hiring bias while making the most of the latest HR technologies.
Know Your Hiring Data
Michele Bezzi, manager at SAP Security Research, leads a team that studies security challenges in AI. Bezzi stressed that privacy, fairness, and bias were three major facets of their research. Those areas are also reflected in the company’s guiding principles for AI.
His recommendation to hiring managers was simple: conduct a comprehensive review of data before letting it loose inside the machine. Bezzi explained that modern AI uses machine learning to gain knowledge through data. For example, programmers feed images and text into the machine, which eventually learns to identify a cat or a dog. The trouble is that data can be inherently and unconsciously biased, and the risks multiply in more complex scenarios like hiring.
“Machines cannot learn beyond the data they’re trained with. It’s important to train the machine with unbiased data,” said Bezzi. “Make sure your data represents your hiring objectives. Remove historical data that’s biased and add in original data that reflects your new desired outcomes.”
Bezzi cautioned against unintended discrimination as well. An algorithm could make a biased decision by pulling together various pieces of seemingly innocuous data. “Even if you remove someone’s gender or ethnicity, the machine could make biased decisions by automatically factoring in their school or the neighborhood where they live.”
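Bezzi's point about proxy variables can be made concrete with a small sketch. In this hypothetical example (the data and "model" are invented for illustration), the protected attribute has been removed from the training features, yet a naive rule learned from a correlated proxy, the candidate's neighborhood, still produces different selection rates by group:

```python
from collections import defaultdict

# Hypothetical historical data: (neighborhood, group, hired).
# "group" is never shown to the model below.
candidates = [
    ("north", "A", 1), ("north", "A", 1), ("north", "B", 1),
    ("south", "B", 0), ("south", "B", 0), ("south", "A", 0),
]

# A naive model trained only on neighborhood would learn the
# historical pattern: hire if neighborhood == "north".
def naive_model(neighborhood):
    return 1 if neighborhood == "north" else 0

# Measure the selection rate per group. The model never saw
# "group", yet outcomes differ because neighborhood correlates
# with it -- exactly the unintended discrimination Bezzi describes.
selected, total = defaultdict(int), defaultdict(int)
for neighborhood, group, _ in candidates:
    total[group] += 1
    selected[group] += naive_model(neighborhood)

for g in sorted(total):
    print(g, selected[g] / total[g])
```

Dropping the sensitive column is therefore not enough; correlated features have to be audited too.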
Keep People in the Hiring Loop
In all the hoopla about the benefits of AI in finding top talent, it turns out the missing ingredient was people.
“When you start applying data to complex tasks like hiring, and humans aren’t involved, this can lead to errors [that have] serious consequences for hiring decisions,” said Bezzi. “Automating HR tasks is important, but you still need people to analyze the machine-generated results.”
That is where the Brilliant Hire solution by SAP comes in. Developed by employees in the SAP.iO Venture Studio program, this screening-as-a-service offering combines AI with human support. Closely aligned with the SAP SuccessFactors Human Experience Management (HXM) Suite, the service develops and administers job applicant tests, and works with outside experts who evaluate candidate results. These experts cannot see the candidate’s name, resume, or other personal identifiers, and at least three different evaluators review each applicant. During the earliest recruiting stages, algorithms can sort resumes by matching applicant skills to keywords in job descriptions. Once people make the short list, tasking them to solve relevant job-related challenges is a better way to surface the strongest candidates.
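The early-stage keyword matching mentioned above is simple enough to sketch. This is not Brilliant Hire's actual algorithm (which is not public); the keywords and resumes below are invented, and real systems would use stemming, synonyms, and weighting rather than exact word overlap:

```python
# Hypothetical job-description keywords and resume snippets.
job_keywords = {"python", "sql", "etl", "airflow"}

resumes = {
    "alice": "Built ETL pipelines in Python and SQL, scheduled with Airflow.",
    "bob":   "Java developer with Spring experience.",
}

def keyword_score(resume_text, keywords):
    # Count how many job keywords appear in the resume text,
    # after lowercasing and stripping basic punctuation.
    words = set(resume_text.lower().replace(",", " ").replace(".", " ").split())
    return len(words & keywords)

# Rank candidates by keyword overlap, highest first.
ranked = sorted(resumes, key=lambda name: keyword_score(resumes[name], job_keywords),
                reverse=True)
print(ranked)
```

As the article notes, this kind of sorting is only useful for the first cut; job-related challenges do the real work of surfacing strong candidates.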
“Our test questions put candidates into actual job situations,” said Ryan Phillips, general manager at Brilliant Hire. “A financial analyst might be asked to download a spreadsheet, complete specific tasks, and re-upload it. A designer might be asked how they’d recommend redesigning a complex website to simplify the user experience. These questions are hard to cheat on.”
The model revolves around selectively applying AI to augment the human experience.
“AI is still incredibly biased, so we’re only using it in small ways across an application,” Phillips explained. “We can conduct searches based on a candidate’s answers to spot similarities within existing online content. We can also see how many paths the candidates used while taking the test, and if they copied and pasted information.”
Ditch the AI Black Box
Understanding how machines arrive at their decisions, typically known as “explainability,” can help prevent and address hiring bias in AI-based tools.
“Companies need to know what was built into the algorithm to calculate results,” said Bezzi. “It can’t be an impenetrable black box. The more sophisticated the algorithm, the more difficult it is to understand the neural networks behind machine decisions. Always start with a simple process, and make sure you can understand the why before moving on to something more complex. Better yet, keep it simple.”
Phillips suggested using open source tools to cost-effectively examine the inherent bias in an algorithm by running some simple scripts.
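One example of the kind of "simple script" Phillips suggests is a selection-rate comparison across groups; the four-fifths rule used by the U.S. EEOC is a common heuristic for this. The figures below are hypothetical, and a real audit would use an established open source toolkit and proper statistics rather than a single ratio:

```python
# Hypothetical audit data: group -> (candidates selected, total applicants).
outcomes = {
    "group_a": (40, 100),
    "group_b": (24, 100),
}

# Selection rate per group.
rates = {group: sel / tot for group, (sel, tot) in outcomes.items()}

# Ratio of the lowest selection rate to the highest. Under the
# four-fifths heuristic, a ratio below 0.8 suggests possible
# adverse impact and warrants a closer look at the algorithm.
impact_ratio = min(rates.values()) / max(rates.values())
print(round(impact_ratio, 2))
```

A script like this does not explain *why* the algorithm behaves as it does, which is why Bezzi's advice to start with simple, explainable models still applies.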
Machines Alone Are Not Enough
If you thought that AI would eventually push humans out completely, think again. Hiring bias starts with the data, and people are responsible for eliminating it and for bringing human judgment to every stage of the hiring experience. Now, let’s all take a deep breath, because machines are not completely taking over anytime soon.
Follow me: @smgaler
A version of this article originally appeared on SAP BrandVoice on Forbes.