How can employers build digital trust when using AI to make or influence employment decisions?
The most obvious way for employers to build digital trust is to ensure that AI systems handling sensitive personal data in contexts such as employment are secure, that they support decisions based on legitimate business criteria, and that they are effective in making or supporting sound choices. Critically, they must avoid discriminating against job applicants or employees on the basis of legally protected characteristics such as race, sex, age, or disability. Both employers and applicants need to be able to trust the system being used. Relying on output from AI systems to make employment decisions can lead to discrimination in several ways. For example, using a recruitment tool that treats some candidates less favorably based on a protected characteristic, or that sets a quota to ensure that a certain number of individuals with a protected characteristic are selected by the tool, constitutes disparate treatment (under U.S. law) or direct discrimination (in the U.K. and Europe).
Risk of discrimination
A milestone settlement recently reached by the EEOC over AI discrimination in hiring highlights these risks. In that case, which settled for US$365,000, the EEOC alleged that a company programmed its AI-powered application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60. However, even when systems are not designed to discriminate, bias can become embedded in AI systems in unintentional ways.
For example, data used to train AI tools may not be statistically balanced, or may even reflect past discrimination, which may unintentionally lead an AI to favor or disfavor certain groups on the basis of a protected characteristic. Such tools can also lead to disparate impact (indirect discrimination in Europe) if their outcomes put people who share a protected characteristic at a disadvantage, even though the tool does not explicitly take the characteristic into account in its decision making.
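As an illustration of how such an outcome disparity can surface in practice, the sketch below applies the EEOC's well-known "four-fifths" (80%) rule of thumb to hypothetical screening results: if any group's selection rate falls below 80% of the highest group's rate, the result is flagged for further review. The data, group labels, and function are illustrative assumptions, not a compliance tool or a statement of any particular regulator's methodology.

```python
# Minimal sketch (hypothetical data and names) of the "four-fifths" rule
# heuristic for spotting potential disparate impact: flag any group whose
# selection rate is below 80% of the highest group's selection rate.
from collections import defaultdict

def adverse_impact_check(outcomes, threshold=0.8):
    """outcomes: iterable of (group, selected) pairs, selected is True/False."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / total[g] for g in total}
    best_rate = max(rates.values())
    flags = {g: rate / best_rate < threshold for g, rate in rates.items()}
    return rates, flags

# Hypothetical screening results: the tool never sees the protected
# characteristic, yet its outcomes may still disadvantage one group.
results = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
        + [("group_b", True)] * 40 + [("group_b", False)] * 60

rates, flags = adverse_impact_check(results)
print(rates)   # {'group_a': 0.6, 'group_b': 0.4}
print(flags)   # {'group_a': False, 'group_b': True} -> 0.4/0.6 ≈ 0.67 < 0.8
```

A flag from a heuristic like this is only a starting point for investigation; it does not by itself establish unlawful disparate impact, and how any disparity may lawfully be addressed is exactly the difficulty discussed next.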
If an employer identifies an unjustified disparate impact, it may be difficult, practically or legally, to adjust an AI system to remove that disadvantage. Where AI involves machine learning, it can be hard to identify why the algorithm or the training data is causing the relevant effect, which makes correcting it problematic. Additionally, an employer that tries to “fix” a potential disparate impact caused by AI by reprogramming the tool to favor the disadvantaged group based on a protected characteristic, such as setting different “pass marks” to equalize the number of successful male and female candidates, or establishing a quota based on protected class status, may itself face claims of discrimination.
Another potential risk arises if AI systems put employees with a disability at a disadvantage. For example, an AI-assisted interview that uses visual or verbal data to make hiring recommendations may disadvantage candidates with some types of disability. In that case, the employer may have a duty to make a reasonable accommodation (in the U.S.) or reasonable adjustment (in the U.K.) to ensure that such systems do not disadvantage candidates with disabilities in that way.
Speed of change
Above all, this is an area where governments and regulators are aware of the issues but struggling to keep up with advances in technology. Individuals are becoming increasingly alive to the potential risks of AI and the routes available to challenge decisions they disagree with. Over the next few years, the law will likely begin to catch up, so employers should monitor developments closely.
Ultimately, biased systems that face repeated successful legal challenges, or insecure systems that suffer data breaches and cyberattacks, will never be trusted. So it is vital to get all of these elements right in any system deployed in the employment context.