The news: A new law in New York requires third-party audits of artificial intelligence algorithms used by companies to hire or promote employees, per Ars Technica.
Why it’s worth watching: The law, the first of its kind in the country, requires employers to use external consultants to run independent assessments on whether the algorithms they’re using exhibit bias based on sex, race, or ethnicity.
- Employers must now also tell job applicants living in New York when AI plays a role in hiring or promotion decisions.
- The law comes as recruitment automation is increasingly adopted to boost recruiter productivity, shorten time to fill, and reduce cost per hire.
- Bias present in automated tools has disproportionately harmed people of color who want to rent an apartment or purchase a home. This AI bias can also skew the hiring process and exclude viable applicants.
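To make the audits above concrete, here is a minimal sketch of the "four-fifths rule" disparate-impact check, a common starting point in employment-selection bias analysis. This is an illustration, not the methodology the law prescribes, and the group names and counts are hypothetical:

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check.
# Group labels and counts are hypothetical illustration only.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag each group with True if its selection rate is at least
    `threshold` times the highest group's rate, else False."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}

outcomes = {
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.6, below 0.8
}
print(four_fifths_check(outcomes))  # {'group_a': True, 'group_b': False}
```

A real audit would go well beyond this single ratio, but the check shows the basic shape of the question an external assessor asks: does the tool select protected groups at materially different rates?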
The bigger picture: Job seekers increasingly find their resumes and applications scanned and sorted by AI bots that may be running on biased algorithms. A cottage industry of services now reformats CVs to get them past AI gatekeepers and in front of human eyes.
- Legislating independent assessments of AI algorithms could help raise awareness of the issue and standardize bias-free AI.
- However, building ethical AI teams to study and regulate complex, disparate algorithms will be expensive and time-consuming; the work requires expertise in a highly specialized field and could fizzle out without legislative enforcement.
What Big Tech is doing: Incremental algorithmic audits can't fundamentally solve the entrenched systemic bias that AI tools replicate and amplify; large tech firms in particular need to show a willingness to turn their ethical AI teams' findings into actionable policy change.
- Twitter created a “bias bounty” program offering researchers prizes of up to $3,500 if they correctly spot instances of bias in its image-cropping algorithm.
- Facebook open-sourced an AI training data set designed to surface age, gender, and skin tone bias in computer vision and audio machine learning models.
- Google and Snap both addressed the poor ability of their computer vision apps to identify and process dark skin tones.
What’s next? Regulatory pressure to remove AI’s ethical bias is mounting, and Big Tech is responding with bug bounties and open-source data sets.
- More needs to be done to remove bias from algorithms deployed on billions of people, especially where rights to equality and non-discrimination in employment are at stake.
- Setting AI ethics standards, such as the Department of Defense's guidelines for contractors and UNESCO's recommendations, would make ethical requirements more enforceable across industries.