Artificial Intelligence (AI) has become a game-changer in recruitment. It’s fast, efficient, and seemingly objective. Automated Employment Decision Tools (AEDTs) can now scan resumes, evaluate candidates during interviews, and predict how well someone might fit into a company. The promise? An easier, quicker, and more data-driven way to hire.
However, there’s a flip side to this shiny new technology. For all its benefits, AI has a darker undercurrent: It can unintentionally perpetuate biases that favor certain groups while sidelining others, especially women.
This issue is amplified when you consider that most AI systems are trained on historical data that may have included biases from previous hiring practices. AI can learn those biases and continue favoring candidates from historically privileged groups.
So, what can be done? Enter New York City’s Local Law 144 (LL144), which aims to regulate how AI is used in hiring, ensuring fairness and accountability. In this article, we’ll explain LL144, how it seeks to create a fairer hiring process, and why it’s crucial for women’s employment opportunities.
What is NYC Local Law 144?
Passed in 2021 and enforced starting in July 2023, LL144 is the first regulation in the U.S. to impose strict guidelines on the use of AI in hiring. The law was designed to ensure that companies using Automated Employment Decision Tools (AEDTs) do not unfairly discriminate against any group, especially women and minorities.
Here’s what the law requires:
- Bias Audits: Employers must have their AEDTs audited annually by independent third parties. These audits measure whether the tool selects candidates at different rates across sex and race/ethnicity categories (and their intersections); a sketch of the core calculation follows this list.
- Public Disclosure: Employers must also make the results of these audits publicly available. That means a summary of the audit findings must be posted on the company’s website, so anyone (including job seekers) can see whether the AI tool has been tested for fairness.
- Notification to Candidates: Employers must notify job candidates that an AI tool will be used in the hiring process at least ten business days before it is applied. This is crucial because it allows candidates to understand how their data might be used and what criteria the AI is assessing.
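To make the audit requirement concrete, here’s a minimal sketch of the calculation at the heart of an LL144-style bias audit: selection rates per demographic category, and impact ratios relative to the most-selected category. The sample data and category labels below are hypothetical; a real audit must follow the detailed methodology published by NYC’s Department of Consumer and Worker Protection (DCWP).

```python
# Minimal sketch of the core LL144 bias-audit metric: selection rates and
# impact ratios per category. Data and categories here are hypothetical;
# real audits must follow the DCWP's published methodology.
from collections import defaultdict

# Each record: (category, was_selected) -- e.g., a sex category and whether
# the AEDT advanced the candidate.
records = [
    ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for category, selected in records:
    counts[category]["total"] += 1
    counts[category]["selected"] += int(selected)

# Selection rate = candidates advanced / candidates assessed, per category.
rates = {c: v["selected"] / v["total"] for c, v in counts.items()}

# Impact ratio = a category's selection rate / the highest selection rate.
best = max(rates.values())
for category, rate in rates.items():
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

An impact ratio well below 1.0 signals a disparity worth investigating (the EEOC’s four-fifths rule treats 0.8 as a common benchmark), though LL144 itself mandates disclosure of the ratios rather than a specific pass/fail threshold.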
While LL144 is groundbreaking, it is not without its challenges, and it raises important questions about how effective it will be in driving real change.
The Gendered Impact of AI Bias in Hiring
Here’s where things get a little more complicated. While AI is often touted as “objective”, the reality is far from it. AI systems are trained on massive data sets that often include historical hiring data. This data can reflect past hiring patterns, which, unfortunately, often lean in favor of men, especially in male-dominated industries like tech.
Consider women who’ve taken career breaks for caregiving, for example. AI tools might see these gaps in employment as “red flags”, unfairly penalizing women for decisions they made around family life. The bias is subtle, but it can significantly reduce a woman’s chances of being hired.
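To see the mechanism at work, here’s a toy sketch (with entirely invented features and data) of how a model trained on historical hiring decisions picks up this pattern: if past recruiters rejected candidates with employment gaps, the model learns a negative weight on the gap feature and carries the bias forward.

```python
# Toy illustration with invented data: a model trained on historical hiring
# decisions learns to penalize employment gaps, reproducing the old bias.
from sklearn.linear_model import LogisticRegression

# Features per candidate: [years_experience, employment_gap_months].
# Historical labels: recruiters tended to reject anyone with a gap.
X = [[5, 0], [6, 0], [4, 0], [8, 0], [5, 12], [6, 18], [7, 24], [4, 30]]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = hired historically, 0 = rejected

model = LogisticRegression().fit(X, y)
print("learned weights:", model.coef_)  # the gap feature gets a negative weight

# Two equally experienced candidates, one with an 18-month caregiving gap:
print(model.predict_proba([[6, 0], [6, 18]])[:, 1])  # the gap lowers the score
```

Nothing in this code mentions gender, yet because caregiving gaps fall disproportionately on women, the learned penalty does too.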
Another issue is that women are often underrepresented in the tech and data science fields, meaning fewer women are involved in designing these AI systems. When this happens, it’s easy for the algorithms to overlook important gender-specific considerations. The result? AI hiring tools that may unintentionally favor men.
A study by New York University revealed that AI-based hiring tools often screened out women who took maternity leave. This is just one example of how algorithms can unintentionally favor one group over another, making it even harder for women to land jobs, especially in industries where they are already underrepresented.
How Does NYC’s Local Law 144 Address These Challenges?
Local Law 144 takes a step toward addressing these gendered biases by requiring regular audits of AI tools used in hiring. These audits aim to catch any potential bias before it becomes a problem. However, there are some significant challenges regarding the law’s actual implementation.
- Low Compliance Rates: The results of the first round of audits under LL144 show that many employers are not fully complying with the law. A study found that only a small fraction of companies had posted the required audit results on their websites. This lack of transparency makes it difficult to hold companies accountable.
- Scope and Ambiguity: The law leaves room for interpretation. For example, some employers may argue that certain AI tools don’t fall under the scope of the law, thus evading some of the requirements. This loophole can undermine the law’s effectiveness in ensuring AI is used fairly.
- No Clear Corrective Actions: While LL144 requires audits, it doesn’t mandate any specific actions if bias is found. Companies are not required to change their practices or adjust their algorithms. Without this push for corrective action, the audits could become more of a box-ticking exercise than a meaningful tool for change.
Recommendations for a Fairer Future
Although LL144 is a step in the right direction, there’s still a long way to go. To create truly fair workplaces, AI hiring tools need to evolve. Here are some key recommendations for making AI hiring fairer, especially for women:
- Inclusive Data Sets: AI tools need to be trained on data that reflects the full diversity of candidates, including those who’ve taken career breaks or those with non-linear career paths. This would ensure that women, especially those with caregiving responsibilities, aren’t unfairly penalized for their life choices.
- Diverse Development Teams: The teams building AI tools must be diverse in gender, ethnicity, and background. A diverse team is more likely to identify and fix biases that could negatively impact underrepresented groups.
- Algorithm Transparency: Companies should make their AI hiring tools understandable to the public by disclosing how the algorithms work and what data they use to make decisions. That openness makes it possible to hold companies accountable for the outcomes their tools produce.
- Actionable Audits: It’s not enough to simply identify bias in AI systems; there needs to be a mandate for companies to take action when biases are discovered. This could mean adjusting algorithms or introducing new hiring practices to level the playing field.
- Ongoing Monitoring: AI is not a set-it-and-forget-it technology. Regular, continuous monitoring of these systems is essential to identify and correct emerging biases before they harm candidates; a sketch of one such check follows this list.
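As a sketch of what ongoing monitoring could look like in practice (the function names are hypothetical, and the 0.8 threshold borrows the EEOC’s four-fifths benchmark rather than anything LL144 mandates), a scheduled job can recompute impact ratios over a recent window of decisions and flag any category that drifts:

```python
# Hypothetical monitoring job: recompute impact ratios over a recent window
# of AEDT decisions and flag categories that drift below a benchmark.
# The 0.8 threshold borrows the EEOC four-fifths rule, not an LL144 mandate.

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio = a category's selection rate / the highest rate."""
    best = max(rates.values())
    return {c: r / best for c, r in rates.items()}

def check_drift(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the categories whose impact ratio fell below the threshold."""
    return [c for c, ratio in impact_ratios(rates).items() if ratio < threshold]

# Example: selection rates computed over, say, the last quarter's decisions.
recent_rates = {"female": 0.30, "male": 0.50, "nonbinary": 0.45}
flagged = check_drift(recent_rates)
if flagged:
    print("Impact-ratio alert; investigate:", flagged)  # ['female']
```

Run on a schedule, a check like this turns the annual audit’s one-time snapshot into an early-warning system that catches drift between audits.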
Bottom Line
AI holds immense promise in revolutionizing hiring processes, but without deliberate efforts to address inherent biases, it risks reinforcing existing inequalities. LL144 represents a commendable step towards accountability, but its limitations underscore the need for more comprehensive measures.
By fostering inclusive AI development, enforcing stringent regulations, and promoting transparency, we can pave the way for workplaces that truly value diversity and empower all individuals, regardless of gender, to thrive.