The integration of AI into recruitment and hiring processes has revolutionized talent acquisition. From screening resumes to scheduling interviews and evaluating candidate fit, AI-powered tools promise unprecedented efficiency and objectivity. However, this technological advancement has also raised critical concerns about algorithmic bias and discrimination. New York City has taken a pioneering step to address these concerns through Local Law 144 (NYC LL 144), one of the most significant AI regulations affecting employers today.

This comprehensive guide explains everything you need to know about NYC LL 144, including what it requires, who it affects, and how to achieve compliance through proper bias auditing.

Understanding NYC Local Law 144

What is NYC LL 144?

NYC Local Law 144, which took effect on January 1, 2023, with enforcement beginning on July 5, 2023, regulates the use of "automated employment decision tools" (AEDTs) in recruitment and hiring processes. The law represents a landmark effort to ensure that AI systems used in employment decisions don't perpetuate or amplify discriminatory practices against protected groups.

What Are Automated Employment Decision Tools?

An AEDT is any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output (such as a score, classification, or recommendation) which is used to substantially assist or replace discretionary decision-making for employment decisions. This includes tools that screen or rank resumes, score assessments or recorded interviews, and recommend candidates for interviews, hiring, or promotion.

The Business Case for AI in Recruitment

Before diving into compliance requirements, it's worth understanding why organizations adopt these tools in the first place. AI-powered recruitment systems offer several compelling advantages:

Efficiency gains: Automating resume screening and initial candidate assessments can reduce time-to-hire by up to 70%

Enhanced objectivity: When properly designed, AI can reduce human biases by focusing on job-relevant qualifications

Improved candidate experience: Faster response times and streamlined processes create better experiences for applicants

Better talent matching: Advanced algorithms can identify qualified candidates who might be overlooked through manual screening

Cost reduction: Automation reduces the resource burden on HR teams, allowing them to focus on high-value activities

However, these benefits can only be realized when AI systems are fair, transparent, and free from discriminatory bias—precisely what NYC LL 144 aims to ensure.

Core Requirements of NYC LL 144

NYC LL 144 establishes four fundamental requirements that employers must meet before using AEDTs:

1. Independent Bias Audit

Employers must ensure that an independent third party conducts a bias audit of any AEDT within one year before its use. This audit must be renewed annually to maintain compliance.

Who qualifies as an independent auditor? According to the NYC Department of Consumer and Worker Protection (DCWP), an independent auditor is a person or group capable of exercising objective and impartial judgment who was not involved in using, developing, or distributing the AEDT; has no employment relationship with the employer or vendor seeking the audit; and has no direct financial interest or material indirect financial interest in the employer or vendor.

2. Public Disclosure of Audit Results

Organizations must make information about the bias audit publicly available, typically on the careers or jobs section of their website. This includes the date of the most recent bias audit, a summary of its results (including the source of the data and the selection rates and impact ratios for all required categories), and the distribution date of the AEDT.

3. Candidate and Employee Notification

Employers must inform job candidates and employees who reside in New York City, at least 10 business days before the AEDT is used, that an AEDT will be used in their assessment, which job qualifications and characteristics it will evaluate, and how to request information about the type and source of the data collected and the applicable data retention policy.

4. Alternative Selection Process or Accommodation

Candidates and employees must be given instructions for how to request an alternative selection process or a reasonable accommodation.

Understanding Adverse Impact and Bias Audits

What is Adverse Impact?

The Equal Employment Opportunity Commission (EEOC) defines adverse impact as "a substantially different rate of selection in hiring, promotion, or other employment decision which works to the disadvantage of a race, sex, or ethnic group."

NYC LL 144 requires assessment of adverse impact across specific demographic categories to ensure AEDTs don't discriminate against protected groups.

Required Testing Categories

The law mandates bias audits across the following categories:

Ethnicity categories: Hispanic or Latino, White, Black or African American, Native Hawaiian or Other Pacific Islander, Asian, Native American or Alaska Native, and Two or More Races (corresponding to the EEO-1 reporting categories)

Sex categories: Male and Female

Intersectional categories: The law also requires testing of intersectional categories that combine sex and ethnicity (e.g., Hispanic or Latino female, Asian male), recognizing that discrimination can occur at the intersection of multiple protected characteristics.

Calculating Selection Rates and Impact Ratios

A compliant bias audit must include:

Selection rates: The proportion of candidates from each demographic category who are selected (hired or promoted) by the AEDT

Impact ratios: A comparison of each group's selection rate to the most selected category, calculated as:

Impact Ratio = (Selection Rate for Group) / (Selection Rate for Most Selected Group)

Generally, an impact ratio below 0.80 (or 80%) indicates potential adverse impact, based on the EEOC's "four-fifths rule." However, statistical significance and practical considerations should also be evaluated.
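
To make the arithmetic concrete, below is a minimal sketch of how these statistics might be computed from applicant-level records, including the intersectional sex-and-ethnicity view described above. The column names (sex, ethnicity, selected) and the toy data are hypothetical rather than a prescribed schema; a real audit must follow the categories and data requirements set out in the DCWP rules.

```python
# A minimal sketch, assuming a pandas DataFrame with one row per applicant and
# hypothetical columns: "sex", "ethnicity", and a boolean "selected".
import pandas as pd


def impact_ratios(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """Selection rates and impact ratios for each group defined by group_cols."""
    stats = (
        df.groupby(group_cols)["selected"]
        .agg(applicants="count", selected="sum")
        .reset_index()
    )
    stats["selection_rate"] = stats["selected"] / stats["applicants"]
    # Compare each group's selection rate to the most selected group.
    stats["impact_ratio"] = stats["selection_rate"] / stats["selection_rate"].max()
    # Flag groups that fall below the EEOC four-fifths (80%) threshold.
    stats["below_four_fifths"] = stats["impact_ratio"] < 0.80
    return stats


# Toy example data (hypothetical).
applicants = pd.DataFrame({
    "sex": ["Female", "Female", "Male", "Male", "Female", "Male"],
    "ethnicity": ["Asian", "White", "White", "Asian", "Asian", "White"],
    "selected": [True, False, True, True, False, False],
})

print(impact_ratios(applicants, ["sex"]))                 # single category
print(impact_ratios(applicants, ["sex", "ethnicity"]))    # intersectional view
```

Small samples make individual ratios unstable, which is one reason the four-fifths rule should not be read mechanically without also considering statistical significance.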

Penalties for Non-Compliance

Organizations that fail to comply with NYC LL 144 face civil penalties of up to $500 for a first violation (and for each additional violation occurring on the same day) and between $500 and $1,500 for each subsequent violation.

Given that each use of a non-compliant AEDT could constitute a separate violation, penalties can accumulate rapidly, creating substantial financial and reputational risks.

How to Achieve Compliance: A Step-by-Step Approach

Step 1: Inventory Your AI Tools

Begin by identifying all AEDTs currently in use or under consideration for recruitment and hiring processes, including resume-screening and candidate-ranking software, assessment and video-interview scoring tools, and any AI-driven features embedded in your applicant tracking system.

Step 2: Engage an Independent Auditor

Select a qualified independent auditor who meets the DCWP's independence criteria described above and has demonstrated expertise in statistical bias testing and employment regulation.

Step 3: Collect and Prepare Data

Gather the necessary data for bias auditing, including records of the candidates and employees assessed by the AEDT, their sex and race/ethnicity (typically collected through voluntary self-identification), and the tool's outputs or the resulting selection decisions. Where sufficient historical data is not available, the DCWP rules permit an audit based on test data, provided the published summary explains why historical data was not used.
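
As an illustration only, the sketch below shows one way an exported dataset might be loaded and sanity-checked before auditing; the file layout and column names are hypothetical, not a format required by the law.

```python
# A minimal sketch of loading and sanity-checking audit data, assuming a
# hypothetical CSV export from an applicant tracking system with one row per
# candidate assessed by the AEDT.
import pandas as pd

REQUIRED_COLUMNS = {"candidate_id", "sex", "ethnicity", "selected"}


def load_audit_data(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Audit export is missing required columns: {sorted(missing)}")
    # Keep candidates who declined to self-identify visible as their own group
    # rather than silently folding them into another category.
    df["sex"] = df["sex"].fillna("Unknown")
    df["ethnicity"] = df["ethnicity"].fillna("Unknown")
    return df
```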

Step 4: Conduct the Bias Audit

The independent auditor will calculate selection rates and impact ratios for every required sex, race/ethnicity, and intersectional category, assess whether any group falls below accepted thresholds such as the four-fifths rule, and document the data sources, methodology, and results in an audit report.

Step 5: Publicly Disclose Audit Results

Make audit information available on the employment section of your website or through other accessible means, including the date of the most recent bias audit, a summary of its results (selection rates and impact ratios for all required categories), and the AEDT's distribution date. The DCWP rules require the summary to remain posted for at least six months after the tool was last used for an employment decision.
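
As one illustration, the sketch below reuses the hypothetical impact_ratios() helper and applicants frame from the earlier example to assemble those figures into an HTML snippet that could be placed on a careers page; the layout, field names, and dates are illustrative only.

```python
# A minimal sketch of rendering the disclosed figures for a careers page,
# reusing the hypothetical impact_ratios() helper and applicants data above.
import datetime


def audit_summary_html(stats, audit_date, distribution_date):
    """Return an HTML snippet with the audit date, distribution date, and results table."""
    table = stats.to_html(index=False, float_format="{:.2f}".format)
    return (
        "<h3>AEDT bias audit summary</h3>"
        f"<p>Date of most recent bias audit: {audit_date.isoformat()}</p>"
        f"<p>AEDT distribution date: {distribution_date.isoformat()}</p>"
        + table
    )


html = audit_summary_html(
    impact_ratios(applicants, ["sex", "ethnicity"]),
    audit_date=datetime.date(2024, 1, 15),       # hypothetical dates
    distribution_date=datetime.date(2023, 6, 1),
)
```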

Step 6: Implement Notification Procedures

Establish processes to notify candidates and employees, at least 10 business days before the AEDT is used, about the use of the tool, the job qualifications and characteristics it will assess, and how to request information about data practices or an alternative selection process.

Step 7: Address Identified Biases

If the audit reveals adverse impact, investigate the source of the disparity, work with the vendor or your data science team to adjust the tool or its inputs, re-test after any changes, and consider suspending use of the AEDT until the issue is resolved.

Step 8: Establish Ongoing Compliance

Treat compliance as a continuing program rather than a one-time project: schedule the annual audit renewal, monitor the AEDT's outcomes between audits, and keep notifications, disclosures, and documentation up to date.

Leveraging Technology for Compliance

Modern AI governance platforms can streamline the compliance process through automated bias testing capabilities. These solutions typically offer:

Automated data ingestion: Direct integration with HR systems and applicant tracking platforms to seamlessly collect necessary data

Statistical analysis: Automatic calculation of selection rates, impact ratios, and statistical significance testing across all required demographic categories (a minimal example of such a significance check follows this list)

Compliance reporting: Generation of audit reports that meet regulatory requirements, including all mandated statistics and disclosures

Continuous monitoring: Ongoing tracking of AEDT performance to identify emerging bias issues before they become compliance violations

Documentation management: Centralized storage of audit reports, notifications, and compliance evidence
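
As a simple illustration of the significance testing mentioned above, the sketch below applies Fisher's exact test to a hypothetical comparison between one group and the most selected group; real audits may prefer other tests (such as a two-proportion z-test or chi-square) depending on sample sizes.

```python
# A minimal sketch of a significance check to accompany the four-fifths rule,
# assuming per-group counts of selected and non-selected applicants.
from scipy import stats


def selection_gap_p_value(selected_a, total_a, selected_b, total_b):
    """P-value for the difference in selection rates between group A and group B."""
    table = [
        [selected_a, total_a - selected_a],
        [selected_b, total_b - selected_b],
    ]
    _, p_value = stats.fisher_exact(table)
    return p_value


# Hypothetical counts: group A versus the most selected group.
print(selection_gap_p_value(selected_a=30, total_a=120, selected_b=55, total_b=110))
```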

Best Practices for Responsible AI in Recruitment

Beyond minimum compliance, leading organizations adopt these practices:

1. Bias Testing Throughout the AI Lifecycle

Don't wait until the annual audit. Test for bias during model development and training, pre-deployment validation, and ongoing production use, as well as after any significant change to the model or the data it relies on. A lightweight check like the one below can be wired into a validation pipeline.
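
The following is a minimal sketch of such a gate, reusing the hypothetical impact_ratios() helper from the bias-audit example above; the 0.80 threshold mirrors the four-fifths rule and is illustrative, not a legal standard.

```python
# A minimal sketch of a pre-deployment bias gate, reusing the hypothetical
# impact_ratios() helper defined in the earlier audit example.
def assert_no_adverse_impact(df, group_cols, threshold=0.80):
    """Raise if any group's impact ratio falls below the chosen threshold."""
    stats = impact_ratios(df, group_cols)
    flagged = stats[stats["impact_ratio"] < threshold]
    if not flagged.empty:
        groups = ", ".join(str(t) for t in flagged[group_cols].itertuples(index=False, name=None))
        raise AssertionError(f"Impact ratio below {threshold:.2f} for: {groups}")


# Example: fail a validation run if any sex x ethnicity group is flagged.
# assert_no_adverse_impact(applicants, ["sex", "ethnicity"])
```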

2. Diverse Training Data

Ensure AI systems are trained on representative datasets that reflect the diversity of your candidate pool. Unrepresentative training data is a primary source of algorithmic bias.

3. Human-in-the-Loop Oversight

Maintain meaningful human review of AI recommendations, especially for final hiring decisions. AI should augment, not replace, human judgment in employment decisions.

4. Transparency with Stakeholders

Go beyond minimum notification requirements by:

5. Vendor Due Diligence

When procuring third-party AEDTs:

6. Cross-Functional Governance

Create a governance structure involving:

The Broader Regulatory Landscape

While NYC LL 144 is currently among the most specific AI hiring regulations in the United States, it's part of a broader trend toward AI governance:

State-level initiatives: Several states, including Illinois and Colorado, have enacted or are considering legislation covering AI in employment decisions

EU AI Act: Classifies certain AI systems in employment as "high-risk," requiring conformity assessments and transparency obligations

Federal proposals: Various federal bills addressing AI bias and discrimination are under consideration

Industry standards: Professional organizations are developing best practices and ethical guidelines for AI in HR

Organizations that establish robust AI governance practices now will be better positioned as this regulatory landscape evolves.

Conclusion: Compliance as Competitive Advantage

NYC Local Law 144 represents more than a compliance obligation: it's an opportunity to build trustworthy, fair, and effective AI-powered recruitment processes. Organizations that embrace rigorous bias testing and transparency don't just avoid penalties; they also build fairer, more effective hiring processes that candidates, employees, and regulators can trust.

The pathway to compliance begins with understanding your obligations, conducting thorough bias audits through qualified independent auditors, implementing transparent notification procedures, and continuously monitoring your AEDTs for fairness. By partnering with experienced AI governance providers and adopting best practices that exceed minimum requirements, organizations can harness the power of AI in recruitment while ensuring equal opportunity for all candidates.

Achieve NYC LL 144 Compliance with Holistic AI

Navigating the complexities of NYC Local Law 144 requires more than just understanding the regulations—it demands robust technical capabilities, deep expertise in AI governance, and tools purpose-built for compliance at scale.

Holistic AI provides the complete solution for NYC LL 144 compliance and beyond.

End-to-End AI Governance Platform

Our comprehensive platform delivers everything you need to ensure your recruitment AI systems are compliant, fair, and trustworthy:

Automated Bias Testing & Auditing

AI System Discovery & Inventory

Continuous Monitoring & Risk Mitigation

Regulatory Compliance Automation

Red Teaming & Advanced Testing

Trusted by Global Enterprises

Leading organizations including Unilever, MAPFRE, and Siemens trust Holistic AI to govern their AI systems responsibly. Our platform combines technical rigor with practical usability, enabling compliance teams, HR leaders, and data scientists to collaborate effectively.

Why Holistic AI?

Independent Expertise: As an independent third-party auditor, we meet NYC LL 144's requirements while bringing objectivity to your compliance efforts

Research-Driven Approach: Our solutions are grounded in cutting-edge AI safety research and academic partnerships with institutions like UCL

Proven Track Record: Demonstrated technical credibility through competitive achievements and real-world deployments with major enterprises

Comprehensive Coverage: Unlike point solutions, we address the entire AI lifecycle—from discovery and inventory to testing, monitoring, and ongoing governance

Future-Proof Platform: Built to scale with your AI adoption and adapt to evolving regulations like the EU AI Act and emerging US federal requirements

Get Started Today

Don't let compliance concerns slow down your AI-powered recruitment initiatives. With Holistic AI, you can confidently deploy AEDTs knowing that your bias audits, public disclosures, and candidate notifications are covered.

Transform compliance from a checkbox exercise into a competitive advantage.

Contact Holistic AI today to schedule a demo and discover how our AI governance platform can help you achieve NYC LL 144 compliance while building more effective, trustworthy recruitment systems.

Ready to ensure your recruitment AI is compliant and fair?

Visit holistic.ai or reach out to our team to learn how we can support your responsible AI journey.