The integration of AI into recruitment and hiring processes has revolutionized talent acquisition. From screening resumes to scheduling interviews and evaluating candidate fit, AI-powered tools promise unprecedented efficiency and objectivity. However, this technological advancement has also raised critical concerns about algorithmic bias and discrimination. New York City has taken a pioneering step to address these concerns through Local Law 144 (NYC LL 144), one of the most significant AI regulations affecting employers today.
This comprehensive guide explains everything you need to know about NYC LL 144, including what it requires, who it affects, and how to achieve compliance through proper bias auditing.
Understanding NYC Local Law 144
What is NYC LL 144?
NYC Local Law 144, which took effect on January 1, 2023, with enforcement beginning on July 5, 2023, regulates the use of "automated employment decision tools" (AEDTs) in recruitment and hiring processes. The law represents a landmark effort to ensure that AI systems used in employment decisions don't perpetuate or amplify discriminatory practices against protected groups.
What Are Automated Employment Decision Tools?
An AEDT is any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output (such as a score, classification, or recommendation) that is used to substantially assist or replace discretionary decision-making in employment decisions. This includes tools that:
- Screen or filter job applications
- Rank or score candidates
- Assess candidate qualifications or competencies
- Make hiring or promotion recommendations
- Evaluate employee performance for promotion decisions
The Business Case for AI in Recruitment
Before diving into compliance requirements, it's worth understanding why organizations adopt these tools in the first place. AI-powered recruitment systems offer several compelling advantages:
Efficiency gains: Automating resume screening and initial candidate assessments can reduce time-to-hire by up to 70%
Enhanced objectivity: When properly designed, AI can reduce human biases by focusing on job-relevant qualifications
Improved candidate experience: Faster response times and streamlined processes create better experiences for applicants
Better talent matching: Advanced algorithms can identify qualified candidates who might be overlooked through manual screening
Cost reduction: Automation reduces the resource burden on HR teams, allowing them to focus on high-value activities
However, these benefits can only be realized when AI systems are fair, transparent, and free from discriminatory bias—precisely what NYC LL 144 aims to ensure.
Core Requirements of NYC LL 144
NYC LL 144 establishes four fundamental requirements that employers must meet before using AEDTs:
1. Independent Bias Audit
Employers must ensure that an independent third party has conducted a bias audit of any AEDT no more than one year before the tool is used. The audit must be repeated annually to maintain compliance.
Who qualifies as an independent auditor? According to the NYC Department of Consumer and Worker Protection (DCWP), an independent auditor is a person or organization that:
- Is not involved in developing the AEDT
- Is not involved in using the AEDT for employment decisions
- Has the expertise to conduct bias audits according to regulatory standards
2. Public Disclosure of Audit Results
Organizations must make information about the bias audit publicly available. This includes:
- Summary statistics from the audit
- The date of the audit
- Selection rates and impact ratios for protected categories
3. Candidate and Employee Notification
Employers must inform job candidates and employees about:
- The use of an AEDT in the hiring or promotion process
- The job qualifications and characteristics the tool will assess
- Information about data collection, retention, and usage policies
4. Alternative Selection Process or Accommodation
Candidates and employees must be provided with an alternative selection process or reasonable accommodation upon request.
Understanding Adverse Impact and Bias Audits
What is Adverse Impact?
The Equal Employment Opportunity Commission (EEOC) defines adverse impact as "a substantially different rate of selection in hiring, promotion, or other employment decision which works to the disadvantage of a race, sex, or ethnic group."
NYC LL 144 requires assessment of adverse impact across specific demographic categories to ensure AEDTs don't discriminate against protected groups.
Required Testing Categories
The law mandates bias audits across the following categories:
Ethnicity categories:
- Hispanic or Latino
- White
- Black or African American
- Native Hawaiian or Pacific Islander
- Asian
- Native American or Alaska Native
Sex categories:
- Male
- Female
Intersectional categories: The law also requires testing of intersectional categories that combine sex and ethnicity (e.g., Hispanic or Latino female, Asian male), recognizing that discrimination can occur at the intersection of multiple protected characteristics.
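As a rough illustration of how an audit script might enumerate these groups, the snippet below builds every sex-by-ethnicity combination. The category labels are assumptions drawn from the list above; confirm them against the current DCWP rules and your own demographic data collection before relying on them.

```python
from itertools import product

# Assumed category labels based on the list above; verify against the
# current DCWP rules and your own data collection before a real audit.
SEX_CATEGORIES = ["Male", "Female"]
ETHNICITY_CATEGORIES = [
    "Hispanic or Latino",
    "White",
    "Black or African American",
    "Native Hawaiian or Pacific Islander",
    "Asian",
    "Native American or Alaska Native",
]

# Intersectional groups: every combination of sex and ethnicity.
INTERSECTIONAL_CATEGORIES = [
    f"{sex} / {ethnicity}"
    for sex, ethnicity in product(SEX_CATEGORIES, ETHNICITY_CATEGORIES)
]

print(len(INTERSECTIONAL_CATEGORIES))  # 12 sex-by-ethnicity groups
```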
Calculating Selection Rates and Impact Ratios
A compliant bias audit must include:
Selection rates: The proportion of candidates from each demographic category who are selected (hired or promoted) by the AEDT
Impact ratios: A comparison of each group's selection rate to the most selected category, calculated as:
Impact Ratio = (Selection Rate for Group) / (Selection Rate for Most Selected Group)
Generally, an impact ratio below 0.80 (or 80%) indicates potential adverse impact, based on the EEOC's "four-fifths rule." However, statistical significance and practical considerations should also be evaluated.
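To make the arithmetic concrete, here is a minimal sketch of the calculation, assuming a simple candidate-level table with a demographic category and a binary selected/not-selected outcome. The column names and data are illustrative, and a real audit would use far larger samples.

```python
import pandas as pd

# Illustrative audit data: one row per candidate scored by the AEDT.
df = pd.DataFrame({
    "category": [
        "White", "White", "White", "White",
        "Asian", "Asian", "Asian",
        "Hispanic or Latino", "Hispanic or Latino",
        "Black or African American", "Black or African American",
    ],
    "selected": [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
})

# Selection rate per category: the share of candidates the tool selects.
selection_rates = df.groupby("category")["selected"].mean()

# Impact ratio: each group's rate divided by the most selected group's rate.
impact_ratios = selection_rates / selection_rates.max()

report = pd.DataFrame({
    "selection_rate": selection_rates.round(2),
    "impact_ratio": impact_ratios.round(2),
    "below_four_fifths": impact_ratios < 0.80,  # EEOC four-fifths rule flag
})
print(report)
```

With samples this small the ratios are meaningless, which is exactly why a compliant audit also weighs statistical significance and sample sizes rather than treating 0.80 as a bright line.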
Penalties for Non-Compliance
Organizations that fail to comply with NYC LL 144 face significant consequences:
- First violation: Up to $500 per violation
- Subsequent violations: Up to $1,500 per violation
- Failure to provide required notices: Each omitted notice counts as a separate violation on the same penalty schedule
Given that each day a non-compliant AEDT is used can constitute a separate violation, penalties can accumulate rapidly, creating substantial financial and reputational risks.
How to Achieve Compliance: A Step-by-Step Approach
Step 1: Inventory Your AI Tools
Begin by identifying all AEDTs currently in use or under consideration for recruitment and hiring processes. This includes:
- Resume screening software
- Applicant tracking systems with AI components
- Video interview analysis tools
- Skills assessment platforms
- Chatbots used for candidate evaluation
Step 2: Engage an Independent Auditor
Select a qualified independent auditor who:
- Has no involvement in developing or using your AEDTs
- Possesses expertise in bias testing and employment law
- Can conduct audits that meet NYC LL 144 requirements
- Provides comprehensive reporting and documentation
Step 3: Collect and Prepare Data
Gather the necessary data for bias auditing (a sketch of a typical candidate-level record follows this list), including:
- Candidate demographic information
- Selection outcomes (hired/not hired, promoted/not promoted)
- AEDT scores or recommendations
- Sufficient sample sizes for statistical validity
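The exact fields depend on your applicant tracking system and HRIS, but a minimal sketch of the kind of record an auditor typically works from might look like the following. Every field name here is an illustrative assumption, not a required schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditRecord:
    """One candidate-level record prepared for the bias audit.

    Field names are illustrative; map them from your ATS or HRIS exports.
    """
    candidate_id: str          # pseudonymized identifier, not a name
    sex: Optional[str]         # self-reported; may be missing
    ethnicity: Optional[str]   # self-reported; may be missing
    aedt_score: float          # raw score or rank produced by the tool
    selected: bool             # outcome: advanced or hired vs. not
    decision_date: str         # ISO date, used to scope the audit window
```

Candidates with unknown demographic data are commonly reported separately rather than silently dropped; check the current DCWP rules for how they should be handled.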
Step 4: Conduct the Bias Audit
The independent auditor will:
- Calculate selection rates for each protected category
- Determine impact ratios comparing each group to the most selected category
- Assess statistical significance of any disparities (a sketch of one common test follows this list)
- Identify potential sources of bias in the algorithm
- Generate a comprehensive audit report
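One common way to check whether a gap in selection rates is statistically significant is a two-proportion z-test. The sketch below uses only the Python standard library; it is an illustrative choice, not a method the law mandates, and an auditor may prefer exact tests for small samples.

```python
from statistics import NormalDist

def two_proportion_z_test(selected_a: int, total_a: int,
                          selected_b: int, total_b: int) -> tuple[float, float]:
    """Two-sided z-test comparing the selection rates of two groups.

    Returns (z statistic, p-value). Illustrative only; consider an exact
    test such as Fisher's for small samples.
    """
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: 25/100 selected in one group vs. 40/100 in the most selected group.
z, p = two_proportion_z_test(25, 100, 40, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```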
Step 5: Publicly Disclose Audit Results
Make audit information available on your website or through other accessible means (a simple formatting sketch follows this list), including:
- Summary statistics by demographic category
- Impact ratios for protected groups
- Date of the audit
- Distribution date of the AEDT (when it was first used)
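As a minimal sketch of how those pieces might be assembled into a published summary, the snippet below uses entirely made-up numbers and dates; the DCWP's rules and example tables govern what the real disclosure must contain.

```python
# Illustrative audit results keyed by category:
# (number of applicants scored, selection rate, impact ratio). Assumed values.
results = {
    "White": (820, 0.42, 1.00),
    "Asian": (190, 0.40, 0.95),
    "Hispanic or Latino": (310, 0.37, 0.88),
    "Black or African American": (275, 0.33, 0.79),
}

lines = [
    "Bias audit summary",
    "Audit date: 2025-01-15 (assumed)",
    "AEDT distribution date (first use): 2023-07-05 (assumed)",
    "",
    f"{'Category':<30}{'N':>6}{'Selection rate':>16}{'Impact ratio':>14}",
]
for category, (n, rate, ratio) in results.items():
    lines.append(f"{category:<30}{n:>6}{rate:>16.2f}{ratio:>14.2f}")

print("\n".join(lines))
```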
Step 6: Implement Notification Procedures
Establish processes to notify candidates and employees about:
- AEDT use in the selection process
- What the tool evaluates
- Data collection and retention policies
- How to request an alternative process or accommodation
Step 7: Address Identified Biases
If the audit reveals adverse impact:
- Work with your AEDT vendor to understand root causes
- Consider algorithm adjustments or recalibration
- Implement additional human oversight
- Evaluate alternative tools or approaches
- Re-audit after making changes
Step 8: Establish Ongoing Compliance
- Schedule annual bias audits
- Monitor AEDT performance continuously
- Update notifications as tools or processes change
- Maintain documentation of compliance efforts
- Train HR staff on compliance requirements
Leveraging Technology for Compliance
Modern AI governance platforms can streamline the compliance process through automated bias testing capabilities. These solutions typically offer:
Automated data ingestion: Direct integration with HR systems and applicant tracking platforms to seamlessly collect necessary data
Statistical analysis: Automatic calculation of selection rates, impact ratios, and statistical significance testing across all required demographic categories
Compliance reporting: Generation of audit reports that meet regulatory requirements, including all mandated statistics and disclosures
Continuous monitoring: Ongoing tracking of AEDT performance to identify emerging bias issues before they become compliance violations (a generic sketch of such a check follows this list)
Documentation management: Centralized storage of audit reports, notifications, and compliance evidence
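None of this requires exotic machinery. As a hedged, generic sketch (not any particular vendor's API), a periodic fairness check against a configurable threshold might look like this:

```python
IMPACT_RATIO_THRESHOLD = 0.80  # configurable fairness floor (four-fifths rule)

def flag_low_impact_ratios(selection_rates: dict[str, float]) -> list[str]:
    """Return the categories whose impact ratio falls below the threshold.

    selection_rates maps each demographic category to its selection rate
    over the current monitoring window (assumed, illustrative inputs).
    """
    best = max(selection_rates.values())
    if best == 0:
        return []
    return [
        category
        for category, rate in selection_rates.items()
        if rate / best < IMPACT_RATIO_THRESHOLD
    ]

# Example monitoring window: alert on any category drifting below 0.80.
flagged = flag_low_impact_ratios(
    {"White": 0.41, "Asian": 0.39, "Hispanic or Latino": 0.30}
)
if flagged:
    print("Fairness alert:", ", ".join(flagged))
```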
Best Practices for Responsible AI in Recruitment
Beyond minimum compliance, leading organizations adopt these practices:
1. Bias Testing Throughout the AI Lifecycle
Don't wait until annual audits. Test for bias during:
- Tool selection and procurement
- Initial deployment
- Ongoing operations
- After any significant changes to the algorithm or training data
2. Diverse Training Data
Ensure AI systems are trained on representative datasets that reflect the diversity of your candidate pool. Unrepresentative training data is a primary source of algorithmic bias.
3. Human-in-the-Loop Oversight
Maintain meaningful human review of AI recommendations, especially for final hiring decisions. AI should augment, not replace, human judgment in employment decisions.
4. Transparency with Stakeholders
Go beyond minimum notification requirements by:
- Explaining how AI tools work in accessible language
- Sharing your commitment to fairness and non-discrimination
- Providing easy channels for questions and concerns
- Being transparent about audit findings and remediation efforts
5. Vendor Due Diligence
When procuring third-party AEDTs:
- Request evidence of bias testing from vendors
- Include compliance obligations in contracts
- Require vendors to support your audit requirements
- Establish clear accountability for discriminatory outcomes
6. Cross-Functional Governance
Create a governance structure involving:
- HR and talent acquisition teams
- Legal and compliance functions
- Data science and IT
- Diversity, equity, and inclusion (DEI) leaders
The Broader Regulatory Landscape
While NYC LL 144 is currently among the most specific AI hiring regulations in the United States, it's part of a broader trend toward AI governance:
State-level initiatives: Several states are considering similar legislation for AI in employment
EU AI Act: Classifies certain AI systems in employment as "high-risk," requiring conformity assessments and transparency obligations
Federal proposals: Various federal bills addressing AI bias and discrimination are under consideration
Industry standards: Professional organizations are developing best practices and ethical guidelines for AI in HR
Organizations that establish robust AI governance practices now will be better positioned as this regulatory landscape evolves.
Conclusion: Compliance as Competitive Advantage
NYC Local Law 144 represents more than a compliance obligation—it's an opportunity to build trustworthy, fair, and effective AI-powered recruitment processes. Organizations that embrace rigorous bias testing and transparency don't just avoid penalties; they:
- Attract diverse talent by demonstrating commitment to fairness
- Make better hiring decisions through more accurate, unbiased tools
- Build stronger employer brands based on ethical AI practices
- Prepare for emerging regulations in other jurisdictions
- Reduce legal risks related to employment discrimination
The pathway to compliance begins with understanding your obligations, conducting thorough bias audits through qualified independent auditors, implementing transparent notification procedures, and continuously monitoring your AEDTs for fairness. By partnering with experienced AI governance providers and adopting best practices that exceed minimum requirements, organizations can harness the power of AI in recruitment while ensuring equal opportunity for all candidates.
Achieve NYC LL 144 Compliance with Holistic AI
Navigating the complexities of NYC Local Law 144 requires more than just understanding the regulations—it demands robust technical capabilities, deep expertise in AI governance, and tools purpose-built for compliance at scale.
Holistic AI provides the complete solution for NYC LL 144 compliance and beyond.
End-to-End AI Governance Platform
Our comprehensive platform delivers everything you need to ensure your recruitment AI systems are compliant, fair, and trustworthy:
Automated Bias Testing & Auditing
- Calculate selection rates and impact ratios across all required demographic categories automatically
- Assess intersectional bias with advanced statistical analysis
- Generate NYC LL 144-compliant audit reports with one click
- Meet the annual audit requirement with streamlined workflows
AI System Discovery & Inventory
- Identify all AEDTs in use across your organization, including shadow AI
- Maintain a comprehensive inventory of recruitment and HR AI tools
- Track compliance status for each system in your AI portfolio
Continuous Monitoring & Risk Mitigation
- Monitor AEDT performance in real-time to detect emerging bias issues before they become violations
- Receive automated alerts when systems drift outside acceptable fairness thresholds
- Implement corrective measures proactively rather than reactively
Regulatory Compliance Automation
- Navigate NYC LL 144, EU AI Act, and other AI regulations from a single platform
- Generate public disclosure documentation automatically
- Maintain audit trails and evidence of compliance efforts
- Stay ahead of evolving regulatory requirements
Red Teaming & Advanced Testing
- Go beyond basic bias audits with comprehensive adversarial testing
- Identify vulnerabilities and edge cases that could lead to discriminatory outcomes
- Leverage our award-winning red teaming expertise (OpenAI $50K competition winners)
Trusted by Global Enterprises
Leading organizations including Unilever, MAPFRE, and Siemens trust Holistic AI to govern their AI systems responsibly. Our platform combines technical rigor with practical usability, enabling compliance teams, HR leaders, and data scientists to collaborate effectively.
Why Holistic AI?
Independent Expertise: As an independent third-party auditor, we meet NYC LL 144's requirements while bringing objectivity to your compliance efforts
Research-Driven Approach: Our solutions are grounded in cutting-edge AI safety research and academic partnerships with institutions like UCL
Proven Track Record: Demonstrated technical credibility through competitive achievements and real-world deployments with major enterprises
Comprehensive Coverage: Unlike point solutions, we address the entire AI lifecycle—from discovery and inventory to testing, monitoring, and ongoing governance
Future-Proof Platform: Built to scale with your AI adoption and adapt to evolving regulations like the EU AI Act and emerging US federal requirements
Get Started Today
Don't let compliance concerns slow down your AI-powered recruitment initiatives. With Holistic AI, you can confidently deploy AEDTs knowing that:
- Your systems have been rigorously tested for bias
- You meet all NYC LL 144 requirements
- You have continuous visibility into fairness metrics
- You're prepared for audits and regulatory inquiries
- Your AI governance practices represent industry best practices
Transform compliance from a checkbox exercise into a competitive advantage.
Contact Holistic AI today to schedule a demo and discover how our AI governance platform can help you achieve NYC LL 144 compliance while building more effective, trustworthy recruitment systems.
Ready to ensure your recruitment AI is compliant and fair?
Visit holistic.ai or reach out to our team to learn how we can support your responsible AI journey.