How We Ensure Our Applicant Screening Software Stays Fair

Our AI-driven candidate scoring is NYC Local Law 144 compliant, bias-audited, and built for fast, fair, and transparent hiring.

PUBLISHED
July 9, 2025
AUTHOR
Rachel Valentino
""

Why We Audit Our AI Scoring System

Ensuring every candidate gets a consistent, fair shot is a huge task for any recruiter, especially under constant pressure to hire fast and at scale.

The stakes rise even further when artificial intelligence is brought into the talent acquisition process.

The best systems let you move at AI speed while, most importantly, helping you prevent bias rather than introducing blind spots that could affect a candidate's experience with your company.

Built for Volume & Audited for Accountability

Leading staffing and recruiting teams use HeyMilo’s candidate engagement AI and AI candidate screening tools to reach thousands of candidates daily. It has always been our intention that the systems supporting them be as fair as they are fast.

To continuously ensure that, we partnered with Warden AI, an independent AI assurance platform, to rigorously evaluate our AI scoring system through recurring bias audits.

How the Audit Works

Warden’s review follows the requirements of NYC Local Law 144, a regulation designed to ensure fairness and accountability when automated tools are used in employment decisions. It also aligns with frameworks like the EU AI Act, supporting readiness for compliance deadlines arriving in 2026.

Each audit evaluates a representative sample of interview data to measure how our AI candidate scoring system performs across different demographic groups.

Each sample covers a small portion of the many interviews happening daily and captures the variety of use cases teams rely on us for, which lets us monitor scoring behavior across different conditions.
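As a loose illustration of what "representative" can mean in practice, a stratified draw keeps each use case and demographic group in the sample at the same rate as in the full population. The column names, values, and 10% rate below are our own assumptions for the example, not Warden's actual methodology:

```python
import pandas as pd

# Hypothetical interview log: one row per completed AI interview.
interviews = pd.DataFrame({
    "use_case": ["warehouse", "retail", "nursing", "retail"] * 250,
    "sex":      ["F", "M", "F", "M"] * 250,
    "score":    [72, 65, 88, 59] * 250,
})

# A stratified 10% draw: every (use_case, sex) cell is sampled at the
# same rate, so the audit sample mirrors the population's mix.
sample = interviews.groupby(["use_case", "sex"]).sample(frac=0.10, random_state=7)
print(sample.groupby(["use_case", "sex"]).size())
```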

What We’re Monitoring

Warden evaluates our system using industry-standard methods such as:

  • Disparate Impact Analysis
  • Counterfactual Analysis

These techniques help identify whether certain groups are treated unfairly by the scoring model, even unintentionally. The audit checks how our system handles variance across demographics and alerts us to any sign of a potential issue.
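To make these two checks concrete, here is a minimal sketch of the underlying ideas in Python. Everything in it is illustrative: the column names, the pass decision behind the `advanced` flag, and the stand-in `score_candidate` function are assumptions for the example, not Warden's or HeyMilo's actual implementation.

```python
import pandas as pd

# --- Disparate impact: compare selection rates across groups ---
def impact_ratios(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.
    Under the common "four-fifths" rule of thumb, ratios below 0.8
    get flagged for closer review."""
    rates = df.groupby(group_col)[passed_col].mean()
    return rates / rates.max()

# Hypothetical audit sample: one row per scored interview.
audit = pd.DataFrame({
    "sex":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "advanced": [1,   1,   0,   1,   1,   0,   1,   1],
})
print(impact_ratios(audit, "sex", "advanced"))  # 1.0 for both: parity

# --- Counterfactual analysis: flip a protected attribute, re-score ---
def score_candidate(features: dict) -> float:
    """Stand-in for the real scoring model (hypothetical)."""
    return 70.0 + 5.0 * features["years_experience"]

def counterfactual_delta(features: dict, attribute: str, new_value) -> float:
    """Score change when only the protected attribute changes.
    A fair model should produce a delta at or near zero."""
    variant = {**features, attribute: new_value}
    return score_candidate(variant) - score_candidate(features)

candidate = {"years_experience": 3, "sex": "F"}
print(counterfactual_delta(candidate, "sex", "M"))  # 0.0: attribute has no effect
```

A real audit runs these comparisons across the full demographic and intersectional breakdown rather than a toy data frame, but the questions are the same: do selection rates diverge between groups, and does flipping a protected attribute move a score?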

Each new dataset is reviewed monthly, and results are continuously added to our public-facing AI Assurance Dashboard. That means hiring teams, legal teams, and partners can verify for themselves that the system is doing what it claims to do.

What the Latest Results Show

The most recent audit reviewed a sample of 1,361 interviews. Warden's Bias & Fairness audit found that candidates were scored equally across sex, race, and intersectional groups. No disparities were flagged, and the system met all NYC Local Law 144 compliance standards. You can explore the full results on our public AI Assurance Dashboard.

If You're Evaluating Fair AI Candidate Screening

Many solutions exist for teams that want to automate high-volume candidate screening fairly, and it's worth understanding what each option offers in terms of transparency, compliance, and actual support.

HeyMilo combines AI-driven candidate engagement (via SMS, WhatsApp, and email) with AI screenings for resume reviews and AI voice or video interviews — all accessible by candidates on mobile or browser. All scoring results feed directly into your ATS to support faster, more precise hiring decisions.

We back that with SOC 2 compliance, GDPR compliance, and ongoing third-party bias audits with full transparency via our AI Assurance Dashboard. And whether you're a recruiter or a candidate, our multilingual support team is here for you 24/7.

If you're exploring options for fair, efficient applicant screening software, book a demo to see how teams are making more hires using equitable AI!

