AI in Recruiting: Embracing Innovation While Ensuring Compliance

Artificial intelligence is rapidly transforming how companies recruit and hire talent. From resume screening to video interviews, AI tools can streamline the hiring process by automatically sorting, ranking, and even conducting preliminary interviews with candidates. In fact, a recent SHRM report found that one in four organizations now use AI to support HR tasks, and this adoption is growing quickly (theemployerreport.com). This trend is largely positive – AI can help recruiters fill roles faster and even broaden talent pools. However, it also brings new compliance challenges. Laws and regulations are emerging to ensure these tools are used fairly and without unlawful bias. The good news is that with the right approach, HR leaders can lean into AI as the future of recruiting while treating compliance as a helpful guide rather than an obstacle.

Here, we’ll break down the current AI landscape in recruiting, including key laws in the United States (with specific state examples), the EU’s landmark AI Act, and initiatives in countries like Canada, the UK, Australia, and Brazil. We’ll clarify what employers are responsible for versus what your vendors or SaaS providers must handle. Finally, we’ll outline practical steps to help your organization stay compliant and get the most out of AI in hiring. The tone here is conversational and positive – our aim is to make compliance easy to understand and even easier to embrace as you innovate in HR.

AI in Hiring: A Rapidly Evolving Landscape

AI has quickly become a recruiting game-changer. Companies now use AI-driven software to scan resumes, assess video interviews, chat with candidates, and more. These tools promise faster hiring cycles and reduced manual work. For example, platforms like Distro.io use AI to qualify and shortlist applicants automatically, helping teams focus on top talent sooner. AI can also reduce bias by considering a wider range of candidates beyond those from a few familiar schools or companies.

However, AI is only as good as the data and design behind it. If past hiring data reflects bias (for instance, against a certain gender or ethnicity), an AI tool might unintentionally amplify that bias (boughtonlaw.com). High-profile cases have shown that flawed AI can reject qualified candidates for the wrong reasons. This risk has not gone unnoticed by regulators. Lawmakers and agencies are creating rules to ensure AI doesn’t undermine equal opportunity. In short, AI in recruiting offers huge benefits, but it must be used responsibly. That’s where the new wave of AI hiring laws comes in – to guide employers in using these tools fairly and transparently.

AI Hiring Laws in the United States

In the U.S., no single federal law specifically regulates AI in hiring yet. Nevertheless, existing anti-discrimination laws already cover AI tools. Laws like Title VII of the Civil Rights Act and the Americans with Disabilities Act apply to any hiring practice – whether carried out by a human or an algorithm. In other words, if your AI recruiting tool unfairly disadvantages candidates of a protected group, your company can be held liable under discrimination laws just as if a human manager made the biased decision. The Equal Employment Opportunity Commission (EEOC) has made it clear that it is watching AI in hiring closely. In May 2023 the EEOC released guidance warning that use of AI in employment decisions could violate Title VII if it causes a disparate impact. Multiple federal agencies jointly affirmed in 2024 that civil rights and consumer protection laws apply to automated systems just as they do to traditional practices. So while we don’t yet have a federal “AI hiring law,” employers are expected to keep AI use within the bounds of existing laws.

State and Local Laws: In the absence of federal statute, several states and cities have jumped in with their own rules. Here are some of the most important ones for recruiting compliance:

Illinois: Illinois has been a pioneer. Its Artificial Intelligence Video Interview Act (effective 2020) was one of the first laws of its kind. It requires employers to notify candidates if AI will be used to analyze a video interview, explain how the AI works, and get the candidate’s consent before proceeding. An amendment added in 2022 also mandates collecting and reporting demographic data (race and ethnicity) on candidates to monitor for bias if AI video analysis is used exclusively in screening. More recently, Illinois expanded its Human Rights Act (effective Jan 1, 2026) to make it explicitly unlawful for employers to use AI in any employment decision if it results in discrimination against protected classes. This new law also prohibits using proxies like ZIP codes that could stand in for protected traits. Illinois requires notice to applicants when AI is used in any significant hiring or employment decision. (Notably, unlike some other laws, Illinois doesn’t yet require a formal bias audit or impact assessment of the AI tool.)

Colorado: Colorado was actually the first state to pass a broad law targeting AI bias at work. In 2024 it enacted the Colorado “Protections in AI Systems” Act (effective 2026), which imposes duties on developers and users of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. The Colorado law is somewhat inspired by the EU’s approach (risk-based regulation), and it creates a safe harbor if companies follow specified compliance steps. If you deploy an AI system deemed “high-risk” (which would likely include hiring tools), you should be prepared for requirements like risk assessments, transparency, and monitoring under Colorado law.

New York City: New York City grabbed headlines with its Local Law 144 of 2021, the Automated Employment Decision Tools (AEDT) law. Enforcement began July 5, 2023. This law requires employers (and employment agencies) to commission an independent bias audit of any AI-driven hiring tool before using it on NYC candidates. The bias audit must be repeated annually, and a summary of results must be posted publicly on the company’s website. NYC also mandates that candidates be notified at least 10 business days in advance that an automated tool will be used, and be told what job qualifications the tool is assessing. In practice, this means if you want to use a résumé screening AI or video interview algorithm in NYC, you need to have an up-to-date bias audit of that tool showing it has been tested for disparate impact (e.g. does it reject women or minority candidates at higher rates?). NYC’s law really put AI hiring tools under a microscope in terms of accountability and transparency. (For a concrete sense of the math these audits center on, see the impact-ratio sketch just after this list.)

Maryland: Maryland took a narrower focus with a law in effect since October 1, 2020 that protects candidates during interviews. Maryland bans the use of facial recognition tech in pre-employment interviews without written consent. If you want to use an AI video interview system that analyzes a candidate’s facial expressions or identity, you must first obtain explicit consent via a waiver. This ensures candidates aren’t unknowingly being analyzed by AI during interviews.

Other States: A growing number of states are considering or have passed AI-in-hiring measures. For instance, California, New Jersey, New York (state), Connecticut, Massachusetts, Washington, and several others have bills in progress that would regulate AI-driven recruiting. Many of these proposed laws echo similar themes: requiring bias audits or impact assessments, candidate notifications, and avoiding discriminatory outcomes. Hawaii passed a law requiring transparency in automated decision tools for employment. Even where a specific state law isn’t in effect, remember that general anti-bias laws still apply. Always keep an eye on the legal developments in any state where you hire, as this is a fast-moving area with new bills introduced each year (hrexecutive.com).
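To make the bias-audit math concrete, here is a minimal sketch of the impact-ratio calculation that NYC-style audits center on: each group’s selection rate divided by the selection rate of the most-selected group. The function names and sample data are hypothetical, and the 0.8 threshold is the EEOC’s traditional four-fifths rule of thumb rather than a legal bright line – a real audit under Local Law 144 must follow the city’s published rules.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, was_selected) pairs."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)
    return {g: selected[g] / applied[g] for g in applied}

def impact_ratios(rates):
    """Each group's selection rate relative to the most-selected group."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes: (demographic group, passed AI screen?)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

rates = selection_rates(outcomes)  # group A: 0.40, group B: 0.25
for group, ratio in impact_ratios(rates).items():
    flag = "investigate" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"group {group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A ratio below the threshold isn’t proof of discrimination, but it is exactly the kind of anomaly an audit should surface and prompt you to dig into.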

The takeaway for U.S. employers is clear: you are responsible for ensuring your fancy new AI hiring tool doesn’t discriminate. Compliance involves things like informing candidates, auditing your tools for bias, and being prepared to justify that the AI is fair. It may sound a bit daunting, but these requirements ultimately protect your organization from liability and, importantly, help you build more diverse and merit-based hiring processes. In essence, they encourage better recruiting practices.

The EU AI Act and Global Trends (EU, Canada, UK, Australia, Brazil)

Outside the U.S., other governments are also acting to regulate AI in recruitment. Let’s tour some of the major regions:

European Union (EU): The EU is leading the way with a comprehensive law known as the EU AI Act. Politically agreed in late 2023 and formally adopted in 2024, this regulation categorizes AI systems by risk level. Using AI for employment or hiring purposes is classified as “high-risk,” which means strict requirements will apply. Under the AI Act, high-risk AI tools (like recruiting software) will need to meet standards for transparency, provide documentation of how they work, undergo risk assessments, and include human oversight in their operation. The rules phase in over several years, with the first provisions applying in 2025 and most high-risk obligations following after that. Importantly, the EU AI Act can have extraterritorial reach – if you are a company outside Europe but you use AI that affects people in the EU, you may still have to comply. For example, if a U.S.-based company uses an AI tool to hire remote workers in Germany, that tool and its provider might need to conform to the EU requirements. The AI Act also outright bans certain uses of AI deemed too harmful (for instance, emotion-recognition systems in the workplace, or biometric categorization that infers sensitive traits such as race). For most HR teams, the key point is that by the time the EU law fully applies, any AI hiring software in Europe must be transparent, audited, and overseen by humans to ensure fairness. Many vendors are already adapting to these rules as a baseline, similar to how global companies adjusted to GDPR for privacy.

United Kingdom: The UK, post-Brexit, is not subject to the EU AI Act and so far has taken a more flexible approach. There isn’t a specific UK law on AI in hiring yet. Instead, the UK is relying on existing laws and regulators (and is discussing general AI principles). In practice, this means equality and data protection laws cover AI outcomes. The UK Equality Act 2010 already makes it illegal to discriminate in hiring – if an AI tool rejects someone due to a protected characteristic, the employer faces liability just as they would for a human decision. Likewise, the UK Data Protection Act (similar to GDPR) gives candidates rights if purely automated decisions are made about them; in many cases, candidates can request human review of an AI-made decision. In 2023, the UK government released an AI regulation white paper advocating a pro-innovation, principles-based framework rather than new hard laws. They’ve tasked regulators like the Equality and Human Rights Commission to monitor AI use in their domains. The bottom line for UK employers: ensure your AI recruiting tools don’t breach the Equality Act (no biased outcomes) and that you remain transparent with candidates. Even without a new AI hiring law, if an applicant challenges an AI-driven rejection as discriminatory, your company must answer to the same tribunals and courts as any other bias claim. So it’s wise to apply similar compliance steps – bias testing, documentation, etc. – as one would under the EU or U.S. frameworks. The UK may introduce more formal AI legislation in coming years, but for now compliance is about using AI consistently with existing employment and privacy laws (whitecase.com).

Canada: Canada is on the cusp of new AI regulations. Federally, Canada proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27. AIDA would regulate “high-impact” AI systems, likely including hiring tools, by enforcing requirements around fairness, transparency, and human oversight. As of mid-2025, AIDA has not become law – Bill C-27 died on the Order Paper when Parliament was prorogued in early 2025, and federal AI legislation will need to be reintroduced to move forward. Meanwhile, one of Canada’s largest provinces has taken a concrete step: Ontario passed a law requiring transparency in tech-assisted hiring. Starting January 1, 2026, Ontario employers must disclose in job postings if AI or automated tools are used to screen or select applicants. The goal is to improve transparency and accountability in AI-assisted hiring. Other provinces may follow suit; there’s an expectation that British Columbia and others will consider similar measures. Also, Canadian privacy laws like PIPEDA (and province-level laws such as BC’s PIPA) already apply – meaning employers must be careful about how candidate data is collected and used by AI. Notably, the proposed Consumer Privacy Protection Act (CPPA), another part of Bill C-27, would require plain-language explanations if any automated decision-making is used. For now, Canadian employers should approach AI hiring tools with caution: maintain human oversight, audit for bias, and be transparent with candidates (many are even doing voluntary disclosure to build trust). Canadian human rights laws also remain in effect, so bias that creeps into AI could violate those. In short, Canada is aligning with the global trend: AI in hiring is welcome, but it must be fair and transparent. Watch for federal AI legislation to return, and in the meantime, adopt best practices proactively.

Australia: Australia has no AI-specific hiring law yet, but it’s paying attention to the issue of algorithmic bias. Australian employers are subject to strong anti-discrimination laws (both federal and state) covering traits like sex, race, age, and disability. If an AI recruitment tool results in unjustified discrimination against, say, older candidates or people with accents, the employer can be held accountable under these laws. In fact, legal experts in Australia note that both the vendor of an AI tool and the employer using it could potentially be liable if the tool causes discriminatory outcomes. This shared responsibility is important: you can’t fully outsource blame to a software provider if something goes wrong. Australia’s privacy law (the Privacy Act) also comes into play regarding how candidate data is handled, though it currently doesn’t specifically regulate AI decisions. The Australian Human Rights Commission has recommended developing clearer guidelines or even legislation for AI, and there are calls for an AI Ethics framework to be applied in employment decisions. Until new laws emerge, the safest path for Australian companies is to treat AI hiring tools as if they were part of your own decision-making: ensure they are consistent with discrimination laws, and provide accommodations or human alternatives for candidates who might be disadvantaged by a purely automated process (for example, a candidate with a speech disability who might struggle with an AI video interview should be offered a different assessment method). Transparency is also key – letting candidates know an AI is used and how it works can build trust and allow them to raise concerns. Australia is likely to craft more formal AI regulations in the future, but even now, compliance means using AI in a way that upholds the principles of fairness and equality already embedded in Australian law.

Brazil: Brazil is emerging as a leader in AI governance in Latin America. While no AI-specific hiring law is in force yet, Brazil is on track to pass a comprehensive AI law. In 2023, Brazil’s Congress introduced Bill 2338/2023, which aims to establish a broad legal framework for AI. This proposed law emphasizes transparency and accountability – it would require organizations to conduct regular impact assessments for high-risk AI systems (very similar to bias audits) and to publicly report on measures taken to mitigate bias. The idea is to create a strong “chain of accountability” from the AI developers to the end users (companies deploying AI). Although the bill is not yet enacted, it shows Brazil’s commitment to preventing AI-driven discrimination. Brazil also has a general data protection law (LGPD) which, like Europe’s GDPR, gives individuals rights regarding automated decisions. Under LGPD, candidates could possibly request information about AI-driven decisions or even a human review of an algorithm’s decision in some cases. Additionally, Brazilian labor laws and the constitution prohibit discrimination, so again, any biased outcome from an AI tool could run afoul of existing law. Companies operating in Brazil should watch the progress of the new AI bill and in the meantime ensure their AI recruiting practices are bias-tested and transparent. Given the hefty fines the bill proposes (up to R$50 million per violation), it’s clear Brazil intends to take compliance seriously. The upside is that, as elsewhere, these measures will likely improve trust in AI and lead to better hiring decisions overall (security.ai).

Company vs. Vendor: Who Is Responsible for AI Compliance?

One of the biggest questions for employers using AI is: if you bought a recruiting AI tool from a vendor, who bears the responsibility for compliance? The short answer is both the employer and the vendor have roles to play, but the company (employer) cannot escape liability by pointing to a vendor if something goes wrong. Let’s unpack this.

Employers’ Responsibilities: If you are using an AI-driven hiring platform or software, you as the employer are responsible for how that tool impacts your candidates. Regulators and laws generally view the employer as the entity making the hiring decision, even if an AI is used to assist. For example, New York City’s bias-audit law explicitly places the onus on the employer or employment agency to ensure a bias audit is conducted before using an AI tool. The employer must also notify candidates and publish the audit results. Similarly, the new Illinois law and others require that employers notify applicants about AI usage. If a candidate feels an AI system discriminated against them, they will likely file a complaint or lawsuit against the employer (who made the hiring decision), not just the software maker. This means companies need to perform due diligence on any AI tool they deploy: Has it been tested for bias? Is it used in a way that’s fair? Are we following any required procedures (like getting consent or doing audits)? Essentially, from a compliance perspective, you should treat an AI tool as an extension of your HR team – you’re accountable for its actions.

Vendors’ Responsibilities: On the other side, vendors and SaaS providers of AI tools also have critical responsibilities. A reputable vendor should be designing their AI to comply with laws and should assist clients in meeting their obligations. For instance, some AI hiring platforms proactively conduct independent bias audits on their tools and provide the results to all their clients. (In NYC, the law allows a single bias audit done by a vendor to be used by multiple employers, which helps reduce duplicate effort.) Good vendors will also provide documentation about how their AI works (so you can fulfill any “explain the AI” requirements like in Illinois and other laws). They should also have an avenue for candidates to request human review if required by law. If we borrow terminology from upcoming regulations, the vendor is often the “AI provider” and the employer is the “AI user.” Under frameworks like the EU AI Act, providers of high-risk AI (the vendors) will have to ensure the system is designed and tested to meet standards, while users (employers) must use it correctly, monitor outcomes, and inform people appropriately. In real terms, if you’re using a platform like Distro.io (as an example of a recruiting SaaS with AI features), you should expect that Distro.io is building compliance into its product – e.g. ensuring their AI models are trained on diverse data, performing bias testing, and allowing configuration to meet various legal requirements. But you as the employer need to configure and use it in a compliant way – for example, turning on features that send required notices to candidates, not over-relying on the AI without any human check, and working with Distro.io to obtain any audit reports or technical details you might need to demonstrate compliance.

A helpful way to look at it is that compliance is a shared partnership: vendors should “bake in” fairness and transparency, and employers must “operationalize” that fairness and transparency in their own processes. Contractually, it’s wise for companies to demand commitments from vendors about legal compliance (and even indemnification if the tool’s bias causes issues). Practically, if an audit or regulator questions an AI hiring decision, the employer should be able to produce evidence of what the vendor has done (e.g. audit reports, model factsheets) and what the employer has done (e.g. candidate notices, documentation of how the tool is used alongside human judgment).

It’s worth noting that many laws explicitly envision this collaboration. For example, the NYC rules implied that an employer may ask their vendor to conduct the required bias audit, since the vendor might have the data and expertise to do so. Another example: Colorado’s law on high-risk AI provides a “reasonable care” safe harbor if developers and deployers (i.e. vendors and users) follow certain best practices. And as mentioned earlier, in Australia both parties could be on the hook for discrimination, underscoring that everyone involved in the AI’s design and use should prioritize fairness.

Key Point: You cannot fully outsource compliance. If you use an AI hiring tool, your team needs to understand what it does, stay updated on legal requirements, and configure/use the tool accordingly. Meanwhile, choose vendors who take compliance seriously – they should be keeping up with laws in all the markets they serve (U.S., EU, etc.) and updating their software to help you comply. When both the vendor and the employer commit to responsible AI, you greatly reduce risk and set the stage for successful outcomes.

Best Practices for Embracing AI in Recruiting Responsibly

Compliance doesn’t have to throw a wet blanket on your AI ambitions. In fact, following some best practices will not only keep you on the right side of the law, but also improve the effectiveness and fairness of your hiring. Here are steps companies should consider to confidently use AI in recruiting:

Stay Informed on Laws: Keep track of the legal requirements in the regions where you hire. For instance, know if you’re recruiting in New York City so that you can arrange a bias audit in advance, or if you’re posting jobs in Ontario, be ready to include an AI usage disclosure. Laws are evolving, so periodically review updates (many regulators issue guidance – e.g. the EEOC’s guidance on AI was a heads-up). If you operate across multiple states or countries, aim to meet the strictest applicable rules as a baseline.

Choose and Vet Vendors Carefully: Select AI recruiting platforms that can demonstrate compliance. Ask potential vendors about their bias mitigation strategies and request any audit results or certifications. A good vendor should be transparent about how their AI model was trained and tested for bias. Include compliance checkpoints in your procurement – for example, ensure the contract requires the vendor to assist with any required audits or candidate inquiries. “Vet your AI vendors carefully,” as legal experts advise, and address AI-specific risks in your vendor contracts.

Ensure Transparency & Candidate Consent: Be open with candidates about your use of AI. This could mean adding a line in your job postings or career site: “We use an AI system to help screen applications – it does XYZ.” In some places this is legally required (e.g. consent in Illinois for video interviews, notice in NYC, disclosure in Ontario). Even where not required, transparency builds trust. Make sure candidates know how to ask for an accommodation or human alternative if they are uncomfortable with an AI evaluation.

Keep a Human in the Loop: AI should assist, not fully replace, your human decision-making. Retain human oversight especially for final or sensitive hiring decisions. For example, use AI to score or rank candidates, but have a recruiter review those rankings rather than blindly rejecting everyone below a certain AI score. Human oversight can catch obvious mistakes (maybe a great candidate was scored low due to an unconventional resume format, which a person can recognize as an AI quirk). Many regulations (like the EU AI Act) explicitly call for human oversight on high-risk AI. And studies show that combined AI + human judgment often leads to better outcomes than AI alone. So make it a policy: AI informs our hiring, but does not get the final say on its own. (A minimal routing sketch illustrating this policy appears after this list.)

Perform Regular Bias Audits or Reviews: Even if your local law doesn’t force you to, it’s wise to periodically check your AI tools for biased outcomes. This could be as simple as tracking demographics through stages of your hiring funnel to spot anomalies (the impact-ratio sketch earlier in this article is one simple version of such a check). Some vendors provide built-in bias reports; use them. And doing your own annual review is a proactive way to catch issues early. Document any findings and adjust the tool or your usage as needed (for instance, if you find the AI consistently scores one group lower, investigate why and push the vendor for improvements).

Maintain Documentation: Keep records of how your AI tools are implemented and used. Save those bias audit results, candidate consent forms, and notification templates. If a regulator or court ever asks, you want to show a paper trail of responsible use. Documentation might include the criteria the AI uses (many laws require you to be able to explain the “qualifications and characteristics” the tool evaluates). It’s also smart to have an internal policy on AI usage in hiring – what tools are approved, for what purposes, with which safeguards. This shows you’re in control of the technology, not the other way around. (An illustrative record structure is sketched after this list.)

Train Your HR Team: Make sure recruiters and HR staff understand the AI tools and their limitations. Provide training on how to interpret AI recommendations and how to spot potential biases or errors. If, say, the AI flags a candidate as “low fit,” a trained recruiter should know how to double-check rather than discard the candidate outright. Training should also cover privacy and ethical handling of candidate data through these systems. Essentially, empower your people to work effectively with AI. When humans and AI work together thoughtfully, you get the best of both – efficiency from the software and wisdom from experienced professionals.

Protect Candidate Privacy: Compliance isn’t just about bias – it’s also about handling data responsibly. Ensure that any personal data processed by the AI (resumes, video recordings, etc.) is secured and used only for legitimate hiring purposes. Follow applicable privacy laws (like GDPR in Europe, LGPD in Brazil, or state laws in the U.S.). If your AI involves video or voice data, be extra cautious – some jurisdictions treat biometric data with special care. Always have a plan for data retention and deletion once hiring for a role is complete, so you’re not holding sensitive data longer than necessary. (A simple retention check is sketched after this list.)

Foster a Positive Narrative: Finally, as an HR leader, help shape the narrative that AI + compliance = better hiring. Explain to stakeholders that these laws and steps aren’t about stifling innovation – they’re about using AI successfully. A compliant AI tool is one that is more likely to be fair and effective, which means better hires and a more diverse team. When candidates see that you use AI thoughtfully and fairly, it enhances your employer brand. Compliance can truly be a competitive advantage, signaling that your company is forward-thinking and responsible.
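To ground a few of the practices above, here are some short, hedged sketches. First, the “human in the loop” policy: a minimal routing function in which the AI score prioritizes review but never rejects anyone on its own. The thresholds and queue labels are hypothetical.

```python
def route_candidate(ai_score: float) -> str:
    """Route a candidate based on an AI screening score in [0.0, 1.0].

    The score decides how quickly a human looks at the application -
    there is deliberately no automated-rejection branch.
    """
    if ai_score >= 0.8:
        return "priority_human_review"   # likely match: surface first
    if ai_score >= 0.4:
        return "standard_human_review"
    return "flagged_human_review"        # low score: extra scrutiny, not rejection

# Every score, however low, still ends in a human review queue.
assert route_candidate(0.05).endswith("human_review")
```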
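Second, the documentation practice: one lightweight approach is a structured record per AI tool deployment, so audit dates, candidate notices, and human sign-off live in one place. The fields below are illustrative, not drawn from any statute, and the tool and vendor names are made up.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolUsageRecord:
    """Illustrative audit-trail entry for an AI tool used in hiring."""
    tool_name: str                  # e.g. a hypothetical "ResumeRanker"
    vendor: str
    model_version: str
    criteria_evaluated: list[str]   # qualifications the tool assesses
    candidate_notice_sent: date     # when applicants were notified
    last_bias_audit: date           # most recent independent audit
    human_reviewer: str             # who reviews the AI's output

record = AIToolUsageRecord(
    tool_name="ResumeRanker",
    vendor="ExampleVendor Inc.",
    model_version="2.3.1",
    criteria_evaluated=["years of experience", "required certifications"],
    candidate_notice_sent=date(2025, 3, 1),
    last_bias_audit=date(2025, 1, 15),
    human_reviewer="recruiting lead",
)
```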
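Finally, the privacy practice: a simple retention check that flags candidate data for deletion once a role has closed and a retention window has passed. The 365-day window is purely illustrative – actual retention periods vary by jurisdiction and should come from your legal team.

```python
from datetime import date, timedelta

RETENTION_WINDOW = timedelta(days=365)  # illustrative only; confirm with counsel

def is_due_for_deletion(role_closed_on: date | None, today: date | None = None) -> bool:
    """True once the role is filled/closed and the retention window has passed."""
    if role_closed_on is None:
        return False  # hiring still open; data remains in active use
    today = today or date.today()
    return today - role_closed_on > RETENTION_WINDOW

# Example: a role closed 400 days ago is due for purging under this policy.
print(is_due_for_deletion(date.today() - timedelta(days=400)))  # True
```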

Conclusion: Compliance as an Enabler, Not an Enemy

AI is undoubtedly a powerful ally for the future of recruiting. It can save time, reduce drudgery, and even help uncover great candidates who might be overlooked. Yes, the regulatory landscape is evolving, and it can seem complex at first. But these compliance measures are ultimately about guarding against bias, protecting candidates, and building trust. By leaning into both AI and compliance, HR leaders can innovate with confidence. Think of it this way: just as you wouldn’t operate heavy machinery without safety guards in place, you shouldn’t deploy AI without the “safety guards” of audits, transparency, and human oversight. Those guards don’t slow you down – they keep you on track and out of trouble.

In summary, embrace AI as a tool to enhance your recruiting, and embrace compliance as the framework that makes sure your AI usage is fair, legal, and beneficial to all. When done right, AI + compliance leads to hiring processes that are efficient and equitable. That’s a future every HR leader can get excited about. Here’s to hiring smarter, faster, and with full confidence that we’re doing it the right way! Compliance isn’t the enemy of AI – it’s its best friend in creating a brighter future for talent acquisition.
