Policy Monitor

United States/New York – Automated Employment Decision Tool Law – Local Law 144

The law, whose enforcement began in early July 2023, regulates the use of automated employment decision tools in hiring, recruitment, and promotion processes and requires employers to take specific steps before and after using these tools. It is the first law of its kind in the United States, and similar initiatives are expected elsewhere in the US.

What: Law

Impact score: 1

For whom: Policy makers, businesses, labour unions, researchers, government administrations

URL: https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf

In the United States, the use of automated tools in recruitment procedures is already well established. New York City's Local Law 144 is the first legislative initiative to attach binding obligations to their use. It aims to reduce bias in processes that use algorithms to recruit, hire, or promote employees.

Under the new law:

  • Employers and employment agencies in NYC are prohibited from using artificial intelligence and algorithm-based technologies to assess candidates and employees unless they conduct an independent bias audit before deploying the tools and make the results public. The bias audit assesses the tool's impact on selection outcomes by sex, race, and ethnicity (see the worked example after this list).
  • Companies using these types of algorithms must disclose their use to employees and job candidates. Applicants have the right to request and receive information about the data being gathered and assessed.
  • Responsibility for complying with the law lies with New York City employers, not the software vendors who create these AI tools.
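
For context on what the audit measures: under the rules adopted by the NYC Department of Consumer and Worker Protection (DCWP), the central metric is the impact ratio, computed for each demographic category as

    impact ratio = selection rate of the category / selection rate of the most-selected category

As a worked example with hypothetical figures: if 40% of male applicants and 30% of female applicants are selected, the impact ratio for women is 0.30 / 0.40 = 0.75.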

Data requirements

A bias audit must use historical data from the AI tool. This data can come from one or more employers or employment agencies that use the tool. Where there is insufficient historical data for a statistically significant bias audit, an employer or employment agency may rely on a bias audit that uses test data; in that case, the audit's summary of results must explain why historical data was not used and describe how the test data was generated or obtained.
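
To make the calculation concrete, below is a minimal Python sketch of the core computation an audit performs on historical (or, failing that, test) data, extending the hypothetical figures from the worked example above. Category labels and counts are illustrative assumptions, not prescribed by the law:

    from typing import Dict

    def selection_rates(applicants: Dict[str, int], selected: Dict[str, int]) -> Dict[str, float]:
        """Selection rate per category: number selected / number of applicants."""
        return {cat: selected[cat] / applicants[cat] for cat in applicants}

    def impact_ratios(rates: Dict[str, float]) -> Dict[str, float]:
        """Impact ratio per category: its selection rate divided by the highest rate."""
        top = max(rates.values())
        return {cat: rate / top for cat, rate in rates.items()}

    # Hypothetical historical data, pooled from employers using the same tool.
    # If per-category counts were too small for statistical significance, the
    # same computation could run on test data instead, with the summary of
    # results explaining that choice and how the test data was produced.
    applicants = {"men": 500, "women": 480, "nonbinary": 20}
    selected   = {"men": 200, "women": 144, "nonbinary": 7}

    rates = selection_rates(applicants, selected)    # 0.40, 0.30, 0.35
    ratios = impact_ratios(rates)                    # 1.00, 0.75, 0.875
    print(ratios)

A real audit would also report these figures for intersectional categories (e.g., race/ethnicity crossed with sex), as the DCWP rules require.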

Penalties & scope

Companies found not to be in compliance face penalties of $375 for a first violation, $1,350 for a second, and $1,500 for a third and any subsequent violations. The scope of Local Law 144 extends beyond workers based in NYC: anyone performing or applying for a job in the city is eligible for the law's protections.

Significance

The law applies to companies with employees operating within New York City, but labor specialists expect its impact to extend across the country. Several jurisdictions, including California, New Jersey, New York, Vermont, and the District of Columbia, are developing legislation to oversee the use of AI in recruitment. Illinois and Maryland have already passed laws restricting certain AI technologies, primarily for workplace monitoring and the screening of job applicants.

Critique

Critics argue that the law is overly sympathetic to business interests and has been watered down, reducing its effectiveness.

  • Narrow definition of the tools

Critics argue that the law's definition of "automated employment decision tools" is narrow: it covers only cases where the technology is the sole or primary factor in a hiring decision, or where it is used to overrule a human decision. Cases where AI is used in conjunction with human decision-makers, the most common scenario in practice, fall outside its scope. Critics see this as a failure to address AI-driven discrimination as it actually occurs in hiring.

  • Exclusion of key discrimination targets

The law is criticized for not covering discrimination against older workers or people with disabilities. While it addresses bias based on sex, race, and ethnicity, critics argue that it should cover a broader range of protected groups to ensure comprehensive fairness and equality in hiring practices.

  • Lack of explainability requirement

Some critics contend that the law falls short by not requiring an explanation of how AI algorithms reach their decisions (explainability). In situations where AI affects people's lives, such as hiring, an account of the decision-making process is widely considered important. The law's focus on measuring impact ratios, however, does not probe the internal workings of the algorithms, a concern for advocates of transparency and accountability.

Businesses have also criticized the law, arguing that the independent-audit requirement is not yet feasible because the auditing landscape for AI applications in the US is still nascent.