If you’re scanning job listings in New York City, there’s a chance you’ll spot new information: a notice that the company uses automated technology to make employment decisions, and a link to an audit.
That’s because of a new law that went into effect in July, which is designed to arm job-seekers with important information and make companies more accountable. It’s the first of its kind in the country.
There are algorithmic tools on the market for just about every part of the hiring process, from technology that advertises job listings to certain groups of people, to software that scrapes your social media profile and draws conclusions about your personality, to tools that scan resumes and flag promising ones, to software that scores applicants based on the sound of their voice.
Unfortunately, the apps that could screen you out of your dream job can be biased or wildly off base. Last summer, a tutoring company settled a suit with the U.S. Equal Employment Opportunity Commission (EEOC) after the agency claimed that the company used AI-powered software to illegally reject female candidates over 55 and male candidates over 60. When a reporter tested an AI interview app a couple of years ago, she received a score of 6 out of 9 for English competency despite answering questions exclusively in German.
In response to these concerns, the New York City Council passed Local Law 144 in 2021, which requires employers to audit their algorithmic tools for bias and post the results. As lawmakers in Europe, D.C., and state capitals across the country debate how to tackle AI bias and transparency, how NYC’s law is working (or not working) for job applicants can provide important lessons. That’s why researchers at Cornell University, the Data & Society Research Institute, and Consumer Reports decided to study how useful the law is to job-seekers now that it has been enforced for several months.
They found that the law was falling far short of its goals, and that, in practice, job-seekers can’t be expected to find or make use of the notices and audits companies are required to disclose.
What does the New York City law require?
The law creates two new obligations for employers in the city who use certain algorithmic tools for hiring and promotion decisions.
The first is that companies have to hire an independent auditor to check the technology for bias every year, and the resulting audit reports must be posted publicly on the employer’s website. The second is that companies have to notify job-seekers and employees up for promotion about the company’s use of algorithmic tools and provide information about the data the technology uses.
If everything worked according to plan, a job-seeker could spot an opening, see that the company uses AI in its hiring process, and check out the audit report to see if the technology is likely to be biased against her. Theoretically, this transparency, combined with the cumulative effect of job-seekers’ decisions about where to apply, would drive employers to adopt less biased software.
How did the researchers study the law?
To understand how the law was working in practice, 155 student investigators enrolled in an undergraduate course at Cornell University searched for audits and notices on the websites of 391 employers. To decide which companies to study, the researchers put together a list of employers, including companies that had recently hired Cornell graduates, been ranked as a top 100 internship provider, or been identified in a pilot study, and took a random sample of that list.
Students searched for audits and notices on companies’ websites, as well as their listings on LinkedIn and Indeed. They spent no more than 30 minutes per employer, recorded any bias audit or notice they found, and noted details about the search process itself. Researchers then checked their findings and sent follow-up messages to employers, asking them to correct any errors in the data.
What did they find?
Out of the 391 employers included in the study, 267 had open jobs in NYC’s jurisdiction when a student visited their website. Of those 267 employers, researchers found a total of 14 audit reports (5%) and 12 notices (4%) about automated decision-making tools.
Researchers also examined the audits and found that most showed the tools didn’t run afoul of the threshold for bias that the EEOC uses as a rule of thumb.
However, forthcoming research cited in the paper suggests that many automated employment tools do not meet the EEOC’s threshold. Given that mismatch, the researchers reasoned that employers were more likely to post favorable audits and withhold ones that showed bias. Posting bias audits that evidenced discrimination could attract lawsuits, so “legal counsel may advise [companies] that non-compliance with LL 144 is less risky than providing evidence for such litigation,” the researchers wrote.
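To see what that threshold measures, here is a minimal sketch of the kind of check a bias audit performs, assuming the rule of thumb referenced above is the EEOC’s “four-fifths rule,” under which a group whose selection rate falls below 80 percent of the highest group’s rate is treated as showing possible adverse impact. The group names and rates below are hypothetical.

```python
# Minimal sketch of a four-fifths-rule check, using made-up numbers.
# Each value is the share of applicants in that group whom the tool
# advanced to the next stage of the hiring process.
selection_rates = {
    "group_a": 0.30,
    "group_b": 0.27,
    "group_c": 0.21,
}

# The rule compares every group against the group with the highest rate.
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    status = "possible adverse impact" if impact_ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {status}")
```

In this made-up example, the third group’s impact ratio of 0.70 falls below the four-fifths threshold, which is exactly the kind of result the researchers suspect employers would rather not post.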
Why did such a small share of employers post audits and notices? One major issue is that the law gives employers a lot of discretion.
For example, it is essentially left up to companies to determine if they’re covered by the law. Local Law 144 only applies to algorithmic tools that are used to “substantially assist or replace discretionary decision making.” What, exactly, counts as a tool “substantially assisting” in a decision? The city’s Department of Consumer and Worker Protection wrote regulations to implement the law, which defined “substantially assisting” to mean situations where 1) the tool’s output is the only factor in the decision, 2) the tool’s output is the most important factor in a set of criteria, or 3) the tool’s output is used to override conclusions based on other factors, including human decision-making.
Armed with that definition, companies get to decide for themselves whether they’re using an algorithmic tool in a way that “substantially assists” their employment decisions. If a company says it’s using AI to analyze the voice and body language of a job candidate, along with a couple of other equally important factors, it’s off the hook. And if, in practice, the output of the AI is actually the factor that a hiring manager pays the most attention to, it’s not clear how that would come to light.
Because the law leaves so much up to employers, it wasn’t possible for the researchers to determine how many employers are actually breaking the law. When researchers couldn’t find an audit, that didn’t necessarily mean the company was flouting the law. Perhaps the company wasn’t using automated tools. Or maybe it was, just not in a way that “substantially assisted” its decisions. Maybe it was still trying to find an auditor, or had tucked its audit in a hard-to-find place and didn’t respond to follow-up requests.
The student investigators overwhelmingly found the experience of searching for audits challenging, time-consuming, and frustrating. Sometimes the audit was in a footer of the website; sometimes it was in an FAQ; sometimes it was in a dropdown menu; sometimes it was in a downloadable PDF. This bodes poorly for the law’s real-world effectiveness. If the students trained to search for audits struggled to find them, it seems likely the average job-seeker—who may not know to look for audits in the first place—would too.
These results don’t come as a total surprise. Civil rights and worker advocates say New York City’s law started out with weak requirements, and then was further watered down.
Why does this matter?
One lesson is that laws aimed at changing businesses’ decisions by giving consumers information can wind up creating a lot of work for consumers. When that work is challenging, consumers probably won’t do it, and businesses won’t feel the pressure to change.
Another lesson is that when employers have too much flexibility in how they interpret and comply with a law, it’s much less likely that the law will be useful to consumers. The researchers propose that similar laws should apply to software based on its function, rather than how it’s built or how exactly it is used. If a tool ranks job applicants, for example, it could be covered, regardless of whether the employer “substantially” relies upon it, or whether it was built with machine learning. Requiring companies to upload their audit reports to a central database in a consistent format would let regulators and investigators keep tabs on the quality of audits and spot patterns in which tools are biased, and would make it easier for job-seekers and journalists to find useful information.
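To make the “consistent format” idea concrete, here is a purely hypothetical sketch of what one row of a standardized, machine-readable audit filing might contain. The field names are illustrative and are not drawn from Local Law 144 or its implementing rules.

```python
# Hypothetical fields for a standardized audit filing; nothing here is
# specified by Local Law 144 or the city's rules.
from dataclasses import dataclass

@dataclass
class AuditRecord:
    employer: str
    tool_name: str           # the automated employment decision tool audited
    auditor: str             # the independent auditor that ran the audit
    audit_date: str          # ISO date, e.g. "2024-01-15"
    category: str            # demographic category examined, e.g. "sex"
    group: str               # specific group within that category
    selection_rate: float    # share of that group advanced by the tool
    impact_ratio: float      # group's rate divided by the highest group's rate

# One illustrative row; a central database would hold many rows per audit.
example = AuditRecord(
    employer="Example Corp",
    tool_name="Hypothetical Resume Ranker",
    auditor="Example Auditing LLC",
    audit_date="2024-01-15",
    category="sex",
    group="female",
    selection_rate=0.24,
    impact_ratio=0.86,
)
print(example)
```

Filed in a format like this, thousands of audits could be compared side by side instead of hunted down one website footer at a time.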
In its first few months, the law hasn’t lived up to its goals. But it has successfully kicked off the process of figuring out how to regulate decision-making algorithms, and it has created a market for independent auditors along the way.
The law—and this research—represent the starting gun, not the finish line.