By Leslie A. Farber

Regulating the Use of AI for Employment-Related Decisions



Today, an array of computer-based tools is available to assist companies in their talent management and recruitment efforts. Businesses have increasingly adopted artificial intelligence (“AI”), algorithms and other types of software at all stages of the hiring process.


According to human resources (“HR”) consulting company Enspira, more than 50% of companies in the U.S. either have implemented or plan to implement HR platforms that use AI. Some employers use resume scanners that prioritize applications containing certain keywords. Others use video interviewing software to evaluate candidates based on their facial expressions and speech patterns, or “virtual assistants” and chatbots that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
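
To illustrate the first of these, here is a minimal sketch, written in Python, of how a keyword-based resume scanner might rank applications. The keyword list, weights and sample resumes are entirely hypothetical; commercial tools are far more sophisticated, but the underlying idea of scoring applications against pre-defined terms is the same.

KEYWORD_WEIGHTS = {  # hypothetical keywords a recruiter might configure
    "python": 3,
    "project management": 2,
    "sql": 2,
    "leadership": 1,
}

def score_resume(text):
    # Simple relevance score: sum the weights of keywords found in the text.
    text = text.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

resumes = {
    "candidate_a": "Led a team using Python and SQL on data projects.",
    "candidate_b": "Carpenter with strong leadership and client skills.",
}

# Rank candidates from highest to lowest score.
for name in sorted(resumes, key=lambda n: -score_resume(resumes[n])):
    print(name, score_resume(resumes[name]))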


While employers generally rely on these tools to improve efficiency and increase objectivity, the use of AI has been found to introduce bias into hiring. Consider one of AI’s chief benefits: simplified decision-making. In machine learning, a computer ingests huge amounts of data and, based on the patterns it finds, creates rules that enable it to make decisions automatically. Without proper development and safeguards, however, bias may exist in the data itself or in how the algorithm processes it. This can lead to unintended results, such as talent acquisition software that screens out certain candidates in discriminatory ways.
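
The following sketch, built on entirely hypothetical data, illustrates the mechanism. A rule learned from historical hiring decisions that disfavored one group will reproduce that disadvantage even for equally qualified candidates; the “women’s college” attribute below stands in for any feature that correlates with a protected characteristic.

# Hypothetical historical decisions: equally experienced candidates,
# but candidates from one group were hired less often.
history = [
    # (years_experience, attended_womens_college, hired)
    (5, False, True), (4, False, True), (6, False, True), (3, False, False),
    (5, True,  False), (4, True,  False), (6, True,  True), (3, True,  False),
]

def hire_rate(rows):
    return sum(hired for *_, hired in rows) / len(rows)

group_a = [r for r in history if not r[1]]
group_b = [r for r in history if r[1]]
print("historical hire rates:", hire_rate(group_a), hire_rate(group_b))

def predict(years, womens_college):
    # This crude learned rule ignores experience entirely and keys on the
    # proxy attribute -- which is exactly how discriminatory screening happens.
    similar = [r for r in history if r[1] == womens_college]
    return hire_rate(similar) >= 0.5

print(predict(5, False))  # True  -> advanced
print(predict(5, True))   # False -> rejected, despite equal experience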


Regulators, including international bodies and U.S. federal, state and city governments, have begun to focus on the potential for AI to cause harm. Recently, lawmakers introduced legislation in both chambers of the U.S. Congress that would require organizations using AI to perform impact assessments of AI-enabled decision processes, among other requirements.


New York City has passed the country’s first law requiring employers to conduct bias audits of software-driven tools used to evaluate job candidates or employees within the city. The City’s Department of Consumer Affairs will have enforcement authority, and starting in January 2023, non-compliant employers will face daily fines of $500 to $1,500 per violation. The law is likely to have implications not only for employers, but also for the companies that develop these tools.
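
The heart of such a bias audit is a comparison of selection rates across demographic categories. The simplified sketch below, using hypothetical counts, shows an impact-ratio calculation of the kind an audit involves; the 0.8 threshold comes from the EEOC’s informal “four-fifths rule.”

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category. All counts below are hypothetical.

applicants = {"men": 200, "women": 180}   # candidates screened by the tool
selected   = {"men": 60,  "women": 27}    # candidates the tool advanced

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())
impact_ratios = {g: rate / best for g, rate in rates.items()}

for group, ratio in impact_ratios.items():
    # The four-fifths rule treats ratios below 0.8 as possible evidence
    # of adverse impact.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")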


In May of this year, the U.S. Equal Employment Opportunity Commission (“EEOC”) released guidance advising employers that using AI and algorithmic decision-making tools to make employment decisions could result in unlawful discrimination against applicants and employees with disabilities. The EEOC’s technical assistance discusses potential pitfalls employers should be aware of to ensure such tools are not used in discriminatory ways. The guidance also outlines how existing requirements under the Americans with Disabilities Act (“ADA”) may apply to the use of AI in employment-related decisions, and offers “promising practices” to help employers with ADA compliance when using AI tools.


As technology continues to develop, the EEOC will likely expand on its guidance regarding employers’ use of AI and how it intersects with both the ADA and other federal anti-discrimination laws.

The growing wave of regulations makes it clear that companies need to put procedures and processes in place to ensure that their organizations are using AI responsibly, especially around diversity in hiring. As a first step, the Society for Human Resource Management (“SHRM”) recommends that C-suite executives identify whether their organizations use AI and, if so, where. Once they know what AI their organization is using, executives need to put governance and compliance checks and balances in place.


Most AI use principles and guidance recommend similar best practices, such as obtaining documentation from vendors showing how they developed their AI and what data they used. Organizations also must monitor AI systems, since the decisions coming out of these systems can change as the data fed into them changes, which can result in unintended bias. Companies might consider creating teams of hiring and technology professionals who monitor data, identify problems and continuously challenge the outcomes produced by AI.
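
As a rough illustration of what ongoing monitoring might look like, the sketch below recomputes impact ratios on each month’s decisions and flags any group whose ratio drifts below a chosen threshold. The data, group labels and 0.8 threshold are illustrative assumptions, not prescribed values.

from collections import defaultdict

def impact_ratio(decisions):
    # decisions: list of (group, was_selected) pairs for one review period.
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += selected
    rates = {g: hits[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

monthly_batches = {  # hypothetical decision logs by month
    "2022-10": [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 1), ("b", 0)],
    "2022-11": [("a", 1), ("a", 1), ("a", 1), ("b", 1), ("b", 0), ("b", 0)],
}

for month, decisions in monthly_batches.items():
    for group, ratio in impact_ratio(decisions).items():
        if ratio < 0.8:  # threshold borrowed from the four-fifths rule
            print(f"{month}: group '{group}' impact ratio {ratio:.2f} -- investigate")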


If you need advice on preventing or remedying discrimination based on artificial intelligence, please contact us at 973.707.3322 or via email at LFarber@LFarberLaw.com.

The contents of this writing are intended for general information purposes only and should not be construed as legal advice or opinion in any specific facts or circumstances.
