As we stand at the threshold of a technological revolution, the integration of Artificial Intelligence (AI) into the workplace has become not just a competitive advantage but a necessity for forward-thinking organizations. With that power, however, comes responsibility: the intersection of AI and the law presents a labyrinth of challenges and opportunities. Understanding the legal implications of AI implementation is not just prudent; it is paramount for ensuring ethical, compliant, and successful business operations. Welcome to a discussion of where cutting-edge technology meets the timeless principles of law and leadership in the modern workplace.
Modern workplaces are increasingly receptive to and reliant on AI to perform certain human resources (HR) and employee management functions. Common uses of AI in the workplace include:
- Recruiting and hiring.
- Employee onboarding.
- “Intelligent” robots that automate certain repetitive tasks and often work alongside, and are “supervised” by, humans.
Recent breakthroughs in generative AI have produced tools like ChatGPT that interact with users in a conversational way, providing textual responses to natural language queries through an online chatbot interface. As a result, with or without their employers’ knowledge or consent, employees are increasingly using generative AI tools to promote efficiencies and reduce costs when performing certain workplace tasks, including:
- Analyzing data.
- Conducting research.
- Drafting emails, cover letters, memoranda, contracts, presentations, and other routine documents.
- Responding to basic customer service queries.
- Performing HR and employee management functions.
These technological changes present practical, legal, and regulatory challenges for employers.
Discrimination and Bias Risks in Screening, Interviewing, and Hiring
One of the most rapidly growing uses of AI is in the employee screening and recruiting process. AI promises to streamline these tasks by automatically sorting, ranking, and eliminating candidates with minimal human oversight. However these functions are performed, employers must comply with a host of federal, state, and local anti-discrimination laws in all aspects of the employment relationship, including the preemployment screening and interview process. Generally, these laws prohibit:
- Intentional discrimination against individuals because they are members of a specified protected class (disparate treatment).
- Facially neutral policies or practices that disproportionately affect members of a protected class (disparate impact).
Disparate Treatment Claims
Using AI tools in the recruitment process offers the promise of reducing or eliminating the unconscious bias that sometimes distorts human recruitment and hiring. However, using AI brings inherent risks and does not insulate an employer from discrimination claims. The internet, social media, and public databases used by some AI tools typically contain information about applicants and employees that an employer could not legally ask about on an employment application or during an interview, such as an applicant’s age, religion, race, sexual orientation, or genetic information.
Moreover, AI tools are only as good as the information provided to them. Depending on the available data set and the algorithms used, AI recruiting tools may replicate and perpetuate past discriminatory practices that favored one group of individuals over another. For example, if an algorithm weighs educational background or geographic location, those factors may act as proxies for race and skew results accordingly.
On the one hand, the lack of transparency in the algorithmic process often makes it impossible to determine how or why an AI tool reached a decision or made a prediction. Algorithms that function in this so-called “black box” may make it less likely that plaintiffs can prove intentional discrimination using the direct method of proof because computer programs inherently lack intent.
On the other hand, most discrimination claims lack direct evidence and instead rely on circumstantial evidence analyzed under the McDonnell Douglas burden-shifting framework, which requires the employer to articulate legitimate, nondiscriminatory reasons for adverse employment decisions. Under this framework, the “black box” problem may make it more difficult for employers to explain the legitimate business reasons behind AI-driven decisions.
Disparate Impact Claims
AI recruiting and hiring tools may also increase the risk of disparate impact claims. In disparate impact cases, once a plaintiff demonstrates that a policy or practice has a disproportionately harmful effect on a protected class (usually by statistical comparison, which the defendant employer can challenge), the employer must show that the policy or practice is “job related for the position in question and consistent with business necessity.” Even then, a plaintiff may still prevail by demonstrating that a less discriminatory alternative employment practice would serve the employer’s needs and the employer refuses to adopt it. (42 U.S.C. § 2000e-2(k)(1)(A).)
The risk of disparate impact claims is magnified when using AI tools. In analyzing a large quantity of data, the algorithm may identify a statistical correlation between a specific characteristic of a job applicant and future job success that has no actual causal relationship. As a result, the employer may be unable to demonstrate that a practice with a disparate impact on a protected class of individuals is sufficiently job related or consistent with business necessity, as is required to defend a claim under Title VII.
Employees and applicants also may have an easier time alleging class-wide discrimination claims if the employer uses the same AI tool or algorithm to assess an entire pool of candidates, because a uniform practice helps plaintiffs establish commonality, one of the hurdles to class certification.
Employers must be vigilant in analyzing the output of AI tools and must continue to monitor results over time, because the tools learn and adapt based on prior outcomes.
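To make that kind of monitoring concrete, the following minimal sketch (in Python, using entirely hypothetical outcome data and group labels) performs the statistical comparison described above: it computes each group’s selection rate and flags ratios that fall below the EEOC’s four-fifths rule of thumb (29 C.F.R. § 1607.4(D)), a common first screen for potential adverse impact.

```python
from collections import Counter

# Hypothetical (group, selected) outcomes; in practice these would
# come from the AI tool's decision logs across a hiring cycle.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, sel in outcomes if sel)

# Selection rate per group: selected / total applicants in that group.
rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest
    # Four-fifths rule of thumb: a selection rate below 80% of the
    # highest group's rate suggests potential adverse impact.
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

A flagged ratio is a trigger for closer statistical and legal analysis, not a finding of discrimination by itself.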
Disability Discrimination and Accommodation
The Americans with Disabilities Act (ADA) prohibits employers from discriminating against qualified individuals with a disability in all employment-related decisions, such as hiring, promotion, or termination. The ADA also requires employers to provide a reasonable accommodation to qualified individuals when needed to perform or apply for a job. (42 U.S.C. §§ 12101 to 12213.)
Certain AI screening and recruitment tools may have a discriminatory impact on individuals with a disability. For example, AI tools that analyze an individual’s speech pattern in a recorded interview may negatively rate individuals with a speech impediment or hearing impairment who are otherwise well-qualified to perform the essential tasks of the job.
Employers must monitor the processes and results of AI tools to ensure that they are not eliminating potential candidates who may require accommodation to perform essential job functions. When using online recruiting tools for interviews, initial screening, or testing, employers also must ensure that the website or platform is accessible to individuals who are hearing- or sight-impaired.
EEOC May 2022 Guidance
In May 2022, the EEOC issued technical guidance on AI decision-making tools and algorithmic disability bias. The guidance identifies three primary ways that employers using these tools may violate the ADA:
- Failing to provide a reasonable accommodation needed for the algorithm to rate the individual accurately.
- Using a tool that “screens out” an individual with a disability (whether intentionally or unintentionally) when the individual is otherwise qualified to do the job, with or without a reasonable accommodation. This may occur if the disability prevents the individual from meeting minimum selection criteria or performing well on an online assessment.
- Using a tool that makes impermissible disability-related inquiries or conducts impermissible medical examinations.
The guidance also identifies “promising practices” to reduce the likelihood of disability bias when using AI tools.
Federal, State, and Local Laws and Regulatory Guidance Regarding AI in the Workplace
Although no federal law specifically regulates the use of AI in hiring, recruiting, and other HR functions, the last several years have brought an increased focus at the federal level on studying and regulating AI. More recently, the EEOC issued technical assistance addressing how employers should monitor and assess algorithmic decision-making tools in employment selection processes.
On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While not specifically addressing workplace issues, the comprehensive order sets new standards for AI safety and security, directing federal agencies to assess the potential risks of AI and to implement policies on advancing and using AI technology in accordance with eight guiding principles and priorities.
Some states and local jurisdictions have enacted laws specifically regulating employers’ AI use. For example, in New York City, employers using automated employment decision tools must conduct an independent bias audit before using the tool and comply with specified notice and posting requirements (Local Law 144 of 2021). Employers must be aware of the laws in the jurisdictions where they recruit and hire workers and continue to monitor rapidly changing developments in this area.
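For a sense of what such a bias audit involves, the sketch below (again with hypothetical scores and group labels) computes an impact ratio for a tool that scores candidates, comparing the rate at which each group scores above the sample median. This mirrors the general approach of the NYC rules for scoring tools, but an actual audit must follow the current regulatory text and be performed by an independent auditor.

```python
from statistics import median

# Hypothetical (group, score) pairs produced by an automated
# employment decision tool; a real audit uses historical data.
scores = [
    ("group_a", 88), ("group_a", 74), ("group_a", 91), ("group_a", 67),
    ("group_b", 59), ("group_b", 81), ("group_b", 63), ("group_b", 70),
]

cutoff = median(s for _, s in scores)  # sample-wide median score
groups = sorted({g for g, _ in scores})

# Scoring rate: the share of a group's candidates scoring above the median.
rates = {
    g: sum(1 for grp, s in scores if grp == g and s > cutoff)
       / sum(1 for grp, _ in scores if grp == g)
    for g in groups
}

best = max(rates.values())
for group in groups:
    # Impact ratio: each group's scoring rate relative to the
    # highest group's scoring rate.
    print(f"{group}: scoring rate={rates[group]:.2f}, "
          f"impact ratio={rates[group] / best:.2f}")
```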
Background Checks and Other Privacy Protections
Employers that use AI screening and recruiting tools with access to criminal records or other information retrieved in a typical background check must be mindful of compliance obligations under the Fair Credit Reporting Act (FCRA) and applicable state background check laws. Providers of AI tools that compile individuals’ social media information arguably may qualify as consumer reporting agencies (CRAs), triggering the FCRA’s disclosure and authorization obligations (15 U.S.C. § 1681b(b)(2)(A)). Some state laws also impose notification requirements on employers that conduct background checks internally, even if they do not use a CRA.
Employers that take an adverse employment action based on information in a background report must satisfy other notification requirements. However, the reasons an AI tool rejects a candidate may be unknown (or unknowable) to an employer or even the AI tool developer. This creates a tension between the current regulatory scheme and rapidly evolving AI technologies and tools.
Other Privacy Concerns
Employers using AI screening and recruiting tools must ensure that the AI tools do not violate applicants’ or employees’ privacy rights under various applicable laws, such as:
- Password privacy laws.
- Salary history bans.
- Biometric privacy laws.
As you integrate AI into your organization’s fabric, remember that legal knowledge is the cornerstone of sustainable innovation. At Goosmann Law, we focus on navigating the intricate nexus of AI and the law, providing tailored solutions that safeguard your company’s interests while fostering innovation. Our attorneys stand ready to guide you through this complex legal landscape, ensuring compliance, mitigating risk, and maximizing the transformative potential of AI in your workplace. Contact us today, and together let’s harness the power of AI responsibly, ethically, and with confidence.
For a model AI employee use policy, see our Downloadable Generative Artificial Intelligence (AI) Use in the Workplace Policy.