This post was co-authored by Christian M. Wolgemuth, an attorney in McNees’ Privacy & Data Security and Litigation practice groups.
In our rapidly evolving technological landscape, the use of artificial intelligence (AI) has become more prevalent, touching virtually every aspect of our lives. From smart assistants that streamline our tasks to advanced data analytics that drive business decisions, AI is transforming industries across the globe. However, with this transformation come important questions about ethics, fairness, and potential biases, particularly when it comes to AI in the employment context. The U.S. Equal Employment Opportunity Commission (EEOC) recently issued new guidance to address these concerns.
The Intersection of AI and Employment
The employment sector has witnessed the integration of AI in various processes, including recruitment, candidate screening, employee evaluations, and even talent development. While AI offers great potential in making these processes more efficient and objective, it also brings with it significant challenges and legal risks.
One of the primary concerns is the potential for AI algorithms to inadvertently perpetuate or even amplify biases that exist in society. These biases may be based on race, gender, age, or other protected characteristics, and their unintended reinforcement can lead to discriminatory outcomes. The EEOC’s guidance seeks to address these challenges and provide a framework for employers to ensure that their use of AI remains compliant with anti-discrimination laws.
Key Points from the EEOC’s Guidance
- Transparency: Employers should ensure that the AI systems they use are transparent and explainable. This means that the decision-making process of the AI should be understandable to human operators, and the factors leading to particular decisions should be clear.
- Bias Mitigation: Employers should actively assess the potential for bias in their AI systems. This includes regularly reviewing the data used to train the AI and evaluating the outcomes to detect any discriminatory patterns. If biases are detected, steps must be taken to correct them.
- Human Oversight: While AI can be a powerful tool, it should not replace human judgment entirely. Human oversight is essential to ensure that AI systems are not making decisions that have a disparate impact on protected groups.
- Fairness and Consistency: Employers must ensure that the use of AI does not result in unfair or inconsistent treatment of different groups of employees or job applicants. The guidance emphasizes the importance of conducting regular audits to identify and address any disparities.
The above points largely amount to ensuring that employers avoid “adverse impacts” through their use of AI tools. The EEOC defines an “adverse impact” as occurring when the selection rate of a protected group is “substantially” less than the selection rate of another group. This analysis involves calculating selection rates across the applicant pool and comparing them between groups, and it should be performed as part of regular internal audits. For questions regarding adverse impact calculations, reach out to any member of the McNees Labor & Employment team.
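As an illustration of what such an audit calculation can look like, the EEOC has historically used the “four-fifths rule” as a rule of thumb: a selection rate for one group that is less than four-fifths (80%) of the rate for the most-selected group may indicate adverse impact. The sketch below is a simplified example only; the group labels and applicant numbers are hypothetical, and the four-fifths rule is a screening heuristic rather than a definitive legal test.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = number selected divided by total applicants in the group."""
    return selected / applicants

def possible_adverse_impact(rate_group: float, rate_comparison: float) -> bool:
    """Flag possible adverse impact when the impact ratio (the group's
    selection rate divided by the comparison group's rate) falls below
    0.8, the EEOC's 'four-fifths' rule of thumb."""
    return (rate_group / rate_comparison) < 0.8

# Hypothetical audit numbers (illustrative only):
# Group A: 48 of 80 applicants selected; Group B: 12 of 40 selected.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30

# Impact ratio is 0.30 / 0.60 = 0.50, below 0.8, so this would be flagged.
print(possible_adverse_impact(rate_b, rate_a))  # True
```

A flagged ratio does not by itself establish discrimination; it signals that the employer should investigate the tool and the data behind the disparity.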
Be Mindful of State Privacy Laws
The rapid adoption of state consumer privacy laws also affects how businesses and employers may use AI in decision making. California, for example, which has led the trend of adopting novel privacy laws, gives its residents the right to request that businesses limit the use of their “sensitive personal information.” California privacy law defines “sensitive personal information” to include information about an individual’s racial or ethnic origin, religious or philosophical beliefs, union membership, personal health, and sex life or sexual orientation. When California consumers exercise this right, the business (or in this case, the potential or current employer) is prohibited from using sensitive personal information in any way that would not be reasonably expected by an average person. And because an average person does not expect this type of sensitive personal information to be factored into an employment decision, employers must confirm that it is not being used in any AI decision-making processes.
California’s consumer privacy laws, including the California Consumer Privacy Act of 2018 and the California Privacy Rights Act of 2020, do not exclude employee and job applicant data. This means that employers and businesses that are required to honor the statutory privacy rights requests of California consumers must also honor the privacy rights of their California employees and applicants when they request to limit the use of their sensitive personal information. Additionally, California’s privacy laws grant consumers the right to be free from discrimination or retaliation when they exercise their privacy rights. This means that an employee’s or applicant’s choice to restrict the use of their sensitive personal information cannot be held against them in any employment decision.
Currently, California is the only state to expand the scope of its consumer privacy laws to include employees and applicants. However, as the trend has shown, where California goes, others are likely to follow. Virginia has created additional consumer privacy rights for its residents, including the right to opt out of the processing of personal information for the purposes of “profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.” For now, this right is not available to individuals in the employment context, but as state privacy laws become more consumer-friendly, we should expect these types of privacy rights to proliferate and become available to employees and job applicants.
As discussed above, employers should already be working to remove bias from AI-assisted screening and hiring practices. However, the rights granted to job applicants and employees in California – and likely to residents of more states in the future – create an additional prohibition against the use of personal information that could create bias or an adverse impact against certain individuals.
A Call for Responsible AI Implementation
The EEOC’s recent guidance is a significant step toward addressing the complex issues surrounding the use of AI in the employment context. It highlights the need for employers to approach AI with responsibility, diligence, and a commitment to equal opportunity.
By incorporating these guidelines into their AI practices, employers can harness the benefits of AI technology while minimizing the risk of discrimination. This not only protects the rights of employees and job seekers but also contributes to a more inclusive and diverse workplace.
In this era of AI-driven innovation, the EEOC’s guidance serves as a reminder that while technology advances, the importance of upholding ethical standards and safeguarding equal opportunities remains paramount. By staying informed about these guidelines and working together to implement them, employers can help to create a future where AI enhances our workplaces without compromising fairness and equality.