October 2022

AI and Employment Decisions


More than 80% of all employers, and more than 90% of Fortune 500 companies, now report using some form of artificial intelligence (AI) in employment, according to Charlotte A. Burrows, chair of the U.S. Equal Employment Opportunity Commission (EEOC). While putting employment decisions in the hands of an emotionless software program may seem like a leap forward in ensuring fairer and more merit-based employment decisions, the practice has alarmed interest groups across the United States, and state and federal agencies are taking action.

This is because AI and other types of automated decision-making tools run the risk of screening out applicants or employees based on protected characteristics or conditions, inadvertently replicating human biases or relying on data that serves as a proxy for protected characteristics or conditions. For example, a program that favors applicants with residential zip codes near an employer’s business may inadvertently discriminate against other qualified applicants if the preferred zip codes are proxies for certain racial groups.
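The zip-code scenario above is easy to demonstrate. The sketch below is purely illustrative (the zip codes, group labels, and applicant pool are invented for the example, not drawn from any real system): a screen that looks only at a facially neutral feature can still select at sharply different rates across groups when that feature correlates with a protected characteristic.

```python
# Hypothetical sketch: a zip-code screen can produce a disparate impact
# when zip codes correlate with a protected characteristic. All data,
# zip codes, and group names below are invented for illustration.
from collections import defaultdict

PREFERRED_ZIPS = {"10001", "10002"}  # zips near the employer (assumed)

# Toy applicant pool: (group, zip) pairs. Group membership is never
# consulted by the screen, yet it correlates with the zip codes.
applicants = [
    ("group_a", "10001"), ("group_a", "10002"), ("group_a", "10001"),
    ("group_a", "10002"), ("group_b", "10001"), ("group_b", "20901"),
    ("group_b", "20902"), ("group_b", "20901"), ("group_b", "20902"),
]

def screen(zip_code: str) -> bool:
    """The facially neutral automated screen: favor preferred zips."""
    return zip_code in PREFERRED_ZIPS

# Computing the selection rate per group exposes the proxy effect.
totals, selected = defaultdict(int), defaultdict(int)
for group, zip_code in applicants:
    totals[group] += 1
    selected[group] += screen(zip_code)

rates = {g: selected[g] / totals[g] for g in totals}
# Here group_a is selected at a far higher rate than group_b, even
# though the screen never saw group membership.
```

The point of the sketch is that no protected characteristic appears anywhere in the screening logic; the disparity emerges entirely from the correlated feature.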

The EEOC recently released technical guidance on the use of algorithmic decision-making tools in the context of Title I of the Americans with Disabilities Act (ADA). The ADA prohibits private employers, as well as state and local governments, from discriminating against qualified individuals with disabilities. The U.S. Department of Justice (DOJ) has also released guidance that explains how artificial intelligence can lead to discrimination under the ADA. The EEOC and DOJ’s focus on the ADA in particular is likely due to that population’s unique susceptibility to the effects of automated decision-making tools.

State and local governments have also taken steps to address the use of AI in employment decision-making. The City of New York, for instance, adopted legislation—due to go into effect in 2023—that regulates “automated employment decision tools” in employment unless the tool has undergone a “bias audit.” Not to be outdone, California, home to some of the strictest employment laws in the country, now has AI firmly in its crosshairs.

California’s Civil Rights Council has been workshopping regulations aimed at addressing the intersection between AI and predictive algorithms with the Fair Employment and Housing Act, or “FEHA”—the statutory framework that protects California employees from discrimination, retaliation or harassment based on protected characteristics or conditions. The draft regulations seek to cover the full panoply of employment decision-making, from hiring and firing to everything in between.

On Aug. 10, 2022, the Council agreed to move the draft regulations into a formal rulemaking phase, meaning it intends to publicly release a formal notice of proposed rulemaking with a public comment period. (Employers can subscribe to receive updates from the Council here.)

As a general overview, the current version of the proposed regulations would:

•    Define and incorporate the term “automated decision systems” (“ADS”) throughout the existing regulatory framework. The proposed regulations would prohibit an ADS that has a disparate impact on, or constitutes disparate treatment of, an applicant or employee (or class of applicants or employees) unless the practice is job-related and consistent with business necessity. ADS is defined as “a computational process, including one derived from machine-learning, statistics or other data processing or artificial intelligence, that screens, evaluates, categorizes, recommends or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”

•    Prohibit the use of ADS that utilize proxies for protected characteristics or conditions.

•    Specify that employers are responsible for their own ADS and their vendor’s automated decision system. That is, if an employer uses a third party to assist with any form of employment decision-making (such as recruiting), that third party would be considered an “agent” of the employer. If the third-party vendor uses any form of ADS that directly or indirectly discriminates against a person or class of persons, the employer can be liable.

•    Require employers to retain “machine-learning data” for four years, and require entities that sell, advertise or facilitate the use of ADS to retain records of the assessment criteria used by the employer for four years.

•    Specify that third parties who sell, advertise or facilitate the use of ADS for an unlawful purpose can be subject to aider-and-abettor liability.

•    Specify that prohibited forms of pre-offer physical, medical and psychological examinations can include those utilizing ADS. As examples, the regulations identify personality-based questions, puzzles, games and other gamified challenges.

California’s draft regulations serve as a not-so-subtle warning to California’s employers, and the consequences of utilizing improper ADS could range from individual lawsuits to messy class actions. In other words, the stakes are very high. Importantly, the Council has now repeatedly stressed that the draft regulations do not create “new” liabilities, but rather reflect how existing laws apply to ADS right now.

Even if that general proposition is true, the draft regulations are dense, confusing, at times commentary-like, and offer little practical guidance for well-intentioned employers (or recognition of the obstacles they will face navigating this complex area). For instance, third-party vendors who rely on proprietary or confidential software are unlikely to simply turn over their source materials. Likewise, employers interested in a bias audit will face the reality that standards on this front are likely to be varied.
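Although audit standards are indeed varied, one statistical screen that recurs in this area is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group’s selection rate below 80% of the highest group’s rate is conventionally treated as evidence of adverse impact. A minimal sketch (the selection rates below are invented for illustration):

```python
# Minimal sketch of the "four-fifths rule" impact-ratio check often
# referenced in bias-audit discussions. Rates below are hypothetical.

def impact_ratio(rate: float, highest_rate: float) -> float:
    """A group's selection rate divided by the highest group's rate."""
    return rate / highest_rate

# Hypothetical per-group selection rates produced by an ADS.
selection_rates = {"group_a": 0.60, "group_b": 0.42, "group_c": 0.30}
highest = max(selection_rates.values())

# A ratio below 0.8 is conventionally flagged as potential adverse
# impact. This is a screening heuristic, not a legal conclusion.
flags = {g: impact_ratio(r, highest) < 0.8
         for g, r in selection_rates.items()}
```

A passing four-fifths check does not immunize a tool, and a failing one does not by itself establish liability; it is one heuristic among several that an audit might apply.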

Meanwhile, the public’s awareness of AI as a tool in employment decision-making is only going to grow, and that is likely one of the principal goals of the regulations (after all, you are reading this article). It is only a matter of time before requests for AI and other predictive tools become a ubiquitous component of routine employment litigation.

What’s Coming Down the Pike

Fortunately, California’s draft regulations serve as a window into the future. Employers utilizing AI or other automated decision-making tools need to evaluate and understand them to ensure they do not discriminate. Employers may also want to consider developing practices around the use of AI. Some “promising practices” noted by the EEOC include:

•    Disclosing to applicants and employees the traits the algorithm is designed to assess, the method in which they are assessed and factors that affect the rating;

•    Informing all applicants and employees who are being rated that reasonable accommodations are available, with clear instructions for requesting them (e.g., taking a test in an alternative format or being assessed in alternative ways); and

•    Only measuring abilities or qualifications necessary for the job, directly, instead of measuring characteristics or scores correlated with the desired abilities.

If an employer relies on a third-party vendor’s ADS, it cannot simply stick its head in the sand. It will need to understand that software or explore other areas of risk mitigation.

What the above makes clear is that an employer’s decision to use AI or other automated decision tools is more complicated than it appears. There is a whole host of considerations to field before pressing the AI button, and using these tools is fraught with risk, especially in the absence of recognized industry standards or clearer guidance for employers. In this environment, employers should ask themselves whether the risks are worth the reward.

You can access the EEOC and DOJ’s technical guidance here and here.

PUBLISHED DATE

21 October 2022

Category

HR News Article