The key part is that the AI is suspected of down-ranking folks by age (ADEA = Age Discrimination in Employment Act):
> The Court has provisionally certified an ADEA collective, which includes: “All individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities using Workday, Inc.’s job application platform and were denied employment recommendations.” In this context, being “denied” an “employment recommendation” means that (i) the individual’s application was scored, sorted, ranked, or screened by Workday’s AI; (ii) the result of the AI scoring, sorting, ranking, or screening was not a recommendation to hire; and (iii) that result was communicated to the prospective employer, or the result was an automatic rejection by Workday.
Age discrimination is a huge issue, and I've experienced it firsthand. Places want to hire younger people because they're more apt to work longer hours for less pay. It's going to get worse: the people who got into the web tech industry early on are still in the workforce, while more and more young people keep entering it, because "learning to code" was the perceived path to prosperity half a decade ago.
> Places want to hire younger people because they're more apt to work longer hours for less pay.
This is the most charitable light you can shine on the discrimination. Most often it really is managers taking their “seniority” literally. As in, they don't want to take the risk that their reports are smarter, more experienced, or capable of replacing them, so they discriminate on the basis of age. It's counterintuitive, but this is what has rung truest in my observation over the years.
It will be fascinating to see the facts of this case, but if it is proven that their algorithms are discriminatory, even by accident, I hope Workday is held accountable. Making sure your AI doesn't violate obvious discrimination laws should be basic engineering practice, and the courts should help remind people of that.
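And "basic engineering practice" here isn't exotic: employment law already has a standard screening test, the EEOC's four-fifths rule for adverse impact (29 CFR 1607.4(D)), which you can run against any model's outputs. A minimal sketch, using hypothetical pass/reject outcomes and made-up group names:

```python
# Minimal sketch of the EEOC "four-fifths" adverse-impact check,
# applied to hypothetical screening outcomes. The group data below
# is invented for illustration.

def selection_rate(outcomes):
    """Fraction of applicants in a group the screen recommended."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(protected, reference):
    """Flag adverse impact when the protected group's selection rate
    falls below 80% of the reference group's (29 CFR 1607.4(D))."""
    ratio = selection_rate(protected) / selection_rate(reference)
    return ratio, ratio < 0.8

# 1 = recommended by the screen, 0 = rejected (hypothetical data)
over_40  = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% selection rate
under_40 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 60% selection rate

ratio, flagged = four_fifths_check(over_40, under_40)
print(f"impact ratio: {ratio:.2f}, adverse impact flagged: {flagged}")
# -> impact ratio: 0.33, adverse impact flagged: True
```

A real audit would use proper statistics and production data, but the point is that the first-pass check is this cheap to run.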
An AI class that I took decades ago devoted just a one-day session to "AI ethics". Despite being short, it was somehow memorable (or maybe because it was short...).
They said ethics demand that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule, or even a stated statistical correlation between A and B, would be fine as an explanation. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to offer an explanation you can challenge, fix, or at least understand.
LLMs may be able to provide that, but it would have to be carefully built into the system.
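To sketch what "built into the system" could mean: if the screening layer is a set of explicit rules, every denial can carry the exact rules that fired. The rules and applicant fields below are hypothetical:

```python
# Minimal sketch of an explainable screen: each denial records the
# specific rule that fired, giving the applicant something concrete
# to challenge or fix. Rules and fields are invented for illustration.

RULES = [
    ("missing_license", lambda a: not a["has_license"],
     "Denied: the posting requires a professional license."),
    ("low_credit", lambda a: a["credit_score"] < 600,
     "Denied: credit score below the 600 threshold."),
]

def screen(applicant):
    """Return (decision, reasons); reasons list every rule that fired."""
    reasons = [msg for name, test, msg in RULES if test(applicant)]
    return ("deny", reasons) if reasons else ("advance", [])

decision, reasons = screen({"has_license": True, "credit_score": 550})
print(decision, reasons)
# -> deny ['Denied: credit score below the 600 threshold.']
```

A learned ranking model can't do this natively, which is exactly the problem: the explanation has to be designed in, not bolted on afterward.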
> Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to offer an explanation you can challenge, fix, or at least understand.
That could get interesting, as most companies will not provide feedback if you are denied employment.
I'm sure you could get an LLM to create a plausible-sounding justification for every decision. It might not be related to the real reason, but surely coming up with text isn't the hard part there.
> allegations include that Workday, Inc., through its use of certain Artificial Intelligence (“AI”) features on its job application platform, violated the Age Discrimination in Employment Act (“ADEA”)
I'm interested to see Workday's defense in this case. Will it be "we can't be held liable for our AI", and will it work against a law as "strong" as ADEA?