Algorithms Follow Rules – People Don’t. See You in Court?

Algorithms follow the rules every time. People don’t. What happens when you throw the legal system into the mix?
 

Catch and Kill by Ronan Farrow

On February 9, 2021, in Discrimination, Diversity, Ethics, HRExaminer, John Sumser, Reviews, Sexual Harassment, by John Sumser
“Over and over, the goal of the game seems to be to keep the victims feeling ashamed and the documentation out of HR’s hands. Meanwhile, NDAs are used to cover tracks and keep a lid on the story.” - John Sumser

The Uncoded Bias in AI Hiring

Behind the uncoded bias in AI hiring are machines that participate in, and perhaps even dictate, hiring decisions.

Fear of what we don’t know, and certainty about assumptions that are wrong

“We are designed to connect with one another. We are also designed to be afraid of new and different things. Both aspects of our reality are important for survival.” - Heather Bussing
 

AI Hiring: Bias in the Code

“While it is true that a machine can do a better job at relentlessly sticking to a narrow script, it cannot see or understand things that are not in the data. Unlike people and their unconscious biases, machines can only change their approach with new measurement and new coding. In other words, while machines may be able to address small components of unconscious bias, they cannot address all (or even most) of it.” - John Sumser

Uncoded Bias in AI Hiring

“Allowing machines to participate in and perhaps even dictate decisions in Human Resources raises a host of ethical considerations. After all, these systems control various forms of opportunity for the workforce. More precisely, they involve people’s livelihood, hopes and dreams.” - John Sumser

AI in HR Tech: Discrimination and Bias

“Bias is always an issue with AI because the machine learning systems only know what they are taught. And what machine learning systems ‘learn’ is sometimes surprising or just wrong.” - Heather Bussing
 

The Problem with Corporate Values

Corporate values are not always what they seem. When the insincere proclamations and empty platitudes in a values statement are practiced and defended with conviction, they can become a powerful negative influence. Dr. Todd Dewett explains.
 


Legal Issues in AI: Bias and Discrimination

“Bias is always an issue with Intelligent Tools because the machine learning systems only know what they are taught. And what machine learning systems ‘learn’ is sometimes surprising or just wrong.” - Heather Bussing