HRIntelligencer
Bias Issue
 
This issue is full of evidence that bias can't be contained as easily as its proponents claim. The problem is multilayered. Our data models are guesses about how things go together. Because they can describe the past accurately, we assume they can predict the future. The scary part is that these models are built by people with limited understanding. We all have limited understanding.
 
As algorithmic decision-making finds its way into our software, we will have to make ongoing judgments about the quality of the machine's decisions. Machine-based decisions are not just fallible; they may be very fallible.

It will take you 45 minutes to consume these pieces. If you do that, you will start to think. That might hurt a little. Thinking is often the opposite of clarity.

Big Picture

 

  • A Child Abuse Prediction Model Fails Poor Families. Read this piece to understand the real problems with algorithmic decision-making when the decisions involve human beings.
  • Beyond the Rhetoric of Algorithmic Solutionism. “If you ever hear that implementing algorithmic decision-making tools to enable social services or other high stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should.” The same is true of HR systems that manage or recommend people.

 

HR’s View

 

  • Do algorithms reveal sexual orientation or just expose our stereotypes? “If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm.” Scan this one first. It’s a deep critique of the idea that data scientists can somehow construct an ‘objective reality’ free from bias. Even a scan will give you the gist of the argument: you can explain the result of an algorithm by dissecting its output with demographic statistics (a small sketch of that kind of dissection follows this list). This is a model of what it takes to argue with a machine.
  • The Network Uber Drivers Built. Are the networks Uber drivers build for themselves the model for the next kind of unionization?
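Here is a minimal, hypothetical sketch of dissecting an algorithm's results with demographic statistics. The applicant records, the screening rule, and the group labels below are all invented for illustration; the point is only that grouping a model's outputs by a demographic attribute is often enough to expose a skew the designer never intended.

```python
# Hypothetical sketch: audit an algorithm's output with demographic statistics.
# The records, the screening rule, and the group labels are invented.

from collections import defaultdict

# Toy applicant records: (name, years_experience, employment_gap_months, group)
# "group" stands in for whatever demographic attribute you are auditing against.
applicants = [
    ("A", 6, 0,  "group_1"),
    ("B", 6, 14, "group_2"),
    ("C", 3, 2,  "group_1"),
    ("D", 3, 18, "group_2"),
    ("E", 8, 1,  "group_1"),
    ("F", 8, 16, "group_2"),
]

def screen(years_experience, gap_months):
    """A made-up screening rule that penalizes employment gaps heavily."""
    score = years_experience - 0.5 * gap_months
    return score > 2  # True means "advance to interview"

# Dissect the results by group: what share of each group advances?
outcomes_by_group = defaultdict(list)
for name, years, gap, group in applicants:
    outcomes_by_group[group].append(screen(years, gap))

for group, outcomes in outcomes_by_group.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{group}: {rate:.0%} advance")
```

If employment gaps happen to correlate with group membership (caregiving is the classic example), the seemingly neutral rule produces very different advancement rates for the two groups. That is the whole argument in miniature.
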
Execution

 

  • Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail. Robots are learning to do the tasks of entry-level surgeons. Traditionally, you became a surgeon by doing those apprentice-level tasks. Surgical training starts to break down when the robots do them instead. This is another form of the basic question about automation: do we understand the consequences of our decisions?
  • Laundry folding robot. How do you fire a machine? “…there’s something kind of comical about developing the most complicated, expensive solution possible to a simple chore like folding laundry, but Laundroid’s real strength lies in the way it uses artificial intelligence to gather data on clothing, and how it uses that data. Clothing items are analyzed piece by piece in order to relay information to the robot arms, but the machine uses the data of the type, size, and color of the clothing to sort it in different ways.”

 

Tutorial

 

 

Quote of the Week

 

Faith that big data, algorithmic decision-making, and predictive analytics can solve our thorniest social problems—poverty, homelessness, and violence—resonates deeply with our beliefs as a culture. But that faith is misplaced. On the surface, integrated data and artificial intelligence seem poised to produce revolutionary changes in the administration of public services. Computers apply rules to every case consistently and without prejudice, so proponents suggest that they can root out discrimination and unconscious bias. Number matching and statistical surveillance effortlessly track the spending, movements, and life choices of people accessing public assistance, so they can be deployed to ferret out fraud or suggest behavioral interventions. Predictive models promise more effective resource allocation by mining data to infer future actions of individuals based on behavior of “similar” people in the past.

These grand hopes rely on the premise that digital decision-making is inherently more transparent, accountable, and fair than human decision-making. But, as data scientist Cathy O’Neil has written, “models are opinions embedded in mathematics.” Models are useful because they let us strip out extraneous information and focus only on what is most critical to the outcomes we are trying to achieve. But they are also abstractions. Choices about what goes into them reflect the priorities and preoccupations of their creators.

from A Child Abuse Prediction Model Fails Poor Families
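To make O’Neil’s point concrete, here is a small invented sketch: two risk models score the same family record and disagree, not because either contains an arithmetic error, but because their designers chose different factors and weights. Every field name, weight, and value below is a hypothetical assumption for illustration, not anything taken from the article.

```python
# Hypothetical sketch of "models are opinions embedded in mathematics."
# Two risk models score the same record; they differ only in which factors
# their designers chose to include and how heavily to weight them.

family = {
    "prior_referrals": 2,        # times the family was referred, substantiated or not
    "uses_public_benefits": 1,   # 1 if the family accesses public assistance
    "protective_factors": 3,     # e.g., involved relatives, stable housing
}

def model_a(record):
    """Designer A's opinion: referral history and benefits use signal risk."""
    return 2.0 * record["prior_referrals"] + 1.5 * record["uses_public_benefits"]

def model_b(record):
    """Designer B's opinion: protective factors matter and should offset risk."""
    return 2.0 * record["prior_referrals"] - 1.0 * record["protective_factors"]

print("Model A risk score:", model_a(family))  # 5.5
print("Model B risk score:", model_b(family))  # 1.0
```

Neither number is the truth about the family. Each is its designer’s opinion about what matters, expressed as arithmetic.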

 

About

 
Curate means a variety of things: from the work of a vicar entrusted with the care of souls to that of an exhibit designer responsible for clarity and meaning. At the core, it means something about the importance of empathy in organization. HRIntelligencer is an update on the comings and goings in the Human Resource experiment with Artificial Intelligence, Digital Employees, Algorithms, Machine Learning, Big Data, and all of that stuff. We present a few critical links with some explanation. The goal is to give you a way to surf the rapidly evolving field without drowning in information. We offer a timeless curation of the intersection of HR and the machines that serve it. We curate the emergence of Machine-Led Decision-Making in HR.
 



 