
Part 2: Basic Ethics Questions

 
Not surprisingly, HR has been slow to adopt AI and machine learning. The Sierra-Cedar HR Technology Industry survey [1] suggests that fewer than 7% of the companies surveyed are using or considering machine learning technologies in HR.

Given that we are at the earliest of early adopter stages, it’s a solid time to think about the ethics involved in using machine learning systems to manage, supervise, assess, train, deploy, or categorize human beings.

Ethics involves questions of right and wrong. Large institutions are usually concerned with optimization, efficiency, and innovation. They seek ways to maximize returns and minimize costs. They think of their employees as resources first and people second. Corporations have traditionally had some challenges with the very idea of ethics.

Here are the kinds of ethics questions that are going to occupy the conversation about ethics in HR Technology. The questions are standard. The answers may vary from place to place.

  • Who owns employee data?
    Who can sell or manipulate it? How much is an employee entitled to see? If something is wrong, what recourse does the employee have? If it is embedded in some machine learning scheme, what are the ownership variables?
  • If the system learns about itself through data about an employee, does the employee own the learning?
    Can she take it with her when she goes? Is she entitled to royalties if the data is sold for benchmarking or other purposes?
  • Is it okay to use data that was built without a control group?
    How do we measure effectiveness in the absence of experimental controls? When machines improve their error rate while deployed in an organization, are there extra employee protections required?
  • What is the line between manipulation and motivation?
    If chatbots increase emotional ties, is it okay to use them to increase engagement scores? What are the likely regulatory responses to overly manipulative work environments? Don’t people usually follow orders from authority figures? Doesn’t this apply to machines? Is there a limit to self-congratulatory positive feedback?
  • Are statistics more reliable than human decision-making?
    Where’s the proof? Is empathy a necessary part of decision making? Kindness? Humans are demonstrably more effective at handling novel and/or erratic inputs to decision making. Does that matter? How do you factor unmeasurables into the decision-making process?
  • How do you disagree with a machine’s decisions?
    Can you afford to be the person who is carping about decision quality? How do you get into a position to see bias? (It won’t show up in individual recommendations; it’s systemic.) Do we get stupid in the face of a machine recommendation? Are people predisposed to follow the instructions of an authority figure? (Consider Google Maps and one’s ability to argue with its recommendations.)
  • Who has the liability for machine recommendations?
    Who pays for damage caused by the machine? How do you handle mistakes? How do you monitor the quality of the algorithm’s performance? Is it ethical to use tools that are known to be imperfect on employees? Are there implicit human experiences that are interfered with when machines are the arbiters of personnel decisions?
  • How do you limit the data’s ability to influence the company?
    How do you turn it off and replace it? How do you know when you have too much influence from a single source? Are there tools that allow you to see the risk in machine-led decisions?

In a nutshell, the ethics questions we will be grappling with are rooted in the fact that we simply don’t understand in a sophisticated way:

  • How human beings work
  • How organizations work
  • How the human-machine interface will change these things.

We are going to learn more about each issue in an accelerated way. These tools are coming to employment decisions, and we will learn from their mistakes and successes. You can expect to discover new ways of thinking about employee safety as the risks at work shift from physical to mental and emotional.

Next: Questions to ask your HR AI/ML/Big Data/Predictive Analytics provider.


