
Part 1. Liability

 
HR Enterprise software product liability.

Those five little words make some people very uncomfortable. For the entire life of the software industry, no one ever really thought about product liability. The issues were all covered in the unreadable terms and conditions we were required to click through.

Today, things are changing.

The first era of software stretched from ballistic tables and payroll to the latest in HR forms completion. Over the course of 80 years, software recorded, collected, calculated, and reported data. GIGO (Garbage In, Garbage Out) was the primary principle. There could be no liability because machines simply reported what they were given.

In this second era, things are different. Some of today’s tools (and all of tomorrow’s) do much more than record and report. They suggest, recommend, decide, evaluate, prescribe, filter, analyze, monitor, and learn. Era 1 tools could not hurt people. Era 2 tools can. In a world where machines extract truth and insight, they (at least) share responsibility for the decisions they make. It may even be that the liability rests with them exclusively.

One industry-leading CEO says, “We call it machine learning when we talk about it internally. We call it artificial intelligence when we speak to the market.” For the purposes of this article, I’ll use the terms Machine Learning, Artificial Intelligence, and Big Data somewhat interchangeably. I am referring to our computers’ emerging ability to change their output based on insights derived from new data.
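To make that definition concrete, here is a deliberately trivial sketch. Everything in it, the class name, the outcome data, the 0.5 threshold, is invented for illustration and is not drawn from any real product. The point is only that the same question, asked at different times, gets different answers as new data arrives.

```python
# Toy illustration of a "learner" whose output changes as new outcome
# data arrives. All names, data, and thresholds are hypothetical.
class ToyRecommender:
    def __init__(self):
        self.outcomes = []  # 1 = a past hire like this succeeded, 0 = did not

    def observe(self, outcome):
        # Feed the tool one new piece of data.
        self.outcomes.append(outcome)

    def recommend(self):
        # Recommendation is driven entirely by the data seen so far.
        if not self.outcomes:
            return "no recommendation yet"
        success_rate = sum(self.outcomes) / len(self.outcomes)
        return "recommend interview" if success_rate >= 0.5 else "recommend pass"

model = ToyRecommender()
model.observe(1)
model.observe(1)
print(model.recommend())  # two successes so far -> "recommend interview"

model.observe(0)
model.observe(0)
model.observe(0)
print(model.recommend())  # success rate now 2/5 -> "recommend pass"
```

That shift in output, without any human rewriting the rules, is what separates Era 2 tools from the record-and-report software that came before, and it is what creates the liability questions that follow.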

HR Enterprise Software tools will be at the forefront of these new product liability concerns. Increasingly, HR software recommends and directs the behavior of managers and employees. If the guidance or insight is damaging or wrong, software vendors will be unable to wriggle out of the consequences. Currently, the vast majority of recommendations provided by intelligent HR software are ‘self-correcting.’ The tools learn from their mistakes and adjust the underlying worldview. It is common to hear them described as tools that ‘get better with usage.’ Another way of saying that is that their error rate improves over time. One part of the liability issue is the question: who bears responsibility for the error rate?
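To make ‘error rate’ something a buyer can actually inspect, here is a minimal sketch, using a hypothetical outcome log rather than any vendor’s API, of measuring how often a tool’s recommendations turn out to be wrong from one review period to the next:

```python
# Minimal sketch: tracking a tool's recommendation error rate over time
# from an outcome log. The log format and all data are hypothetical.
from collections import defaultdict

# (review_period, recommendation_turned_out_correct)
outcome_log = [
    ("2017-Q1", False), ("2017-Q1", True),  ("2017-Q1", False),
    ("2017-Q2", True),  ("2017-Q2", True),  ("2017-Q2", False),
    ("2017-Q3", True),  ("2017-Q3", True),  ("2017-Q3", True),
]

errors = defaultdict(int)
totals = defaultdict(int)
for period, correct in outcome_log:
    totals[period] += 1
    if not correct:
        errors[period] += 1

for period in sorted(totals):
    rate = errors[period] / totals[period]
    print(f"{period}: {rate:.0%} of recommendations were wrong "
          f"({errors[period]} of {totals[period]})")
```

A falling error rate is the ‘gets better with usage’ story. The unresolved question is who answers for the recommendations made in the periods when the rate was still high.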

There is a more difficult dimension.

All cultures, organizational or otherwise, are defined by their biases. The essence of a culture is its unique worldview. Decisions and behaviors that support and expand that worldview are rewarded. Things that undermine or contradict it receive negative feedback.

Since algorithmic decision-making tools adjust to the things that make a culture distinctive, they tend to amplify that culture’s biases. Most of these machine learning tools are black boxes. The only way to see the bias is by examining the output. In other words, these new tools may create liability before it can be discovered and managed.
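Because the model itself is opaque, the examination has to happen at the level of the output. As a minimal sketch, with invented data and generic group labels, here is the kind of output audit a team might run on a tool’s interview recommendations, using the four-fifths adverse impact ratio familiar from US employment practice:

```python
# Output-level bias check on a black-box tool's recommendations.
# Computes selection rates by group and the adverse impact ratio
# (the EEOC "four-fifths" rule of thumb). All data is hypothetical.
recommendations = [
    # (group, recommended_for_interview)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(group):
    picks = [rec for g, rec in recommendations if g == group]
    return sum(picks) / len(picks)

rate_a = selection_rate("group_a")  # 0.75 in this toy data
rate_b = selection_rate("group_b")  # 0.25 in this toy data
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: group_a {rate_a:.0%}, group_b {rate_b:.0%}")
print(f"adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # the four-fifths threshold
    print("Possible adverse impact: the output deserves a closer look.")
```

The four-fifths threshold is only a rule of thumb; a serious audit would also consider sample sizes and statistical significance. And note that a check like this can only run after the tool has already been making recommendations, which is exactly the problem described above.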

Intelligent machines (and wonderful theories about their near- and long-term potential) are at the heart of today’s pop culture. Everyone ‘knows’ that self-driving cars are just around the corner, that robots are going to take your job, and that pretty soon you’ll be talking to an intelligent assistant who knows where you put your car keys and can order more laundry detergent.

Yet these programs and machines are not people, who are generally governed by standards of reasonable care. They are property, which is governed by strict product liability, warranty law, and whether the design is fit for its intended use. The difference is much like an owner’s liability when their dog bites someone. The dog is a separate actor, and the owner did not intend for the dog to harm anyone, but the owner is strictly liable because the dog is the owner’s property.

We are seeing companies develop and sell technology based on machine learning processes that the human designers neither fully understand nor control. These machines give employees recommendations and suggestions based on probabilities, and many of those employees are ill-equipped to understand and effectively use the information. Often, we don’t know whether the information will be useful until we develop and test it over time.

But it’s not just data. It’s evidence. And the laws that will apply are not the ones organizations have traditionally operated under. There is a chain of potential liability that runs from the vendors who develop the software, through the sales channel, to the organizations that use the software and machines.

Next: The Key Ethical Questions in using AI / ML / Big Data / Predictive Analytics in HR.

Read the four-part series:

  1. Part 1 of 4: Liability – June 27, 2017
  2. Part 2 of 4: Basic Ethics Questions – June 28, 2017
  3. Part 3 of 4: Questions for your vendor – June 29, 2017
  4. Part 4 of 4: Managing Your Algorithm – July 02, 2017

