Managing Your Algorithm

We’re driving into Healdsburg, a chi-chi enclave of movie-star getaways at the north end of wine country. The city is in the process of adding a roundabout to the main street heading into town. Like any traffic circle, it has several arms; at least three streets intersect there.

There is a ton of construction flotsam and jetsam: traffic cones, concrete dividers, odd fences, steel plates on the ground. It looks like a working model of the maxim that no plan survives contact with reality.

As we come upon the traffic circle, our navigation tool says: Turn Left on Vine St.

The problem is that the directions worked before the city installed the brand-new roundabout (or traffic circle, if you’re from the East Coast). For the moment, it is impossible to see the left turn in the jumble of construction detritus. There are no road signs. While the streets are still the same, the directions are impossible to understand.

My partner asserts, “That’s Vine St. over there, turn there!” As I begin to utter a small disagreement, I am overruled by the urgency in her voice.

It turns out that it wasn’t Vine St. My urge to grump and say, “I told you so” runs high. (I may have even given in to temptation ever so mildly 😉)

Here we arrive at the next layer of challenge in the design of machine-led decision making. It has two parts.

Part one involves the clarity of the recommendation. Like any sort of direction or delegation, specificity in the guidance the machine provides is important. When the guidance is unclear and the people receiving it have to debate its meaning, all of the advantages of machine insight are lost. There are going to be some very expensive court cases involving the question of whether an instruction was clear. We will see debates about whether a person who thinks they are following an instruction is, in fact, following it.

One way of saying this is that machines are inexperienced at managing people. In human-to-human communication, one judges the amount of required specificity very carefully. It is a measure of trust. Being very explicit is exactly what you do along the way to building that trust. Without variability in levels of specificity, machines will over-manage some workers and under-manage others.

Under-managed workers produce liability. Over-managed workers begin to ignore their bosses. Neither situation is useful.

The second part involves keeping the flow of recommendations relevant to current circumstances. As much as their creators would like to believe otherwise, machines that learn operate in environments that shift rapidly. Recommendations that worked yesterday may fail today. When the machine issues directions that turn out to be irrelevant or mistaken, its utility declines.

It may be that the largest expense of owning a machine learning tool is monitoring the relationship between real circumstances and recommendations. This is a critical part of the task of supervising an algorithm. It’s sort of like the quality control required at the end of the line. So far, there do not appear to be any examples of tools that allow for this kind of quality control.
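That supervision doesn’t have to be exotic. As a purely illustrative sketch (the RecommendationMonitor class and its window_size, baseline, and tolerance parameters are hypothetical, not a real product), here is what a minimal quality-control loop over an algorithm’s recommendations might look like in Python: log whether each recommendation actually worked out in the world, and flag for human review when the recent success rate drifts away from the rate you saw at deployment.

```python
from collections import deque

class RecommendationMonitor:
    """Rolling quality check on an algorithm's recommendations.

    Hypothetical sketch: log whether each recommendation worked out
    in practice, and raise a flag when the recent success rate drops
    well below the historical baseline -- a sign the world has
    shifted out from under the model.
    """

    def __init__(self, window_size=200, baseline=0.90, tolerance=0.10):
        self.outcomes = deque(maxlen=window_size)  # recent True/False results
        self.baseline = baseline    # success rate observed at deployment
        self.tolerance = tolerance  # acceptable drop before human review

    def record(self, recommendation_worked: bool) -> None:
        """Log one real-world outcome of following a recommendation."""
        self.outcomes.append(recommendation_worked)

    def needs_review(self) -> bool:
        """True when recent performance has drifted from the baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        recent_rate = sum(self.outcomes) / len(self.outcomes)
        return recent_rate < self.baseline - self.tolerance

# Example: "Turn left on Vine St." either gets the driver where they
# wanted to go, or it doesn't. Construction starts mid-stream here.
monitor = RecommendationMonitor()
for outcome in [True] * 150 + [False] * 50:
    monitor.record(outcome)
if monitor.needs_review():
    print("Recommendations are drifting from reality; time for a human to look.")
```

The point isn’t the specific thresholds. It’s that someone, or something, has to keep score on the algorithm the way a line supervisor keeps score at the end of the line.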

  1. Part 1 of 4: Liability – June 27, 2017
  2. Part 2 of 4: Basic Ethics Questions – June 28, 2017
  3. Part 3 of 4: Questions for your vendor – June 29, 2017
  4. Part 4 of 4: Managing Your Algorithm – July 02, 2017
 