You Better Treat Your Machines With Respect (HRExaminer article by John Sumser, 2019-06-20. Photo: Lane Smith, Unsplash, CC0)

“Our machines lack the capability to predict the future as anything other than an extension of the past. That means that their recommended decisions are most useful when they are up to date. So, we have to treat our machines with the respect that we’d accord an old friend.” - John Sumser


In order to move ahead, we’ll need to do two things:

  1. Start treating our machines with the respect we give other co-workers; and,
  2. Stop requiring that our machines treat us like children.

These are really interface questions.

Today, the typical interaction with intelligent software takes the form of a recommendation accompanied by a confidence factor. It might sound like, “We recommend that you give Sally the largest merit increase possible. We have 90% confidence in this recommendation.”

What the confidence factor actually means is that the suggestion matches 90% of historical decisions of the same kind.
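To make that concrete, here is a minimal sketch of the idea in Python. The function names, the multiplier for a new VP, and the toy history are all hypothetical illustrations, not anything a real HR system exposes: confidence is just agreement with comparable past decisions, and it gets discounted as the decision maker reports changed circumstances.

```python
# Hypothetical sketch: a "confidence factor" as agreement with
# comparable historical decisions, discounted when the decision
# maker reports that circumstances have changed.

def confidence(recommendation, historical_decisions):
    """Fraction of comparable past decisions that match the recommendation."""
    matches = sum(1 for d in historical_decisions if d == recommendation)
    return matches / len(historical_decisions)

def adjust_for_changes(base_confidence, discounts):
    """Reduce confidence once per reported change in circumstances.

    `discounts` maps a change (e.g. a new VP) to a multiplier below 1;
    the multipliers here are invented for illustration.
    """
    for change, multiplier in discounts.items():
        base_confidence *= multiplier
    return base_confidence

history = ["max_raise"] * 9 + ["no_raise"]        # 9 of 10 past cases agree
base = confidence("max_raise", history)            # 0.9
updated = adjust_for_changes(base, {"new_vp": 0.78})
```

With these toy numbers the base confidence is 0.9, and reporting a new VP pulls it down to roughly 0.7, mirroring the drop described in the dialog below.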

It would be smarter to say, “We recommend that you give Sally the largest merit increase possible. Historically speaking, this would be in line with other intersections of raise and performance. But, before you make a decision, what’s changed since the last time we talked?”

“Oh,” say you, “we have a new vice president.” “Ah, I did not know that,” says the machine. “Our data on VP transitions indicates that you should expect an inversion in decision making. New VPs usually do the opposite of their predecessors. We’d reduce the confidence factor to 70%. What else has changed?”

“Well, there are five new competitors who seem to be doing something really different with their business models.”

“Tell me more,” says the machine. “Are they faster, cheaper, and lower quality?”

“Why, however did you know that?” says you.

“The most worrisome new entrants are like that. Sally (remember, this conversation is about Sally) is much better at driving straight-ahead growth. She’s weaker at rethinking assumptions. Maybe you should incent one of your more agile team members.”

In other words, recommendations about decisions that affect both the business and team members should be conversations, not blanket decisions. This applies from simple resume matching to complex succession planning. The dialog between a decision maker and their digital partner should be built on kindness, readiness and a good reading of the current situation.

Our machines lack the capability to predict the future as anything other than an extension of the past. That means that their recommended decisions are most useful when they are up to date. So, we have to treat our machines with the respect that we’d accord an old friend.

We take pleasure in keeping them up to date as a part of our friendship. We know that by neglecting to update them, we limit their effectiveness.

Then, in part 2, we have to stop demanding that they treat us like children. 

Currently, we demand that our machines be entertaining, engaging, intuitive, easy to understand, and so on. If they do not make it simple for us, we refuse to engage with them. By insisting that we are not smart enough to pay attention, we burden our machines and their designers with unnecessary (and error-producing) simplification chores.

The two ideas may strike you as odd and counterproductive. But, for the foreseeable future, our smart machines will struggle to stay abreast of actual circumstances. Without our help, they will make things simple for us without the real conversations required to make useful decisions.

We are on our way to systems that are deeply conversational. Here is an early step in the process. Researchers Develop an AI System that Provides Textual and Visual Responses. (via Mark Bennett)



 