
“The nature of algorithmic decision-making is that the machine seeks hard lines for solutions. Most Human Resource questions involve subtle distinctions, case specifics, or other bits of context. When even a little compassion or conscience is required, machines can only assist.” – John Sumser

HR Technology does a poor job of delivering and working with nuance. The nature of algorithmic decision-making is that the machine seeks hard lines for solutions. Most Human Resource questions involve subtle distinctions, case specifics, or other bits of context. When even a little compassion or conscience is required, machines can only assist.

In The Book of Why [1], Judea Pearl defines three kinds of learning. You might think of them as ‘levels’ of machine intelligence:

  1. Learning from association:
    • Determining that two things happened at the same time or are correlated. The algorithm learns by imitating, matching, and copying. It’s not intelligence; it’s closer to imitation. (A minimal sketch of this level follows the list.)
  2. Learning from planning and reflection:
    • Intervening in processes, running experiments, and generalizing the results using hypotheses.
  3. Learning from counterfactuals:
    • Imagining worlds that do not exist and inferring reasons for observed phenomena.
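To make level 1 concrete, here is a minimal sketch in Python using a made-up dataset (the variables tenure, training_hours, and performance are hypothetical, not from Pearl or the article). An association learner captures only that two quantities move together; it cannot say what would happen under an intervention (level 2), let alone reason about counterfactuals (level 3).

```python
# A minimal sketch of level-1 "learning from association" on toy data.
import random

random.seed(0)

# Hypothetical data: training hours vs. performance score. Both are
# driven by a hidden third factor (tenure), so the link the machine
# finds is association, not cause and effect.
tenure = [random.uniform(1, 10) for _ in range(1000)]
training_hours = [t * 2 + random.gauss(0, 1) for t in tenure]
performance = [t * 5 + random.gauss(0, 3) for t in tenure]

def correlation(xs, ys):
    """Pearson correlation: the entirety of what level 1 'knows'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# The machine sees a strong link and will happily "predict" performance
# from training hours -- but it cannot tell whether mandating more
# training (an intervention) would actually change performance.
print(f"corr(training_hours, performance) = "
      f"{correlation(training_hours, performance):.2f}")
```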

The final level, learning from counterfactuals, is what is commonly meant by ‘AI.’ We are many years away from this sort of machine behavior.

Today’s products and services are all very early stage and learn only from association. Even so, they are doing new and powerful things. They offer complex demonstrations of mathematics and computer processing. They deliver sophisticated categorization that is beyond human capacity. They deliver real value. They just don’t deliver intelligence. Today’s tools are limited to imitation, correlation, and association. Intelligence is emerging, albeit slowly.

The unstated factor that is both the driver of intelligent tool development and the guarantor of its longevity is that data processing and storage have become inexpensive and almost unlimited. For most of the history of computing, all ideas were constrained by the limits of processing and storage capacity. The definition of design elegance and effectiveness was precisely ‘use as few processing and storage resources as possible.’

Today, these once precious resources are all but free. The major providers can offer as much storage or processing as a customer can ask for. These companies’ growth is entirely dependent on teaching clients what to do with a bountiful supply of both. That means they want to help their clients get better at developing intelligent tools (their clients are the companies that build out intelligent microservices), because that consumes vastly more processing and storage than a more static process would.

The heart of the intelligent tools revolution is the idea that you can run a simulation millions of times without really incurring a cost or performance penalty. Machine learning, which is more than half of the new toolkit, depends on regression analyses, which in turn depend on statistical simulations. Running these processes at a grand scale is what makes the whole thing work.
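As a rough illustration of that idea, the sketch below (all numbers and names are invented for the example) reruns a simple regression on thousands of bootstrap resamples of a toy dataset. Cheap processing and storage are what make this kind of brute-force repetition practical.

```python
# A hedged sketch of "run the simulation many times": bootstrap
# resampling of a simple regression slope on invented data.
import numpy as np

rng = np.random.default_rng(42)

# Toy data: a noisy linear relationship with a true slope of 3.0.
x = rng.uniform(0, 10, size=500)
y = 3.0 * x + rng.normal(0, 4.0, size=500)

def fitted_slope(xs, ys):
    """Least-squares slope of y on x."""
    return np.cov(xs, ys, bias=True)[0, 1] / np.var(xs)

# When compute is nearly free, rerunning the analysis on thousands
# (or millions) of resampled datasets is trivial, and the spread of
# the results falls out directly.
n_resamples = 10_000
slopes = np.empty(n_resamples)
for i in range(n_resamples):
    idx = rng.integers(0, len(x), size=len(x))  # sample rows with replacement
    slopes[i] = fitted_slope(x[idx], y[idx])

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope ~= {slopes.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```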

[1] Pearl, Judea. The Book of Why: The New Science of Cause and Effect. Basic Books, p. 28. Kindle Edition.



 