graphic for The 2018 Index of Predictive Tools in HRTech: The Emergence of Intelligent Software

 

What Doesn’t Quite Work 1

On April 9, 2018, in HRExaminer, by John Sumser

 


John Sumser begins an exploration of the limits of current thinking on intelligent software in management, particularly Human Resources Management.

If you haven’t read part 2 in this series, click here to read What Doesn’t Quite Work 2 »

The biggest problem with HR analytics and intelligent software boils down to one thing. The behavior of people in organizations is not like Jeopardy, chess, Go, marketing funnels, autonomous cars, or other problems with a relatively finite set of answers. Thus, the tools used to solve those problems will always be found wanting when applied to organizations.

Current tools are fantastic for reviewing the past and speculating about a future in which nothing changes in the interim. Intelligent software can tell you a lot about the past and nothing at all about the future. That should be the first thing you notice about the score attached to a recommendation. It should be marked ‘based on historical inputs.’

When you hear that intelligent software is less error-prone than human beings, that’s not exactly true. It’s clearer to say something like, “Historically speaking, this decision is 90% likely to produce the same result as a similar decision did some period of time ago. We don’t actually have enough data to give you a real-time answer. Let’s talk about what’s happened in the interim.”
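To make that framing concrete, here is a minimal, hypothetical sketch (the function and field names are illustrative, not from any real product) of what a recommendation score actually is: a frequency computed over similar past decisions, reported with its provenance and age attached rather than presented as a claim about the future.

```python
from datetime import date

# Hypothetical sketch: a "90% confident" recommendation is really a
# historical frequency, so report it with its provenance attached.
def score_recommendation(similar_past_decisions, as_of=date(2018, 4, 9)):
    """similar_past_decisions: list of dicts like
    {"outcome": "success" | "failure", "date": date(...)}."""
    if not similar_past_decisions:
        return {"score": None, "note": "no historical data"}
    successes = sum(1 for d in similar_past_decisions
                    if d["outcome"] == "success")
    newest = max(d["date"] for d in similar_past_decisions)
    return {
        "score": successes / len(similar_past_decisions),
        "basis": "historical inputs only",       # the label argued for above
        "data_age_days": (as_of - newest).days,  # how stale is the evidence?
        "n": len(similar_past_decisions),
    }

history = [
    {"outcome": "success", "date": date(2017, 1, 15)},
    {"outcome": "success", "date": date(2017, 6, 1)},
    {"outcome": "failure", "date": date(2017, 9, 30)},
    {"outcome": "success", "date": date(2018, 1, 10)},
]
print(score_recommendation(history))
# score is 0.75, and the report says so only "historically speaking"
```

The `data_age_days` field is the opening for the conversation the quote asks for: what has happened in the interim.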

Reductionism (bear with me) is, “the practice of analyzing and describing a complex phenomenon in terms of phenomena that are held to represent a simpler or more fundamental level, especially when this is said to provide a sufficient explanation.” Most analytics, data modeling, and linguistic analysis assert that their simple models adequately and accurately reflect the reality they describe. It’s a self-fulfilling prophecy.

Just because we measure and model an organization doesn’t mean we understand it.

Models are judged ‘usable’ when they can predict the past with 80% accuracy. In English, when the model can predict who won last year’s NCAA Tournament 80% of the time, it’s good enough to use. That’s fantastic for situations with finite permutations. It’s pretty risky in organizations.
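A hedged sketch of why that 80% bar is so easy to clear: a deliberately trivial model (names are made up for illustration) that always predicts the most common past outcome can backtest at 80% on history while knowing nothing about a future where conditions change.

```python
# Hypothetical sketch of a backtest: "usable" here only means the model
# reproduces known past outcomes, not that it can anticipate change.
def majority_model(training_outcomes):
    # The simplest possible model: always predict the most common past outcome.
    prediction = max(set(training_outcomes), key=training_outcomes.count)
    return lambda _case: prediction

def backtest_accuracy(model, past_cases):
    hits = sum(1 for case, actual in past_cases if model(case) == actual)
    return hits / len(past_cases)

# Ten historical cases; eight resolved one way, two the other.
past = [("case%d" % i, "stayed") for i in range(8)] + \
       [("case%d" % i, "left") for i in range(8, 10)]

model = majority_model([actual for _, actual in past])
print(backtest_accuracy(model, past))  # 0.8 on the past...
# ...which says nothing about a workforce whose conditions have changed.
```

With finite permutations, backtest accuracy and future accuracy tend to converge; in organizations, where the rules themselves shift, they can diverge completely.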

This is the beginning of an exploration of the limits of current thinking about the use of intelligent software in management, particularly Human Resources Management.


 



 