
Biased

On November 9, 2017, in HRExaminer, by John Sumser

Photo: CC0 by Zak Suhar via Pexels

“We think we can measure and predict people but the truth is we really can’t. What happens, then, when we measure someone or a group of someones and then label them with the results?” – John Sumser

The more I listen to the parade of folks who claim that we can measure people, teams and organizations in order to make them better or improve their profitability, the more concerned I get. While we have come a very long way in the development of our understanding, we are at the most primitive of stages. That means we are in a spot where we have to look closely at what we believe about ourselves.

If you attended school in the West any time in the past 60 or so years, you learned about the Heisenberg uncertainty principle. In a nutshell, the more closely you measure one aspect of a thing, the less you can know about its other aspects.

There are two related ideas at work here. The observer-expectancy effect holds that measurement tends to conform to the expectations of the measurer; you see what you expect to see. Goodhart's Law adds that when a measure becomes a target, it ceases to be a good measure.

In practice, HR professionals are really good at dealing with the transactional effects of these principles. As a profession, we are better trained to deal with cognitive biases than most. We are the conscience of the organization trying to limit the damage caused by unconscious assumptions.

There’s a trickier problem afoot.

Currently, it takes the world's most powerful computers about 40 minutes of dedicated time to simulate one second of activity in even a small fraction of the human brain. The most powerful tools we have for examining large numbers of those brains operating in tandem look like groping for direction in a dark maze of twisty passages. We have no capacity to model or simulate the actual moment-to-moment functioning of even the smallest teams.

Instead, we use very superficial questionnaires (surveys) that claim to be able to explain and predict social behavior with 100 questions. If SHRM thinks you need far more than that to certify an HR professional as competent, how likely is it that 100 questions actually predict anything of substance? The idea that such an instrument could inform and repair an organizational conflict is right on the edge of absurd.

DNA-7 is a great example of what's practical today. There are others threading the needle in more than purely transactional ways. They acknowledge the limits of their understanding and stand in awe of the ethical questions we face in the next generation of automation.

Mostly, we’re scratching our heads collectively.

We think we can measure and predict people but the truth is we really can’t.

What happens, then, when we measure someone or a group of someones and then label them with the results? This is the problem that directly faces the providers of supposed retention prediction algorithms. When you measure something as intangible as the likelihood of attrition in an individual, the measurement becomes a self-fulfilling prophecy.

If your algorithm decides that I am a flight risk, do you tell me? Do you tell my boss? Do you tell his boss?

As I talk to people who are gaining experience with these sorts of tools, they acknowledge that this info ‘makes its own gravy’. That is, the system starts to behave as if the prediction is true as soon as people think about it.

If I’m now seen as ‘likely to leave’ as the result of the algorithm, the rest of the organization backs away. ‘Short timers’ always have less trust and credibility in critical conversations. It’s smart to discount the opinion of someone who won’t be here. They don’t have ‘skin in the game’.

It’s not a small problem.

We're moving to the other side of the looking glass. Our 'wonderland' is developing labels and categorizations without regard to their human consequences. We measure and make claims about the truthiness of our new statistics. Meanwhile, our over-surveyed employees read every additional questionnaire as a request to tell management what it wants to hear.

It’s time for a conversation on the topic of the impact of continuous measurement on the population. Perhaps we should really test these things before we roll them out to an emotionally unprepared workforce.
