The tools are becoming so widespread that the people in charge of analytics at the client may not be aware that line managers are using them to make decisions. If you were the tiniest bit paranoid, you might think the machines were creeping into the workflow.
Currently, most enterprise HR systems come with embedded predictive tools that
- Assess personality and recommend interaction strategies as a way of heading off predicted communication problems
- Predict the likelihood that a specific employee will leave or stay
- Predict the likelihood of a ‘fit’ on a team or in a company
- Offer advice and action recommendations based on a combination of personal history and so-called ‘best practices’
- ‘Benchmark’ processes based on quantified historical outcomes from anonymized sources
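Under the hood, these features are mostly garden-variety statistical models. Here is a minimal sketch of what an attrition predictor might look like; the features, the data, and the model choice are my assumptions for illustration, not any vendor’s actual product:

```python
# A toy attrition-risk predictor, assuming a simple logistic regression
# over made-up features. Real products are far more elaborate, but the
# shape is the same: fit on history, score the individual.
from sklearn.linear_model import LogisticRegression

# Hypothetical records: [tenure_years, engagement_score, manager_changes]
X_history = [
    [0.5, 2.1, 3],
    [4.0, 4.5, 0],
    [1.2, 3.0, 2],
    [6.5, 4.8, 1],
]
y_left = [1, 0, 1, 0]  # 1 = the employee left, 0 = the employee stayed

model = LogisticRegression().fit(X_history, y_left)

# 'Predict the likelihood that a specific employee will leave or stay'
employee = [[1.0, 2.5, 2]]
print(f"Predicted attrition risk: {model.predict_proba(employee)[0][1]:.0%}")
```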
On some levels, this is nascent Artificial Intelligence.
It’s important to remember that AI is more like a really smart four-year-old than a really seasoned manager. Expect these early forms of AI to make big mistakes. Unchecked, those mistakes will spiral out of control.
- Predictive algorithms always reinforce the status quo. To inquire into ‘fit’ is to ask if we can perpetuate the current situation.
- Predictive algorithms always amplify the bias that exists in the organization. It’s just what they do.
- Predictive algorithms influence the way decisions are made. Once they are turned on, they start writing the very history they will later learn from.
- Predictive algorithms use data from the decisions they have influenced to make the next prediction (see the sketch after this list).
- Predictive algorithms predict a future that is just like now, only more so.
- Predictive algorithms cannot predict disruptive change in an actionable way.
- Predictive algorithms are deemed successful if they generate the right results, even though their authors often do not understand what they actually do. The test for success is usually ‘Can it predict what already happened with precision?’ That means the recursion problem (making things more like they already are) is never addressed in testing. You can only see it in later results.
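Here is the sketch promised above: a toy simulation of that feedback loop, in which the model only gathers outcome data on the people it recommends. Every number is invented; the dynamic is the point.

```python
# Toy feedback-loop simulation. Two groups are identical in reality,
# but one starts with a biased low estimate in the historical data.
# Because the model only learns from the decisions it influenced,
# the biased estimate is never challenged.
import random

random.seed(1)
TRUE_SUCCESS = 0.70      # identical for both groups in reality
THRESHOLD = 0.60         # recommend hiring only above this estimate

# Biased history: [successes, trials] per group
history = {"A": [75, 100], "B": [55, 100]}

for year in range(5):
    for group in ("A", "B"):
        estimate = history[group][0] / history[group][1]
        if estimate > THRESHOLD:
            # Hired candidates generate new outcome data...
            for _ in range(100):
                history[group][0] += random.random() < TRUE_SUCCESS
                history[group][1] += 1
        # ...rejected candidates generate none.
    a = history["A"][0] / history["A"][1]
    b = history["B"][0] / history["B"][1]
    print(f"year {year}: estimated success A={a:.2f}  B={b:.2f}")
```

Run it and group A’s estimate converges toward the true 70 percent, while group B’s stays frozen at its biased starting point. The status quo isn’t just preserved; it becomes unfalsifiable.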
These factors make predictive tools more than a little dangerous. A technology that recommends decisions based on history, and then uses the results of those decisions as data for the next decision, is destined to spiral out of control. Every algorithm deployed this way has this recursive quality.
No one knows how to correct it.
Worse yet, the designers of these algorithms can rarely explain why they produce the results they produce.
That means the only way to tell whether your predictive tool is out of control is by examining the results, and by the time you can examine the results, the liability has already been incurred.
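If examining results is the only defense, then at minimum the results should be examined systematically, window by window, against the accuracy the model showed on history. A sketch, with placeholder numbers and a made-up alert margin:

```python
# Compare live outcomes against the backtest baseline and flag drift.
# The baseline, margin, and data below are placeholders.
BASELINE_ACCURACY = 0.82   # accuracy the model showed on historical data
ALERT_MARGIN = 0.05        # how far live accuracy may fall before alerting

def check_window(predictions, outcomes):
    """Score one window of live decisions against what actually happened."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = hits / len(predictions)
    if accuracy < BASELINE_ACCURACY - ALERT_MARGIN:
        print(f"ALERT: live accuracy {accuracy:.2f} vs baseline {BASELINE_ACCURACY:.2f}")
    return accuracy

# Example: a quarterly review of stay/leave predictions (1 = left)
check_window(predictions=[1, 0, 0, 1, 0, 0, 1, 0],
             outcomes=[0, 0, 1, 1, 0, 1, 1, 0])
```

Note what the alert cannot do: it fires only after the bad quarter is over. The monitoring is necessary, but the liability has already been incurred by the time it speaks up.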
- Predictive algorithms are really bad at systemic improvement. In order to use them to make things better, you have to be very precise about the results you are after.
- Predictive algorithms always produce unintended consequences.
- Predictive algorithms are never as precise as they pretend to be.
Before you can use predictive algorithms well, you must have a clear, simple picture of what you want. What people want varies wildly, and that makes prediction very, very difficult.
The concept of ‘algorithmic safety’ is just emerging in the labs. Most competent algorithm writers acknowledge much of what I’ve laid out here. The problem is that it’s intoxicating to build a tool that predicts accurately, even if it is only accurate about what already happened.