On HR Prediction Algorithms: “Expect to see retention rate guarantees, recruiting pipeline guarantees, and maybe even some actual business performance guarantees.”

Algorithm developers build their tools using the following process:

  1. Get a big pile of historical data for the thing you want to predict.
  2. Write a complex mathematical formula that comes close to predicting the pile of historical data (maybe 75% or 85% accurate).
  3. Create a black box of correction variables that makes up the difference, so that the algorithm comes very close to precise.
  4. Test the algorithm against the historical data. If you can predict the history with precision, the algorithm is validated.
  5. Add the machine learning function. (That means add new data to the history pile and automatically adjust as new things happen.)
  6. Go.
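The six steps above can be sketched in a few lines of code. This is a deliberately toy illustration, not any vendor's actual method: the data, the "formula" (an ordinary least-squares line), and the residual table standing in for the black box are all invented for the example.

```python
# Step 1: a pile of historical data -- hypothetical (years_of_tenure,
# attrition_rate) pairs.
history = [(1, 0.30), (2, 0.24), (3, 0.20), (4, 0.15), (5, 0.12)]

def fit_formula(data):
    """Step 2: a rough formula (a least-squares line) that comes close
    to predicting the pile, but not exactly."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    slope = (sum((x - mx) * (y - my) for x, y in data)
             / sum((x - mx) ** 2 for x, _ in data))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

formula = fit_formula(history)

# Step 3: the "black box" -- per-case corrections that absorb whatever the
# formula missed. This is the part nobody can explain afterward.
black_box = {x: y - formula(x) for x, y in history}

def predict(x):
    return formula(x) + black_box.get(x, 0.0)

# Step 4: validate against the historical data -- with the corrections in
# place, history is reproduced essentially exactly.
assert all(abs(predict(x) - y) < 1e-9 for x, y in history)

def learn(new_point):
    """Step 5: the 'machine learning function' -- add new data to the
    history pile and automatically re-fit."""
    global formula, black_box
    history.append(new_point)
    formula = fit_formula(history)
    black_box = {x: y - formula(x) for x, y in history}

# Step 6: go.
learn((6, 0.10))
```

Note what validation buys you here: the corrections guarantee a perfect score on history, which says nothing about cases the history never contained.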

There are a couple of things to take away from this simplified workflow:

  • Machines and machine learning are valuable predictors when history repeats itself or when all of the variables can be considered.
    The difficulty here is that no history is perfect, that great organizations depend on departing from history, and that a focus on repeatability drives the algorithm into ever-tighter alignment with the status quo.
  • The ‘black box’ in step three above is where the liability questions live. 
    When you talk to algorithm writers, this is where they readily admit that they cannot describe the decisions their algorithm makes. The process is a lot like shimming a table when the floor and the table are out of alignment (using thin strips of wood to solve a leveling problem). When the 'shimming' is mathematical, it means the algorithm 'does stuff' in order to make the outcome align with history. Because you can only see the results as outcomes, the system makes recommendations before you can tell whether they will create liability.
  • While machine learning algorithms may get better with experience, there is a large question associated with how to treat them while they are learning.
    For example, the recent fatal Tesla accident is a critical example of what happens while the machine learning is getting better. In one way or another, algorithms fail when they encounter experience that diverges from history. (Everything interesting about adaptability, agility and innovation is a divergence from history.) In other words, the better an algorithm gets, the less likely it is to produce innovation. In automobiles, less innovation is a good idea. In organizations, the question is more variable.
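The divergence problem in the last bullet can be made concrete with a toy predictor: one that is exact whenever history repeats itself, and falls back to the status quo for anything new. The cases and outcomes below are entirely hypothetical.

```python
from statistics import mean

# Invented history: (role, hours_of_training) -> retained for a year?
seen = {("analyst", 10): True, ("analyst", 2): False,
        ("engineer", 12): True, ("engineer", 3): False}

def predict_retention(case):
    if case in seen:
        # History repeats itself: the prediction is perfect.
        return seen[case]
    # Divergent case: all the predictor can do is echo the majority of
    # history -- i.e., the status quo.
    return mean(1 if v else 0 for v in seen.values()) >= 0.5

predict_retention(("analyst", 10))   # a repeat of history: exact answer
predict_retention(("designer", 11))  # a novel case: just the status quo
```

The predictor scores 100% on its own history and has nothing specific to say about the designer it has never seen, which is exactly the gap the bullet describes.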

Great HR recommendation or prediction schemes probably cannot rely on historical data alone. One way Amazon manages these same dynamics in its recommendations is by including the history of other relevant users. That might look like: “In this case we recommend X, but you might consider that Y also worked in this other case.”
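The 'X, but Y also worked elsewhere' pattern might be sketched like this. The situations, interventions, and peer outcomes are all invented for illustration; a real system would draw on actual comparable-case data.

```python
from collections import Counter

# Hypothetical peer history: situation -> interventions that worked in
# other, comparable cases.
peer_outcomes = {
    "high-performer, flight-risk": ["raise", "raise", "new-project", "raise"],
    "new-hire, disengaged": ["mentor", "mentor", "team-change"],
}

def recommend(situation):
    tried = Counter(peer_outcomes.get(situation, []))
    if not tried:
        return "no history for this situation"
    ranked = [action for action, _ in tried.most_common()]
    primary, alternatives = ranked[0], ranked[1:]
    msg = f"In this case we recommend {primary}"
    if alternatives:
        msg += f", but {', '.join(alternatives)} also worked in other cases"
    return msg + "."

print(recommend("high-performer, flight-risk"))
# -> In this case we recommend raise, but new-project also worked in other cases.
```

Surfacing the alternatives, not just the top pick, is what keeps the human in the loop when the primary recommendation turns out to be a product of the status quo.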

When I talk with the vendors who might be delivering these sorts of tools (Automated Predictions and Recommendations in HR), their initial inclination seems to be to blame the user if a bit of prediction goes awry. They say things like:

  • Algorithms cannot control the bias that already exists in the system;
  • They are recommendations, after all; or
  • The right way to use that information is to take time to think about it before acting on it.

It will always be interesting to ask a prediction/recommendation provider whether they will guarantee the results of their forecast.

But you can also expect to see companies that price their services like insurance offering guarantees of performance coupled with clear performance requirements on both sides of the deal. Expect to see retention rate guarantees, recruiting pipeline guarantees, and maybe even some actual business performance guarantees.

Pay attention to the companies with that level of confidence.

Over time, HR will earn its stripes as a guarantor of business results. The path to that eventuality may be a bit rocky.