At its simplest, AI is the next level of abstraction in software. Within three or four years, every provider will be delivering some sort of predictive, learning, or recommendation service as part of its offerings.

The large providers of cloud processing and storage services (Amazon, Oracle, Google, et al.) are in an arms race to deliver usable AI tools at no charge to their clients. Their motivation is simple: AI consumes massive quantities of processing and storage, so growing the capabilities of their clients is good for business.

Meanwhile, there is a shortage of technologists who actually understand the implications and ramifications. A search of LinkedIn for data scientists shows slightly more than 22,000 PhDs. Even if that number is off by 100%, there are nowhere near enough competent practitioners.

But that doesn’t stop the marketing machines from deploying high-intensity hype campaigns. Since this is HR, and all sales in our industry boil down to cost-savings schemes, the claims have to do with efficiency rather than effectiveness. As a reminder: efficiency means getting things done quickly, while effectiveness means getting the right things done.

The argument that you are most likely to hear is, “Machines have a higher rate of accuracy than their human counterparts.” Sometimes, it’s framed as, “Machines perform measurably better than people.” Another version goes, “We let humans learn their jobs, why shouldn’t we do the same for machines?”

In each case, the defender of machine-led decision-making assumes that humans and machines make the same types of errors. They’re saying that any old 80% is the same as any other old 80%. For machines, it’s an 80% likelihood of repeating prior performance. For humans, it’s a question of whether they can make the right decision 80% of the time.

If this were a Pareto analysis, we’d be wondering whether the 80% is the 80% of the most important results or the 80% that is the most trivial. All we really know at this point is that machines can repeat history with 80% accuracy. The measure itself is a thing of nonsense rivaling Alice in Wonderland’s most tortured logic.
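The point that a single accuracy number hides which errors are being made can be shown concretely. Here is a minimal sketch using entirely synthetic, invented data: two hypothetical screening models score identically at 80% accuracy, yet one only misses qualified candidates while the other only lets unqualified candidates through.

```python
# Two screening models with identical 80% accuracy but very different errors.
# All data below is synthetic, invented purely to illustrate the point.

def accuracy(predictions, truth):
    """Fraction of predictions that match the ground truth."""
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Ground truth for ten candidates: 1 = qualified, 0 = unqualified.
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Model A errs only on qualified candidates (it misses good hires).
model_a = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

# Model B errs only on unqualified candidates (it lets bad hires through).
model_b = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

print(accuracy(model_a, truth))  # 0.8
print(accuracy(model_b, truth))  # 0.8
```

Both models report the same headline number, but the cost of their mistakes is completely different, which is exactly why "80% accurate" tells you almost nothing on its own.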

‘I know what you’re thinking about,’ said Tweedledum; ‘but it isn’t so, no how.’

‘Contrariwise,’ continued Tweedledee, ‘if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.’

‘I was thinking,’ Alice said politely, ‘which is the best way out of this wood: it’s getting so dark. Would you tell me, please?’

– Through the Looking Glass

This is not to say that all AI offerings (or even most) are flagrant examples of hype-infested vaporware. They are not. We are seeing tremendous strides being made by the majority of providers. The companies offering intelligent software almost all look like laboratories or innovation centers. The workers diligently spend their time attempting to build models that their offerings can use to make our lives easier.

But this is new territory. Harnessing these powerful new technologies requires a different kind of thinking. This is not more of the same old software. The products and services offered under the rubric of AI make decisions and recommendations that used to be the province of human beings. We don’t really know if they are any good at it. We don’t really understand the consequences of using the tools.
