
 


“The PWC paper mentions ‘learning mishaps.’ A ‘learning mishap’ occurs when the machine doesn’t have all of the information but gives a recommendation or makes a decision nonetheless.” - John Sumser

We are lurching into the future.

In a well-reasoned paper (Will Robots Steal Our Jobs), PWC lumps AI with other forms of business automation. They forecast the eradication of about 30% of jobs by the mid-2030s. The change happens in waves as we learn to more fully embrace the emerging tech. The report alludes to job growth caused by various forms of automation but doesn’t spend much time imagining that growth.

In what seems like an afterthought, the PWC paper mentions ‘learning mishaps.’ A ‘learning mishap’ occurs when the machine doesn’t have all of the information but gives a recommendation or makes a decision nonetheless. As we develop skill at using machine inputs in our decision making, we will encounter ‘learning mishaps.’

The way that most organizational life works is a cycle. The organization strives for routine processes. Once they are in place, things run smoothly until chaos happens. The chaos is any of a thousand events, from a merger or reorganization to an economic downturn to a new product launch. Each chaotic event interrupts the routine. The organization works hard to re-establish a new routine that includes whatever was learned in the chaos.

That’s the cycle of organizational life. Routine precedes chaos, which precedes routine. As an organization, it is the conversion of chaos into routine that propels us forward. (Or maybe it’s the other way around, or maybe it’s both.)

In each transition from routine to chaos and back again, machines must be retrained. While they are being retrained, the quality of their decisions declines. This decision variability is impossible for the machine to understand. The humans who supervise the machine must be the ones to understand and evaluate the quality of the machine’s work.

Since machines can’t understand their own limitations (they are much like young humans in this regard), their supervisors and managers must do it for them. In our research, we call this the ‘Latency Problem.’ The difference between what the machine understands and reality is a high-risk source of discontinuities and ‘learning mishaps.’
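To make the ‘Latency Problem’ a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, the 0.25 threshold, and the time-to-fill numbers are assumptions made for the example, not anything drawn from the PWC report or from a particular product. It simply asks whether the data the machine sees today still looks like the data it was trained on.

```python
import statistics

def latency_gap(training_values, live_values, threshold=0.25):
    """Return True when live data has drifted far enough from the training
    data that the machine's picture of the world is suspect."""
    train_mean = statistics.mean(training_values)
    train_stdev = statistics.stdev(training_values)
    live_mean = statistics.mean(live_values)
    if train_stdev == 0:
        # No spread in the training data: any shift at all counts as drift.
        return live_mean != train_mean
    # Express the shift in the live data in units of the training spread.
    shift = abs(live_mean - train_mean) / train_stdev
    return shift > threshold

# Illustrative numbers: time-to-fill (in days) before and after a reorganization.
before_reorg = [32, 35, 31, 38, 36, 33, 34]
after_reorg = [48, 52, 47, 55, 50, 49, 51]

if latency_gap(before_reorg, after_reorg):
    print("The model's picture of the world looks stale; flag it for retraining.")
```

The supervising humans, not the machine, decide what counts as ‘far enough’ and when to trigger retraining.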

The machine’s ability to perform predictably is the result of a combination of factors:

  • Quality of the Underlying Model. For the most part, data models are judged to be acceptable if they can predict a percentage of last year’s results. That does not mean that they have a clear picture of reality (and that’s where some of the risk lies).
  • Training Effectiveness. The difference between 80% and something better comes from allowing the system to learn based on real-time data. This is a pure volume game. The more instances in the training set, the better the model will perform.
  • Latency. The machine sees the world through the narrow lens of things that can be measured. When circumstances change, it must retrain. Latency is the difference between the real world and the machine’s picture. At its worst, latency means things are garbled. At best, it’s like the difference between a color photo and a black-and-white version.
  • Task Complexity. Simpler tasks involving a repeatable decision tree and yes-or-no answers work best. Complicated real-time judgments in rapidly changing circumstances are much more difficult.
  • Data Quality. Self-reported data manually entered by humans is the worst. Data that comes directly from measurements that a machine makes is the best. There is every reason to believe that personality assessments will improve in quality because they are increasingly relying on machine-generated data.
  • Feedback Loop. When the machine errs or cannot perform (resulting in a blank screen or a call to a human for assistance), there must be a way to simultaneously answer the question and pass the feedback about the defect to the right place.
  • Self-Diagnostics. This is the trickiest. The machine should be able to tell you when it suspects that its answer is incorrect. (A rough sketch of how self-diagnostics and the feedback loop might fit together follows this list.)
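To show how the last two items might work together in practice, here is a second minimal sketch. Everything in it is an assumption made for illustration: the function names, the 0.80 confidence floor, and the toy model are hypothetical, not the behavior of any specific HR product. The idea is simply that a low-confidence answer gets routed to a person and logged so the feedback about the defect reaches the right place.

```python
defect_log = []

def escalate_to_human(question):
    # Stand-in for however the organization hands work back to a person.
    return f"ROUTED TO A HUMAN: {question}"

def answer_with_diagnostics(question, model, confidence_floor=0.80):
    """Answer only when the model is confident enough; otherwise route the
    question to a person and record the miss so the people retraining the
    model can see exactly where it is weak (the feedback loop)."""
    answer, confidence = model(question)  # model returns (answer, confidence from 0 to 1)
    if confidence >= confidence_floor:
        return answer
    defect_log.append({"question": question, "confidence": confidence})
    return escalate_to_human(question)

# Toy model that is only sure about one simple, yes-or-no question.
def toy_model(question):
    if question == "Is this requisition approved?":
        return ("Yes", 0.95)
    return ("Unsure", 0.40)

print(answer_with_diagnostics("Is this requisition approved?", toy_model))
print(answer_with_diagnostics("Should we restructure the sales team?", toy_model))
print(defect_log)  # the second question lands here for the training team
```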

With each new installation of an intelligent function, you need a ‘manager’ who:

  • Understands the whole organization well enough to troubleshoot multiple issues simultaneously
  • Understands the data sources well enough to evaluate their quality
  • Understands how to ‘fix’ the errant AI.

Implementing an AI project will involve training and retraining the tool itself, the users, the managers of the users, the managers responsible for the tool’s use and development, and the overall management team. It’s a complex undertaking and the foundation of an argument that you can’t implement AI without an overarching transformation process.

 



 