HRIntelligencer 1.05

On June 27, 2017, in HR Intelligencer, HRExaminer, by John Sumser

Big Picture

  • Burned by the bots: Why robotic automation is stumbling. From McKinsey, echoes of ‘who’d have guessed that things are complicated?’ The prognosticators weren’t people who actually do the work. Standard processes often turn out to have many permutations, and programming bots to cover all of them can be confounding. And the bots require ongoing maintenance and supervision.
  • How Blockchains Could Transform AI. Much of what looks unique in current AI is really a reuse of open-source machine learning material. Our ability to trust and understand the ML we see depends on our ability to understand its foundations. Blockchains offer an interesting approach to documentation and source attribution.
  • Why Do So Few People Major in Computer Science? via Hung Lee. A thoughtful inventory.

HR’s View

  • FairML: Auditing Black-Box Predictive Models. This article looks at criminal sentencing and bail decisions to examine the fairness of ML applications. Worth understanding because the same principles apply to many HR ML apps.
  • Venture Capital Firms With More Teenage Daughters Perform Better. “Parenting daughters reduces the bias that one has towards women, which leads to more female hires,” the researchers wrote, noting that firms whose partners have more daughters over the age of 12 also show even greater gender diversity. “This is consistent with fathers observing potential gender biases that their daughters face as they get older.”

Execution

Tutorial

Quote of the Week

“Doctors using AI today are expected to use it as an aid to clinical decision-making, not as a replacement for standard procedure. In this sense, the doctor is still responsible for errors that may occur. However, it is unclear whether doctors will actually be able to assess the reliability or usefulness of information derived from AI, and whether they can have a meaningful understanding of the consequences of those actions.

This inability arises from the opacity of AI systems, which—as a side effect of how machine-learning algorithms work—operate as black boxes. It’s impossible to understand why an AI has made the decision it has, merely that it has done so based upon the information it’s been fed. Even if it were possible for a technically literate doctor to inspect the process, many AI algorithms are unavailable for review, as they are treated as protected proprietary information. Further still, the data used to train the algorithms is often similarly protected or otherwise publicly unavailable for privacy reasons. This will likely be complicated further as doctors come to rely on AI more and more and it becomes less common to challenge an algorithm’s result.” – When AI Botches Your Medical Diagnosis, Who’s to Blame?

About

Curate means a variety of things: from the work of a vicar entrusted with the care of souls to that of an exhibit designer responsible for clarity and meaning. At its core, it seems to mean something about the importance of empathy in organization. HRIntelligencer is an update on the comings and goings in the Human Resource experiment with Artificial Intelligence, Digital Employees, Algorithms, Machine Learning, Big Data, and all of that stuff. We present 8 to 10 links with some explanation. The goal is to give you a way to surf the rapidly evolving field without drowning in information.

Read previous post:
AI / ML / Big Data / Predictive Analytics: Risks, Ethics, Liability Part 1 of 4
