
Tom Janz, HRExaminer.com Editorial Advisory Board Contributor

With this piece, we introduce Tom Janz as a member of our Editorial Advisory Board. Tom has been a leading-edge thinker in I/O psychology and predictive analytics throughout his career. Here, he takes me to task for the opinions I laid out in Predictive What? a couple of weeks ago. Above all, HRExaminer’s utility as an industry analyst firm comes from the diversity of opinion we deliver. We’d rather our deliverables make you think more and be certain less. – John Sumser


John, You Ignorant #$X!&

John: I’m right there with you when it comes to lamenting the low levels of predictive power being brought to bear on hiring the best people from among those who want the job. But let’s get clear about the difference between what we know and what we do. We know a lot more about how to predict performance value than your column asserts, with a whine but little evidence. We don’t do much with that knowledge, however. Got your back there.

Sure, you reference the busy brain and the computing power needed to simulate brain activity. Fascinating, but not evidence to support the whine. You invoke your, my, and Walter White’s favorite: Heisenberg. Again, points for tapping an emotional warm spot, but no points for evidence. Quantum mechanics is not people science. You cited Goodhart’s Law, but when I did that annoying thing known as “checking it out,” it meant something else. Per the Business Dictionary, Charles Goodhart stated that when an economic measure becomes the target of government policy, it can no longer be used as the measure. No doubt citations could be rustled up for the notion that people often initially see what they expect to see, but that is why careful measurement forms the bedrock of science. Evidence involves quantities, metrics, measurements, observations, and experiments. Pithy phrases and witty asides, not so much.

You offer the opinion that 100 questions falls way below the number needed to predict human performance or repair an organizational conflict. You reference SHRM’s certification exams as having “way more” questions. And clever you for knowing that even 100 questions is more than over 90% of top-performing candidates are willing to hang in there for when filling out job applications on their smartphones. So how about six questions then? Surely that could never work! Yet field studies I have conducted on sixty senior managers at a Saudi bank, backed up by a second set of studies of 204 sales professionals in 13 growth-oriented web companies in the US, find that a six-item measure of Personal WorkSTYLE correlated .24–.28 with 3-year averaged appraisal ratings for the bank, and 3-year averaged sales records for the web companies. Boom. Yet another beautifully elaborate theory laid to rest by a nasty little fact. But we can do so much better than predictive accuracy in the mid .20s (or the mid .30s if corrected for criterion unreliability, as some authors do).
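For readers curious about that parenthetical: it refers to the classic Spearman correction for attenuation, which divides the observed correlation by the square root of the criterion’s reliability. Here is a minimal sketch; the .26 observed validity and .52 criterion reliability below are illustrative numbers of my choosing, not figures from the studies above.

```python
import math

def correct_for_attenuation(r_observed: float, criterion_reliability: float) -> float:
    """Spearman's correction for criterion unreliability:
    r_corrected = r_observed / sqrt(r_yy)."""
    return r_observed / math.sqrt(criterion_reliability)

# Illustrative numbers: .26 sits in the "mid .20s" observed range, and .52
# is a commonly assumed reliability for supervisory performance ratings.
corrected = correct_for_attenuation(0.26, 0.52)
print(round(corrected, 2))  # prints 0.36 -- the mid .30s mentioned above
```

With a perfectly reliable criterion (reliability of 1.0) the correction changes nothing, which is the sanity check to run on any disattenuation formula.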

The most widely read reference standard on the power of employment prediction methods, even among thought leaders in recruiting, is the 1998 Schmidt and Hunter article in Psychological Bulletin titled “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.” And that was nearly 20 years ago, so now there are more than 100 years of findings. Mental ability wins top spot as the best single predictor across all jobs, coming in at a corrected average correlation of .54. Structured interviews, mainly behavioral, come in second at .53. Combining mental ability and structured interviews boosts predictive power into the low .60s. Back to the 2016 research on sales professionals drawn from 13 web companies: a combination of fake-resistant measures of integrity, intelligence, and energy delivered cross-validated prediction power in the mid .60s, using a 10-fold multi-validation process implemented in the statistical package R.
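That analysis was run in R; as a rough, hypothetical illustration of what “10-fold cross-validated prediction power” means (not a reproduction of the actual study), here is a stdlib-only Python sketch. The data are synthetic stand-ins for assessment scores and performance outcomes, and all numbers are invented for the example.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def kfold_validity(xs, ys, k=10, seed=0):
    """Average held-out correlation between predicted and actual scores:
    fit a one-predictor least-squares line on each training split, then
    correlate its predictions with the outcomes in the held-out fold."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rs = []
    for hold in folds:
        held = set(hold)
        train = [i for i in idx if i not in held]
        tx = [xs[i] for i in train]
        ty = [ys[i] for i in train]
        mx, my = statistics.fmean(tx), statistics.fmean(ty)
        slope = sum((a - mx) * (b - my) for a, b in zip(tx, ty)) \
            / sum((a - mx) ** 2 for a in tx)
        intercept = my - slope * mx
        preds = [intercept + slope * xs[i] for i in hold]
        actual = [ys[i] for i in hold]
        rs.append(pearson_r(preds, actual))
    return statistics.fmean(rs)

# Synthetic stand-ins: 204 "sales professionals" with a composite assessment
# score (x) and a noisy performance outcome (y). Numbers are illustrative only.
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(204)]
y = [0.6 * xi + rng.gauss(0, 0.8) for xi in x]
print(round(kfold_validity(x, y), 2))
```

The point of scoring on held-out folds rather than the training data is exactly the point above: it is how a vendor can honestly answer “How do you know your predictive accuracy, and how sure are you?”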

To make great hiring decisions you first must hire and optimize the right hiring methods. Hiring new-fangled online testing sites because they: A) have lots of animated and colorful images, B) feature realistic graphics that make your brand look appealing, C) produce edgy infographics in pastel shades, D) make incredibly detailed predictions about what candidates will do, or E) are being used by Google/Apple/Facebook/GE/IBM/OtherBigCo will invariably lead to hiring services that can’t answer the simple question, “What is the predictive accuracy of your service?” If they do answer that question, you really need to come back with: “Great. How do you know that? How sure are you that it is that predictive? How did you arrive at that level of confidence?” Otherwise, good luck.

And even hiring an assessment service that delivers strong prediction power doesn’t mean your hiring managers will actually use the analytics it provides to guide decisions. A conversation with a fraud and risk director at a global banking firm was revealing. He described how they were moving the candidate vetting process from HR to the fraud and risk group. It seems that hiring managers routinely used the firm’s appointed assessment service, received the 30+ page reports, and then just as routinely filed them in the circular file, since they were way too complicated to help with choosing the right candidate, even after the field had been narrowed to the traditional “short list.” So, the road is long and the climb to deploying best practices steep. But let’s at least acknowledge what we know and forgo the wrong type of lamentations.
