The job interview is a poor predictor of success. While current interviewing practices are ineffective, they are also familiar and comfortable to your managers. Bob Corlett begins a two-part series on how you can engineer a new hiring sequence that works.

What is a job interview supposed to do?

  • Predict which candidate will be most successful in the role.
  • Encourage that ideal candidate to accept the job offer.

Most interviews accomplish neither goal.

Interviews play a pivotal role in the hiring decision, but that role may be unwarranted. Extensive research shows that the typical job interview is a poor predictor of success.

It’s no wonder employers are dissatisfied with the uncertainty inherent in hiring. Even when the established procedures are followed to the letter, a terrific hire is more of a surprise than a certainty. Consequently, making a hiring selection is one of the riskiest decisions managers must face. There are inherent risks in every aspect of the hiring process, but they pile up very quickly during the interview. Due to a series of cognitive errors and unexamined biases, common interview practices are far more likely to repel a top candidate than to predict their success on the job. Hiring results are undermined at every turn by faulty opinions rather than supported by solid evidence.


Bob Corlett, President and Founder of Staffing Advisors and HRExaminer Editorial Advisory Board Member

An evidence-based approach to interviewing eliminates a) the tendency to value opinion over evidence and b) the over-reliance on “rules of thumb” instead of data. This is similar to the analytics-based approach that has likely swept through every other functional area in your organization.

Building a Case for Change

Your current interviewing practices may be ineffective, but they are certainly familiar and comfortable to your managers. So how can you build a case for change?

Think of the last five people who either quit or were fired from your organization. Who was blamed for their departure? Most organizations blame the person who left. This reflex makes it hard to change your practices, because blaming the exiting employee avoids internal accountability for their selection.

Process improvement requires tracking metrics that demonstrate whether your interviews achieved your desired outcomes:

  • Did you hire people who were measurably better than their peer group? More competent than other people working in similar roles?
  • Did your ideal candidates readily accept your job offers?
  • How did your new hires work out in the long run? Did they stay more than 3 years? Get promoted?
  • Did your hiring results vary by manager or department?
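Tracking outcomes like these requires very little tooling. As an illustration only (the data model, field names, and three-year threshold below are hypothetical, not something the article specifies), a minimal Python sketch that summarizes offer and hire outcomes per manager might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    manager: str                           # hiring manager (hypothetical field)
    accepted: bool                         # did the candidate accept the offer?
    tenure_years: Optional[float] = None   # tenure if hired, else None
    promoted: bool = False                 # promoted at least once after hire?

def hiring_metrics(offers, retention_years=3.0):
    """Per-manager summary: offer-acceptance rate, share of hires
    retained past `retention_years`, and promotion rate among hires."""
    by_manager = {}
    for o in offers:
        by_manager.setdefault(o.manager, []).append(o)
    out = {}
    for manager, group in by_manager.items():
        hires = [o for o in group if o.accepted]
        out[manager] = {
            "offers": len(group),
            "acceptance_rate": len(hires) / len(group),
            # Retention and promotion are only meaningful for actual hires.
            "retained_rate": (
                sum(o.tenure_years is not None and o.tenure_years >= retention_years
                    for o in hires) / len(hires)
            ) if hires else None,
            "promotion_rate": (
                sum(o.promoted for o in hires) / len(hires)
            ) if hires else None,
        }
    return out

# Example: two accepted offers under one manager, one declined offer under another.
metrics = hiring_metrics([
    Offer("Ana", True, tenure_years=4.2, promoted=True),
    Offer("Ana", True, tenure_years=1.1),
    Offer("Ben", False),
])
print(metrics)
```

The point is not this particular code, but that "did our interviews work?" becomes answerable only once outcomes like acceptance, retention, and promotion are recorded per hire and broken out by manager or department.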

“Hell is other people.” – Jean-Paul Sartre, No Exit

One interpretation of Sartre’s famous line is that we are so afraid of being judged negatively by others that we constantly adjust ourselves to conform when someone else is present. So, since interviewing means putting other people in hell, it sure would be great to know it was worth it.

Sadly, it’s not. Research shows that Sartre was right. We are all poor judges of other people.

When Jack Welch held the top job at GE, he was famous for his “Rank and Yank” employee appraisal methods. Every year, promote the A players and push out the C players. This practice rested on the (deeply flawed and unexamined) assumptions that employee performance can be fairly assessed, and that most employees perform at a similar level year over year. But when Wharton professor Peter Cappelli tested those assumptions (by examining all of the performance appraisal data from a major US corporation over the course of 6 years), his research called the entire GE performance appraisal system into question. Cappelli found that “…knowing this year’s scores explained only one-third of the next year’s scores across the same employees.”

In short, this year’s rating is a weak signal of next year’s: good performers do not reliably remain good performers, and vice versa.

What Predicts Good Ratings?

Now here’s the really bad news. Cappelli’s research also showed that hard work had little to do with performance ratings.

The best predictor of a good rating was the employee’s demographic similarity to the boss. Or as Cappelli puts it, “how you and your appraiser map onto each other. Are you similar? Then you get higher scores. The more different you are in terms of ethnicity or age or sex, the less well you’re going to do.”

But the bad news does not stop there. As Cappelli notes in another article:

“The belief in the A player, B player, C player model is consistent with The Fundamental Attribution Error, a very common bias where we assume that the actions of individuals are caused by who they are rather than the circumstances around them… hence, we see employees performing poorly as being chronically bad employees.”

Those are some damning conclusions. Managers tend to base ratings on factors unrelated to performance. And employees tend to get credit or blame for circumstances beyond their control. Remember, this research examined how we rate and evaluate the people we actually work with every day. What does that portend for our ability to judge a stranger in an interview?

Most HR Data is Bad Data

Marcus Buckingham corroborated Cappelli’s research, citing three psychometric studies involving half a million participants.

Buckingham concluded in an article in the Harvard Business Review, “Neither you nor any of your peers are reliable raters of anyone. And as a result, virtually all of our people data is fatally flawed.”

One issue he investigated was the Idiosyncratic Rater Effect, in which:

“My rating of you on a quality such as ‘potential’ is driven not by who you are, but instead by my own idiosyncrasies—how I define ‘potential,’ how much of it I think I have, how tough a rater I usually am. This effect is resilient—no amount of training seems able to lessen it. And it is large—on average, 61% of my rating of you is a reflection of me.”

And the more skilled an evaluator becomes in a particular area, the more harshly they rate everyone else in that area by comparison. (This is often referred to as a “Shifting Baseline.”)

Buckingham ponders, “…if we thought for one minute that these ratings might be invalid, then we have to question everything we do to and for our people. How we train, deploy, promote, pay, and reward our people, all of it would be suspect.”

Again, this makes the interviewing dilemma even more daunting. How can we rely on bad data to make good decisions?

In hiring, your information is incomplete. You don’t know much about a candidate’s work environment and how it compares to your own. You must actively struggle against the structure of the interview conversation itself, which is awkwardly stilted and uncomfortable as compared to a normal conversation. (With no shared context, interviewer and candidate find it far too easy to misinterpret each other’s meanings.) Your colleagues might join you in the interview process, creating complicated group dynamics that rarely result in better decision-making.

Beyond the questions you ask candidates, there may be problems with the design of the job itself. Defining the performance expectations, understanding which competencies drive impact, assessing those competencies, and then forecasting how the job might change in the future are complex cognitive tasks. Each element is crucial to successful hiring, but each is hard to define.

Yet, despite the obstacles, you can still engineer a hiring sequence that works. You just need to adjust your approach.

But, more on that in Part 2 of this article.


