
HRx Radio – Executive Conversations: On Friday mornings, John Sumser interviews key executives from around the industry. The conversation covers what makes the executive tick and what makes their company great.

HRx Radio – Executive Conversations

Guest: Mike Hudy & Isaac Thompson, Modern Hire
Episode: 360
Air Date: April 10, 2020

 

Transcript

 

Important: Our transcripts at HRExaminer are AI-powered (and fairly accurate) but there are still instances where the robots get confused and make errors. Please expect some inaccuracies as you read through the text of this conversation. Thank you for your understanding.

Full Transcript with timecode

John Sumser 0:13
Good morning and welcome to HR Examiner’s Executive Conversations. I’m your host, John Sumser. Today we’re going to be talking with Mike Hudy and Isaac Thompson from Modern Hire. Modern Hire is the company that results from merging Montage, which is a video interviewing platform, and Shaker, which was a deep-research assessment modeling company, into a single offering that gets you some level of advanced screening in the hiring process. So Mike, Isaac, how are you?

Mike Hudy 0:46
Doing well, doing well. Good morning John.

John Sumser 0:48
So would each of you take a moment and introduce yourself?

Mike Hudy 0:51
Sure, glad to. This is Mike Hudy; I’m the Chief Science Officer at Modern Hire. As for background, I have a PhD in Industrial-Organizational Psychology, and I’ve been in the talent assessment space, pre-employment assessments, talent acquisition, pretty much my whole career, 20-plus years. I’m also one of the founding partners of Shaker International, the assessment research side that John mentioned.

John Sumser 1:21
And Isaac, Isaac how are you?

Isaac Thompson 1:22
Yeah. Thanks for having us on, John. I’m Isaac Thompson. I first joined Modern Hire in 2017; I was the first data scientist, it was a small company, and I came from Red Hat. You know, my job has been building out solutions that have never existed, AI solutions, and that’s a curvy road. Hopefully we can get into the interesting findings along that road in this podcast. What we found with AI is amazing, and there is a story there.

John Sumser 1:23
So Modern Hire is this mashup of two companies that happened around the end of last year. What does the combined company do?

Mike Hudy 1:58
So the simple way of describing it is: we help companies make better, faster, more fair decisions about talent in their hiring process, and at the same time we’re creating personalized, engaging experiences for candidates. That’s the high level: why does it exist, what are we trying to do? That’s what Modern Hire is all about. Getting into how we do it, and John, you touched on it already a bit: our platform combines pre-employment assessment with interview technology, interview technology being video, audio, and text interviews, on-demand and live, plus automated scheduling. So those are all the tools we have in our toolkit, and underlying it all is best-in-breed science on that platform, which includes advanced analytics and artificial intelligence, like Isaac referenced, all on one platform. We’re fortunate enough that we actually work with big companies; we work with 47 of the Fortune 100. Just to give you a feel for the kind of scale we’re working with, in 2019 we touched over 20 million candidates, either through assessment or interview tech. And what we’re seeing so far in 2020, in the midst of the COVID-19 crisis, is that volumes are actually ramping up; we’re seeing even more candidate touches here in 2020.

John Sumser 3:17
So that’s great. So, essentially, what you’re saying is that the company offers a technical solution that should be more cost-effective at managing the front end of recruiting, and you use an entire bank of technology to get there, some of which is AI. In all of that, in order to make a company fly, there has to be a sort of underlying big question. What’s the big question?

Mike Hudy 3:42
Yeah, I love that question, and we’re pretty passionate about it. When we brought these two organizations together, the real problem we were solving, the thesis we came together on, is that hiring is fundamentally broken. If you look at the typical hiring process, it consists of a series of disjointed, point-in-time solutions, where candidates get handed off from one vendor to another in that journey, and that creates a poor candidate experience. But the other thing it does is create a loss of data. So, for instance, a candidate comes in, hits an applicant tracking system, fills out some information, provides some information; that’s one hurdle. Then they move on to the next vendor, which might be a pre-employment assessment. Their data doesn’t travel with them; it just stays with the applicant tracking system. And then on to the next vendor: the assessment provider administers an assessment, scores it up, it’s potentially used as a hurdle, and then they move on to the next step of the process. Should they advance, again, the data doesn’t travel with them. So really the big problem we were trying to solve is that brokenness, and to say, okay, what it really should be is a seamless, personalized conversation between company and candidate, from post-apply to hire. With all of that data, you continue to learn about the individual, and as you learn, you add that data to the equation, you get a more robust understanding of the individual, and you’re in a better position to make smarter decisions as a result.
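
To make the contrast concrete, here is a minimal sketch of the “data travels with the candidate” idea: one record that accumulates signals across hiring stages instead of leaving them siloed with each vendor. The class, stage names, and fields are invented for illustration, not Modern Hire’s data model.

```python
# Sketch: a candidate record that accumulates data across hiring stages,
# instead of each vendor keeping its own silo. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class CandidateJourney:
    candidate_id: str
    data: dict = field(default_factory=dict)  # grows at every stage

    def add_stage(self, stage: str, signals: dict) -> None:
        """Merge a new stage's signals instead of discarding prior data."""
        self.data[stage] = signals

journey = CandidateJourney("c-123")
journey.add_stage("ats_application", {"years_experience": 4})
journey.add_stage("assessment", {"conscientiousness": 0.7})
journey.add_stage("video_interview", {"customer_service": 0.8})
# A downstream scoring model can now see all stages at once:
print(journey.data)
```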

John Sumser 5:03
That’s pretty interesting. Am I hearing you say that the roadmap includes an ATS and all the places where the data is stored? Or what’s the answer that you offer to the problem that you’ve described?

Unknown Speaker 5:15
We never want to be an ATS; we’re not going to go there. But what we can do is take the data that’s collected in an ATS and add it into our algorithms to process alongside pre-employment assessment data, interview data, and any other data we can get, on that journey from post-apply, meaning post-ATS, all the way to hire.

John Sumser 5:35
Cool.

Isaac Thompson 5:35
On the big question: one of the interesting things about Montage is that it was built on the foundation of virtualizing interviews, and Shaker was built on the foundation of virtualizing assessments, especially trying out the job, these high-fidelity assessments. So in a virtual world, you still have to figure out how to pair skilled and motivated labor with those opportunities, and you have to do that virtually. That’s where the AI comes in. And within AI, what we’re dealing with today, as Hudy was talking about, is how to create a generalizable AI that would look across data types: resumes, interviews, situations, even acoustic data. We want to train this AI to understand, and to have confidence and surety around its predictions, how it promotes or represents the human to decision makers. But at the end of the day, the way I look at it is: rather than being a pure judge, our platform has to be a facilitator of that conversation between labor and opportunity. Hopefully we can be that facilitator, and be a platform that allows that conversation to happen, even when things in the world, like human biases, try to push that conversation away.

John Sumser 6:50
And in that sort of scenario, you have to have a really solid grasp of the underlying culture of the company. So are you imagining assessing that, or doing some work on this “what’s the context that’s larger than the job” sort of question?

Mike Hudy 7:05
Yeah, we represent that today in a couple of different ways. For some of our clients, we actually design culture-fit measures, an actual measure where we go in and assess the key elements of the culture, and really the key elements that make it different from other cultures: this is the type of culture, and this is the type of individual who works really well in this culture. So we have measures like that; that’s one way of tackling it. The other way is doing it through pure analytics: we model against company-specific metrics and outcomes. So we start with measures and then we’re basically tuning them to that environment, meaning the metrics, the success criteria we’re using to define success, are reflective of the culture, of what drives success in that environment. So being able to tune our algorithms to a specific company’s metrics represents that as well.

Isaac Thompson 7:53
So what do you mean by culture, John?

John Sumser 7:56
That’s a great question, and I imagine you guys have to have an answer to that in order to represent it. I’m less clear about what culture means. It’s a word that’s tossed around pretty heavily, actually, and nobody means the same thing when they use it. So that’ll be key in understanding what you do. But rather than run down that rabbit hole...

Isaac Thompson 8:16
Yeah.

John Sumser 8:16
The question that I want to ask you is, if you model the company’s desired outcomes, how do you avoid perpetuating the biases that exist in the company?

Mike Hudy 8:29
Isaac do you want to take that one or do you want me to take it?

Isaac Thompson 8:31
I can take a stab. You know, the interesting thing you said is the nuance around culture; I have a love-hate relationship with that word. I don’t like it when it’s used to perpetuate the status quo and the norms of a company, and to perpetuate their biases: we’re a flip-flop organization, we hire people only with flip-flops. That’s not it. We need to try to understand what culture is, what those key elements of culture are, and really, like Hudy mentioned, try to distill and find the job-relevant constructs inside of that. If it’s flip-flops, that might really be adaptability, freedom, autonomy. If it’s customer service obsession, it might be something about making sure customer service is at the forefront of all the conversations you have, and that can be a job-relevant construct. So distilling and differentiating between mere cultural norms and actually job-relevant criteria is, I think, key for us to build these algorithms and train them.

John Sumser 9:36
So I think we should probably have a long show just on this topic. I imagine this is an astonishing time to be thinking about that question, because now that the status quo is interrupted, it’s a remarkable time to be able to see what the culture actually is and write it down. So I’m going to assume that every spare eyeball is on that question, because the difference between what you do in times of abundance and what you do in times of scarcity is where you get enough data points to actually flesh out what the culture is. But let’s leap from there, from what makes this such a great opportunity to see what culture is. You guys are going to be flooded with people trying to figure out how to do the components of recruiting that you do cheaper and more effectively, given radically increased candidate volume. Tell me about what’s going on in the virus world with you guys?

Mike Hudy 10:37
Yeah, so I mean, obviously, we haven’t seen anything like this pandemic. But we’ve been through cycles. We’ve been doing this a long time, so we’ve lived through the previous two recessions, and we see how things change between, as you said, John, the abundance versus scarcity models. There are certain levers that move in these different environments. So we’ve been through this very long run of record-low unemployment, and what evolved over that long run was the candidate as king. Candidate experience became a refrain of all our clients, and rightly so; we started the company with this idea of candidate experience and thinking about the candidate first, so it really swung around to that. It was about candidate experience, and then also efficiency: we want to quickly process candidates because they have so many opportunities, they’re not going to hang out at our career site, they’re not going to go through our process, so we need to quickly move them along. So it was really a candidate experience focus and an efficiency focus. Now, in an environment where you have fewer jobs and an abundance of candidates, candidate experience is still important, you don’t want to undo all the goodness we did there, and efficiency is still important. But what comes into play, an increased focus, is quality. Now I have so many candidates for every opening, I need to figure out which ones are the quality ones, which ones are going to perform. So there’s a real focus on: I can’t touch all these candidates anymore, so I need tools, and I need smart science to help me tier candidates, so that I know, okay, I have these hundred candidates, these are the best-fit 20, and I’m going to start with those and work my way down. So it shifts to a quality focus. The other thing we saw, with the environment as it was and candidates having all the choice, was a huge focus on retention, with turnover as the driver. So many of our clients were saying: just get us people who are going to come here and stay, and then we’ll figure out how to train them up and get them to perform. Now, when you have a scarcity of jobs, turnover is going to go down on its own, so that’s less of a focus, and companies can focus on: I have my pick of the litter now. I have so many candidates, and I have more diversity in my candidate pool, people with all different kinds of backgrounds, so I can be more discriminating about the ones I choose and really focus on getting people who are going to be great performers.

John Sumser 13:03
So are you saying that where a month ago we were worried about the risk that people would leave now we’re worried about the risk that they won’t leave?

Mike Hudy 13:15
That’s an interesting way of putting it. Yeah, yeah.

John Sumser 13:18
So, how are you different? You know, at the video interviewing end of this thing, there isn’t anybody who doesn’t live on Zoom these days. And so how are you different from other ways of collecting this data? What’s the thing that makes you different?

Mike Hudy 13:35
I think what makes us different is that we’ve been doing this for a long time. We’re not a fly-by-night outfit that just came along with a bunch of data scientists to throw at this challenge. We’re an organization that’s stacked: we have 40 industrial-organizational psychologists and data scientists. Industrial-organizational psychologists are behavioral scientists; they’re experts in understanding behavior in the world of work. And so I think what really differentiates us is the way we approach this problem. It’s not a purely empirical challenge of throwing a bunch of data in the hopper, using advanced analytics and tools, and voila, here’s the algorithm. It really starts with a fundamental understanding of human behavior, and it’s theoretically driven: what about people, what about candidates, are we looking for, and how does that relate to performance? We have a rational understanding that we’re going to design a measure that looks like this, because it’s going to tell us this, and then we’re going to go test that hypothesis with data. With this very rational, theory-driven approach, we’re able to blend the best of both worlds: the theoretical understanding our IO psychologists give us, and then layered in, many of our IO psychologists are also data scientists, so they have both of those backgrounds. They understand how to use the advanced analytical tools; Isaac is a perfect example of that. A lot of the companies out there now that are going after this issue, using artificial intelligence to automate process and design algorithms, are doing it with pure empiricism, just what the data is telling them.

Isaac Thompson 15:06
Well, I think when we look under the hood of these alternative vendors or methods, like Hudy said, a lot of them are not using science, or they’re not using the latest in AI. So there are those two buckets that are so important: one is the measurement science, and one is the latest and greatest in AI. And you find that people without that solid foundation in measurement end up jeopardizing things with poor decisions. For example, they might take a resume and link it with job performance. Job performance is super biased; resumes are biased. So if you build an AI off of that relationship, you’re going to have a very biased AI. On the flip side, you have measurement scientists out there doing this in a way that is 10 or 20 years antiquated, and there have been huge gains in the AI world that allow us to be more accurate and to represent the candidate in a much more accurate way. If you use traditional measurement science without the latest in AI, you miss the big picture; you miss a lot of the nuance that a candidate brings to the table. So, you know, it’s very rare to find measurement science and the latest AI being promoted together. I’m not saying that’s impossible, it’s just rare. And once you have those two, you have to go through a continuum of steps. We’ve been doing deep learning with measurement for over three years now, and there are a lot of mistakes and learnings and successes we’ve had over those years. That continuum, that path, takes time.
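
As a concrete illustration of the failure mode Isaac describes, here is a minimal sketch with synthetic data: the “performance” labels are biased in favor of one group, the resume features leak group membership, and the trained model inherits the gap. Every number and feature here is invented for illustration, not Modern Hire’s method.

```python
# Sketch: training on biased labels reproduces the bias.
# Synthetic data only; all coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # protected attribute (0/1)
skill = rng.normal(0, 1, n)            # true, job-relevant ability

# Biased label: past "performance" ratings favor group 1 regardless of skill.
label = (skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# Resume features leak group membership (e.g., school names, zip codes).
resume = np.column_stack([skill + rng.normal(0, 0.5, n),
                          group + rng.normal(0, 0.5, n)])

model = LogisticRegression().fit(resume, label)
scores = model.predict_proba(resume)[:, 1]
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
# The score gap persists even though true skill is identical across groups.
```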

John Sumser 16:32
So give me a couple of examples of the nuance that you think you can pick up that others are not picking up.

Isaac Thompson 16:37
Traditional natural language processing, I used to hate it. Anytime somebody would ask me to do that, I’d roll my eyes, because it was all correlational, meaning that if the words were in the response, then you would correlate them to an outcome. So there was no way to represent the true nuance of language.

John Sumser 16:57
Right.

Isaac Thompson 16:57
This technique of just counting how many times somebody says, you know, “I talk to people,” to gauge how extroverted they are, that’s been the standard in our field for

John Sumser 17:05
Is it really?

Isaac Thompson 17:06
the last 20 years. Yeah,

John Sumser 17:08
Wow.

Isaac Thompson 17:09
Deep learning comes on the scene. Yeah, it was real bad. Deep learning comes on the scene, and all of a sudden there’s a mathematical way to capture word meaning and the context in which that word was said, the sequence. And so with that, we can replicate a panel of expert judges. On something like this podcast, if we transcribed it, the deep learning could represent or replicate expert judges in extracting things like customer service orientation from that raw text.
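
A schematic contrast of the two approaches Isaac is describing. The first function is the old keyword-counting baseline; the second stands in for a deep-learning scorer trained to replicate expert judges’ ratings. The model name, keywords, and training setup are placeholder assumptions, not Modern Hire’s system.

```python
# Two ways to score a trait like extroversion from interview text.
# Illustrative only; keywords, model, and pipeline are placeholders.
import re

def keyword_score(text: str) -> int:
    """The old correlational approach: count trait-linked keywords."""
    keywords = ["talk to people", "outgoing", "team", "social"]
    return sum(len(re.findall(k, text.lower())) for k in keywords)

# The embedding approach: represent meaning in context, then learn a
# mapping from embeddings to expert judges' ratings.
from sentence_transformers import SentenceTransformer  # assumed dependency
from sklearn.linear_model import Ridge

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # placeholder model

def train_judge_model(transcripts, expert_ratings):
    """Fit a regressor that replicates a panel of expert judges."""
    X = encoder.encode(transcripts)   # dense vectors capture word context
    return Ridge().fit(X, expert_ratings)

def deep_score(model, text: str) -> float:
    return float(model.predict(encoder.encode([text]))[0])
```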

John Sumser 17:40
And so what you just said is that most of the NLP stuff boils down to self-reporting, and that somehow you guys have come up with a method for analyzing the same raw input and getting something that is more reliable than self-reported data. Is that right? Did I get that right?

Isaac Thompson 18:01
Yeah, I think it’s even worse than self-reporting. I mean, it’s like an expert going in there and counting how many times somebody says a certain keyword; it’s really bad. And then they might link it to self-report. Ours is much more nuanced. And you do have that right.

Mike Hudy 18:16
And that’s the AI side of things. Another angle to come at this from is, you know, the expertise we bring in getting a better starting point with the data you’re using to model in the first place. One of the things our team has expertise in is designing assessments, designing measurement, designing high-fidelity simulations, so that what you’re getting from candidates is information that is clearly related to the job; it’s going to be more predictive and it’s fair. So John, you referenced just taking data and replicating biases. A lot of times we’re designing these measures around what actually drives success in the job, rather than going and getting data that just happens to be convenient, but is also full of bias.

Isaac Thompson 19:00
I want to elaborate on that a little bit, too. In the future, we’re going to see artificial intelligence that can extract all kinds of personality, emotion, and job-relevant constructs from every type of virtual interaction we have; that’s coming down the road, from tweets to resumes, all that kind of stuff. And once that AI comes on the scene, the IO psychologist, the traditional measurement scientist, might say: oh my God, what do I do? What’s my occupation? The AI just automated my science. But I think what Hudy is talking about is really the future of IO psychology as well. Say you had a resume, and the AI didn’t pick up any customer service signal from it. The AI has to understand: hey, I actually don’t know this person’s customer service abilities. So then it relies on a human expert, or a series of human experts, to say: what question do we ask them next to elicit their ability to be oriented toward the customer? That’s where, after the AI comes and takes over assessment, there is still a place for measurement scientists to come back on the scene and guide that AI, and help ask the questions that matter.
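
A toy version of the human-guided loop Isaac sketches: when the AI’s confidence about a construct falls below a threshold, fall back to an expert-written follow-up question. The constructs, threshold, and questions are invented for illustration.

```python
# Toy version of the loop Isaac describes: if the AI cannot measure a
# construct with confidence, ask an expert-designed follow-up question.
# Constructs, threshold, and questions are invented for illustration.
from typing import Optional

EXPERT_FOLLOWUPS = {
    "customer_service": "Tell me about a time you turned around an unhappy customer.",
    "adaptability": "Describe a time your plans changed at the last minute.",
}

def next_question(confidence: dict, threshold: float = 0.6) -> Optional[str]:
    """Return a follow-up for the least-confidently-measured construct."""
    gaps = {c: p for c, p in confidence.items() if p < threshold}
    if not gaps:
        return None  # every construct is covered well enough
    weakest = min(gaps, key=gaps.get)
    return EXPERT_FOLLOWUPS.get(weakest)

# e.g. the resume yielded no customer-service signal:
print(next_question({"customer_service": 0.2, "adaptability": 0.9}))
```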

John Sumser 20:11
Well, I think what you’re saying is that most of the intelligence that’s coming is focused on the measurement of a facet of things, and that the human input to that process, once the measurement is complete, is putting all of the pieces together, because the pieces vary by so many factors that it’s going to be challenging to get all of that into a single decision-making model. Is that what you’re saying?

Isaac Thompson 20:37
Getting them all into a single decision-making model is beyond what I’ve been talking about. So imagine that you have all this data and you’re pulling out all this information about the human. How do you represent that information to a key decision maker? That’s a whole different can of worms, you know, because even something like the order of the information could carry inherent biases. You have to train that information to relate to the job being hired for, in the organization it’s in. And I think that’s where you’re talking about the cultural nuances. That’s where a lot of our tradition is built on building custom assessments, and that means training those generalizable models onto what matters for that company.
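
One common way to “train a generalizable model onto what matters for that company” is transfer learning: keep a pretrained representation fixed and refit only the output layer on the client’s own outcome data. The sketch below assumes that pattern; it is not a description of Modern Hire’s implementation.

```python
# Sketch of tuning a generalizable model to one company's outcomes:
# reuse a fixed, pretrained text representation and refit only the
# output layer on company-specific success criteria.
# Assumed transfer-learning pattern, not Modern Hire's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def company_tune(pretrained_encode, responses, company_outcomes):
    """pretrained_encode: function mapping text -> fixed-length vector,
    trained once on pooled data. company_outcomes: 0/1 success labels
    defined by this client's own metrics."""
    X = np.vstack([pretrained_encode(r) for r in responses])
    head = LogisticRegression(max_iter=1000).fit(X, company_outcomes)
    return lambda text: head.predict_proba(
        np.asarray(pretrained_encode(text)).reshape(1, -1))[0, 1]
```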

John Sumser 21:22
Got it. So we’re gonna crash through the time barrier here. But let me ask you what you think the big ethical issues are?

Mike Hudy 21:29
Yeah, and this is one that really speaks to what I alluded to previously; it’s where we differentiate. First and foremost, and it colors the rest of this, is taking a humanistic approach. At Modern Hire we’re a SaaS technology company: we work with big data, we use artificial intelligence, we model data, but we do it with the lens that, in the end, it’s people making decisions about people. So everything we do is done with that lens: what about the candidate? What’s the candidate experience, how can we make it more engaging, how can they take something away from it, where they’re learning about the company and the job? And how can we be transparent with them so they understand why we’re doing this, how the data is being used, and so on? That’s a great starting point. My second point flows from that: it’s the scientific rigor we apply, and Isaac was representing that some. We start with a theoretical understanding of why, design our measures as such, and really use data analysis to test out those hypotheses and model them. What that does is enable us to understand why things are working, and it lets us get away from the black box that’s very prevalent with AI, where we know it’s working but we don’t understand why. We start with the understanding of why and build our models around that, and with that understanding it’s a lot easier to share it back with your clients, your stakeholders, with legal, with candidates themselves. And then another one that you have to mention is fairness. John, you talked about using data that has bias and just replicating that. The way we look at it, what we’re able to do is actually eliminate bias; we’re able to actually encourage diversity. The alternative is allowing humans to make decisions on whatever they choose to make decisions on, and humans are flawed, biased decision makers. What we’re doing is honing in on job-relevant factors, the things that are documented to be relevant for the job. And we’re analyzing data and tuning our algorithms on an ongoing basis to encourage that our clients are getting a diversity of candidates, not just mitigating adverse impact but actually the flip of that, making sure they’re not stuck on rules of thumb like years of experience, and are only focused on factors that link to on-the-job success.
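
The ongoing monitoring Mike mentions is often operationalized with the EEOC’s four-fifths rule: the selection rate for any group should be at least 80% of the highest group’s rate. A minimal check follows; the rule itself is standard, but the data and function here are illustrative.

```python
# Four-fifths (80%) rule check on selection rates by group.
# The rule is the standard adverse-impact screen; the data is illustrative.
def adverse_impact_ratios(selected: dict, applied: dict) -> dict:
    """Return each group's selection rate relative to the best group's."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = adverse_impact_ratios(
    selected={"group_a": 50, "group_b": 30},
    applied={"group_a": 200, "group_b": 180},
)
for group, ratio in ratios.items():
    print(group, round(ratio, 2), "FLAG" if ratio < 0.8 else "ok")
```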

John Sumser 23:56
We should schedule another conversation soon, because you just put a whole bunch of interesting topics on the table, and I can tell you that it’s my very loud and very public view that it’s not possible to eliminate bias. So that’d be a great conversation, because I assume that you think about that very hard. And, you know, I have a little poster on my wall here that lists 135 kinds of unconscious bias.

Isaac Thompson 24:21
Wow.

John Sumser 24:22
That would be an exhaustive audit to run. You know, so, but it’s good to be wrestling with the question. And it leads me to sort of the last question that I have. You create insights and offer those insights to decision makers, but I can’t imagine that you claim to offer the truth. You offer a probabilistic view of things: there’s an 80% likelihood this is right, there’s a 90% likelihood this is right. Because if you said, here’s the truth, you’d have to accept liability for the decision making, and no board of directors will let you do that. So how do you teach people to weigh the risks in a probabilistic decision? Because if there’s a 90% likelihood that this person is going to be okay, I think that means there’s a 10% likelihood that it’s going to be a complete disaster. So you have to teach people how to work with probabilistic information, and human beings are notoriously bad at handling this stuff. So how do you handle this?

Mike Hudy 24:32
Absolutely right. And to take your example, that 90/10: you know, what we’re doing is improving it from, say, 70/30. We’re increasing the odds. You’re trying to predict human behavior, so you’re never going to get it 100% right. What you’re trying to do is increase the odds, increase the hit rate, and then arm human decision makers, recruiters and hiring managers, with that information. Typically, the way it’s done is by prioritizing, by tiering candidates: tier one would be the 90% odds of getting it right, and tier two would be, you know, the 70%, in terms of being successful. So you’re fishing in different pools that have different success rates. But still, it’s up to the human to make the decision. And to that end, we’re not only providing scores back; what we can do with our science is arm that human with better evaluation guidance: hey, as you evaluate this particular candidate, here are some areas you might want to probe, here were a few concerns, follow up in this area, or these were some big strengths, ask about those and confirm them. So it can actually help the interviewer, the individual further vetting the candidate, be smarter about the way he or she evaluates that candidate. But you’re absolutely right: it’s all about probability, increasing that probability, and then arming the decision maker with better data and better guidance about where they should further evaluate individuals.
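
Mike’s tiering scheme, reduced to code: convert each candidate’s predicted probability of success into a tier so recruiters start with the highest-odds pool. The cutoffs echo the 90% and 70% figures from the conversation and are illustrative.

```python
# Turn predicted success probabilities into recruiter-facing tiers.
# Cutoffs echo the 90%/70% figures from the conversation; illustrative only.
def tier(prob_success: float) -> int:
    if prob_success >= 0.90:
        return 1   # start here: roughly 9-in-10 odds of a good hire
    if prob_success >= 0.70:
        return 2   # next pool, lower hit rate
    return 3       # evaluate only if tiers 1 and 2 are exhausted

candidates = {"A": 0.93, "B": 0.74, "C": 0.55}
for name, p in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(name, f"p={p:.2f}", "tier", tier(p))
```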

John Sumser 26:46
We can have this conversation for a couple of hours, I think, but we have exhausted our time slot. So thanks for taking the time to do this. Would you please reintroduce yourselves and tell the audience how to get ahold of you?

Mike Hudy 27:00
Sure, yeah, again, Mike Hudy, and I can be reached at Mike dot Hudy, H-U-D-Y, at modernhire.com.

Isaac Thompson 27:09
And I’m Isaac Thompson; I can be reached at Isaac dot Thompson at modernhire.com. You can also hit me up on social media, especially LinkedIn.

John Sumser 27:17
Okay, thanks for doing this guys. I really appreciate you taking the time you are at the cutting edge. It’s a treat to get a chance to walk through a little bit of it with you.

Yeah, you’re welcome. You’ve been listening to HR Examiner’s Executive Conversations, and we’ve been talking with Mike Hudy and Isaac Thompson from Modern Hire. Thanks for tuning in this week, and we will see you back here same time next week. Bye-bye now.



 