
HRx Radio – Executive Conversations: On Friday mornings, John Sumser interviews key executives from around the industry. The conversation covers what makes the executive tick and what makes their company great.

HRx Radio – Executive Conversations

Guest: Charles Hipps, Founder and CEO of Oleeo
Episode: 359
Air Date: April 3, 2020

 

Transcript

 

Important: Our transcripts at HRExaminer are AI-powered (and fairly accurate) but there are still instances where the robots get confused and make errors. Please expect some inaccuracies as you read through the text of this conversation. Thank you for your understanding.

Full Transcript with timecode

John Sumser 0:13
Good morning and welcome to HR Examiner’s Executive Conversations. Today we’re going to be talking with Charles Hipps, who is the founder and CEO of Oleeo. Oleeo is a hiring system with its roots in the UK that is now headed to the United States to make a big splash. So Charles, how are you?

Charles Hipps 0:35
John, given the circumstances, I think I’m incredibly lucky. I’m healthy, my family as well. And, you know, everyone in the business is doing well, too. So it’s a pleasure to be here.

John Sumser 0:47
That’s great to hear. So take a moment and introduce yourself. You’ve been on the show a time or two before but I bet there are one or two people in the audience who may not know.

Charles Hipps 0:57
Thanks for giving me the opportunity. So, I don’t like to think of myself as an older fellow, but I’ve been doing this 25 years now, and I’d like to think we were one of the first ever e-recruitment platforms. We started off straightaway as an online platform. I’ve run the business both as a publicly traded, stock-market-listed company and as a private company. Prior to that, some of the stuff you may not know: I was an Engineering, Economics and Management major at Oxford, worked for one of the big global strategy consultancies, and founded a recycling firm before I started Oleeo in recruitment. And on a very personal level, my family joke that I’m unique in knowing how to have fun in a birthing pool, and that is because I’m a little water obsessed. The joke is that I could be found swimming up and down the birthing pool at the birth of my children.

So anyhow, yes. And later today, I shall be out on the Thames. We are very fortunate in London that, while we are self-isolating, we are allowed out for half an hour of exercise, and you will find me kayaking up and down central London, past Big Ben and other monuments, with my children in the back of the kayak. So yes, my personal passion is water.

John Sumser 2:13
Huh. So tell me about Oleeo. What does the company do?

Charles Hipps 2:17
So we are very much focused on innovation, in particular on automation and the use of data in recruitment. For the past four or five years, we’ve been implementing analytics and data throughout our end-to-end recruitment platform. We’ve come to think of what we’re doing as trying to create the next generation of recruitment software, which we call recruitment enablement. At the heart of recruitment enablement is leveraging data and automation to achieve great and diverse hires. And the way we’re trying to do that is to enable recruitment teams by hardwiring insights into the software, by offering multiple tailored and personalized recruiting workflows, and by removing a lot of the tedium and admin from recruitment through automation. At the very core of what we’re trying to do is help our clients make great and diverse hires, but a big part of it is also efficiency and the reduction of administration.

John Sumser 3:30
So is this an applicant tracking system plus, or is it a broader recruiting tool? Help me put it in the context of other suppliers.

Charles Hipps 3:41
Yeah, well, that’s a great question. It’s an end-to-end recruitment platform encapsulating, you know, ATS, CRM, and onboarding. And what we’ve tried to do is infuse it with, and build into it, data and intelligence, so that recruitment can become a really data-driven, analytics-led function.

John Sumser 4:07
Okay, so that’s interesting. Do you call that stuff AI? If not, how do you describe it to somebody else?

Charles Hipps 4:16
I think, you know, the heart of it is data-driven, analysis-led recruitment. We definitely use a wide range of technologies, including many traditional forms of descriptive analytics. But we also use many of the tools, like machine learning, decision trees, or neural networks, that are often described as AI. The key to our business is hardwiring those insights into operational recruitment. We call it Intelligent Automation. We generally steer away from the word AI because it means so many different things, and emotive things, to different people. Ultimately, we use all forms of analysis. It’s a good question; I think we start off by trying to explain that we are doing real-time analysis on data and then making recommendations in the platform. So, you know, we’re nudging recruiters to use inclusive language when they’re writing a job advert, for example, or we are recommending which candidates should be taken through to interview. And I would start by explaining to someone that we’re using traditional and more advanced analytics to do that. I think if you start by telling people it’s AI, as I said, it immediately paints a picture with positive and negative connotations.
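For illustration only: below is a minimal Python sketch of what an inclusive-language nudge like the one Charles describes might look like mechanically. The word lists are tiny hypothetical examples; production tools draw on large, validated lexicons, and nothing here is Oleeo’s actual code.

import re

# Illustrative word lists only (hypothetical); real nudging tools use
# validated research into gender-coded language in job adverts.
MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def flag_coded_language(advert_text):
    """Return the coded words found in a job advert so the recruiter can
    be nudged toward more neutral phrasing before the advert is posted."""
    words = set(re.findall(r"[a-z]+", advert_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(flag_coded_language("We want an aggressive ninja to join our collaborative team."))
# -> {'masculine_coded': ['aggressive', 'ninja'], 'feminine_coded': ['collaborative']}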

John Sumser 5:39
Well, this approach would have to depend on two fundamental things. One is the quality of your clients’ job descriptions, and the other is the quality of your capacity to understand the applications that come your way. So let’s talk about both of those things separately. The client’s capacity to describe their jobs is somewhat of a bugaboo most places, because the people in charge of writing the job descriptions are rarely the people who actually do the hiring. How do you overcome that with a system like yours? Because that’s a huge amount of error introduced into an analytical system.

Charles Hipps 6:25
I think you’re 100% correct. You know, if you were an occupational psychologist and you were trying to match someone to a job, you’d do a job analysis; you wouldn’t use a job description. And I think that’s in essence the point you’re making: job descriptions often aren’t a job analysis. So there are two things we do. In some extreme cases, we will create a job analysis behind the job description. In other cases, our first point is to create a lot of transparency. What we will do is first of all build a model that explains how decisions are being made today. That would look at what information is being put into a job description and what information is being put into the candidate’s response. And we would say to a client: look, this is the basis on which you’re making decisions about whether to interview people or whether to hire those people. Does that reflect the realities? Are those good data points on which to be making decisions? If they’re not good data points, then, to your point, we try and put in place a program of improvement. We would say, well, why can’t we improve the job descriptions? Can we come up with a job description library? Why are we collecting this information about a candidate? Why don’t we collect different information about a candidate? I was looking earlier today through one such analysis, for a client who was using a combination of tests and resumes to make decisions. You could see all the individual data points that they appear to be using to make those decisions, and, for example, the test didn’t seem to be in any way highly predictive of who was succeeding later. And so the question was, why have that test? So the answer is, yes, you’re completely correct. The quality of decision making can only be as good as the quality of the data points. Our first job is to understand how the decision is being made today, which we can do in our platform, building models in real time that explain it, and then we ask, well, how do we improve the data points? That, to be honest, is one of the areas of massive improvement in decision making.
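For illustration only: a minimal Python sketch of the kind of predictive-validity check Charles describes, asking whether a screening data point such as a test score actually predicts later success. The column names and numbers are hypothetical, not client data.

import pandas as pd

# Hypothetical historical hiring records, one row per past candidate.
df = pd.DataFrame({
    "test_score":       [55, 80, 62, 90, 48, 75, 85, 58],
    "interview_passed": [0,  1,  1,  1,  0,  0,  1,  1],
    "strong_performer": [0,  1,  1,  1,  0,  1,  1,  0],  # later outcome
})

# Correlate each screening data point with the later outcome. A value
# near zero suggests the data point adds little and may be worth dropping.
for feature in ["test_score", "interview_passed"]:
    r = df[feature].corr(df["strong_performer"])
    print(f"{feature}: correlation with later success = {r:.2f}")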

John Sumser 8:43
That’s really interesting. A lot of the people that I’ve talked to who offer not identical functionality, but similar kinds of functionality, don’t pay as close attention to individual jobs as you just described. Do you think that’s a difference between the way that you do it and the way other people do it?

Charles Hipps 9:06
Definitely. I think, you know, most of our clients are very large corporates. Literally a couple of hours ago I was on the phone with someone who’s a Global Head of People Analytics, and one of the things he said when he was looking through it was, you are incredibly thoughtful and profound in the amount of analysis you’re giving me to consume in order to explain what you’re doing. And what we’re doing is, obviously, completely tailored to the particular role or set of roles; in that case we’re looking at 39 different roles. So yes, the platform is ingesting the data in real time, looking at each job or set of jobs, giving a lot of transparency as to what’s going on today, and giving the client the opportunity to talk about how they’d like to do it better in the future. So to answer your question: yes, I believe that’s different from a lot of other people. We perhaps even go a little bit overboard in terms of providing in-depth insight and explanation as to what’s going on. But we think that’s key, because otherwise these things are a black box, and some horrible mistakes can be made.

John Sumser 10:25
So that gets us a little bit of the job descriptions side of things. Now, you’ve got this intelligent engine, which is going to be largely natural language processing with some machine learning thrown on top of that. How do you account, in your processing models, for the variations between organizational cultures and regional cultures? Because you’re a global enterprise, and, you know, at its simplest, the language that we use to describe something in the States is different than it is in the UK. But in specific companies, different skills have different meanings or levels of importance, and different titles have different implications for status. So when you think about matching, it really is a cultural phenomenon that you’re trying to capture inside of your processing tools. And it suggests that you might have different processing models by client. Do you?

Charles Hipps 11:31
Yeah, absolutely. 100% correct. I mean, I think our approach is really well suited to the issue you’re talking about. What our models do is look at what we would call features, these particular data points, extracting them from job descriptions and from candidates, whether that be resumes, past performance, or interviews. And then what we’re doing is understanding the significance of those things for that particular role. That isn’t generic. The extraction process and the process of building a model are standardized processes, but what we do is build an algorithm per client, per job, at almost whatever level makes sense. So it understands that, you know, the US might use language differently, and it reflects that. It’s not so difficult once you’re set up to do it, and that’s what people need to do, I believe.
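For illustration only: a minimal Python sketch of the “one algorithm per client, per job” idea, keeping a separate model for each context so language and feature weights are learned locally rather than generically. The class and its names are hypothetical, not Oleeo’s architecture.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

class ModelRegistry:
    """Train and store a separate model per (client, job) pair, so each
    one reflects that client's own language and outcome history."""

    def __init__(self):
        self.models = {}

    def train(self, client, job, candidate_texts, outcomes):
        # Standardized pipeline, but fitted separately for every context.
        pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
        pipeline.fit(candidate_texts, outcomes)
        self.models[(client, job)] = pipeline

    def score(self, client, job, candidate_text):
        # Probability this candidate resembles past successful hires
        # for this specific client and job.
        return self.models[(client, job)].predict_proba([candidate_text])[0, 1]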

John Sumser 12:47
So, okay, you start with that, and now you have Coronavirus hit, right? There’s some question about whether or not the jobs change. There’s some question about whether somebody who was a good fit in a world where you went to the office is a good fit in a world where you stay home. And so there’s some question about the validity of results that you find today versus the results that you found a month ago. How do you navigate that?

Charles Hipps 13:18
Yeah, I think that’s a really, really good question. You could apply that question to humans as well. You could say, you know, are the same assumptions that I used a month ago correct today? Should I be making decisions in the same way? And I think a lot of that depends on the particular question you’re asking. If we were building something which was going to predict which restaurant you should go and eat in, or whether you should eat in or order out, or whether you should buy something online or go to the store, then I think the model would be pretty broken across the discontinuity that we’ve experienced. But again, you need to be intelligent about it, looking at the ways we’re actually using models and the process we’re in. If your question is, what language should I use in the job advert for it to be inclusive to both males and females, a lot of that won’t change because of this particular discontinuity. That behavior does change over time, as you can see with how dictionaries evolve; our use of language does evolve all of the time. So using the same model tomorrow as you used yesterday, in that particular case, wouldn’t come up with really crazy decisions. But what’s important is that you’re gathering information in real time, the proportion of, in this case, female and male browsers and how they react to the language in your job advert, seeing whether the assumptions you had in the past are still true, and then evolving your algorithm as how people are influenced by the language changes. Or, if we take the example we were talking about earlier, selection of people: my view is that most of our clients are looking for people with strong reasoning skills, self-motivation, and strong interpersonal skills, as a kind of example. Now, over this discontinuity, they’ll perhaps be looking for people who can work more independently, who are perhaps better able to deal with change, but 80% of the components have stayed the same. So I think using yesterday’s model today is not going to come up with really crazy results. And then, as we see the clients’ behavior changing, our models are working in real time to come up with new models which deliver predictions and recommendations that reflect the new types of behavior in the new world. But I think you’re right, John, you have to keep a really good eye on this, because you could easily fall off a cliff from one day to another with certain kinds of models in some kinds of situations.
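For illustration only: a minimal Python sketch of real-time monitoring across a discontinuity, comparing a rolling window of recent behavior against a pre-existing baseline and flagging when they diverge. The window size, tolerance, and names are assumptions for the example, not Oleeo’s.

from collections import deque

class DriftMonitor:
    """Flag when a rolling rate drifts too far from its baseline."""

    def __init__(self, baseline_rate, window=500, tolerance=0.10):
        self.baseline = baseline_rate       # rate observed before the shock
        self.recent = deque(maxlen=window)  # most recent binary outcomes
        self.tolerance = tolerance

    def observe(self, outcome):
        """Record one outcome, e.g. 1 if a recommended candidate was
        actually interviewed. Returns True once drift is suspected."""
        self.recent.append(outcome)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough post-shock data yet
        current = sum(self.recent) / len(self.recent)
        return abs(current - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.40)  # e.g., 40% of recommendations accepted pre-shock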

John Sumser 16:30
So I’ve looked under the hood of your product a couple of times over the years, and it seems to me that you have the best, the most interesting approach to watching model performance over time. As I thought about it in the past, you had a perfect system for catching models as they gradually degraded, but I wonder whether the same principles apply when you hit a discontinuity. So you have this discontinuity and the models abruptly fail. Can you see that in the data?

Charles Hipps 17:08
Yeah, I’d love to show you, actually. It’s been a few months since we took a look under the hood together, and we’ve come up with new, even more insightful tools which show you in real time how the model is performing against the parameters you’d want to look at. So yes, it would be very evident day to day that the predictions or the recommendations you were making were no longer fit for purpose.

John Sumser
Okay, so what are the big ethical issues that you run into?

Charles Hipps
I think, obviously, data privacy and consent are really important: making sure that people understand what you’re doing with their data and that you have their consent to do it. I guess the thing that we are particularly excited and interested in is ensuring that bias isn’t perpetuated, ensuring that there’s equal opportunity and fairness in opportunity. So one of the things I would show you on screen is the constant monitoring of the underlying bias in current decision making, and, for any new way of making decisions that we recommend, first of all validating that it has no bias and then, as we discussed, checking in real time that there is no live bias in those things. So: the privacy of the data, then the bias. The final one I would have, and I’m sure you’re going to pick me up on some things I should have thought about that I probably haven’t, is fairness and transparency in decision making. That would be about, as I said, trying to make it really clear to people the basis on which we are making recommendations, so that potentially you can also explain to candidates for the position we’re talking about why a decision has been made and why it’s a fair and defensible decision. So those would be my top three. You’re the expert on this: what are the other kinds of ethical things you think I should be concerned about?
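For illustration only: one widely used bias screen is the “four-fifths” adverse impact ratio from US employment guidance. The Python sketch below, with hypothetical counts, shows the arithmetic; it is an example of the kind of live bias check Charles describes, not his actual implementation.

def adverse_impact_ratios(applied, selected):
    """Each group's selection rate divided by the highest group's rate.
    A ratio below 0.8 fails the common four-fifths screening rule."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical pipeline counts, not real client data.
applied  = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60,  "group_b": 30}
print(adverse_impact_ratios(applied, selected))
# group_a rate 0.30, group_b rate 0.20 -> {'group_a': 1.0, 'group_b': 0.67}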

John Sumser 19:14
Well, you know, it’s a confusing time, and so a lot of the things that looked like really important ethical considerations a week ago may be reprioritized right now. But there was an implication in your description that you’ve got algorithms that monitor for various kinds of bias, and those are not going away. The conversation about bias is a tricky one. There are biases that are illegal in the States, and I assume there are biases that are illegal in the UK, and that’s an important area to protect for compliance. But biases in decision making are, sort of, how decision making happens. And so being able to offer feedback about biases that are not illegal but may be coloring decision making is something that you’re in kind of a unique position to do on the hiring front. Do you do that?

Charles Hipps 20:16
Absolutely. Perhaps not on as profound a level as we might aspire to, but some of the biases that we’re looking at all the time are things like biases toward a certain school, and whether the schools people come from are actually predictors of success. Because in reality, if you’re using school as a basis for selecting someone, but it’s not actually predictive of people’s performance in the job, then, in my view, you shouldn’t really be using it. In Europe, we have a very wide spectrum of protected characteristics, to your point, including things like social mobility. So in some cases we’re also measuring things which are perhaps not legally prescribed in the US but are legally prescribed, or more in focus, in Europe. I think we’re looking at a wide range of things, from the mundane, like schools, through the obvious, like minority status and gender, to some more interesting protected characteristics as well.

John Sumser 21:30
You must have some interesting insights as a result. We’ll have to talk about that in a later show and figure out how to talk about it so you don’t make anybody look stupid. But I imagine that the more interested your clients are in understanding how their bias is expressed in their decision making, the more complex and interesting your analysis can be. Yeah?

Charles Hipps 21:54
Yeah, yep. We have some fantastic charts, which I’d love to show you. We basically identify all the decision-making variables and look at how those introduce bias into the process, and we can do that for any type of bias. Then we write these interesting optimization functions to try and remove any of those biases. So it’s a very interesting topic where we’ve got lots of interesting data that I’d be very eager to get your views on.
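For illustration only: the “optimization functions” Charles mentions are not public, but one simple, well-known stand-in is searching for the most selective cutoff score whose per-group selection rates still satisfy a four-fifths-style constraint. A minimal Python sketch, with all names and numbers hypothetical:

import numpy as np

def fairest_selective_threshold(scores, groups, min_ratio=0.8):
    """Scan candidate cutoffs from low to high and keep the highest one
    whose per-group selection rates still pass min_ratio parity."""
    best = None
    for cutoff in np.unique(scores):          # ascending candidate cutoffs
        selected = scores >= cutoff
        rates = [selected[groups == g].mean() for g in np.unique(groups)]
        if min(rates) == 0:
            continue                          # a group is fully screened out
        if min(rates) / max(rates) >= min_ratio:
            best = cutoff                     # later (higher) cutoffs overwrite
    return best

scores = np.array([55, 80, 62, 90, 48, 75, 85, 58])
groups = np.array(["a", "a", "b", "a", "b", "b", "a", "b"])
print(fairest_selective_threshold(scores, groups))
# -> 58: both groups keep a 75% selection rate at that cutoff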

John Sumser 22:24
Oh, well, let’s schedule that. And that’s a great segue. This has been a fantastic conversation, Charles, as usual. I imagine that some people are listening going, why does John get to see everything? When can I see it? So why don’t you, on the way out the door here, reintroduce yourself and tell people how to get ahold of you, so that they can try to twist your arm and get to see the things that you promised me?

Charles Hipps 22:48
John, that’s fantastically kind of you. So, obviously, one of the easiest ways to find me is to go to our website, which is oleeo.com.

And I’d be very happy for people to email me, at Charles.Hipps@oleeo.com. I’d be delighted to hear from anybody who’s listening, either via the website or direct to me.

John Sumser 23:19
Fantastic. Thanks again for doing this, Charles. It’s always a great conversation, and I appreciate the scientific rigor you bring to this. You’ve been listening to HR Examiner’s Executive Conversations, and we’ve been talking with Charles Hipps, who is the CEO and founder of Oleeo, a fully configured recruiting system that’s built on a disciplined analytics platform. Thanks for listening, everybody, and we’ll see you back here next week. Bye-bye now.



 