
HRx Radio – Executive Conversations: On Friday mornings, John Sumser interviews key executives from around the industry. The conversation covers what makes the executive tick and what makes their company great.

HRx Radio – Executive Conversations

Guest: Athena Karp, CEO and Founder at HiredScore
Episode: 350
Air Date: January 17, 2020

 

Transcript

 

Important: Our transcripts at HRExaminer are AI-powered (and fairly accurate) but there are still instances where the robots get confused and make errors. Please expect some inaccuracies as you read through the text of this conversation. Thank you for your understanding.

Full Transcript with timecode

John Sumser 0:13
Good morning and welcome to HR Examiner’s Executive Conversations. I’m your host, John Sumser, and today we’re going to be talking with Athena Karp, who is the founder and CEO of HiredScore, an artificial intelligence company that sells into the Global 500 arena. Athena, how are you?

Athena Karp 0:31
I’m great. How are you?

John Sumser 0:32
I’m fantastic. Would you take a moment and introduce yourself?

Athena Karp 0:37
Sure. I appreciate being on the show; as always, a pleasure to spend time with you. I’m the CEO and founder of HiredScore, which is an artificial intelligence solution that, as you mentioned, powers hiring across the global Fortune 500, supporting internal, external, passive, active, flexible, and permanent workforces, and decision making across organizations.

John Sumser 1:02
So you’ve got an amazing history. I’ve noticed that last year you were a Henry Crown Fellow at the Aspen Institute. What is that?

Athena Karp 1:12
Yeah, it’s actually an amazing program, about 22 fellows, primarily co-founders, that the Aspen Institute has selected to work on general leadership training and on understanding how to move from the business world into more significant and meaningful impact on society. So I’m actually focused on future-of-work literacy programs in high schools in impoverished, low-income communities. It may be a shocker to hear this, John, but there’s actually not a single hour of required digital or technical training in any city, state, or federal public school curriculum. I’m focused a lot on how we can help prepare people for the workforce, irrespective of where they’re born.

John Sumser 1:56
Pretty interesting. So just to drill down on that a tiny bit: what are the kinds of things that you think people need in order to be prepared for the workforce?

Athena Karp 2:04
You know, I don’t think I have the answers. But if you look, especially on a regional and local level, there are a lot of jobs that are challenging to fill because the talent is not coming prepared, even without high school, even without college degrees. And I think the staggering cost of college is really impacting how a high school education can or cannot prepare people for the workforce of the future. So to answer that question, we’re thinking it should be much more of a data-driven approach: processing local hiring needs and job requirements for roles that do not require a college education and feeding that into curricula. One of the beautiful things is that with online courses, e-learning, and a lot more digital education products and platforms out there, you can actually easily supplement a lot of the skills needed, and take a model where we can also help improve the years-old curricula we have trapped in public schools, for those students who want to focus much more of their time and effort on being job-ready, and even to start working in high school where that’s applicable, with more internships and work opportunities for them.

John Sumser
Well, that’s a really interesting idea. We could do a whole conversation on that, but let’s get on to: what exactly does your company do?

Athena Karp
Yeah, and I’d love to talk more about future-of-work literacy; I think it’s a really, really exciting time for that. In terms of HiredScore, we focus a lot on how, in global Fortune 500 organizations particularly, there’s a tremendous amount of data, especially within talent acquisition and talent management today, that gets neglected. I’ll give you an example. Some of our clients receive one to three million applications a year and hire one to two percent of those who apply, and it’s not that the 99% who aren’t hired are not good candidates or would never make it in these organizations.
And that entire population is then told, “We’ll get back to you in the future if there are relevant jobs,” but it’s impossible that a recruiter, every time they have a new role open, especially if they’re working 25 to 35 reqs a month, would go through a pool of two million past candidates; they probably don’t even have the data access. So the things that we think about are: where are there pockets across talent acquisition and talent management that are not performed, like reviewing people you told you’d get back to in the future, or that are performed in an inefficient or ineffective manner? A great example in internal mobility is making sure everyone who meets the criteria for a posting is proactively invited to apply; usually, because of politics and relationship dynamics between managers and employees, not every candidate gets a fair chance at a new job. So we think about how we build pockets of process augmentation where we can leverage technology to uncover everyone in the past, or uncover employees who are available for promotion, in a consistent and fair manner, as well as provide candidate scoring for people who apply, aligned with job-related criteria and job requirements, not criteria of our own.
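As a rough illustration of the rediscovery problem described here (surfacing relevant past applicants the moment a new req opens, instead of asking a recruiter to re-review millions of profiles by hand), a minimal Python sketch follows. The data fields and the overlap-based scoring are hypothetical stand-ins, not HiredScore’s actual algorithm.

```python
def rediscover(pool, required_skills, threshold=0.5):
    """Score each past applicant against a new req's required skills
    and return the relevant matches, best first.

    `pool` is a list of dicts with "name" and "skills" (a set);
    the score is simply the fraction of required skills covered.
    """
    matches = []
    for cand in pool:
        overlap = required_skills & cand["skills"]
        score = len(overlap) / len(required_skills)
        if score >= threshold:
            matches.append((cand["name"], score))
    # Highest-coverage candidates first
    return sorted(matches, key=lambda m: -m[1])
```

With a pool of two million past candidates this linear scan would of course be replaced by an indexed search, but the shape of the problem, re-ranking a dormant pool against every newly opened role, is the same.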

John Sumser 5:24
That’s interesting. So it sounds like you’re really trying to tackle the way that people develop careers inside of companies. Talk a little bit more about what you think is broken there. I often wonder whether or not the way things are creates demand for something new, so I’m curious to hear a little bit more about what you think about how internal hiring works.

Athena Karp 5:53
Yeah, well, I think, just connecting the dots, one of the things we think about is how you can augment recruiters. More and more recruiters are now called talent advisors and are asked to step up and do an unlimited number of things: surface great passive candidates, find employees who are viable for promotion where relevant, and, as candidates apply, review as many of the qualified people as possible, and do all of that in the most efficient, effective manner. And we think a lot about how that’s just way too much work for any recruiter, let alone given the req volumes and candidate volumes that we see. So how do we provide that augmentation layer to support them in doing that? As we think about internal mobility and talent retention, you asked in particular what we think is broken. I remember when I worked at a large bank on Wall Street, the talent management requirements were, you know, two years minimum time in role and a minimum performance criterion, which was being in the top two-thirds of the class and not the bottom third. And you could even express in the HCM your desire and willingness to live elsewhere. So I had tagged that I speak some Chinese and that I was willing and desiring to move to our Chinese offices, our office in Hong Kong, and offices in a few other places. No opportunities ever surfaced other than through me proactively finding those new jobs on the career site and applying to them directly, and sometimes I was even told, “Oh, we filled a job just like this a month ago. Had you applied last month, we would have definitely had a role for someone who wanted to relocate.” And I think that’s where the breakdown is: employees more and more expect, and hope, that their employer is looking out for their career options and career mobility.
And a lot of the burden is put on managers to make sure we see that, especially with the younger workforce, so that in an every-two-years cycle you’re moved to a job where you can learn new things, progress your career, and be exposed to potentially new areas of the business or new skills. And yet, even though that data gets stored, that doesn’t necessarily mean it gets surfaced and those people get recommended in an efficient, almost automated way. So that’s how we think about internal mobility: at the point of a job being created, how do we activate and surface for that talent advisor all of the employees who might be viable for that promotion, put in place good change management so they can approach their HR managers or that person’s direct manager and get the approval to approach that employee for the move, and/or send the employee a proactive ping to let them know, based on the areas where they’ve expressed a desire for career progress, or relocation, or otherwise, that here’s an opportunity they might want to apply for. It just moves from passive to proactive.
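The Wall Street example above (minimum time in role, top-two-thirds performance, an expressed relocation preference) can be sketched as a simple eligibility filter that fires when a job is created. This is an illustrative Python sketch under those assumed criteria only; the class, thresholds, and function names are hypothetical, not HiredScore’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    years_in_role: float
    performance_percentile: float  # 0 = bottom of class, 100 = top
    relocation_prefs: set          # locations the employee opted into

def eligible_for_role(emp, role_location, min_years=2.0):
    """Apply the posted criteria consistently to every employee,
    rather than relying on a manager remembering to tap someone."""
    return (
        emp.years_in_role >= min_years
        and emp.performance_percentile >= 100 / 3  # top two-thirds
        and role_location in emp.relocation_prefs
    )

def surface_candidates(employees, role_location):
    """At the point a job is created, surface everyone who qualifies."""
    return [e.name for e in employees if eligible_for_role(e, role_location)]
```

The point of the sketch is the consistency: every employee whose stored data meets the criteria is surfaced, so the move from passive to proactive doesn’t depend on politics or on who happens to be top of mind.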

John Sumser 9:15
So underneath these things, workforce literacy, decreased friction in internal mobility, better access to existing candidates, there must be some sort of underlying theme. That’s the heart of the company. What’s the big question?

Athena Karp 9:40
You know, the big question that we get most excited about is how we bring technology and process, in some places automation and in some places augmentation, to help companies progress the total talent management strategies they have, where today data is stored and siloed in multiple systems, and I think further exacerbated by the increasing complexity of talent acquisition and HR technology ecosystems. Even with the rise of CRMs and chatbots and video interviews, and the increasing use of learning management systems, all of these components are disconnected. Even when you have an end-to-end system like a Workday or SuccessFactors, on a data and communication level those systems aren’t truly connected, and there’s not an intelligent thread moving through all of that to surface the relevant information at the right time for that talent acquisition, talent management, or HR business leader stakeholder. And so that is really our goal: how do we enable data-driven HR agendas for the total talent management objectives of organizations?

John Sumser 11:09
Wow, that’s a mouthful of buzzwords. I think what you just said is that with some intelligent technology you can cut the friction out of HR processes and make them more effective for the company and the employees alike.

Athena Karp 11:29
Exactly, and the candidates, whether it’s a candidate who applied but got rejected, or passive leads who have subscribed and are just sitting in the databases and systems.

John Sumser 11:46
So let’s talk about the technology itself. Do you call this AI when you talk to people about it? What does it mean for you, and what does this stuff actually do?

Athena Karp 11:59
Yeah. We do call ourselves an artificial intelligence company. Since our founding, our third hire was a data scientist, and the company is about six and a half, almost seven years old. So from that perspective, yes, we are an artificial intelligence technology company. But I think often AI gets blown out of proportion, and people assume that it means every part of the technology is a black box, with full end-to-end automation and decision automation rather than augmentation. And I think that’s when we dial it back, as you said: what’s in the technology, under the hood? We think a lot about what our obligation is as a company to do the right thing and be as ethical as possible in applying the technology we’ve built. So it’s not always about what we could build, but what we should build. There are certain parts of the process where the technology is, you know, far less complex than it could be, because we believe that explainability, transparency, and making it easier to comprehend, audit, and log are more important than using the most cutting edge and optimizing toward success if we don’t trust the definition of success. So in different parts of the solution we use different types of technologies. Some are, you know, deep learning, for example the ability to comprehend resume data, what is a job, what is a company, what is a time in role; we use quite advanced techniques for that. But for other components it might be just surfacing analytics and reporting to the end user so that they then have the power to make the decision, never automating something like apply-to-hire, or automating who gets selected for an interview. So it’s really the human who holds the power, depending on what type of technology we’re using in that part of the workflow.

John Sumser 14:10
So that seems to me to mean that rather than replacing people, you are improving the quality of the data that they have for decision making inside of the process?

Athena Karp 14:27
Exactly, exactly. Or, where it’s not feasible for a person to have completed a specific task, like looking through the CRM and finding all the people in the pipeline who would be relevant for this role, we can automatically surface those that are relevant, but again require the human review process, that kind of high-touch requirement of reviewing what’s been surfaced before actual decisions, which are only made by humans, not by algorithms.

John Sumser 15:06
Cool. So there are a few people who claim to do similar things. How are you different from them?

Athena Karp 15:15
Yeah, I think the biggest differences with us, I kind of break down into a few pillars. One is really the deep focus on compliance, on transparency, on explainability of all the technology that we build and deliver, which is one thing I think makes us quite unique. Even going back to 2015 and ’16, we’ve been active in submitting information, for example to calls for submissions on the use of data science in hiring decisions, and really trying to be thought leaders in how we responsibly deploy technology in a space that is highly regulated, where there are a lot of requirements that can and should be followed about where to and where not to use the tech. So I think that’s one key component. Another one is building a customized product for every client we work with. A lot of vendors just have an out-of-the-box solution, just a generalist AI; we found that the accuracy and the precision of those products is not what clients expect, nor what it could be. So we tailor the algorithms for every single company we work with, and we’ve almost fully automated that process, as well as fully configuring the product to their workflows, company processes, and company names, whether that means deployments being different, you know, a localized product in different countries, or a single general product because they want one standard across the globe. So the customization is important. Then there’s the deep, seamless integration with existing systems: to the best of our knowledge, an unparalleled two-way, near-real-time integration with every third party in our clients’ ecosystems, whether that’s ATSs, CRMs, scheduling bots or chatbots, video interviews, HCMs, even assessments.
We really believe in the system of record, so we flow data back to our clients’ systems of record at all times and make sure everything is always up to date. So those are, you know, three of the important ones we can mention. And the last is just our track record working with Fortune 500 clients and delivering actual business impact and ROI. A lot of new companies are saying, “Here’s the impact we expect to have,” whereas we have referenceable Fortune 100 clients that have actually seen, and have the numbers behind, the impact that our technology has brought to them over the years they’ve used it.

John Sumser 18:16
Cool, cool. So what are your primary ethical concerns? This is the thing that’s starting to catch people. What do you think about when you think about the ethics of using a tool?

Athena Karp 18:31
Yeah. So, you know, there’s a lot that is possible with technology, but whether we want to build it, and whether we want to deploy it in the world, is a whole different question. So some of the things that concern us, that I think are most important: one is deciding what we want to automate versus what we want to augment, and where we want to make sure there’s a human review in various pieces of the process versus end-to-end automation. A great example is scoring candidates who apply, but not having that lead to automatic status assignments. Just because we say this person is an A, based on the resume data they’ve submitted, the application data, maybe an assessment they took that was required, we don’t think that should mean this person gets hired. You must have a human review process to, I believe, make the judgment calls. So this balance of efficiency, effectiveness, and functional performance, counterbalanced with an ethical standard of where the technology starts and stops and where the human beings really need to be involved, is probably one of the biggest concerns. Another one is just the different ethical standards globally. You know, we’re a company whose technology is live in China, for example, with a localized Chinese solution that actually won an award for the best AI-in-HR solution. But as the builders of this technology, we say we believe there should be a global standard that we apply to that product, irrespective of whether a local standard is looser than that, or whether there are local vendors that don’t follow that standard.
And maybe they can have much higher accuracy than we do by using data that we don’t believe we should use, but that they don’t, you know, ethically have a problem using. So it’s trying not to bring our own bias that our ethical framework is better than other people’s, but making sure that as a company we have a very clear standard that we follow consistently around the globe wherever our technology is deployed, even if a local standard might be different.

John Sumser 21:15
That’s interesting. So you mentioned bias in a sort of tangential way, and there is a fairly broad disagreement in the AI community about whether or not bias can be removed from results and databases. How do you guys think about that one?

Athena Karp 21:35
So I actually think you have to go back to the beginning and say: don’t question whether these data sets have bias; assume that they do, and put the right controls in place knowing there will be bias in that data. And I think, John, to your point, a really important question is that there are many different types of bias. There’s human action bias. There’s data population bias, right? I’ll give a great example: if a job requires a PhD in statistics, there are certain gender groups where, if instead of saying a PhD in statistics I said a PhD in a scientific or mathematical field, I would be much more inclusive of female candidates as well, versus just male candidates, by kind of broadening that. So there’s a requirements bias we see in the data set, but also a population bias as a result of that requirements bias, where, you know, we can’t count on our clients or companies to change the education system in the US and increase the number of people who have these specific degrees. So you have to, in some places, accept that, and then try to build programs, like future-of-work literacy, that will change the population differences, but then internally be cognizant of those, and maybe have a whole different process for how you define job requirements as a result. So I think classifying the different types matters: versus an algorithm bias, right, or a technology tool bias, or even a digital bias. If we require applicants to apply online and say no paper resumes, does that preclude certain age groups and certain types of workers who might not be digital natives or digital-savvy enough to be able to fill out an application? So I think there are so many different types of bias.
We need to understand what’s generating differences, and then what the appropriate mechanism is to combat it, to even it out, or potentially to change policies and processes to account for it.

John Sumser 24:01
So does your tool do anything specific about the bias that it discovers?

Athena Karp 24:08
Absolutely, and I think that’s a really important point. So one of the biggest things we do when we start working with a client is to say, you know, we’re not your lawyers or employment lawyers, and so we’re not going to show or create discoverable evidence of what your different offer rates or interview rates by ethnicity group and gender group are, for example. But assuming they’re probably not equal, for any number of reasons, we don’t want that data to go into the algorithm’s training in an unbalanced way. So we actually have an automatic tool that goes through that data and creates even sets of population for any machine learning that happens, at launch or in ongoing learning. Actually, ongoing learning is a really important topic: post-go-live, you might have verified that the data you learned from was balanced, but then you start learning from, you know, recruiter actions and feedback, and now you’ve skewed what was an unbiased learning data set. So we make sure both the out-of-the-box data and all ongoing learning are evened out by group representation, even within job-related criteria. Another thing we do is extensive testing to make sure the features we use to score candidates are only job-related features. We see a lot of decisions made based on those lines in the resume called extracurriculars and special interests, and what we say is, you shouldn’t be looking for the word golf or a fraternity name unless you’re hiring for a role that actually needs five years of golfing experience; otherwise that’s not a job-related criterion. As well as, you know, continuing to build features that exclude things like zip codes and exact distance
versus commutable buckets of 30 miles, 60 miles, 90 miles, so that you’re giving everybody within a radius a fair chance, all kinds of features and controls, and then testing our product for every single client to make sure it treats candidates with the same qualifications, irrespective of race and gender, the same way. So we have quite a lot of tests for that as well.
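Two of the controls described here, evening out group representation before any learning happens, and replacing exact distance with coarse commute bands, can be sketched in a few lines. This is an illustrative Python sketch only; the field names, downsampling strategy, and band edges are assumptions for the example, not HiredScore’s actual tooling.

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key="gender", seed=0):
    """Downsample so every group is equally represented in the
    training data: one simple way to build 'even sets of population'
    before launch or ongoing learning."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec)
    n = min(len(g) for g in groups.values())  # size of smallest group
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return balanced

def commute_bucket(distance_miles):
    """Replace raw distance (or a zip code) with a coarse commute band,
    so everyone inside the same band is treated identically."""
    for limit in (30, 60, 90):
        if distance_miles <= limit:
            return f"<= {limit} mi"
    return "> 90 mi"
```

The same balancing step would be re-run on recruiter feedback before it is fed back into the model, which is the ongoing-learning concern mentioned above: a set that was balanced at launch can drift as human actions accumulate.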

John Sumser 26:23
Got it. Well, we’re well through our time together, so let’s just wrap. How do you let your clients know which of the processes they’re using have intelligence in them?

Athena Karp 26:38
Yeah, actually, that’s a really interesting topic, because, you know, when we come in the front door we’re being vetted and evaluated by our client. We are, as you say, an artificial intelligence technology, so we have to go through a lot of scrutiny for things like compliance, proactive bias mitigation, only leveraging, storing, and processing data that is absolutely required, minimizing data to only data that’s been consented to, et cetera. One of the things we’re thinking a lot about is that the systems we often integrate with might deploy AI features, sometimes without the client even knowing that those features are using AI, or that those features are using data a candidate might not have consented to or publicly shared. In an increasingly data-privacy-cognizant world, just because data can be found on Google doesn’t mean a person wants you to use that data to evaluate them for employment opportunities. And so I think we’re trying to bring light to what vetting processes any vendor should go through, whether AI is your core product or there are AI components in secondary products within your solution. Which is part of why, you know, I know, John, you’re supporting a global working group that I’m on as well that is building frameworks for vetting and analyzing artificial intelligence HR products, which would be global standards. And as we look, you know, 12 to 24 months ahead, it’s about making sure those standards are applied not just to products where AI is core or primary, but to any AI components inside of, for example, your HR ecosystem.

John Sumser 28:17
Cool, well, this has been a great conversation we could do another hour or two. I’m sorry that the format constrains it. Thanks for taking the time to do this. Would you reintroduce yourself and tell people how they might get a hold of you?

Athena Karp 28:28
Absolutely. Well, always such a pleasure. Really appreciate those thought provoking and cutting edge questions. My name is Athena Karp, CEO and founder of HiredScore, feel free to reach out and contact me at Athena at HiredScore dot com.

John Sumser 28:42
Thanks very much. We’ve been speaking with Athena Karp, CEO and Founder of HiredScore. You might catch her for lunch sometime if she happens to be in New York City. Thanks for tuning in. Really appreciate you taking the time to be with us today. You’ve been listening to HR Examiner’s Executive Conversations and we’ll see you back here next week. Bye Bye now.