HREx 1.08 Jonathan Duarte

Creating Chatbots Before They Were Cool

Summary

In this episode of the HR Examiner podcast, John Sumser speaks with Jonathan Duarte, a pioneer in chatbot technology for recruitment. They discuss the evolution of chatbots, the challenges of AI in HR, the importance of data quality, and the limitations of SaaS solutions. Duarte emphasizes the need for human insight in automation and the risks associated with AI hallucinations. The conversation also touches on the historical context of Taylorism in recruitment and the future of conversational interfaces.

Takeaways

  • Chatbots have evolved but still rely on scripted responses.

  • AI in HR faces challenges due to the need for deterministic answers.

  • Data quality is crucial for effective recruitment processes.

  • SaaS solutions often fail to meet the unique needs of businesses.

  • Understanding Taylorism is essential for process automation in recruitment.

  • Conversational interfaces will improve but won’t cover all use cases.

  • Security risks in AI must be addressed, especially in sensitive areas like healthcare.

  • Hallucinations in AI responses can lead to misinformation and legal issues.

  • Human insight is irreplaceable in understanding complex processes.

  • Automation should not replace the knowledge of experienced employees.

Time Stamps/Chapters

00:00 The Evolution of Chatbots in Recruitment

02:36 Challenges of AI in HR and Recruiting

05:37 Data Quality and Its Impact on Recruitment

08:28 The Limitations of SaaS in Recruitment Processes

11:15 Understanding Taylorism in Modern Recruitment

14:11 The Future of Conversational Interfaces

17:05 Security Risks of Large Language Models

19:53 The Dangers of Hallucinations in AI Responses

22:37 Navigating Data Integrity in AI Systems

25:25 The Role of Human Insight in Automation

28:16 The Future of Work and Process Automation

Transcript

John Sumser (00:00)

Hi, I’m John Sumser, and this is the HR Examiner podcast. Today we’re going to be talking with Jonathan Duarte. Jonathan is, among a bunch of other things, one of the real pioneers in the chatbot craze that we are in the throes of right now. As long ago as 2016 he was building chatbots, and he has grown that view of the world as the technology has evolved.

So it’s really going to be a great conversation, because we’re talking to somebody who’s been there on the ground floor for a decade now. How are you, Jonathan? Introduce yourself.

Jonathan Duarte (00:39)

Yeah, fantastic. Thanks, John. So I come from, I think, now 30 years in recruitment tech. I built one of the first job boards in ’96, then built job distribution, the data play on moving jobs between job boards and ATSs, and then was on the founding team of a small business background screening company called GoodHire.

While we were doing that, there was this little company at the time called Uber that needed to hire 600,000 drivers in 180 days, which they did entirely over text messaging in 2016. No one talks about it. The thing is, I was one of the guys building parts of the API for the background check on drivers, so I saw this conversation happening early on. No one was calling it a chatbot, because it was all scripted, right? And my co-founder at the time said, let’s just try this for job search and see if people would do it. Ten years ago last month, we built the first chatbot in the United States for job search. It went viral after two Facebook posts, to 103 countries in 30 days. So I quit my job and decided conversation is the way we’re actually going to communicate and do UI in the future.

John Sumser (02:15)

That’s awesome. So talk a little bit about how it’s evolved since then. Because on one level, you’d be excused for thinking that AI is here and AI is what chatbots are all about. But when you look under the covers, most of what you see is the same scripted stuff that you were doing 10 years ago.

Jonathan Duarte (02:36)

Yeah, yep. And I think it’s always going to be that way, because there are rules in a database, and depending on where this data comes from, sometimes it’s just better to get the deterministic response rather than trying to create a generative response. And in terms of where we’ve come in the last 10 years: I built, or was product manager for building, the first conversational chatbot for Wells Fargo on a project. Then I left there and built the first contact center healthcare chatbot for Kaiser Permanente. Both of these are huge enterprises with huge security issues, and on the enterprise side they haven’t moved that much. The reason why is that getting the data back to the individual through the multiple solutions is really hard to do. And there are so many data exceptions that to really have the kind of conversation we want, as human-like as possible, it’s going to take a while unless you own the entire pipeline on the back end.

John Sumser (03:53)

I wonder if it’s even possible to get it right. Because particularly in HR and recruiting, probabilistic answers really aren’t good enough. So you say, how much vacation time do I have? If the answer is, I think it’s about a month.

Jonathan Duarte (04:04)

Yep. Yep.

John Sumser (04:10)

It’s not going to work, right? In business, most of the answers that matter are binary. They’re not probabilistic. There are questions like, when am I going to get to be the CEO of the company? And it’s like, well, if you do these five things, you might get there in 30 years. That’s where probabilistic conversation is useful. But did you get my resume and what do you think of it?

Jonathan Duarte (04:10)

Yeah.

Exactly.

John Sumser (04:40)

Well, I’m not sure we really got it, you know, that sort of thing.

Jonathan Duarte (04:45)

Great example, right?

John Sumser (04:48)

Yeah, yeah, yeah. That can’t work. And there is nothing that I’ve seen that is actually a guaranteed fix for getting a scripted tool to behave like it sounds like a human, right? There’s just too much error rate inside of it. And you don’t really hear people talking about error rates. But if the error rate generally runs around 20%, that’s 80% accuracy, which means one out of every five letters is wrong, one out of every five words is wrong, one out of every five answers is wrong. And that’s not good enough. If you had an employee who behaved like that, you’d fire him.

Jonathan Duarte (05:37)

Yeah, I think another example of that is from a business perspective. If you look at it, why would a company invest in an employee self-service chatbot to do something the UI can do that’s deterministic? Like, how many hours do I have for PTO, right? Yes, the cost to get that answer is training the user, building the system, updating it, but those are known costs. We’ve had those for 50, 60 years, right? They’ve changed over time, but they’re known costs. And that’s only one of the things we train the user on.

For example, I became a part-time ski instructor at Palisades Tahoe and went through literally 20 hours of training. From a business perspective, the ROI to get an employee up to speed teaching ski school is expensive, but we’re also one of the major revenue streams of the company. So if we’re talking about managers and executives, if we know we can get that training deterministic, and there are a thousand other people around the company who can just say, hey John, how do I get to my payroll? That’s easy. We already have a system in place that works. So there’s really no reason to upend that system and accept a 20% failure rate. That’s why AI is not going to be as spectacular for deterministic issues. It’s great for helping write emails. But what’s next?

John Sumser (07:29)

Yeah, you know, one of the interesting experiments: around the time that you were building out chatbots, I spent a bunch of time with a company called Socrates, and Socrates was trying to build a chat interface to employee handbooks.

And what they found was, first, the data isn’t any good because nobody ever updates their employee handbooks. And if the company has more than one location, the policies are different between locations. So you can’t actually go into big company X’s chatbot for HR and ask, can I wear shorts to the office? Because yes, you can in Florida, Texas, and California, but no, you can’t in Minneapolis, right?

Jonathan Duarte (08:25)

Yeah, and that’s not even a global enterprise. That’s just a company that has people in Colorado, Florida, California, and New York. That’s not big. That’s not a huge enterprise.

John Sumser (08:28)

Right.

Yeah, but the actual way that companies are run is way different than the SaaS model would have you think. One of my soapboxes these days is that the reason spreadsheets remain popular is because SaaS software sucks.

Right? SaaS software is the idea that everybody fits into the same workflow. And if you don’t fit into the same workflow, then you’re going to have to fix it yourself, because we’re the software company and this is the workflow. And particularly in critically important things like recruiting, companies do it really differently. They do it...

Jonathan Duarte (09:02)

Yeah. Yep. I’ve never talked to two companies that hire the same part-time worker the same way, or any worker.

John Sumser (09:32)

Yeah, so underlying all of the dissatisfaction that’s in the market with recruiting software is this problem that there isn’t a universal tool that will work for everybody everywhere.

Jonathan Duarte (09:46)

Yeah, yeah. I think that’s the hardest part about this. I had also listened to your talk with Matt Charney, and it was really interesting that Taylorism came up, because at UCLA I studied Taylorism under one of the preeminent professors in the space. And it was phenomenal, because I looked back and it just turned out that

John Sumser (10:00)

Uh-huh.

Jonathan Duarte (10:14)

I actually saw my professor, who was married to John Lithgow; they were at the Oscars last year. And I’m like, my God, I can’t believe it. I just thought back on my life: since my first career role, all I’ve been doing is implementing Taylorism. My entire life. Process automation.

John Sumser (10:39)

So one of the most interesting days I spent was with, you know, Jerry Crispin. Jerry Crispin is a graduate of the Stevens Institute of Technology, which sits on the Hudson River looking at New York City. I think the town is Jersey City. And there is a Frederick Taylor library there that we went into and hung around in a bit, because

Jonathan Duarte (10:55)

Mm-hmm.

John Sumser (11:06)

I have a fascination with Taylor, Jerry has a fascination with Taylor, and it’s worth figuring out how to get there to see the library.

Jonathan Duarte (11:10)

Wow.

Yeah, yep. It was weird. My dad went to Harvard Business School, and one of the books he had on a shelf was Taylor’s book on scientific management. And I had no idea. I know I had thumbed through it before, but I had no idea that that’s exactly what my career was going to be.

John Sumser (11:29)

Uh-huh.

Jonathan Duarte (11:41)

It’s different in recruiting, it’s different in HR, but it really comes down to... I have a technical mind, I’m not a programmer, but understanding the process flow of data, I think, is one of the hardest skills for most people to get: to actually understand the business and understand the data flow.

John Sumser (11:59)

Yeah, well, I agree. And there’s a whole question there. One of my favorite rabbit holes these days is: what is data quality, and how do you tell? When you look at the earliest stuff that I saw about trying to figure out recruiting, it was all about making sure that the data was clean, and it was never clean.

Jonathan Duarte (12:28)

Whoever’s thinking they’re going to get clean recruiting data, and they’re still writing checks for it, I’m like, good luck.

John Sumser (12:38)

Yeah, well, so what do you do if data is the heart of the answer? What do you do if you can’t clean the data? How do you deal with that?

Jonathan Duarte (12:48)

I think we have to look at it holistically, going back to the recruiting that you and I knew. I graduated from UCLA in ’93 with a resume on a piece of Strathmore paper, right? And this, again, just goes back to Taylorism: forget the technology or anything else, just go back to the process. It’s finding and having candidates that are qualified, available, and interested at that time. Then we can throw computers in there to increase engagement. We can throw computers in there to match. We can do all these things. But the way it’s set up right now, there are so many vendors in that process that we just don’t get the data.

I built the original source-of-hire protocol in 2001, and many of the job boards and ATS companies are still using it, but I’ve seen companies where they still have a dropdown: how’d you hear about us? And I don’t ask myself why anymore, because I know the TA teams, you know, what is it, I think a director of TA lasts 18 months, typically. So there’s no continuity.

John Sumser (14:11)

Right.

Jonathan Duarte (14:14)

HR hasn’t been as strategic as I think businesses need it to be. And there are lots of people who would say HR isn’t a technical area. I’d say you don’t know what you’re talking about. It’s probably the most technical, because it is the foundation of payroll. And if payroll is not done right, you don’t have a company.

John Sumser (14:34)

So that leads me to a question: how long do you think it’s going to be before it’s possible to have a real conversational interface?

Jonathan Duarte (14:46)

I don’t think we’re ever going to see a hundred percent of use cases. We’ll get there, but we have to be really clear about use cases. For instance, when we were building the Kaiser solution, we were really clear that we could only use conversational AI on non-healthcare-related issues. Number one, we couldn’t try to have the system start billing, or start doing interviews or scheduling. Those just needed to be deterministic and just done. Don’t break what’s working; it’s too massive a scale. But we could answer level one support questions: how do I log in? How do I reset my password?

My view of all of this, and we’re going to see it over the next five to ten years, is that level one support, the how-do-I-find-my-PTO kind of question, that little chat window, can give you instructions within Workday or within SAP. So the really simple things that we do spend a lot of time training people on, we’ll be able to normalize down to some simple instructions on that platform. But we’re not going to see agents of agents that can communicate across big data sets very quickly, because getting the data, number one, is a core issue. And then understanding where the data is, and the exceptions to the data, is so expensive that the solutions just aren’t going to match up right away.

John Sumser (16:35)

Yeah, I guess I... Go ahead.

Jonathan Duarte (16:35)

So I think we’ll get there, but on simple things.

John Sumser (16:38)

Yeah, okay. So one of the things I’ve been puzzling about: how do you get a large language model to remember, in my applications? Not in the use of ChatGPT, but when I want to go over here and do some sort of chat project. And the first thing you run into is that it doesn’t have a memory, so you can’t actually get it to do the same thing twice in a row, because it doesn’t remember how to do it. How do you fix that?

Jonathan Duarte (17:11)

Yeah, yep. If I had that answer, I swear I’d be getting one of those $10 million checks from Meta, right? Or something like that. The issue, and we have found this, and some pretty smart people in the recruiting space have found it too, is that you can take the exact same set of resumes, even on the same model, but do it from a different country or a different state, which means you go into, say, ChatGPT from two different locations and two different servers, and the answers come out radically different. I think the point is that because it’s probabilistic, and we aren’t setting criteria up front, we don’t own what’s going to come out on the back end. Which is why, when anyone says, hey, we’ve got an end-to-end chatbot, we’re using OpenAI or some other LLM to match candidates, I’m like, bankruptcy. That is never going to fly with the VP, the CHRO, and the CIO.

John Sumser (18:20)

Yeah, have you, this is a great thread. Have you spent much time thinking about the security risks in using large language models?

Jonathan Duarte (18:29)

There are a couple. I haven’t spent that much time on the security piece, but the memory piece you mentioned before, I have. Our company uses OpenAI and ChatGPT, and we have it connected into our own enterprise solutions as well, but we’re really small scale; we’re not a Deloitte or something like that. So being able to prompt up and use OpenAI or any of these other models with your memory of, say, what solutions are we supposed to be doing for this product-market-fit exercise for marketing at P&G? I don’t know how they’re going to solve that right now. That’s going to be the million dollar question.

The security piece, I think most of these guys have down, except maybe from a login perspective. But I think it really comes back to risk, not the security of the data, because they can do private LLM access and things like that. The real risk is the probabilistic response, especially in healthcare. You can’t give a somewhat correct answer.

John Sumser (19:53)

Yep, yep, that keeps coming back as the big one. But I also, you know, did some research over the last year, and I’m up to about 45 different ways you can hack a large language model. Most security people focus on the machine and the intersection with the machine. But when you hack an LLM, one of the

Jonathan Duarte (20:14)

Mm-hmm.

John Sumser (20:20)

paths you can take is toward hacking the machine. But another path you can take is toward causing it to say something stupid that the company has to pay for. And there’s plenty of legal precedent now that says if your chatbot says you’re going to stand on your head and face north, you’ve got to do it.

Jonathan Duarte (20:33)

Yeah? Yep.

Yeah, and you know what’s a great example of that, that people are using every single day? Marketing teams. Previously we would just have search engine optimization as a way to market your company, so when someone did a search on Google, you would show up. And we had SEO guys, like myself, who would go tell Google’s search engine all about our company and how we do it better than the competitor. We’re doing the exact same thing with large language models. We are going in and telling them who the best is and who the worst is, so our competitors aren’t showing up. So we’re hacking the answer. We’re not hacking the actual hardwired solution; we’re hacking it this way. And it’s very public that it’s happening.

John Sumser (21:40)

Yeah, so what do you suppose is going to happen there? Because if you’re not doing that, you’re kind of stupid. It may not be an honest way to do business, but if you’re competing for business and you don’t use the same tools everybody else does, it’s dumb.

Jonathan Duarte (21:48)

Please.

Yeah, I think...

John Sumser (22:05)

But that means that all of the data in all of the systems is already starting to be too corrupt to use.

Jonathan Duarte (22:11)

Yeah. And we’re dumbing down to the middle, right? I had a friend in the industry who said, hey, we created this great marketing thing for our clients and we’re going to help them build their marketing strategy using OpenAI. I go, if I were your boss, I’d just throw that in the trash can. And he goes, but why? I go, because you’re just using a dumbed-down average. You’re getting more average data out of ChatGPT. Marketing needs to have teeth, not gums, so you can’t use that for a marketing project. I mean, you can get the competitive research, all that stuff, great. But it’s not going to create “Just Do It.” It’s going to say “Just Do It” after Nike and everyone else has copied that 50 billion times.

John Sumser (23:01)

Right.

Jonathan Duarte (23:06)

But the question is, how do we stay away from hacked data inferences, like if I say, hey, my company is better at text recruiting than somebody else’s, right? The way we do it, as we’re using it, is almost kind of a scripted solution. We only want ChatGPT to understand the outcome and provide a humanistic response to it. We’re creating a deterministic answer, but we want ChatGPT, or one of the language models, to generate the human response to that specific answer. We force the answer and just use the model for the generative part, not the research.
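
A minimal sketch of the pattern Duarte describes here, in Python, assuming the OpenAI SDK; the `lookup_pto_balance` function is a hypothetical stand-in for whatever deterministic system of record actually holds the number, and the model is only asked to rephrase an answer that has already been decided.

```python
# Sketch: deterministic answer first, LLM only for phrasing, never for facts.
# Assumes the OpenAI Python SDK; the HR lookup and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def lookup_pto_balance(employee_id: str) -> float:
    """Hypothetical stand-in for the system of record (HRIS, payroll, etc.)."""
    return {"emp-001": 42.5}.get(employee_id, 0.0)

def answer_pto_question(employee_id: str) -> str:
    hours = lookup_pto_balance(employee_id)  # the fact is decided here, outside the model
    prompt = (
        "Rewrite this as one friendly sentence for an employee. "
        f"Do not change the number: 'You have {hours} hours of PTO available.'"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # keep the phrasing as stable as possible
    )
    text = resp.choices[0].message.content or ""
    # Guardrail: if the model dropped or altered the number, fall back to the fixed string.
    return text if str(hours) in text else f"You have {hours} hours of PTO available."

print(answer_pto_question("emp-001"))
```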

John Sumser (23:54)

So that gets to a kind of technical question I’ve been wondering about, that you’ll probably have a great idea for. And that is...

If you have a piece of data and you run it through the LLM, the LLM is going to do its thing with it. And because LLMs don’t particularly understand math, if your text says 1, 2, 3, 4, 5, 6 and you want to know the next thing, it’s as likely to say 711 as 72 or something. It doesn’t know. It doesn’t understand. So you get answers that are logical from the prediction of what word might come next but that don’t make any sense at all. And if you drag the data through the LLM, you get that stuff applied to it. If you route around the LLM, then you end up with a layer where you have the language coming from the tool and the data coming from some other place, and you merge them afterwards. And I haven’t seen anybody who can tell you whether those two things, merged, make the same viewpoint. Right? So the answer is six.

Jonathan Duarte (25:25)

Mm-hmm.

John Sumser (25:27)

But the narrative from the LLM says seven. And I don’t know how you reconcile that, because you almost have to have a feedback loop that forces you to drag the data back through the LLM in order to make sure that the answer and the data are aligned on the other side.

Jonathan Duarte (25:31)

Mm-hmm.

Yep. There are tools out there now that, for the best way of saying it, are anti-hallucination tools, all right? They’re expensive, they’re really for the enterprise only at this point, but they address the original problem you were mentioning. Say benefits: say you have a global organization, and you’ve got benefits by region, by employee type, multiple languages, multiple countries. And I know this is just a hypothetical, but if all that data were up to date, the way you can use these RAG systems now is you go retrieve the data by the employee’s location, in whatever database it’s in. Again, this is very hypothetical, but you could retrieve that data,

John Sumser (26:29)

Hmm.

Jonathan Duarte (26:45)

put it through the LLM to come back with the response. But before the response goes back to the user, it purposely goes back to the document that the RAG retrieval came up with, or the LLM came up with, and verifies that that context is accurate. Those vector systems, you’re seeing companies implement this stuff, but it is truly at the Fortune 100 level right now.
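
A rough sketch of the retrieve-generate-verify loop described above, not any particular vendor’s tool: `retrieve_benefits_doc` and the policy data are hypothetical, and a simple number-containment check stands in for the vector-similarity verification an enterprise anti-hallucination system would run.

```python
# Sketch: RAG with a verification pass before the answer reaches the user.
# Assumes the OpenAI Python SDK; retrieval, policy text, and model name are illustrative.
import re
from openai import OpenAI

client = OpenAI()

def retrieve_benefits_doc(employee_location: str) -> str:
    """Hypothetical retrieval step: fetch the policy text for this employee's region."""
    docs = {"CA": "California employees accrue 15 vacation days per year."}
    return docs.get(employee_location, "")

def grounded_answer(question: str, employee_location: str) -> str:
    source = retrieve_benefits_doc(employee_location)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{
            "role": "user",
            "content": f"Answer using only this policy text:\n{source}\n\nQuestion: {question}",
        }],
        temperature=0,
    )
    answer = resp.choices[0].message.content or ""
    # Verification pass: every number in the answer must also appear in the source document.
    if set(re.findall(r"\d+", answer)) <= set(re.findall(r"\d+", source)):
        return answer
    return "I couldn't verify that against the policy document; please check with HR."

print(grounded_answer("How many vacation days do I get?", "CA"))
```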

John Sumser (27:13)

Yeah, and so I understand why people think that would work, but let’s say you get to the other side and the LLM and the document don’t agree and you have to redo the work.

Jonathan Duarte (27:27)

Mm-hmm.

John Sumser (27:29)

You’re going to have the same probability that it comes out bad as you had the first time through. Even if you validate the data out, you can’t...

Jonathan Duarte (27:34)

Yes.

John Sumser (27:40)

You can’t kill hallucinations, because everything that a large language model does is a hallucination. And, you know, do you know who Mira Murati is?

Jonathan Duarte (27:51)

No, I’ve heard the name, but I’ve never met her.

John Sumser (27:54)

So she was one of the co-founders of OpenAI. And when all of the noise happened a couple of years ago, she fled with all the rest of the smart people, and on this issue she went off and started a company called Thinking Machines, which is another billion-dollar AI company.

The first thing that she did was this experiment: she went to a conversational interface and ran the same query a thousand times. And what she got back from running it a thousand times was 80 different answers. Some of those answers were right, but most of them were wrong in little tiny ways.

Jonathan Duarte (28:45)

Mm-hmm. Mm-hmm.

John Sumser (28:45)

They weren’t just another way of saying the same thing. You can group all the ones that were purely another way of saying the same thing, but of the 80 different answer types, something like 60 or 65 of them were noticeably wrong. And this is as good as it gets. The whole job of Thinking Machines is that she is trying to build tools that inhibit hallucinations.

But really, that’s what these systems do. They don’t have any attachment to meaning or any attachment to the world. They are just probabilistic guessing machines.
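
For a feel of the kind of repeat-the-same-query experiment described above, here is a small sketch, assuming the OpenAI SDK and naive exact-match grouping of answers rather than whatever methodology was actually used; it simply counts how many distinct responses one prompt produces.

```python
# Sketch: send one prompt N times and count how many distinct answers come back.
# Assumes the OpenAI Python SDK; model name and N are illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answers(prompt: str, n: int = 50) -> Counter:
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat model
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # default-like sampling, where run-to-run variance shows up
        )
        text = (resp.choices[0].message.content or "").strip().lower()
        answers[text] += 1        # naive grouping; a real study would cluster paraphrases
    return answers

if __name__ == "__main__":
    counts = sample_answers("How many vacation days does a California employee accrue per year?")
    print(f"{len(counts)} distinct answers out of {sum(counts.values())} runs")
```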

Jonathan Duarte (29:26)

Yep, you know, and that’s a little segue in the conversation. I think that’s why anyone who’s saying, hey, we’re going to do this conversational AI at scale, doesn’t really know, technically, that it isn’t going to work. In HR, we’re not going to be able to go to SAP. Say you have SAP in one division, you’ve got Oracle in another division, and then you’ve got Workday in another division. There’s no way there’s going to be a UI that goes to each of those platforms and comes back with the right answer. Maybe not even in my lifetime. I think everyone is just going to cut the umbilical cord on that concept pretty quickly.

I think what we’re going to start seeing, because everyone’s investing so much money, and I think someone has said that the amount of money invested by the United States in AI in the last three years was more than the GDP or consumer spending in the United States, and the only time that has ever happened was when we had the railroads. So yes, this is coming, but CIOs are investing in smarter emails at the moment. I think it all goes back to Taylorism. What are the pieces that we can automate, not individually, and determine: should we even have been writing emails in the first place?

John Sumser (31:01)

Right. Right. That’s a real question. And just to drag it back to your history and your expertise, I think you have enough time at the front end of how things evolve to know that automating the same old BS doesn’t really get you anywhere. It just gets you faster stupid. It doesn’t get you faster smart. And in order to get to faster smart, you have to be able to completely reimagine the process. And I don’t know that our educational system trains people to reimagine processes. So I don’t know where the people who are going to reimagine processes come from.

Jonathan Duarte (31:33)

Exactly.

Yeah, it’s really tough. I think very early on in my career I was very, very fortunate to have plenty of mentors, even yourself in the recruiting space, and Jerry. I didn’t know we were all thinking about Taylorism the same way even 30 years ago. But what I learned early on at Gateway 2000, as I think I mentioned before, is we rebuilt their sales order processing, their purchasing, their inventory, their financials, their manufacturing, their customer support system. In an entire year, we rebuilt it all. And I was fortunate, at 24 years old, to see all four corners of the business, all the data that was required, how to merge all this stuff. And what it came down to for me is that it’s really hard to have someone who knows all four corners of the business, which is what’s required to do process automation.

John Sumser (32:49)

Yep. Yep. And we have such deep silos that nobody knows all the pieces of the business.

Jonathan Duarte (32:55)

Yeah. Yep. And when we look at who’s getting laid off with AI and all this other stuff, I just go back to: the people who know the business are the most critical assets right now. And if you’re a CHRO and you’re not finding a way to keep those people on... they may not be able to spell AI, who cares, but they know the process. They know your business. They know your customers. They know where the data is. Those are the people you have to keep, because as you’re trying to do workflow process automation, which is going to happen, they’re the people who know the answers.

John Sumser (33:38)

One of the things I think about this emphasis on workflow process automation is that it’s basically a private equity approach to solving problems. If you automate the process, then you can fire the people. So you automate the process, you fire the people.

Then what do you do? What do you do the next time you have to rethink the process? I think you’re stuck, because you don’t have anybody who actually understands how it works. And that means the logical conclusion of automating your workflows is that you’re going out of business.

Jonathan Duarte (34:21)

True. But here’s the thing, and it’s so interesting. I watch a lot of YouTube videos because I just geek out on this AI stuff, and I watch all these guys who are going to create these processes for businesses. They send you a cold email and say, hey, we’re going to automate your recruiting for you, and for $10,000 it’s done. It’s like, yeah, but I have like five different processes. And then we had a merger, so everything you built is worthless. So it’s all about understanding that businesses change. If they don’t change, your business doesn’t exist. I mean, Kodak doesn’t exist in its former self anymore. And I think the stat is something like only 20% of the Fortune 100 were in the Fortune 100 ten years ago, or maybe twenty. So the natural

John Sumser (35:16)

Right. Get out there.

Jonathan Duarte (35:20)

instinct is that there’s going to be change. So that’s why I say you need to keep the people who know the product, know the customer, and know where the data is. Because if they do, they can change with it. You can’t hard-code your business. There’s no such thing.

John Sumser (35:35)

Well, so that’s the paradox, isn’t it? Because if you need the people that you’re going to automate out of the job, why automate in the first place?

Jonathan Duarte (35:45)

Yeah, I think what we see is we automated the seamstresses. We automated the phone people, you know, the people who were plugging stuff in. We don’t look back and say, God, I wish I could call John and he could plug me into Jerry on a phone line. We’ve gone past that. So I think there are going to be areas where we can. But then there’s selective executive recruiting; no one’s ever automating that. It’s still a gut feel, and it comes from talking to somebody else. No computer is going to call John and say, hey John, what do you think about Jerry? How does he work in this situation? You can have that in a golf game, right? No AI is going to do that. No process automation is going to do that. So that relationship piece in sales and marketing is never going away. We’re not going to automate it. But we can change the manufacturing line a little bit for things where we know we have a specific outcome. If you have to do ad creation, and it has to go a certain way, and it has to get put over here at this certain time, and then we’re going to track the metrics to see if that ad actually worked, sure, you can automate that.

But you can’t automate the total understanding of what the insights in those numbers look like.

John Sumser (37:05)

Yep. Well, I think we could probably keep talking for another couple of days. This has been great.

Jonathan Duarte (37:12)

Well, we probably will be.

John Sumser (37:14)

Tell people how to spell your name, how to get in touch with you, and a little bit about your company.

Jonathan Duarte (37:21)

Yeah, so my name’s Jonathan Duarte, D-U-A-R-T-E. I’m easy to find on LinkedIn because I’m an old SEO guy, so my LinkedIn profile is pretty easy to find. I run a company called GoHire. We’ve been doing talent and recruiting automation, from text messaging to any kind of strange HR platform where you think, hey, there’s a way to automate. We do a lot of custom builds in there. And then I also do a lot of advisory work for some private equity firms, some VCs, as well as corporations, on what the overall process should be and what kinds of tools might be able to solve those types of problems. I also do some early stage investing and consulting for early stage companies in the HR tech space.

John Sumser (38:16)

It’s been a great conversation, Jonathan. I really appreciate you taking the time to stop by and do this. Thanks, everybody. Yep, and we’ll see you all next time through on the HR Examiner podcast.

Jonathan Duarte (38:22)

You bet, John. Thank you, as always.

John Sumser (38:30)

Okay.

Keywords

chatbots, recruitment, AI, HR technology, data quality, SaaS, Taylorism, conversational interfaces, security risks, automation
