Summary
In this episode of the HR Examiner Podcast, John Sumser speaks with Usman (Oz) Khan, Senior Vice President and head of ADP Ventures, about the intersection of AI and enterprise software. They discuss the fears surrounding AI, its limitations in precision work, the differences between personal and enterprise AI applications, and the significant challenges of security and bias in AI models. Usman emphasizes the importance of understanding the complexities of enterprise data and the need for HR buyers to ask the right questions when evaluating vendors. The conversation concludes with insights on the prudent approach to AI implementation in organizations.
Takeaways
AI poses risks that could erode human connections.
Value delivery is essential in a competitive market.
Current AI technology lacks the precision needed for enterprise tasks.
AI’s evolution is more evolutionary than revolutionary.
Security is a major concern for AI in enterprises.
Bias in AI models mirrors human biases and requires careful training.
Inconsistent data can hinder effective AI implementation.
HR buyers should ask detailed questions about vendor capabilities.
Patience is crucial in adopting new technologies.
Understanding the complexities of enterprise needs is vital.
Chapters/Timestamps
00:00 Introduction to AI Concerns
02:32 The Limitations of AI in Precision Work
05:09 AI in Personal vs. Enterprise Use
07:40 The Future of AI in Enterprise Systems
10:22 Security Challenges in AI
12:44 Bias and Training in AI Models
18:35 Navigating Inconsistent Data in Enterprises
25:34 Effective Vendor Questions for HR Buyers
32:29 Final Thoughts on AI Implementation
Transcript
John Sumser (00:00)
Hi, I’m John Sumser and this is the HR Examiner podcast. Today we’re going to be talking with Usman Khan, who goes by Oz, generally speaking. And Oz is the Senior Vice President and head of ADP Ventures and a founding member of its strategic investment arm. It’s a fascinating job right at the intersection of innovation and the
huge supply of customers that ADP has. So with that, Oz, how are you?
Usman Khan (00:34)
I’m really well. A little bit chilly in New York, but that comes with the territory, I suppose.
John Sumser (00:40)
Yeah, well, you can come visit me in California. It’s not chilly here today.
Usman Khan (00:44)
Oh boy, don’t do that to me.
John Sumser (00:45)
I tell people in New York City this time of year, it doesn’t have to be like that.
John Sumser (00:50)
So I want to dive right in. I want to talk about the things about AI that scare you. So what are the five things that scare you most about AI?
Usman Khan (01:01)
I think the thing that scares me the most is that it takes us down a path that we should not go down. Where you see an erosion of the things that make us human at work, at home, and as friends with each other. Where it pushes us into such silos, personally and professionally, that we lose who we are.
I think that’s the philosophical one. The very tactical one is, again, being part of a company that’s built on a 75-year legacy of strength and market dominance through value. Nothing comes without delivering value to your customers. That can get challenged in an environment where the innovation pace really comes after you.
Those are two fears that we as professionals and as a society have some degree of agency over, and can try to orient in the right direction, if you will.
John Sumser (02:03)
Yeah, I worry a lot about...
The view that large language models are somehow adequate enough or can be made adequate enough to do precision work, right? That seems like something that you’d be dealing with on a regular basis, but you can’t afford. ADP could not possibly afford to have a process that goes, well, your pay this week is maybe $50. There’s actual disciplined precision required to do what you do.
And as near as I can tell, it’s not possible to get that level of precision with these models.
Usman Khan (02:40)
It is not. It’s generative, which essentially means that it is designed to predict things based on patterns that it’s observed in the past. It’s like, you know, the human brain and the neural network, but the human brain also has a lot of other components to it that give us judgment. And with the current iteration of models, when you talk to, you know, the Yann LeCuns of the world,
there’s a very predominant set of thinking out there that says they have a marginal curve up to which they will be efficient and be able to make judgments. They’re very good at micro-judgments. But for it to have a full set of decision-making ability, like, let me start from scratch and let me start to calculate payroll for 500 people in 35 states with a union
in place? Like, forget it. AI is a long way from that level of precision. So I think the augmentation of human beings is really the frontier that we’re shooting for right now. And in some cases, people are being reckless to say that it’s a replacement for human beings. But from my perspective, the technology that we see today is not there. And you could argue about the...
advancement of models and the pace of innovation and all that sort of stuff. But you can look at the fact that the step function change between each release becomes flatter and flatter. It started out revolutionary and it’s becoming more and more evolutionary. So I think a lot of things have to happen before this paradigm that we’re in today shifts. So I feel like we’re sort of in the middle of a very long S curve right now.
John Sumser (04:18)
That’s interesting. From my perspective, about the only enterprise thing that generative tools are good for is marketing, because in marketing it doesn’t actually matter how precise you are. What matters is how persuasive you are. And so I’m worried that it leaches out into other things from there, because it’s awfully easy to do some things that look heroic until you
dig under the covers and find out what’s actually there.
Usman Khan (04:44)
It’s so interesting, because I use a ton of AI in my personal life, without promoting different models and different companies over here. Some of those problems can be as technical and precise as... I like to build PCs with my kids. So, troubleshooting a new rig that is having all sorts of issues because of memory or heat sink problems and so on and so forth.
And you’ll find that the model takes you into menus and command prompts that don’t exist, but that it thinks exist. And these are the top, highest-tier paid models that you can get. And it’s always a reminder that, hey, it will make up stuff, or it will keep on validating the things that you believe in and the biases that you hold, unless you check it. And to check it, the amount of work that you have to do is tremendous.
You have to literally grind and teach it, which is why AI in the enterprise has been at a different velocity than AI in my home. Because as a consumer, it’s very easy to pick up and apply my judgment to say this works and this doesn’t work. But when you bring that to someone’s benefits, it’s a whole different kind of work.
John Sumser (05:55)
That’s right. That’s right. That’s the precision piece. I’ve watched the evolution of this... I’ve been following the evolution of AI for 50 years, and it’s not uncommon for there to be peaks of enthusiasm followed by valleys of zero investment. And I wonder if you...
think that we have enough of a tiger by the tail that we will avoid that kind of winter, or is that an inevitable consequence of where we are?
Usman Khan (06:26)
I think the people that are on the very exuberant side of the market and deploying capital like you’ve never seen before might end up disappointed. Those of us that are being a little bit more disciplined, I think we’ll be fine, because there are fundamental things that are evolving in the enterprise,
John Sumser (06:35)
Right.
Usman Khan (06:46)
and probably in the business world at large, so to say. If you think about software as a layer cake, right, there are generally three layers. One is the input layer; these are the forms, basically, right? It could be the cells in your Microsoft Excel or it could be your Word document. The middle layer is your logic layer, which sort of routes information to where it should go. It can also have a calculator in it, like a payroll engine, potentially. And then underneath that you have the data layer,
where you store the inputs and the outcomes of everything that you calculated. And then there’s the supplementary element to it, I won’t call it a layer, which is the integration side of it, which connects all of this to other systems outside of it. And when you think about this construct, and the reason why I’m taking you down this long path, there are seams within that layer cake, and there are massive seams between that cake and other cakes.
And there are so many micro-decisions or micro-tasks that are low value added that people have to do to make that software function. And it’s everywhere. It’s in ERP systems, it’s in CRM systems, it’s in HCM systems. That is going to get solved by AI. So it is going to make all these systems far more efficient at what they were designed to do. All those things that people had to step in for, because the software was incapable of making those micro-decisions,
that scope is massive, right? You think about the Fortune 500, you think about the companies underneath that, you think about the mid-market, you think about small businesses, and then you think even outside of the US. The total addressable market for these models and these companies is massive. I think everyone will commercially do well to make what we do today much more efficient. But those of us that were imagining, you know, AGI-like scenarios where
very, very complicated processes get taken over, or a new AI Salesforce comes out and basically dominates the field? We’ve yet to see that, because the hard stuff around building an enterprise system is unchanged. The precise stuff you still have to do today. And that takes time, effort, manpower. Yeah, you can have AI agents help you write your code, but if you rely on them completely... I was just with my
nephew, who’s doing his master’s at Berkeley in AI, and he’s like, if I let the AI do the work without supervising it, I end up with a house of cards as far as my software goes. And that is true for anyone that doesn’t have experienced engineers looking at what AI is producing. So I think you have to separate these things from the hype. I think the low-value-added tasks are gonna go away.
But when people say, oh my God, the world is going to change and there’s going to be massive erosion in enterprise value for the ADPs of the world, I don’t believe it.
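A quick illustration of the layer cake Oz describes: a minimal Python sketch with invented names, not ADP’s architecture. It shows an input layer (forms), a logic layer (routing and calculation), a data layer (storage), and one “seam” micro-decision, normalizing a messy form field, of the kind he expects AI to absorb.

```python
from dataclasses import dataclass, field

@dataclass
class DataLayer:
    """Bottom layer: stores the inputs and outcomes of everything calculated."""
    records: dict = field(default_factory=dict)

    def save(self, key: str, value: dict) -> None:
        self.records[key] = value

@dataclass
class LogicLayer:
    """Middle layer: routes information and runs calculations (e.g., payroll)."""
    data: DataLayer

    def process(self, form: dict) -> dict:
        # A "seam" micro-decision: normalize a messy field that a person
        # would otherwise fix by hand -- the low-value work AI can absorb.
        state = form.get("state", "").strip().upper()
        outcome = {
            "employee": form["employee"],
            "state": state,
            "gross_pay": round(form["hours"] * form["rate"], 2),
        }
        self.data.save(form["employee"], outcome)
        return outcome

# Input layer: the "forms" -- here, just the dict a UI form would produce.
engine = LogicLayer(data=DataLayer())
print(engine.process({"employee": "e1", "state": " ca ", "hours": 40, "rate": 50.0}))
```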
John Sumser (09:30)
So that’s interesting. You could translate what you just said into another generation of we’re finally going to be rid of spreadsheets. Because the way those little tasks are done inside of organizations is with spreadsheets so that you can manage the details of those little tiny transactions.
I’ve recently started to believe that
that’s where the humanness is in this whole process. It’s actually not inherently inefficient. The problem there is the sophistication of the enterprise tool rather than the sophistication of the user. The user knows what they need. The user’s getting what they need. They have to. Figuring out how to make an enterprise tool that is so sensitive that it can understand the nuances of
my organization, it’s almost the opposite of enterprise economics, right? Because enterprise software economics runs on the idea that you build it once and sell it a thousand times, right? And this next layer of stuff seems to me to start to look like
Usman Khan (10:36)
That’s right.
John Sumser (10:43)
every instance is a big implementation.
Usman Khan (10:46)
Yeah, I think the configuration, reconfiguration, re-reconfiguration stuff is, A, exhausting. B, it’s also time away from all the things that you should be doing as an organization, right? It ends up being a distraction for your core enterprise, because
not only are you going to have to supplement your people with external partners who are going to help you keep your stuff running, you keep your people from doing the things that you care a lot more about. You want HR to build culture, you want HR to think about leadership, you want HR to basically get into the frontline problems that teams have. But they spend most of their time
buried under the transactional layer. They spend most of their time on cases that could be resolved much more easily. And I know a lot of HR people. HR at ADP is a thing of beauty, in my opinion, because of what they do and what they practice and how it shows up in our products. I think the...
the observation from my side is just, they’re just always on. We were talking about people being always on in general. They’re always on because they can’t escape. And you think about Maslow’s hierarchy of needs: they can’t escape the top of it, they can’t escape the middle of it, they can’t escape the bottom of it. And I think this is where the unlock from the really transactional stuff that we talked about is going to let real experts be
the experts, rather than being distracted by things that are not necessarily valuable. Does that make sense?
John Sumser (12:24)
Absolutely,
absolutely. I’m going to switch topics on you here. This last weekend there was a big, aggressive look at security problems in AI. We started to see some theoretically unimaginable things with Clawdbot, or what is it, Moltbot now?
Usman Khan (12:44)
Yeah.
John Sumser (12:44)
Whatever that thing is, it poses the question of how you handle security. I’ve been watching the large language model security question for a couple of years now, and the number of ways that you can break in and break things is expanding as fast as the models are. It’s incredible to see how the...
how vulnerable the tools are, and how little attention is really being paid to the security of the model itself, of the stuff that sits above the hardware and enterprise layer. What are you thinking about
security inside of the AI world?
Usman Khan (13:21)
I think it’s probably the largest blocker in the enterprise for AI adoption, because it’s imperfect. There was this research recently that everyone quoted that said 80% of AI pilots are failing. I think, again, this is personal bias, so take it with a grain of salt. I don’t think they’re failing.
I think they’re experimenting. They’re trying to see: is the risk acceptable? Is the boundary acceptable? Can this thing be secured to the point that we feel that there isn’t a liability for us and our customers? And so earlier-stage organizations that don’t have a lot to lose, that are not sitting on top of billion-dollar businesses, are able to make that risk equation a little differently, right?
But for incumbents, it’s harder, because you have massive liability. And so until you can be completely sure that the AI can’t be compromised by, again, AI-driven attacks, that your security framework is strong enough, and that you can’t essentially fool the model into doing something inadvertent, you hold back,
which means it takes you three times as long as an enterprise provider to do what a startup might do in the same span of time, because they’re happy and willing to take risks. And their customer types, by the way, are also startups like themselves, who don’t care about the same level of risk. If you think about a traditional Midwestern client, they’re very different from a California company in Silicon Valley in terms of what they’re willing to do and the risks they’re willing
to take. I think there’s a ton of value that’s going to be in the security sector. Everyone’s talking about fighting fire with fire. You cannot manually monitor the volume of attacks that you could get from, you know, malware and bad actors.
The only way to counter it is with AI of your own. And that’s what all these companies that you’re seeing come into the cybersecurity realm are doing. So it’s gonna continue to be an elevated level of game, set, match in that domain, I think.
John Sumser (15:32)
Yeah, I find it interesting. Part of what I see happening in security is the fixing of the gate after the horses left the pasture. You can’t really solve the problem until you’ve been a victim of the problem, sort of. And I wonder...
How do you think about... because you work in a world where there is a massive product development department, and if you want this stuff nipped in the bud, you have to embed it in the initial layer of the tool rather than as a bolt-on on the outside of it. As soon as you start bolting security on, you create vulnerabilities rather than fixing them. So how do you communicate this rapidly changing security environment
down to the ranks of people who are doing code.
Usman Khan (16:21)
So I think you do it by, A, having an ownership mentality, where any failures in security are not owned by the security organization. They’re owned by your team.
They’re owned by you as a product group. And I think that starts to create a culture of accountability and a culture of really thinking through what you’re releasing. I can talk to the number of precautions that we put into place: the AI gateways that we’ve built that have kill switches in them, the information guardrails to keep bad actors
from exploiting systems, and so on and so forth. And that takes a ton of time, by the way, to engineer and architect. And so we have been doing that. And it is a combination of culture, process,
and also diligence, right? I think your security apparatus, as in any organization, also has to be super diligent, right? And you have to have people, whether they’re in your organization or outside your organization, pointing out your vulnerabilities. And so when it comes to AI deployment, it really becomes a factor of: what is it exposed to?
Because if it’s exposed in the public domain as a consumer-facing application, then your vulnerability is tremendous, because anyone can interface with it and go to town. If it’s behind a security firewall, behind a login for an employee, then already you have a whole bunch of security frameworks that you can put on top of it before
someone goes into the playground and actually starts to do something with the AI. And then you have a second set of guardrails in terms of what the user permissions are and what they can get to. So again, it’s like a medieval analogy, where you see these castles in Europe that have so many different walls, layers of defense designed to safeguard what’s inside. You’ve got to run it the same way with software and technology.
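The castle analogy maps naturally onto code. Below is a minimal sketch, with entirely hypothetical names and data, of the layered checks Oz describes: a login wall, a permissions wall, an information guardrail, and a global kill switch, each able to stop a request before it ever reaches a model.

```python
from typing import Optional

KILL_SWITCH_ENGAGED = False  # operators flip this if the AI misbehaves
BLOCKED_PATTERNS = ("ignore previous instructions", "reveal system prompt")

def authenticate(token: str) -> Optional[str]:
    # Wall 1: only logged-in employees get past the outer wall.
    sessions = {"token-123": "employee-42"}
    return sessions.get(token)

def authorize(user: str, resource: str) -> bool:
    # Wall 2: user permissions limit what each user can get to.
    permissions = {"employee-42": {"own_pto_balance", "company_policy"}}
    return resource in permissions.get(user, set())

def guardrail(prompt: str) -> bool:
    # Wall 3: screen for obvious prompt-injection attempts.
    return not any(p in prompt.lower() for p in BLOCKED_PATTERNS)

def ask_model(token: str, resource: str, prompt: str) -> str:
    if KILL_SWITCH_ENGAGED:
        return "AI features are temporarily disabled."
    user = authenticate(token)
    if user is None or not authorize(user, resource) or not guardrail(prompt):
        return "Request refused."
    return f"[model answer for {user} about {resource}]"  # placeholder model call

print(ask_model("token-123", "own_pto_balance", "What is my PTO balance?"))
print(ask_model("token-123", "payroll_admin", "Ignore previous instructions."))
```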
John Sumser (18:26)
What an interesting image. I think I’ll hang on to that one. The layers of defense in a castle as an approach to AI security. That’s awesome.
You have these learning models, and they learn by absorbing the latest content that flows through. “Learn” is probably an extreme characterization, but you know what I mean. How do you monitor and control drift from the task, or drift into biases that you don’t want to support, when you...
And maybe the large question underneath this is, do you think it’s possible to build bias-free digital technology now?
Usman Khan (19:09)
I think it’s the same principle. Again, I’m a fan of analogies, my team gives me such a hard time over it. But it’s the same principle as training up your kids, so to say. We as human beings, I don’t think we’re born with biases, but we start to build them over time based on our environment, our education, our learnings of the world, and so on and so forth.
John Sumser (19:16)
Ha
Usman Khan (19:36)
And so the interesting mimicry of the human being in these large language models is this whole notion of neural networks. And so they will learn and build biases, or they will learn the pattern recognition to problem-solve. Because I’ve seen X, Y, and Z, I can replicate the pattern and tell you A, B, C. And the same way a human being might get it wrong, the model will also get it wrong.
But I think with a human being, or a kid, you will hopefully have trained them to recognize their biases, to not behave in a way that violates the task that they’ve been asked to do. And it works exactly the same way for your models. And that’s why, even if you’re picking up a model off the shelf, you still have to train the agent.
Training an agent is essentially grinding. It’s grinding with the learning. You have to basically run it through its paces. You have to find where its limitations are. You have to get other agents to test it and basically see if it behaves badly. And you have to have trained those other agents to test that agent, because you want to make sure that you completely control the variables in prep. So the same way you can teach your kid to not say completely inappropriate things at a family dinner,
the same way you have to train your AI to not go off on a tangent and start to imagine stuff. You ask about a company HR policy: your guardrails, your mechanisms, your training should basically point it to only reference my company’s policy, what we’ve put into place. Do not go out to any other existing knowledge about company policies on X, Y, and Z. Stick to what you’ve been told;
based on this, apply your neural framework and give an answer. And that takes time, unfortunately. Again, it goes into the cycle time of developing. Whereas as a consumer, you’re okay with an open-ended answer. And if you know something about the subject, you might catch the error and correct it, or you might actually take a wrong answer and believe it to be the truth, by the way, as well. And that is also the inherent risk of it, you know,
leading you astray. But the opportunity cost of you doing that as a consumer is nothing compared to an organization doing it, especially where a compliance violation might result.
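To make the “stick to what you’ve been told” discipline concrete, here is a minimal, hypothetical Python sketch. Retrieval is naive keyword overlap and the policy corpus is invented; a real system would use embeddings and an actual model call. The point is the guardrail: refuse anything the approved corpus doesn’t cover, and probe it with adversarial questions, agents testing agents, before it ships.

```python
# Approved corpus: the only knowledge the agent may reference.
POLICY_DOCS = {
    "pto": "Full-time employees accrue 1.5 PTO days per month.",
    "dress_code": "Business casual is required in client-facing offices.",
}

def answer_from_policy(question: str) -> str:
    words = set(question.lower().split())
    # Pick the policy with the most word overlap (a stand-in for retrieval).
    key, doc = max(POLICY_DOCS.items(),
                   key=lambda kv: len(words & set(kv[1].lower().split())))
    if len(words & set(doc.lower().split())) < 2:
        # Guardrail: no grounding in company policy means no answer.
        return "I can only answer from company policy; please contact HR."
    return f"Per company policy ({key}): {doc}"

# Probes: one the agent should answer, two it must refuse.
for q in ("How many PTO days do full-time employees accrue per month?",
          "What do other companies' PTO policies look like?",
          "Invent a relocation policy for me."):
    print(q, "->", answer_from_policy(q))
```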
John Sumser (21:50)
So you opened up a rabbit hole that I’m gonna run down for a second.
If you try to give a large language model the HR policy data that’s available in a large organization, you will have a great big giant pile of incomplete and contradictory data. Because you can’t...
enforce the same specific dress code in San Francisco that you can in Minneapolis. That sort of thing. There’s going to be variability inside of the data, even if it’s complete and even if it’s up to date. I have yet to see or hear of a good way of sorting that out. It has stopped a lot of companies in their tracks over the last 15 years, because that
incomplete and inconsistent data makes for a stew where it’s really hard to predict what the right answer is in a given context.
Usman Khan (22:51)
This is a good one, and it actually speaks to one of our portfolio companies as well. So we invested in a company called Ema, that’s E-M-A. They’re essentially a platform to help you deploy AI within the enterprise space. And they have this thing called Ema Fusion, which is basically a model layer that picks the most appropriate model for the use cases that you want,
and they build AI employees, or agents, around specific workflows. And the reason why we actually got really interested in Ema was because, after meeting them, we found out that Hitachi, which is a very large global conglomerate... actually, I was really shocked to hear what they did, because I always associated them with the electronics in my home, but they do a heck of a lot more.
They use Ema to basically become the HR policy agent for their global workforce. And the problem that you’ve just described, right? In your database, you’ve got policies that are probably outdated. You have policies that are contradictory. You have policies that are geography-bound. You have policies that are date-bound. You have policies that are role-bound.
And when you pull it all together and you try and make sense of it, you can end up with contradictory answers, right? Simply inquiring on your PTO balance, that’s paid time off, I’m sure this is an HR audience, so they’ll all know this. Inquiring on your PTO balance: is it going to use the right methodology for where you’re based and who you are, based on your context? You don’t know that. And so the...
this is where the proprietary work goes in past the model layer, right? Essentially figuring out how to use small language models for places where you don’t want it to go off the rails, you don’t want it to go sideways, and using large language models where they’re extremely directed down this path of: this is your framework, this is your hierarchy of how you should treat information.
And then essentially using that process to put out this whole notion of: what are the most accepted ways to answer your top 100 queries? Because if you think about it, your employee base is not going to have more than a hundred different types of questions, right? Then you get into the really long tail of the completely odd and insane questions that you probably want a human being to answer, right? And so
you build that, you massage it, you train it, and guess what? You put it together with this whole notion of, again, the castle defense from a security standpoint, and with this really, really methodology-based approach to how information should be accessed and presented, and it works. And you can put it in front of tens of thousands of employees. And so, again, we invested in Ema. We brought the same technology to our platforms.
That is actually one of the use cases that they’re working with us on, to complement some of the work that we do on ADP Assist. But I hope it highlights how much work you have to do to get to something that sounds very simple. To a layperson it might just be like, yeah, let me just load all my documents in the chat, you should be able to tell me, right? Because we do that in our personal lives. But in an organization, that’s not the same reality. Something that might seem extremely easy and simple is actually very complex.
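The hierarchy Oz sketches, vetted answers for the common questions, tightly scoped models for the constrained ones, and humans for the long tail, fits in a few lines. This is an invented Python illustration of the routing idea only, not Ema Fusion’s actual design:

```python
APPROVED_ANSWERS = {  # the "top 100 queries" with vetted, pre-approved answers
    "pto balance": "Your PTO balance is shown in the Time Off dashboard.",
}

def small_model(query: str) -> str:
    return f"[small, policy-scoped model handles: {query}]"  # placeholder

def route(query: str) -> str:
    q = query.lower()
    for key, answer in APPROVED_ANSWERS.items():
        if key in q:
            return answer                      # tier 1: curated answer
    if any(w in q for w in ("policy", "benefit", "holiday")):
        return small_model(query)              # tier 2: constrained model
    return "Routed to a human HR specialist."  # tier 3: the long tail

for q in ("What's my PTO balance?", "What is the holiday policy?",
          "Can I bring my llama to the office?"):
    print(q, "->", route(q))
```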
John Sumser (26:02)
So you are saying something that I don’t think I’ve heard anybody say before, and that is that an enterprise implementation of, say, HR is best understood as an org-chart-sensitive thing, with role and level and task definitions by at least
division, and probably down to a department level. And that’s a very different way of thinking about how this implements. Most of the stuff I see assumes that there’s a universal quality to be found somewhere. I find that remarkable. It’s the best idea I’ve heard this week.
Usman Khan (26:50)
Well, I’m glad to hear it. We’re just at the start of the week, so you might hear about something better.
John Sumser (26:54)
Hahaha
That’s good, that’s good. So to wrap this all up, if you’re an HR buyer, what kind of questions should you be asking vendors?
Usman Khan (27:06)
I think what you need to be asking vendors is... and I’m gonna give you a slightly longer answer here, because my media trainer would go absolutely crazy. The longer answer that I would give you is: really go deep into the details of what you’re able to do. Because there’s a high tendency
for providers to focus on the top-line marketing of, you know, we have a thing for this and we have a thing for that and a thing for that. Get deeper into how effective that is. The HR buyer knows what their top...
There’s no such thing as top five problems. The top five problems are just the things that are on fire. You generally have 20 to 30 problems, and sometimes more, depending on the size and scale of your organization. As an HR buyer, you should know your top 20 to 30 problems that exist across that same taxonomy of needs that I just described: departments, people, role types, geographies, and so on and so forth.
And think about where you could get the highest amount of agency with AI: whether it’s reducing the service load on your HR help desk, whether it’s case resolution for the same thing, whether it’s employee experience, whether it’s talent elevation, whether it’s faster time-to-role from a recruiting standpoint. I think you have to really get into the deep details of: what does it do, how well has it been trained, and, essentially,
how are your teams going to react to it? Which, again, goes back to our general philosophy in life at ADP, which is, you have to really take your time with testing and stuff on it. Because in some places you’re going to find fascinating technology that people have built. In other places you’re going to find that you have maybe 25% of the promise, and the rest of it is just... do you really want to pay for 25% of what you think you should be getting and burn your team’s time on it?
Or just focus on the things that are the most important in that list of 25 to 30 things. Does that help?
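One way to operationalize that advice, sketched here in Python with invented fields and scores: inventory your 20 to 30 problems across the taxonomy Oz describes, score each for how much agency AI could plausibly give you, and walk into vendor demos with the ranked list rather than the vendor’s talk track.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    department: str
    role_type: str
    geography: str
    ai_agency: int  # 1-5: how much could AI plausibly reduce this load?

inventory = [
    Problem("HR help desk service load", "HR Ops", "generalist", "US", 5),
    Problem("Time-to-role for engineers", "Recruiting", "recruiter", "Global", 3),
    Problem("Leadership development", "L&D", "manager", "US", 1),
]

# Evaluate vendors against the top of this list, not their demo talk track.
for p in sorted(inventory, key=lambda p: p.ai_agency, reverse=True):
    print(f"{p.ai_agency} | {p.name} ({p.department}, {p.geography})")
```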
John Sumser (29:08)
That helps, that helps. It raises a question of...
how do you get smart enough to figure that out? I can tell you what my 20 problems are, but if I go into a demo of some kind...
You have to be particularly aggressive in a demo environment to get the kinds of answers that you’re talking about. It requires an ability to ask the right questions, understand the answers, and move forward and keep the process on track rather than on the talk track. And I don’t know that HR buyers are trained to do that.
It’s not very nice.
Usman Khan (29:45)
So.
I completely understand it, right? Because expertise is built in directions that take you deep in terms of the effectiveness of what you need to do. This is another thing, too, right? I think the thing that our HR audience in 2026 should take away from, hopefully, this conversation is that everyone needs to get a lot smarter about technology, because it is going to blend with what you do, right?
And having a point of view is important. And so if you don’t have a very, very deep point of view right now, I think there are a couple of ways to get there. And the problem is different for every type of buyer. So we talked about the variability in terms of the client base that we work with at ADP. We might be dealing with 50-employee companies in our small business division, or we might be dealing with mid-market companies with 50 to 1,000 employees, or the up-market, which is more than 1,000.
If you’re an up-market HR buyer, or even an upper mid-market one, you generally have a technology team in your organization. I think this is the time to forge closer alliances with the technology team, because they’re thinking through the same issues on the operational side as far as deploying technology for your business needs goes.
They might be working on your logistics systems. They might be working on your finance systems. Get them thinking about your HR systems. Get them thinking about the effectiveness of agents, because all of these organizations have teams, have CIOs, have CTOs. How does your CIO or CTO think about success with agents? What is a good framework to measure input to outcome for agents? And you don’t have to go all the way. I think you just have to start to pull them into the conversation a little bit,
so that they’re also part of the discussion. I think the table stakes for most HR buyers are: hey, is this thing going to keep me compliant? Is this going to accomplish the large operating-model question that I have, and the things that I’m trying to achieve on a functional level? The AI stuff right now, I feel that it’s still... some of it’s seeping into the core of why you should buy the system, but it’s more of a benefit.
I think the first thing you look for is just a lack of pain in running a particular system. And AI does have a role to play there. So as you move along that spectrum from nice to have to must have, where these things would add a lot of productivity, partner up. And then if you’re in the lower mid-market, I think the market changes a little bit, because you don’t have a lot of customized solutions. You have things that work out of the box. The risk of trying them out is low, and the sandboxes are easy.
Usman Khan (32:15)
I think just go experiment, and spend time with your peer cluster. I love that HR people talk, and they always have points of view on different software. Share that out. So again, a long response to what was a short question.
John Sumser (32:29)
Well, I was hoping for a long response. So this has been a great conversation and we’re going to close it up here in a second. Have you got anything that we missed that you want to be sure we cover?
Usman Khan (32:41)
I think we covered it a little bit, but I would say: don’t have FOMO. Don’t have fear of missing out. Because sometimes, and a lot of the time, there’s an advantage to observing what happens in the market, whether it’s something that you want to do or a trend that you were looking at, and then taking action, rather than being the first person out the door with it.
If you are in an organization that loves to take risk and loves to be at the front of it, have at it. But if you’re in the middle of the road, be prudent, because I don’t think there’s anything that we’re massively losing, as buyers, taking the buyer perspective, by waiting to see how things settle, how things play out.
So you shouldn’t feel an urgency to deploy AI in a manner that might not be well thought out. Think it through. Really, really, really design it. And so that’s what I would say. And we apply it here, whether it’s investing in a company, whether it’s building a new product, whether it’s working on roadmaps with other ADP teams. I think it’s a principle that serves us quite well.
John Sumser (33:42)
That’s awesome. That’s awesome. I don’t think many tech executives are willing to say something like that: patience is the way to the solution. That is distinguishing. That is distinguishing. So thanks for taking the time to do this. I really appreciate it. It was a good conversation.
Usman Khan (34:03)
Thank you, John. I enjoyed sharing my thoughts and hearing yours. I think you asked some awesome questions.
John Sumser (34:09)
Great, great. So you’ve been listening to the HR Examiner Podcast, and we’ve been talking with Oz Khan, who is the head of ADP Ventures and a founding member of its strategic investment arm. Thanks, and we’ll see you next time.
Keywords
AI, enterprise software, security, bias, HR technology, precision work, investment, innovation, data management, vendor questions