
HRx Radio – Executive Conversations: On Friday mornings, John Sumser interviews key executives from around the industry. The conversation covers what makes the executive tick and what makes their company great.

HRx Radio – Executive Conversations

Guest: David Karandish, Founder and CEO, Capacity
Episode: 379
Air Date: September 18, 2020




Important: Our transcripts at HRExaminer are AI-powered (and fairly accurate) but there are still instances where the robots get confused and make errors. Please expect some inaccuracies as you read through the text of this conversation. Thank you for your understanding.

Full Transcript with timecode


John Sumser: Good morning and welcome to HR Examiner’s Executive Conversations. I’m your host, John Sumser, and today we’re going to be talking with David Karandish, who is the CEO and founder of Capacity. David, how are you?


[00:00:28] David Karandish: I’m doing well. How are you doing John?


[00:00:30] John Sumser: Great. You’ve been on before, but I’m going to guess there are a few people who don’t know about Capacity. Capacity is unique, I believe, in that it is headquartered in St. Louis, Missouri.


[00:00:43] David Karandish: We are.


[00:00:43] John Sumser: So tell me a little bit about Capacity and how it is that you’re building a tech giant in St. Louis.


[00:00:51] David Karandish: Yeah, we are proudly located in the middle of the Silicon Archway and are happy to be creating jobs here in the Midwest and pulling in great talent.


[00:01:01] We’re located very close to Washington University in St. Louis here, and each year we would see a lot of great engineers go off to the coasts. We’ve loved building a company here in St. Louis, where I think we can avoid a lot of the noise that you see with some of the East Coast and West Coast startups, people chasing their tails around for the next thing.


[00:01:21] We’re a little more of a block-and-tackle culture here, and it’s suited us well.


[00:01:26] John Sumser: That’s awesome. That’s awesome. So what does Capacity do?


[00:01:29] David Karandish: You could think of Capacity as a new kind of help desk that’s here to automate your support: taking the questions your customers or your team members have, answering them in an automated fashion, and automating your workflows.


[00:01:45] And then anything that can’t be automated, which can vary from company to company, goes to a help desk where a person can jump in, human in the loop, answer the question, and then store that answer in the knowledge base for next time. So you can think of us as a support automation platform.


[00:02:00] John Sumser: A support automation platform?


[00:02:02] Now I would bet that this was called a chatbot at some point in its history. What does the change in emphasis actually mean?


[00:02:11] David Karandish: That’s really interesting. When we started, we had a lot of companies that were flirting with this concept of adding a chatbot, either internally on their intranet for their HR team, or externally for their customers.


[00:02:25] And what we found over time is that the chatbot is really the tip of the iceberg. It’s one of the interfaces to help you do your best work. What we’re finding is that our customers are coming back to us and saying, David, we’ve got all of these processes and tasks and questions that we’re dealing with over and over again.


[00:02:46] What can you do to take as much time, effort, and energy out of that process so that we have a frictionless experience with our team? And that’s kind of the difference between a chatbot and a support automation platform. A chatbot is there to answer simple questions back and forth. A support automation platform is there to automate the experience, but also to know when it can automate and when to hand off to a person.


[00:03:08] John Sumser: So in the early days of chatbots, the best of breed were about something that looks like knowledge management, which is getting to the exact right answer in the piles of conflicting documentation. To do the kind of thing that you’re talking about, there’s a larger systems integration issue. How do you think about that?


[00:03:31] David Karandish: Yeah. When we got started, a lot of our initial clients thought of us as almost like an intranet extension. And we like to think of the intranet as the digital pile of knowledge management: still around, but losing its usefulness in most organizations. But as we got going, we realized that you have to break down your knowledge expectations into a couple of different buckets.


[00:03:54] On the one hand, you’ve got the sources where your company intelligence lives. Some of that’s documents, that’s your FAQ docs, and a lot of it is your apps. Originally we started building out integrations with all the major players, and then we quickly realized that we didn’t want to be an app integrator for the rest of our lives.


[00:04:13] And so we launched a developer platform, so now our clients and third parties can connect applications to our system. Then we added the natural complement to that. We don’t want to just have question-and-answer. You need to be able to have a guided conversation, and that guided conversation should have some level of intelligence within it, because a lot of questions do not have simple answers. One part of that intelligence is the escalation factor: routing that unanswered question, or difficult-to-answer question, up to the right person at the right time.


[00:04:47] But as we kept going, folks came to us and said, David, you know, this is great, I love what you’re doing, but I want to take a process like onboarding and I want your help with that. And what we realized, which maybe should have been more apparent to us sooner, but we figured out soon enough, is that onboarding a new team member is not something you can do in a single session.


[00:05:13] It might involve coordinating multiple schedules, integrating with multiple systems and forms, dripping out additional information across different days. The last thing you want to do is just give everybody everything all at once in one session. And so we took a step back and said, we could try to graft this onto our guided conversations platform, or we could throw the playbook out the window, start from scratch, and say: what would the ideal experience look like?


[00:05:39] What would it look like to sit on top of a process like onboarding? That’s when we started building out our workflows platform. So the way we think about workflows is that a guided conversation is a single user in a single session, answering questions in a tree, whereas a workflow could be multiple users across multiple platforms and multiple sessions.


[00:06:00] It might loop and branch and circle around, more akin to what your actual processes look like.


[00:06:06] John Sumser: So, what about me, I guess, is kind of the question. The fact that I go and ask a single question, or participate in an onboarding workflow, is indicative of some aspects of who I am as an employee, who I am as a person.


[00:06:24] Have you started to think about building detailed relationships with individuals in the organization?


[00:06:33] David Karandish: Yeah. So we’ve been thinking about it in terms of a couple of different things. One facet is the idea of understanding the context of the user and having that inform both the answers that come out of the bot and the processes that you have. At the highest level there’s permissioning.


[00:06:51] Okay, you can do this, you can’t do that, based on some kind of role or access. But then at the next layer, it’s like, well, who should we talk to about our benefits questions? Oh, Sally’s raised her hand and said she can help with that. But then how do we vet Sally’s answers? Oh, here’s the process for what that looks like.


[00:07:09] And that process might look different at company A versus company B. So I think at a high level, we’re trying to take a very nuanced approach where not only do we recognize who the users asking the questions are, but also, on the other end of the spectrum, who those subject matter experts are and what the mechanism is for getting them involved.


[00:07:28] How do you not involve them so often that they get frustrated with lots of tickets, but at the same time bring them in at the right time to get the user the answer when the technology has to hand over to that next layer?


[00:07:41] John Sumser: Got it. So I hear you talking about a tool that sort of transcends silos, that you could use with customer support or HR or technical support inside of the company.


[00:07:53] How do you keep all of the personas straight? Right? Because you would have a different kind of relationship with a customer than you would have with an employee.


[00:08:05] David Karandish: Oh, a hundred percent. So, you know, we have a couple of different ways to do it. One way is to have completely separate instances, and we’ve seen some clients who treat them like they’re in completely different worlds.


[00:08:16] Other clients view it as a form of a concentric circle model. So you’ll have information that’s internal only, information that’s external only, and information that’s shared between the two. I think roughly 40% of our clients are internal only, roughly 40% are external only,


[00:08:33] and about 20% are doing shared instances, where they share the information between the two groups. Having a robust permissioning system is important for that. And the other part that’s important, which I think applies to both internal and external use cases,


is the recognition that not all knowledge is evergreen. In most systems, to pick on SharePoint for a minute, you put information in, it’s good for a period of time, and then it becomes stale.


[00:08:59] And then nobody knows it’s stale. We’re trying to avoid that problem by encouraging our clients to put an expiration date on the information and letting the system nudge you and say, hey, is this still good? Okay, great, let’s move on. Or no, in which case we should update that document.
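The expiration-date nudge Karandish describes could be sketched roughly like this. This is purely an illustration, not Capacity’s actual implementation; the class, field names, and warning window are all invented:

```python
from datetime import date, timedelta

# Hypothetical knowledge-base entry carrying an expiration date.
class Article:
    def __init__(self, title, expires_on):
        self.title = title
        self.expires_on = expires_on

def articles_to_review(articles, today, warn_days=14):
    """Return articles that are expired, or expiring within warn_days."""
    cutoff = today + timedelta(days=warn_days)
    return [a for a in articles if a.expires_on <= cutoff]

kb = [
    Article("2019 travel policy", date(2020, 1, 1)),
    Article("Remote work FAQ", date(2021, 6, 30)),
]

stale = articles_to_review(kb, today=date(2020, 9, 18))
for a in stale:
    # In a real system this would be a nudge to the document's owner.
    print(f"Nudge: is '{a.title}' still good?")
```

The point is simply that staleness becomes a queryable property of each entry, so the system can proactively ask the owner rather than wait for a user to hit a bad answer.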


[00:09:15] John Sumser: I guess the question that I want to ask you next is: do you think of this as AI, or is this just an elaborate micro-engineering of social processes?


[00:09:24] David Karandish: I think it’s both: elaborate engineering of social processes with a layer of AI. Here’s how I would answer that question. I think that today we have really three major parts of the technology that we would consider to have an AI or ML type component to it.

The first is the natural language processing: understanding what you meant by what you said, but also taking into account your context and your industry and your previous history and all of that.


[00:09:58] There is real AI in that part. The second part that’s new, maybe even since the last time we talked, is that if I go back to those sources of intel, your apps, your docs, and your people, what we’ve spent a lot of time on is that not all documents are created equal. We actually think of it in terms of three types of documents. You’ve got your fully structured documents, your databases: they’re already set up.


[00:10:20] They’re already organized. There’s not a lot of AI there; NLP on top of that could just be querying a database. On the other end, you’ve got document search, where we’re not the first to that game. We think we have some important and interesting improvements on basic document search, but that’s for completely unstructured documents.


[00:10:39] I think where there’s an interesting middle ground that people don’t talk a lot about is this idea of what I would call a semi-structured document. It’s a document that you see more than once, and it follows some loose pattern. So maybe a contract might be 27 pages long, but there are 10 fields that you actually care about,


[00:11:01] 10 questions that you want to ask into that contract. Or maybe you’ve got a job req that might be three or four pages, but there are really a few key items you’d want to ask into that job req. So we’ve been spending time saying, okay, how do we think about these semi-structured documents? What if we could treat those documents


[00:11:21] like they’re a database? To do that, there is some AI and ML heavy lifting, because they’re only semi-structured. There’s a reason they’re called semi-structured: they’re not fully structured, so you’re going to take a holistic approach and a statistical approach to answering the questions inside of them.
[00:11:40] We think that a lot of company intel is buried in these types of documents, just waiting to be unlocked through natural language.


[00:11:49] John Sumser: So, how do you check that? You’re starting to dig into layers of nuance that maybe your clients don’t even know are there. I can’t imagine that your process is to review every incremental piece of wisdom that you unearth.


[00:12:08] So how do you make sure that it’s right?


[00:12:10] David Karandish: Yeah, I think there are a couple of principles that we like to apply. One principle is that we want to make sure that you have the context of the answer. So if I ask the question, can I deduct mileage to and from work, the bot might answer that with a yes or a no. Either way,


[00:12:32] if there’s no context around that answer, I don’t actually know if the bot was answering yes to the question I asked, or to some other question it thought I was asking. Most of the time when a bot fails, it’s not because it gave you a bad answer to the right question. Most of the time, in our experience, it gave you the right answer to the wrong question, the question it thought you were asking.


[00:12:54] So the contextualization of where this answer came from, that’s the first place we like to start. The second place we like to start is a robust feedback system. And feedback is never perfect. You can buy a 4.9-star item on Amazon and still have it not show up. But it’s kind of like democracy, right?


[00:13:13] It’s a terrible system, but it’s the best we’ve got, sort of thing. So we’ve been focused on implementing rating and review systems so that we have a robust way of understanding what is working, what isn’t working, and how that’s trending. But before we show you the answer or give you a big review system, the first thing we like to do is get a representative sample of people from your team to ask questions of the bot, to unearth things that we as a third party might not understand.


[00:13:43] So whether that’s your lingo, or the specific nuances of your company, it’s amazing how quickly those things come out when you just have a few people come in and ask questions of the bot over and over.


[00:13:55] John Sumser: Awesome. So that leads me to the next area, which is unintended consequences. And for my money, the idea of unintended consequences often means that the core engineering team wasn’t diverse enough or informed enough to see some of the nuances that are in the system.


[00:14:16] And so I guess the question is largely, how do you think about unintended consequences with your system, and what are you doing to reduce the likelihood that they’re there?


[00:14:26] David Karandish: Great question. I’m going to use a recent example. We just started powering internal questions at a large technology company, as well as their external conference.


[00:14:36] And so we looked at the data, and I think we were answering something on the order of 1,200 questions a week regarding this conference. And when we dug into the data, it said we were at about a 75% match rate, meaning for every hundred questions asked, we could answer 75 of them, which is actually low for us.


[00:14:58] Our average across the board today is about 94%. And we were like, well, we’re getting thousands of questions a week, why is this happening? So we dug into the data and actually looked at what was not being answered by the bot. They weren’t questions about the conference. They were all questions about the company. The folks who were designing the conference thought those would be out of scope and that nobody would ask them.


[00:15:22] And so we were able to course correct and pivot, and help direct some of those questions into the rest of the bot or the rest of the website experience. But it wasn’t that the technology didn’t work, per se. The technology did a great job answering the questions it had access to, but we all should have done a better job at the beginning of saying, look, you may think people are going to this bot to ask questions about the conference,


[00:15:45] but they’re going to ask all sorts of unrelated questions that we need to test for and see. So in this case, we tested with some of their internal team members who were working on the conference, and it was all positive. Great. Next thing you know, you roll it out to consumers who haven’t been given explicit instructions on what to ask or what’s possible, and, go figure, they ask whatever they want.


[00:16:04] John Sumser: So, how do you handle that? The fundamental, transactional idea of building software (and we should talk about whether or not you think this stuff is software) is that you set a scope and you deliver inside of that scope. And part of what you’re sharing is that


[00:16:23] when you do an intelligent tool like this, there’s a discovery process. That discovery process is guaranteed to enlarge the scope, because sort of by definition you can’t see everything. How do you handle that?


[00:16:37] David Karandish: Yeah. So from a discovery perspective, what we find is that it’s not just the volume of questions people ask; it’s starting with a representative sample of who is going to ask those questions.


[00:16:50] Said another way: in this case, with the team we were working with, I could have asked them to ask the bot another 2,000 questions. But the fact that they were so geared into what was going on with the conference meant that they just kept asking conference questions. As soon as we brought in consumers to start asking whatever was on their minds, it didn’t take that many consumers to recognize that, hey, we’ve got a gap in the knowledge base that we’ve got to go fill.


[00:17:13] The good news is that, the way we implemented this, those unanswered questions didn’t just go into the ether. They didn’t disappear. A recent side note here: my kids were doing school from home over the spring and summer, and we ordered a new iPad, and the magic keyboard or whatever didn’t show up.


[00:17:31] So I went to the UPS website, and there was a bot there. I asked my first question, and the bot answered it. Great. Then I asked my second question. Not only did it not know the answer, it didn’t know that it didn’t know the answer, and there was no escalation path. In our case, even when the bot doesn’t know the answer to a question, we very quickly identify that it doesn’t know, and we’re able to escalate those questions up to the team to handle: hey, how do we take care of this?


[00:17:59] So I think you have two things. Upfront, you want to have not only a large volume of questions, but a diverse set of people asking them. And then, as you go along, no matter how good you think your discovery process was, there’s always going to be something you missed.


[00:18:14] So what you want to be able to do is make sure that you have a graceful escalation experience up to a human in the loop, because otherwise you’re going to end up missing things that you’d be surprised your team or your customers would ask.


[00:18:27] John Sumser: So that gets at part of the problem: the things that you might miss, and designing around that by making sure you’ve got a method for enlarging the problem set so that you miss less stuff.


[00:18:39] What about things that you get wrong because you didn’t imagine them? Does that make sense? There’s a difference between the two cases.


[00:18:48] David Karandish: Elaborate on that just one more time.


[00:18:50] John Sumser: So the first thing that you’ve talked about is areas that you missed because the group of people looking at the problem had a limit to their view of the problem.


[00:19:02] The second question is integrations that deliver the wrong answer. So getting the right question is a big deal, and getting the right answer is also a big deal. And that’s one of the places I’d imagine there are unintended consequences. An example might be: you don’t understand the local dialect well enough to know that “my auntie just kicked the bucket” means


[00:19:34] I want to know what my bereavement benefits are, and so instead you default that question over to the section about buckets. And so there’s a set of unintended consequences there.


[00:19:46] David Karandish: I think there are a couple of ways that we handle that. First, let me double click on this for just a second.


[00:19:50] So when you ask a question to our bot, what ends up happening is it’ll pull in the context. It’ll pull in user information, it’ll pull in your previous questions, et cetera. And we have what’s called a candidate generator that will try to figure out potential matches to what we think you just asked us about. From that candidate generation, all the algorithms will go in and vote and say, what was the most likely candidate?


[00:20:13] And if those votes tally up and come up with a score that’s high enough, then we’ll go retrieve that information from wherever it may be. If those scores are too low, then we’ll send it off to a person, either via live chat or via a ticket. But then there’s an important middle ground, which is where a lot of those corner cases end up happening. That’s where the bot will come back and clarify,


[00:20:33] And we’ll ask, did you mean A, B, or C? And if you select B, we’ll remember that for next time so that you don’t have to answer it again. So what we’ll do initially, when we start, is set that clarifier threshold pretty high. To match something, it’s got to be a pretty strong vote from those algorithms.


[00:20:53] Otherwise we’re going to clarify if it’s in between, and if it’s really, really low, then we’ll send it to a person. Then we can actually bring that clarifier threshold down over time. Now to the next point, though. Even if you have your clarifier, how do you handle the bucket question going to the bucket response? If it somehow tripped your threshold and you still display it back to the user, you ask the user for feedback, and then you can customize what you do from there.


[00:21:22] So one client of ours, in their kind of help desk response, before it ever goes to a help desk person, will actually give the user a whole bunch of results: hey, we thought it was this, but here’s a much bigger set, was it in here somewhere? We try to avoid that as a first principle, because having to dig through all of those results is the reason why I think Google-style internal search has failed


[00:21:44] within most companies. But that is an option. The other thing, though, is that when a question gets escalated up, we will actually ask the user why, we’ll ask for additional context on what happened. So before it ever gets back to that agent, the user will say, oh, here’s what I was actually looking for. And that helps fill in some of the gap, because we can pull on that description to help the agent respond faster and better, or use it in training the machine learning for next time.
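The two-threshold routing Karandish describes (answer on a strong match, clarify in the middle ground, escalate to a person on a weak one) can be sketched in a few lines. This is an invented illustration, not Capacity’s actual code; the scores, thresholds, and candidate answers are all hypothetical:

```python
# Routing sketch: candidates arrive from a "candidate generator" as
# (answer, score) pairs, where score is the tallied vote of the algorithms.
ANSWER_THRESHOLD = 0.85   # strong match: answer directly
CLARIFY_THRESHOLD = 0.50  # middle ground: ask "did you mean A, B, or C?"

def route(candidates):
    """Return (action, payload) for a list of (answer, score) pairs."""
    if not candidates:
        return ("escalate", None)
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best_answer, best_score = ranked[0]
    if best_score >= ANSWER_THRESHOLD:
        return ("answer", best_answer)
    if best_score >= CLARIFY_THRESHOLD:
        # Offer the top few candidates back to the user to disambiguate.
        return ("clarify", [a for a, _ in ranked[:3]])
    # Too weak: hand off to a person via live chat or a ticket.
    return ("escalate", None)

print(route([("Mileage is not deductible.", 0.91)]))
print(route([("Bereavement policy", 0.60), ("Bucket list doc", 0.55)]))
print(route([("Bucket list doc", 0.20)]))
```

Starting with a high `ANSWER_THRESHOLD` and lowering it over time, as Karandish describes, trades early over-clarification for fewer confidently wrong answers while the knowledge base matures.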


[00:22:17] John Sumser: That’s interesting. There’s a whole bucket of questions that I’d love to drill down into from there, but we have hit the wall on the clock. Hit the wall on the clock? Hit the clock on the wall, maybe. Anyhow, thanks for taking the time to do this, David. Is there anything you want to leave people with as we go out the door here?


[00:22:36] David Karandish: The only other thing I’ll leave you with is obviously we’re in unprecedented times.


[00:22:41] Everything going on with COVID, the wildfires on the West Coast, a big election year, et cetera. The one thing I’ll mention is that now is the time to start investing in this kind of technology. We happen to think our platform is pretty good, and there are other great platforms out there as well. But in the middle of all the change that’s going on in the world, the best time to get started implementing this kind of technology within your org is now.


[00:23:06] John Sumser: That’s great. So we’ve been talking with David Karandish, the CEO and founder of Capacity, an enterprise artificial intelligence SaaS company headquartered in, of all places, St. Louis, Missouri. Thanks for taking the time to do this, David.


[00:23:20] John Sumser: And thanks everybody for listening in. This has been HR Examiner’s Executive Conversations. We’ll see you back here next week. Bye, bye now.

