
HRx Radio – Executive Conversations: On Friday mornings, John Sumser interviews key executives from around the industry. The conversation covers what makes the executive tick and what makes their company great.

HRx Radio – Executive Conversations

Guest: Don Sull, cultureX co-founder and Senior Lecturer at the MIT Sloan School of Management
Episode: 343
Air Date: October 18, 2019

 

Transcript

 

Important: Our transcripts at HRExaminer are AI-powered (and fairly accurate) but there are still instances where the robots get confused and make errors. Please expect some inaccuracies as you read through the text of this conversation. Thank you for your understanding.

Full Transcript with timecode

John Sumser 0:13
Good morning and welcome to HRExaminer’s Executive Conversations. I’m your host John Sumser and today we’re going to talk with Don Sull from cultureX. Don’s got this amazing resume that you wish you had. He’s a Senior Lecturer at the MIT Sloan School of Management, serves on the committee that oversees the business analytics program there, and runs a lab that does randomized controlled trials in organizations. He’s also been named a rising star in a new generation of management gurus by The Economist, and Fortune also loves him. He’s got five books under his belt. This is one of those guys you need to watch closely. Don, how are you?

Don Sull 0:56
Very well John, thank you for the kind introduction.

John Sumser 0:58
Yeah, so how did you end up doing what you do? This is a puzzle, right? Because no kid wakes up wanting to do detailed analytics about organizational behavior. So how’d you end up doing this?

Don Sull 1:12
Yeah, that would be an odd child that did wake up thinking that. Well, you know, I started my career at McKinsey, and then I was in private equity, kind of an executive, had P&L responsibility. So before I retired into academia, I was actually on the hook for hitting my numbers as an operating manager, and so I carried into my academic career a real interest in not just theoretically interesting questions but practically interesting questions, like what actually drives results in organizations. And over time it just became clearer and clearer, and of course I had observed this when I was working too, that culture, hard though it is to measure and hard to define, is a critical element of what drives the performance of an organization over time. It’s messy, it’s tough, but it’s super important, and folks by and large had avoided it because it was messy and tough. So that’s what really got me interested in the question, and in trying to measure culture rigorously and link it to outcomes rigorously.

John Sumser 2:08
So talk to me a little bit about what you think culture is.

Don Sull 2:12
Well, first of all, I should say there are a lot of different points of view on what culture is, many of which provide insight. But the most commonly used definition of culture is a bundle of values and norms that shape behavior, that are widely shared throughout an organization and deeply held, and where there is some sanction if you violate those norms and expectations. That comes from Charles O’Reilly at Stanford and Jennifer Chatman at Berkeley, but it’s been very widely adopted in the academic research, and empirically that’s how companies talk about their culture as well, in terms of values and norms. We have a study of over 600 large companies and how they publicly describe their culture, and almost all of them, well north of 90%, describe their culture in terms of the values and norms that they aspire to. So both practically and theoretically and empirically, that’s how people talk about culture. Not to say there aren’t other views of culture that shed insight, but this, I think, is the dominant view. And it allows for measurement, and once you can measure culture, it allows you to understand which elements of culture are most predictive of outcomes, whether that’s total shareholder returns or innovation or revenue growth.

John Sumser 3:25
What an interesting thing. So, am I getting this right, that what you’re saying is that culture is those aspirational goals that are on every poster in every break room? They are the seven things we hold dear in our company. Is that what culture is?

Don Sull 3:40
No, it’s the bundle of values and norms, the expectations about behavior, that are widely shared, deeply held, shape behavior over time, and where there are sanctions if they’re violated. The official values are something else. In another paper we’ve actually measured companies’ stated values, the official ones, versus their actual lived values as assessed by employees, and what we find is precisely zero correlation. The official values are what companies aspire to, but what we see is that very few companies walk the talk. There are some, Netflix for instance, companies that do a good job of living up to their aspirational values, but for most companies it’s just wallpaper.

John Sumser 4:28
So what does work look like for somebody who does what you do?

Don Sull 4:33
Well, the typical day starts early and ends late, I’ll say that. I have three big chunks of activity. First and foremost, I’m a teacher, and I have the great privilege of teaching some of the best students in the world at MIT. I never tell my Dean this, but I’d be happy to pay them, as opposed to them paying me, for the privilege of working with these students. So that’s just great fun. Then at MIT I also do a lot of research-related activity. I oversee two big research projects. One is the Culture 500 project, a partnership with Glassdoor; maybe we’ll talk about that a little later. Another one is around strategic agility. And then I have a dozen or so research projects related to culture, measuring culture, measuring the impact of culture. So there’s the research component. And the third thing is being the co-founder of this startup, cultureX, which is a spin-off of my team’s research at MIT, where we use natural language processing and artificial intelligence to measure attributes of culture in organizations based on what employees say, not on the official values, and then use that to help organizations identify opportunities for improvement, intervene, and then actually measure the business results of those interventions.

John Sumser 5:42
When you say you use AI and NLP to measure this stuff, are you looking at internal communications to do that?

Don Sull 5:49
So textual data is our primary focus, right? Our starting place is that people are smart enough to describe what’s going on in their organization, and people who are skeptical about this, I think, need to question their assumptions about human beings; they just don’t think people can describe what they do. We believe people can. We do limit ourselves: we do not work with any source of textual data that people did not write with the intent that management read it to improve the company. So we don’t do emails, we don’t do Slack, by design; that’s an ethical decision for us. But we do use things like Glassdoor reviews, where employees are saying, this is what I think about my company, meant for public consumption; free-text employee engagement surveys, where again they’ve been asked, what are your recommendations to management; employee exit interviews; employee suggestion boxes. And the typical company that we work with? We just had a meeting yesterday with one of our large clients, and in a single employee engagement survey they generated over 200,000 free-text instances, and that was just from one survey. So companies have a goldmine of this data, and the problem is that at present there’s typically some poor schmo in a room trying to read through these things and see what’s going on, which is just impossible for a human being to do at that scale, or they do word clouds. Basically, this goldmine of data about employees, in their own words, talking about what matters most to them and sharing their insights, goes unexploited. That’s what our NLP platform is designed to help companies do.
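The core tally Sull describes, counting how often a topic appears in free text and whether mentions skew negative, can be sketched crudely. This is only a toy illustration with invented keyword lists; cultureX’s actual platform uses trained NLP models, which keyword matching does not attempt to reproduce.

```python
from collections import Counter

# Hypothetical topic lexicons and negative cues, invented for illustration.
# The idea: tag each free-text comment with topics, then count overall
# incidence and negative mentions per topic.
TOPICS = {
    "bureaucracy": {"bureaucracy", "red tape", "approvals"},
    "candor": {"candid", "honest feedback", "speak up"},
}
NEGATIVE_CUES = {"too much", "slow", "afraid", "endless"}

def tag_comments(comments):
    """Return {topic: Counter(mentions=..., negative=...)} over all comments."""
    stats = {topic: Counter() for topic in TOPICS}
    for text in comments:
        lower = text.lower()
        for topic, cues in TOPICS.items():
            if any(cue in lower for cue in cues):
                stats[topic]["mentions"] += 1
                if any(neg in lower for neg in NEGATIVE_CUES):
                    stats[topic]["negative"] += 1
    return stats

reviews = [
    "Endless approvals and red tape slow everything down.",
    "Managers welcome honest feedback here.",
    "Too much bureaucracy in purchasing.",
]
print(tag_comments(reviews))
```

At scale, the same incidence-plus-sentiment summary over 100,000 comments is what surfaces topics like bureaucracy or candid discussion.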

John Sumser 7:18
That’s interesting. So you assume that there’s no bias in the data, that it accurately reflects the organization, that what you get with engagement surveys is the truth, that what you get with Glassdoor reviews is the truth. Why would you assume that?

Don Sull 7:35
Well, why would I assume otherwise? I mean...

John Sumser 7:39
Because they’re under a lot of pressure. They’re often perceived as tools to force people to come up with specific kinds of answers.

Don Sull 7:48
So that’s true, but that’s exactly why we hate traditional Likert, or five-point scale, surveys, which I have a lot of experience with, by the way. I’ve come to hate them, not because I started out hating them; it’s just that as I learned more about them I realized their limitations. The problem with a traditional employee engagement survey is that you tell employees what matters. You give them the questions, and then you force them to answer on a one-to-five scale. And we’ve known for a long time that if people see a long list of Likert-scale questions and they’re in a bad mood, they give a bunch of stuff ones and twos, and if they’re in a good mood, they give a bunch of stuff fours and fives. Somewhere between 25 and 40% of the apparent insight that emerges from these employee engagement surveys is an artifact of what’s called non-differentiation bias: people just say everything’s bad or everything’s good. So I completely agree about the biases in that data, and we ignore it. That’s why free text is so cool. What we focus on is what people write when they’re given a blank sheet of paper. They write what’s on their mind, in their own words, based on their experience and their assessment. We’re not telling them what matters; they’re telling us what matters. And that allows us to get a sense of incidence. If you look at, you know, 100,000 employees, you get a sense of how many people are talking about, for instance, bureaucracy, or the ability to have candid discussion, and then you also see how many of them are talking about it positively or negatively. Again, our fundamental belief is that people are smart and they’re honest, and when presented with an opportunity they work in good faith, by and large; there are of course exceptions.
And when you look in aggregate, well, ten or fifteen reviews is too small a sample to do anything with, but look at 100,000 reviews and you’re going to learn some interesting things about your company, when you listen to your employees and take what they have to say seriously.
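The non-differentiation bias Sull describes, respondents rating everything low or everything high, can be screened for by checking the spread of each respondent’s Likert answers. A minimal sketch; the 0.5 standard-deviation cutoff and the sample data are invented for illustration, not taken from the research he cites.

```python
from statistics import pstdev

# A respondent who gives nearly the same 1-5 score to every item
# ("straight-lining") contributes little real signal.
def is_straightliner(scores, max_stdev=0.5):
    return pstdev(scores) <= max_stdev

respondents = {
    "a": [1, 1, 2, 1, 1],   # bad mood: everything low
    "b": [5, 5, 5, 4, 5],   # good mood: everything high
    "c": [2, 5, 3, 1, 4],   # differentiated answers
}
flagged = [name for name, scores in respondents.items() if is_straightliner(scores)]
print(flagged)  # → ['a', 'b']
```

Free text sidesteps the problem entirely, since respondents choose their own topics instead of scoring a fixed list.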

John Sumser 9:30
So could I say from that, that your work is limited to the kinds and scale of company that can produce 100,000 results, or does this technique work in smaller shops?

Don Sull 9:43
No, it’s a fantastic question. You don’t need 100,000 reviews, but you do need a couple hundred reviews at least for this methodology. No matter what anybody tells you, nobody can do good NLP with a sample below a couple hundred, at least for the kinds of topics that we’re looking at. The sample is just too small; the results are too fragile and susceptible to small-sample biases. But there are a couple of ways to address it. One is, instead of asking a general question, what’s working in the company, what’s not working, what advice would you have for management, you can ask a more specific question, like, how are we doing on innovation, what’s working on innovation, what’s not working on innovation. Then you can get by with a smaller sample, because 100%, or nearly 100%, of your respondents will be talking about a specific topic, and the sample for that topic will be large enough that you can do analysis. You still probably need north of 100 to do even that. But by and large, you need a couple hundred, call it 200 as a rule of thumb, respondents to have robust analysis.
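The couple-hundred-responses rule of thumb lines up with basic sampling math: the margin of error on an observed topic incidence shrinks with the square root of the sample size. A back-of-envelope sketch using the normal approximation, not the actual methodology discussed here:

```python
from math import sqrt

# 95% margin of error for an observed proportion p with n respondents.
# At n=25 the estimate is nearly useless; around n=200 it tightens to
# roughly +/-7 percentage points for a topic mentioned by half the sample.
def margin_of_error(p, n, z=1.96):
    return z * sqrt(p * (1 - p) / n)

for n in (25, 100, 200, 1000):
    print(n, round(margin_of_error(0.5, n), 3))
```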

John Sumser 10:49
So our call was prompted by a project that you did called Culture 500, and it is an assessment of a million Glassdoor reviews. The theory is that you can find out something about corporate culture by searching through this database. Tell me about the project and what it was supposed to do.

Don Sull 11:12
Yeah, the origin of the project was a few years back. I was working with the top leadership of the Gates Foundation’s global health organization; I was spending two days with the team. Before I did that, I knew something about their projects and some other things, but I wanted to get my arms around their culture. So what I did was look at Glassdoor data, and I got about 50 reviews, a very small amount of data, and I was super skeptical at the time: this isn’t a lot of reviews, and I’m not sure about the quality of this data. But anyway, just by hand, kind of working in Excel, I classified, very crudely, what people were talking about, the incidence with which they talked about topics, and whether it was positive or negative. And I pulled together a little chart. To be honest, going into the meeting, because these are serious people at the Gates Foundation, I was worried about even presenting it. The conversation turned to culture and I put it up, and right away my worst nightmare starts to occur, where one of the senior leaders starts challenging the data, and I’m thinking, boy, that was a mistake putting that chart up. At which point one of his colleagues intervened, and what she said was, look, we’ve just spent a large sum of money having a cultural assessment done, and three of the four items highlighted on that chart are exactly what this expensive study highlighted, and I’m now very worried about the fourth. At which point I said, wow, this is kind of interesting, but this few reviews isn’t enough and you can’t do it by hand, so how could we do this kind of analysis at scale?
And that’s when I started the project and the partnership with Glassdoor. Now, some of the concerns that people have about this. One, that employees won’t be honest. But really, why? Would you be less honest writing a review about your own experience, as opposed to some objectified other? Two, that the reviews will be dominated either by haters or fanboys, that they’ll be polarized. Empirically, that’s not true. It is true of many review sites, by the way; it’s true, for instance, of Yelp or Amazon reviews, where ones and fives tend to dominate. It’s not true of Glassdoor. Glassdoor’s distribution of reviews looks much more like a normal distribution; in fact, it’s one of the least polarized online review sites out there, as a result of some very sensible policy safeguards that they’ve put in place. We’ve actually done some additional tests on top of that, through testing with other survey mechanisms and a variety of statistical tests, and found that the reviews really are representative. And the third concern is that there aren’t enough reviews. In our case, with the Culture 500, we limit ourselves to 500 large organizations, and the typical organization in our sample has over 2,000 reviews. At that number you get pretty robust results. So what we do with Culture 500 is a user-friendly interactive, a partnership between MIT and Glassdoor, powered by the cultureX analytical platform, for those 500 companies organized into 33 industries. You can compare companies along nine of the values most frequently cited by companies as official values: companies say they want integrity, they want collaboration, they want diversity, and we allow you to see how they’re doing relative to their peers on those.
Again, this is in the words of the employees. This isn’t the top executives, this isn’t the HR department, this isn’t what they write in their annual report, either. It’s employees who are on the ground giving their assessment of how well these things are working, and we aggregate that data up and rigorously analyze it to provide a more evidence-based perspective.

John Sumser 14:28
Have you ever paid attention to the Best Places to Work competitions? There are many versions of the best places to work lists; I think Forbes or Fortune has one, and there is a long-standing institution that does it. But it’s a thing, right, and the way those awards get won, generally speaking, is that the organizations that spend enough money get the award, and organizations that don’t spend enough money don’t get the award. The same thing happens with Glassdoor. The Glassdoor best CEO award, or whatever the award is, is a highly sought thing, and people organize their companies to deliver that feedback into it.

Don Sull 15:21
So that’s a hypothesis that we’ve tested, and it’s empirically not true. I know people believe that to be the case, but that’s an empirical question, not one of opinion.

John Sumser 15:27
Help me understand how in the world you would test that, to prove that it’s not true.

Don Sull 15:30
Perfect question. So first of all, Glassdoor has a set of policies designed to address exactly this issue. What you’re worried about is companies incentivizing their employees to write positive reviews; that’s the basic thing you’re worried about. Glassdoor has a team of 30 people, as well as a variety of algorithms, that identify suspicious patterns of responses: a lot of positive responses all at the same time, or using the same language, or worded differently than the others. Those reviews are flagged, and human beings review them. Second, employees are banned from writing more than one review per year per company, they have to have a company email, and there are ways to validate that. Third, Glassdoor has a policy called give-to-get, which means that in order to get access to the tools, the salary comparisons and interview questions and so forth, you have to write a review. That’s why they get such a representative sample. Typically people only have an incentive to write reviews if they’re very disgruntled or very happy, but the give-to-get policy provides an incentive to people who are more representative, the “it’s a pretty good company, but it’s not a great company” middle. And if companies are found through these screening processes to be violating the policies, they’re banned from the competitions. As I mentioned, there was a review of 25 of the largest online review sites that measured their polarization, and Glassdoor was second from the bottom. So we see empirically that the distribution of reviews on Glassdoor looks much more normal.
But even given all that, we took this concern very seriously. We took our sample of large companies, 531 companies, and said, okay, on a monthly basis, for the time period we cover, which was 63 months, for every company in our sample, we’ll look at the average number of reviews in a month for that company, and then we’ll flag any company-month that’s two or more standard deviations above that average. That picks up big spikes in the number of reviews. What we see is that such spikes are a little more common than you would expect with a perfectly normal distribution, but that’s not in and of itself a bad thing, because a spike in reviews can happen for a lot of reasons, good or bad. For instance, when Intel got out of mobile phone chips, there was a huge spike in reviews, most of which were negative; people were really ticked off.
Then we said, of that sample of company-months two or more standard deviations above the mean in the monthly number of reviews, let’s look for those that were also two or more standard deviations above the mean in sentiment and in the overall rating, because that’s what’s really suspicious: a big spike in reviews that are very positive. Some of those are going to happen by chance, of course, but company-months with both of those characteristics were well under one tenth of one percent of the total company-months. And by the way, we dropped all of them from our analysis just to be safe; they might have been perfectly valid. At which point, the remaining company-months that spiked in the number of reviews were essentially identical to the overall sample in their ratings and the other measures. So, I understand that people are often not pleased by what they see in Glassdoor reviews, and an easy way to interpret that data is to say, well, it’s just not representative. But that’s an empirical question, one that we’ve dug into quite rigorously, and it’s simply not true.
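The screen Sull describes, flagging company-months that sit two or more standard deviations above that company’s mean in both review volume and rating, can be sketched for a single company. The data below are invented for illustration:

```python
from statistics import mean, pstdev

# Flag months whose review count AND average rating are both at least
# z standard deviations above that company's monthly means.
def suspicious_months(counts, ratings, z=2.0):
    c_mu, c_sd = mean(counts), pstdev(counts)
    r_mu, r_sd = mean(ratings), pstdev(ratings)
    return [
        i for i in range(len(counts))
        if counts[i] >= c_mu + z * c_sd and ratings[i] >= r_mu + z * r_sd
    ]

counts  = [10, 12, 11, 9, 10, 60]          # month 5: big spike in volume
ratings = [3.1, 3.0, 3.2, 2.9, 3.0, 4.9]   # ...and unusually positive
print(suspicious_months(counts, ratings))  # → [5]
```

A spike of negative reviews, like the Intel example, passes this screen: the volume jumps, but the rating does not, so the month is not flagged.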

John Sumser 19:22
What a great answer. What a complete answer, thank you very much. So you end up with these variables. How did you arrive at the list of nine variables? Is that the number?

Don Sull 19:33
Yeah, so at cultureX we actually measure 125 different topics that are anchored in research and in what’s important to managers, some of which emerged inductively through unsupervised learning. How we got that down to nine: in a separate study we looked at 600 companies, including all of the Culture 500 companies, and looked at the official statements of their culture in their annual reports and online. We had two PhD students independently code them, and we also used our machine learning NLP tool to code, so we got consistent coding of what each of these values was. And then we just looked at the incidence of those values. The most frequently cited value by far in our sample was integrity. So basically we took nine of the top eleven values, in terms of how frequently companies said, this is what we aspire to: diversity, integrity, collaboration, customer orientation, and so on. The logic was, look, if that’s what companies say their values are and what they aspire to culturally, let’s hold them up to that standard and see how they measure against those values. That’s how we chose the big nine.
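The value-frequency exercise, tallying how often each official value appears across companies’ stated cultures and keeping the most frequent, reduces to a simple count. The company lists here are invented; the real study coded 600 companies’ public statements:

```python
from collections import Counter

# Invented stated-values data for three hypothetical companies.
companies = {
    "A": ["integrity", "collaboration", "innovation"],
    "B": ["integrity", "diversity", "customer orientation"],
    "C": ["integrity", "collaboration", "agility"],
}

def top_values(stated, n):
    """Return the n most frequently stated values across all companies."""
    counts = Counter(v for values in stated.values() for v in values)
    return [value for value, _ in counts.most_common(n)]

print(top_values(companies, 2))  # → ['integrity', 'collaboration']
```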

John Sumser 20:40
And you’re comfortable that the big nine covers culture adequately, that this is a fingerprint of culture?

Don Sull 20:49
Oh, yes and no. Yes in the sense that, as I mentioned, these are nine of the most commonly cited values, so it’s not like we chose them at random or through some ad hoc selection process. No in the sense that cultures are much richer. In our study of the values companies list, we identified almost 70 distinct values that at least 1% of the companies in our sample listed. So there are a lot of other values beyond what we’re measuring; for this interactive we couldn’t report on 65 different values, that would just be too cumbersome. Similarly, some of the best academic work on culture has identified something on the order of 75 distinct cultural values; this is the work of O’Reilly and Chatman I alluded to earlier. So no, in that sense, of course not. This is why the Culture 500, I think, is a good, useful, interesting tool, but it’s certainly not exhaustive. Culture is much more multifaceted than the nine most common values companies list, but as a practical matter we couldn’t report out on that many values for 500 companies across 33 industries; it would just have been too cumbersome for the user of the tool.

John Sumser 22:00
So have you been tracking response to the project? It’s such an interesting idea, but there was sort of the standard PR blitz at the beginning. Does this have viral legs? Are people coming in, looking at Culture 500, and drawing conclusions?

Don Sull 22:15
Yeah. So in the first few weeks after it went up, the traffic almost broke the servers; there was a lot of interest. You know, from my point of view, I like the Culture 500. I think the team did a terrific job on the interactive; it’s really nicely designed. But to me this is one part, and to be honest a relatively small part, of an overall research program where I, my colleagues and co-authors, and PhD students are working on over a dozen different, much more rigorous articles. We’re measuring, for instance, as I mentioned, whether companies walk the talk, the alignment between companies’ stated values and their actual lived values. What are we talking about when we talk about culture? I’m working with a statistician who has developed a really neat tool to see how well we can predict employees’ one-to-five ratings of cultural values in Glassdoor data based on these topics. I’m working with Amy Edmondson and a doctoral student at Harvard on measuring psychological safety in organizations. And there’s a lot of empirical evidence, it turns out, that corporate culture predicts financial results. Alex Edmans has a terrific piece of work in the Journal of Financial Economics that basically showed that top-quintile companies in terms of culture, as measured by the Best Places to Work lists you were alluding to earlier, experienced 20% outperformance in total shareholder return over a five-year period versus the median company. So we’ve got some evidence already, and there have been a couple of other studies; one just came out in the Journal of Finance that took a slightly different cut and came to a very similar conclusion. So we’ve got some evidence that in general culture has an impact on financial performance. What we don’t know is which are the key elements of culture.
And so we’re looking into that: which of these 125 different elements of culture are most predictive of financial performance, or of innovation as measured by patenting, and so forth. So the Culture 500 is a great tool and the team did a terrific job pulling it together, but it’s really a very small part of a much larger research program.

John Sumser 24:07
That’s great. So we’re at the end of our time together. It’s been a fantastic conversation. What’s the best way to stay on top of the research that you’re doing and the discoveries that you’re making? It sounds like you’re making progress in real time, and this is important stuff to know about. So how do I stay on top of what you’re doing?

Don Sull 24:26
So two things you can do. One is the Culture 500 project at MIT. When we come out in, I think, April of next year, we’re going to have the second version of Culture 500, next year’s version, and there’ll be a cluster of articles publishing some of our results at that point. So that’s one thing. And then there’s culturex.tech, our company site, where we also blog and talk about what we’re learning along the way.

John Sumser 24:45
Well I’ll look for that there. So would you take a moment and reintroduce yourself and tell people how to get ahold of you?

Don Sull 24:53
Sure. So I’m Don Sull. I’m the co-founder of cultureX, and you can reach me at Don at cultureX dot tech. I also oversee the Culture 500 project and I’m on the faculty at MIT, and you can reach me at DSull at MIT dot edu.

John Sumser 25:08
Thanks so much Don. This has been a great conversation. I really appreciate you taking the time to do it. We have been talking with Don Sull, who is the co-founder of cultureX, an MIT professor with a resume that’s too long and detailed to even begin to recap. Thanks for taking the time to be with us Don and thanks, everybody, for tuning in. We’ll see you back here next week.

John Sumser 25:33
Bye Bye now.

Don Sull 25:34
Terrific, thank you John.

 



 