
HREX v1.2 Mark Feffer

A deeper conversation with the most widely read AI analyst in HR tech

Mark Feffer is a journalist at heart (and by training). He spends his time learning about the marketplace and then translating it into fact-based stories that are readily understood.

===============================================

Transcript
John Sumser (00:02.958)

Hi, welcome to the HR Examiner’s Podcast. I’m your host, John Sumser, and today we’re going to be spending some time with Mark Feffer. Mark is the most widely read commentator on AI and HR technology and a longtime tech journalist in the space. So he’s got a remarkably broad insight into how the system operates in HR tech, particularly in recruiting. Mark, how are you?

Mark Feffer (00:31.118)

Okay, John, thanks for the kind words and it’s nice to see you.

John Sumser (00:34.894)

So let’s jump right in. You went to an SAP event recently. What was that like, and what did you learn there?

Mark Feffer (00:45.11)

It was very interesting. It was called SAP Connect, and it was the first time they staged this event. And rather than focusing on any one product, like SuccessFactors, or drilling down very deeply into the technology aspects of it, this was more about the use of the SAP platform in various parts of the organization and really looking at how the platform

can work as an organization-wide solution. So it was more about business and actual usability. And it was interesting to hear what all of the different people, both on the SAP side and the customer side, were saying.

John Sumser (01:30.23)

So I think you told me earlier that they were making the case that this is now the era of the suite and that point solutions will fade into the background because of the power of the suite. Is that what you heard?

Mark Feffer (01:45.344)

Yeah, it is. I think a lot of that was coming from SAP and some of the other analysts and reporters who were there. But what was different about it is they usually don’t talk about this that much. It was a very specific part of the discussion this time. And I think they were making sort of the usual arguments about

If you have a single platform, you only have one platform to worry about. You can share data more easily. There’s a lot of usability issues that are taken care of because everybody’s using the same basic UI. But there’s also a certain amount of discussion about how those things can contribute to the employee experience.

That was interesting actually in and of itself.

John Sumser (02:40.812)

Huh, you know, this is, I know you know this, it’s a standard argument about whether the suites are better or point solutions are better. And what you tend to see is that the suites, let’s use SuccessFactors from SAP as an example, are not able to stay abreast of all of the changes in all of the niches all of the time. And so, often suites produce

a more homogenized view of things. And so that gets you an interesting and consistent employee experience, but it means that you’re always in the position where it’s going to be smart to buy somebody like SmartRecruiters, because it’s almost inevitable that the functionality of the suite falls behind the market, because you can’t prioritize everything all at once.

Mark Feffer (03:38.412)

I think you’re right. And one of the things that was interesting about the discussions around SmartRecruiters was how it went beyond the product. I think that SmartRecruiters is going to end up being essentially the new SuccessFactors recruiting module. But people at SAP also talked a lot about SmartRecruiters’ approach to AI and approach to development. And

they won’t really say anything specific, but I was trying to get them to talk about how the influence of SmartRecruiters could creep across SAP. They weren’t saying anything about it, but I suspect it’s going to.

John Sumser (04:21.4)

You know, SAP is an amazing company. They have some of the smartest people in the world. I don’t know if it’s still there, but they used to have an amazing innovation center in Palo Alto, just full of genius-level players who were doing things. And yet, so they’ve got all this innovative energy and all this innovative talk, but what they actually deliver is,

John Sumser (04:54.662)

standardized, consistent stuff, which is more, you know, when you think about innovation, you think about spiky. And when you think about non-innovation, you think about smooth. So I love watching the company, and I’ve been watching them do this for 30 years, try to figure out how to innovate from this position of strength that they’re in. And it’s a fascinating,

perpetual issue inside of the company.

Mark Feffer (05:24.974)

Yeah, it seems to be. One of the things that’s always struck me about SAP is how pragmatic they are. They really try to get their arms around the business issues, not just the IT issues. And if you talk to people in different parts of the company, they all talk about that: what does the customer need to get done, whether it’s finance or purchasing or HR or whatever.

They’ve always talked about usability and the end user also, but kind of like most other companies, I think, do. This time, there was a little bit more emphasis on usability. And not because they were preaching anything fancy, but because they were just looking for simple ways to make the experience easier, less time-consuming, less frustrating.

I thought that was interesting because many of the HR software vendors don’t really talk about that very much.

John Sumser (06:32.622)

Well, it’s an interesting time. This might shift us out of the SAP conversation a little bit, but the promise, the absolute promise of AI is that you’ll get an intimately, that’s probably a bad word, you get a very deep level of personalization at the individual level, at the client level, all the way

up the pile, because what’s true is that software in general and SaaS software in particular is built around the idea that there’s one best way to do everything. And if you just get in line, you can do it that one best way. And the reality is culture, region, industry, capital structure, labor force, relationships, technology, product, all of those things mean that every

bit of workflow is unique to the setting. And the more you try to standardize it, the more you’re putting a square peg into a round hole. And you get hiccups with that because you can’t.

deliver a standardized universal experience. And I don’t think any of us know yet what it means to deliver a, how do you measure if you’ve built an interesting, effective, highly personalized interface? How do you tell if that’s working?

Mark Feffer (08:09.63)

Yeah, I actually, I don’t know. I suppose at the 30,000-foot level, you could look to see if efficiency has gone up. You could look to see if performance has gone up. You could then go out and talk to people and see how they’re feeling about their use of the system. One thing that struck me is SAP talked a little bit about the idea of headless applications.

And I think what they’re getting at is essentially capabilities without a quote unquote interface. The spread of voice lately, I think, is a good example of that. And voice is getting to be so good that I think SAP is recognizing that it’s a viable way to build your interface. Certainly, if you could go to

John Sumser (08:54.223)

Hmm?

Mark Feffer (09:06.35)

you know, pick your large language model. But if you could go to a model and tell it to do something in John Sumser’s way, and I could tell it to do it in my way, and then a little bit under the covers, it just parsed it all out and did what it had to do. From the end user’s point of view, that’s going to be a huge leap forward.

John Sumser (09:29.795)

Yeah, I wonder a lot about that, because the thing about voice, and about language as an interface, is you ask a question, you get an answer, you ask a question, you get an answer. And that level of things is interesting. But most of the work that I see people do happens in documents and happens in spreadsheets and happens in slides, and a conversational interface

doesn’t put you back in where the work happens.

Mark Feffer (10:03.02)

Well, that’s true. I don’t know if, I don’t know where the line is for a platform, you know, at some point where they have to move out of voice and into keyboard and mouse. But, you know, we are at the early stages of seeing Microsoft and Apple do it, where you can dictate into a word processing document.

You can’t really format it. There’s an awful lot of things you can’t do. But if you just want to get the words down, you can do it. And those tools seem to get better and better every year. I don’t know if anyone’s ever going to sit down and say, OK, I want a bold-faced headline that says this. Then we’ll start to write it, and I’ll talk through what it is. Even if they do do that, they’re going to have to go back and edit it.

you know, somehow. I don’t think voice is a good way to do that. So I think there’s some middle ground where both avenues are going to be open. But again, the individual may get to pick where that switch off happens.

John Sumser (11:16.013)

That’s interesting. I still, I get how you could do that in a document. It’d be a little harder to do the kinds of editing in text and decks that people need, because that often happens collaboratively. So you have a group of people looking at the PowerPoint, and how does the machine know how to prioritize the conflicting directions that come through that process? That’s a...

That’s a good question. But when you get down to...

John Sumser (11:54.166)

actually doing something interesting with large chunks of data. I think the interface will emerge, but I haven’t seen anything even vaguely resembling what this interface would look like. And conversation seems to be the wrong direction to go.

Mark Feffer (12:12.75)

Yeah, I think that’s true. And I think part of that is we’re not there yet. If you think about what SAP is talking about with streamlining the interface, paying more attention to the experience, even in very nuts-and-bolts ways, that’s a lot of work already.

But it satisfies an awful lot of users, at least partly satisfies them. So if I need to get to a PowerPoint deck that my team’s working on, and I can do that just by asking the system to open it, that’s made my life easier. That’s a better experience, even though I still have to go into the document itself and start playing around with tables and such.

John Sumser (13:05.487)

That’s actually kind of a cool idea. I hate trying to find things in SharePoint, or I hate trying to find things on my own hard drive, because they tend to not be where I’m looking for them. And so I would love a tool where I could say, here’s what I think that file was, and it might have this in it. Can you find it for me?

which is better search capability than the text-based search capability that’s there. So that’s an interesting use case.

Mark Feffer (13:38.594)

You know, one thing that always stays with me, and I’ve seen this happen a couple of times, where you go to a conference and they’re talking about their great new product and they’re talking about their roadmap and they’re talking about their vision. And then somebody on the stage says, by the way, we’ve also done this revision to a tool. So now if you want to upload a spreadsheet, you can do it in one click. They get applause.

Mark Feffer (14:08.074)

They get really enthusiastic applause. All of the big stuff, the visionary stuff, does not. So I think this is sort of a subtle shift toward paying more attention to the users at a very basic level because the end users usually have a lot of grunt work to get done. And AI may streamline a lot of that, but there’s still going to be grunt work to be done.

John Sumser (14:36.484)

That’s right. And I think you’re making a pretty interesting case about the importance of understanding what your client does. And the current models for that are mostly about personas, which is sort of a dumb word for stereotype. And like stereotypes, the actual user

is never like the stereotype. The actual user is never like the persona. And so this suggests to me that there’s a whole new kind of

user interface resource that’s about to start to happen so that you can figure out what somebody needs.

Mark Feffer (15:27.47)

Yeah, and I think if you look back over the years, I mean, I’m old enough to remember when PCs first started to appear at work and they were all CP/M or MS-DOS, so they were basically all keyboard driven. Nobody before 1984 was thinking of a mouse or anything like a mouse. It took the Macintosh to do that.

People were thinking about voice recognition, but it was a Star Trek kind of thing in the future. Now it’s kind of crude, but it’s here. Any kind of technology has that early phase of development where the stuff going on under the hood is really cool. But how you interact with it is kind of clunky. We’re still in the phase of that, I think. I don’t think anybody knows what the ultimate interface looks like.

John Sumser (16:22.832)

Oh, I think it’s pretty early. I remember, and you probably do too, when the web opened up and we thought all sorts of things about what the web was going to be and what it wasn’t going to be and how you would measure it. And 10 years later, it was nothing like that. 10 years later, it was absolutely nothing like we imagined it in the beginning. And good or bad,

that’s where we are right now, where we have this relatively shared view of what’s possible. But it’s pretty likely that we don’t have a prayer of actually understanding where we are and where this is going.

Mark Feffer (16:58.381)

Mm-hmm.

Mark Feffer (17:04.877)

Oh, you’re right. I mean, you can just look at social media, for example. Around 1990, I was working on some of the online applications, pre-web applications. And we used to sit around and talk about what was it going to be like in 30 or 40 years. We were all wrong. None of us imagined social media. There’s probably other things out there.

that are going to develop or quickly appear in the digital world that, like social media, are going to come out of someone’s idea, hit a common chord, and just kind of take over.

John Sumser (17:46.704)

So in your travels, SAP or otherwise, have you been poking at the question of how we deal with a technology that...

John Sumser (18:03.473)

provides probabilistic answers rather than deterministic answers. That’s probably the easiest way to avoid using the word hallucination. But the fact that you can’t rely on the answer that comes out of this machine, it seems to me we haven’t really addressed that very well.

Mark Feffer (18:21.47)

No, we haven’t. I don’t think we’re going to anytime soon because the problem with digital information, we’ve seen this on the web, is people tend to believe what they see, even if it’s completely outlandish. And now you ask an LLM a question, it comes back with a pretty smart, pretty well-researched answer.

John Sumser (18:34.991)

Mm-hmm.

Mark Feffer (18:51.878)

but half of it might be made up. And you don’t know until you go back and look. So I think it’s human nature to believe things that are presented to you by something that’s, quote unquote, authoritative. I don’t know how you change that. You know, we’re having a problem keeping critical thinking in our world today. I don’t know how we take it even a few steps further.

John Sumser (19:23.269)

Well, it’s been a problem for a long time. I am thinking about, there’s a Kurt Vonnegut novel from the forties about a world in which there are a few people who actually work and the rest of it is automated and robotic. And in that world, the problem that the people have is that they weren’t able to change. They weren’t able to change into the new technology. And I think we’re seeing some of that.

Mark Feffer (19:53.75)

Yeah, I think that’s true. I think that’s true.

But one thing that strikes me as interesting is I read a short story once that was called The Machine Stops. And I think it was written somewhere between the late 1800s and the early 1900s. It was very old. But it was about a society in which everybody lived in their small little cubbyhole by themselves. And they interacted by machines, by what we would call computers.

They studied certain subjects and became quote unquote experts. They’d host talks. They’d have romances. They’d do all the things that you do online. And then when the machine stopped, it was a disaster, because nobody knew what to do outside of their one room. We’re kind of in a situation like that, I think. We’re moving toward it. There’s an awful lot of people whose day is just

built around technology in their work and their leisure. And if suddenly the web went away and connections went away, they’d be stymied. And I don’t know how you get around that.

John Sumser (21:17.019)

Well, I think it’s actually, if you want to give this a big arc, for all of human history, we have evolved in concert with our technology. And the way that the eras are defined actually has more to do with technology than it has to do with changes in human beings. So going from agriculture to industry, or industry into digital.

It’s all about the underlying technology. It isn’t really about some change in people.

Mark Feffer (21:50.958)

You know, what’s different about this time around, I think, is this is really changing the way people use their brains. You know, a lot of the changes in technology advance something physical, you know, a better way to get across the country on a train or on a plane, a better way to produce documents with a typewriter or even a printing press. But now we’re looking at

things that are usually done in your head and then produced onto paper or a screen. And that’s sort of the play area of AI. And I don’t think we really know quite how people are gonna react as that becomes more and more pervasive.

John Sumser (22:40.529)

I’ll tell you, I did an interesting experiment a couple weeks back where I hand-built a 20-page report using ChatGPT to get there. And when I was done, I went and tried to edit it.

And I couldn’t. I could not take it apart. I could not make sense out of it. I’m a pretty good editor. I’ve been editing stuff for...

a very long time. And I couldn’t, because when you think about the error rate in the current crop of AI,

10% of the time it’s wrong. That means one out of every 10 words is wrong in some way. Or maybe one out of every 10 answers, one out of every 10 paragraphs, you know, maybe it scales like that, but the one-in-10 thing.

If it was voice recognition technology, it would not be market-ready if you only got 90% right. And so it seems to me that the output is hard to edit, and the error rate, the one-in-10-mistakes error rate, is high enough that this may not be

John Sumser (24:15.94)

an endpoint at all. It may be something we pass by fairly quickly, because the nature of this particular beast is you can’t get it better than that.

Mark Feffer (24:28.034)

Well, and what’s interesting is how many people have kind of jumped over that little problem. You know, there’s all this talk about how AI can replace journalists, you know, for example. But I think any professional journalist who was wrong 10 % of the time wouldn’t be able to survive in journalism very long.

John Sumser (24:41.617)

Right.

Mark Feffer (24:55.662)

They could work for Fox News or the National Enquirer. But that aside, you know, if you were to have ChatGPT or Perplexity write a story for you, you’ve then got to go back and check every sentence and every fact that it says to make sure that it’s really correct.

And I don’t think it’s natural for everybody to essentially become a sort of copy editor and fact checker on everything they produce. But there are people using it that way. There are people, lawyers, using it to write briefs, writing research reports, what have you. A lot of people have just jumped over the whole quality problem.

John Sumser (25:35.634)

So I think...

John Sumser (25:48.912)

Yep, yep. And

I want to go back to one in every 10 words being wrong. Because if you go to edit a human’s work, there’s actually a consistent point of view that flows all the way through any human’s work. It’s not a hodgepodge of things with mistakes in it. And so this is why it’s so hard to edit, because you have to look at every sentence and every word in every sentence to understand whether or not it is actually

congruent with the rest of the flow of the next sentences and the sentences after that. At that level, I do know editors who work at that kind of local detail, but they are paid handsomely for their work. And hacks like me, who only do it for a living and not as a rarefied art form, just don’t have the skills necessary to do that. And I’m imagining that

Most people don’t have those skills.

Mark Feffer (26:55.298)

I think you’re right. And I think that the pressure really isn’t on the developers to fix this. The direct pressure is really on the end users to make sure they don’t get caught. Compliance is on the company. It’s on an employer in this case. It’s not on UKG or SAP or.

Oracle or whoever has built the platform. They want to make sure that they can facilitate compliance and that they’re being as complete as they can be. But at the end of the day, the battle’s really fought by the end user and the end user’s boss. I don’t know how that impacts product development for the vendors. I’m sure when something’s bad enough or inaccurate enough,

word trickles up and they hear about it. But they’re not approaching it like they would their own code, you know, where that’s their product. So there’s sort of a natural disconnect, I guess.

John Sumser (28:08.754)

I think that what we’re seeing is the emergence of product liability for tech vendors. And they’ve been free from that. It was always on the user. You give a product that’s got bias embedded in it, it’s the client’s problem. That’s been the way that it’s worked forever. And I think the Workday

lawsuit is one of the first tests of whether or not that’s real. And if that goes the way it looks like it’s going to go, which is that Workday is responsible for the bias that’s expressed in the product, that’s probably the turning point in whether or not vendors of digital technology have product liability to worry about.

Mark Feffer (29:07.158)

If it gets that far, it hasn’t started in court yet. They’re still in pretrial maneuvers. But if you put yourself in Workday’s position, how much do they have to offer to basically force a settlement? It could be, in the context of things, a very reasonable number,

you know, even in the tens of millions of dollars, to make these plaintiffs, now that it’s essentially a class action, to make this group basically go away without precedent being set. I think that’s going to become a really big factor in this case.

John Sumser (29:55.868)

I’m sure that’s right. Given the scale of the thing, they won’t be able to contain the value of the settlement. And that’s kind of how the law forms: you get a settlement. Two years ago, there wouldn’t have been a settlement. Now it’s gotten to the place where there’s a settlement. And when things get to the place where there’s a settlement, people start paying attention. Right.

Mark Feffer (30:23.522)

Yeah, I think that’s exactly true. The political climate, I think, right now gives the businesses a little bit of breathing room. And I say that mostly because the Supreme Court’s business friendly right now. I’m not trying to get political, but it’s a business-friendly Supreme Court. So I think that works against the plaintiffs and for Workday. It will encourage a settlement.

John Sumser (30:26.545)

Right.

John Sumser (30:37.501)

Right.

Mark Feffer (30:53.478)

But you’re right. Now, all of a sudden, this is an issue everybody’s talking about in the business. And I’m sure they’re talking about it beyond HCM platforms.

John Sumser (31:04.037)

Of course, of course, there’s a lot of places where this applies.

Mark Feffer (31:07.53)

Yeah, because this is basically about data, which is in everything now. So I’d like to say, well, what’s going to happen is accuracy is going to become very important. It’s going to become a fundamental thing if it isn’t already. But it’s going to become a fundamental thing for products.

I don’t know if that’s going to really happen, because I think a lot of people on the customer side and user side are willing to accept something that does most of what they want it to do, even if it does it with problems.

John Sumser (31:46.42)

Hmm. That’s interesting. So, kind of adjacent, I’ve been thinking a lot about whether or not human-generated structure is how you make sense out of a lot of the things that go on in AI. You know, so I’m thinking about, I hate the word taxonomy, I’ve been on the hunt for a new word for taxonomy for a while, but the idea that you can take data and organize it,

and that the organization of the data makes the output of the system more accurate. When you look at how large language models work, the underlying assumption of a large language model is that there is a structure that will reveal itself. And that structure is the relationship between words and their usage across a big pool of data.

And so machine learning is the idea that you can find that embedded structure. And if you look at companies like Eightfold, they make the case that a dynamic ontology, which is just a way of saying a taxonomy that’s updated moment to moment with the changes in the marketplace, is

an adequate way to organize data, that what gets discovered when you look inside of the data is an adequate way to organize it. And I have been wondering if that’s true. I’ve been wondering if it isn’t the case that you really need something like a Dewey Decimal System to consistently be able to find stuff, and that that’s maybe a path through

John Sumser (33:34.599)

the hallucination problem, that getting things organized in a standardized way and figuring out how to apply that across the board is a key to reducing the variability in current AI approaches.

Mark Feffer (33:51.438)

I think the challenge there is that hallucinations are one thing, errors are another. The hallucinations tend to be so blatant that an informed user is probably going to pick it up. It’s the more insidious errors, the extra percentage point in a figure, say.

That’s the kind of thing that gets you in trouble with compliance. If a model basically makes up a whole precedent and cites case law that doesn’t exist, which has happened, probably that’s going to get caught at some point.

Early on with ChatGPT, there were a couple of lawyers who got sanctioned, I think it was in New York State, because they submitted briefs that had been written by ChatGPT, which cited cases that didn’t exist. And it was the judge and the opposing attorney who caught it. So I think there’s more of that in our future.

John Sumser (35:08.968)

Hmm. That’s interesting. So you were at HR Tech. Here’s my million-dollar question for you about HR Tech. What was the ratio of smoke and mirrors to substance at HR Tech?

Mark Feffer (35:18.54)

Okay.

Mark Feffer (35:28.558)

I would say the smoke was thinner.

The mirrors were still there. I’m making analogies up on the fly. No, I think it was really very noticeable that there was much less talk about how great AI is at HR Tech. You heard about how AI can streamline this process. AI can...

take care of these tasks in a more efficient way and maybe accomplish something 20% more quickly, that kind of thing. You didn’t hear these big sweeping statements about, we have AI in the platform. I think at this point, it’s pretty much assumed you have AI, even if you don’t, because I think a lot of people, a lot of technology companies who have

really no use for what AI is good at right now, still claim that they have AI. My favorite example is I once saw someone who had a time clock that they said was run by AI, which I still haven’t figured out.

But it was a notable change this year where it felt like everybody was getting back to business. They were looking for real results and how they could really apply the technology to their actual work. And I think there’s more buyers who are talking about the pressure that they’re getting from management to actually make sure the money they’re spending is worth it.

Mark Feffer (37:19.766)

So they’re having to justify the investment. They’re having to demonstrate a return on investment, and they hadn’t had to before.

John Sumser (37:27.764)

It’s interesting. So out in the world right now, there are all sorts of people claiming all sorts of expertise about AI and what AI can do and what AI can’t do. I saw a piece the other day from an HR tech analyst that said, if you’re buying AI, you should ask

how long the company has been using AI, and if they tell you that they’ve been using it before 2015, you know they’re lying. And I thought, my goodness, apparently this person doesn’t understand that the history of AI is 75 years long and that there have been all sorts of versions of AI in HR tech products for many, many years,

the core pieces of...

The core pieces of recruiting technology, which are taking resumes, turning them into digital form, and then parsing the text into fields, that core resume processing thing, that was AI at the time. And that’s 30 years ago.

Mark Feffer (38:49.314)

But it was, right? Right. But I don’t think anybody was thinking of that as AI. And I think the terminology has evolved. In the context you’re talking about, if I was at HR Tech and somebody said to me, yeah, we’ve been using AI since 2010, yeah, I wouldn’t believe them. But if they said they had been using machine learning, I’d

believe that. That makes perfect sense. So I guess it’s basically about how much bluster you want to present.

John Sumser (39:28.2)

Well, so my sense is that the HR tech companies that used AI in the nineties were all around MIT. They were all driven by MIT, and it was very clearly labeled AI coming out of the labs at MIT going into the product. But you couldn’t say AI

in public. People would think you were smoking dope in California if you said AI in public in the 90s. So it’s a weird thing to try to calibrate, because I don’t think you can make sense out of it. Saying we won’t do business with somebody who says they had AI before 2015 just misunderstands what’s happened.

Mark Feffer (40:22.27)

I think that’s very possible. I think the definition of AI has been a moving target. And I don’t think anybody ever saw a need to standardize it. And that’s why it’s been sort of changing and why people can say AI today and it means something completely different than it did 20 or 25 years ago. I would guess that if we’re in

John Sumser (40:44.981)

Hmm.

Mark Feffer (40:50.67)

If a company said that to me, I’d certainly think they were aggressive in terms of marketing or sales. And if I was going to get hung up on the words, I’d want to ask them for specifics of what they were using it for. But at the end of the day, the people who are using the software, or buying the software for that matter, really don’t care. I mean, the developers sort of have had

the run of the place for the last couple of years, just by throwing the term AI around. But now they’re having to start to really justify it, you know, with numbers on spreadsheets, with fees in investment. That changes the dynamic a lot.

John Sumser (41:41.128)

Awesome, awesome. So let’s shift again and talk a little bit about how your role is evolving. You’ve had this great career and you’ve been sort of at the edges of the technology as it came through HR tech all the way through the last 30 years. What’s different now?

Mark Feffer (41:47.96)

Mm.

Mark Feffer (42:10.382)

I feel the need to say that my high school math teacher, Mr. Jack, is spinning in his grave because I spend so much time writing about data.

And in a lot of ways, that’s really one of the things that’s changed. There’s really a focus on the information that these systems are working off of, whether it’s data or whether it’s news articles or whatever. The content really is front and center now, whereas before it was kind of a thing.

The technology was a distribution mechanism, but now the technology is much more embedded and interwoven with everything. I think usability is much more important than it used to be. When DOS came out, nobody was worried about whether or not it was easy to use. It was not really easy to use unless you were familiar with it, and then it got kind of sort of easy to use.

The flip side of that is I think one of the downsides of the web, or one of the challenges the web has faced, is graphic designers got a hold of it. With the early websites, one of the things that was thought about was their usability, how fast they worked, whether they got you to the right place quickly.

When people started to use a lot of graphic design and add videos to websites, that all kind of went away, or at least got shoved into the back of the car, because the design was the most important thing. I certainly didn’t see that dynamic coming. I certainly didn’t see things like social media coming. I guess if I had to...

Mark Feffer (44:11.532)

to put it all into a sentence, I would say the technology has become much more pragmatic. It’s become much more end user focused. And it’s normal now where 30 years ago, this stuff wasn’t normal by any stretch of the imagination.

John Sumser (44:31.315)

Cool. Well, it’s been a great conversation. Would you take a moment and tell people where they can find you out in the world?

Mark Feffer (44:39.438)

Sure. My website is at WorkforceAI.News, which is a weekly newsletter which focuses basically on AI in HR and human capital management. I write for Reworked, and I also report for the AIM Group on talent acquisition, technology, and job marketplaces.

Physically, I’m in Bucks County, Pennsylvania. I know all the good bars here and I’m happy to show anybody around.

John Sumser (45:10.695)

And I’ve made the pilgrimage.

Mark Feffer (45:15.264)

You have.

John Sumser (45:17.331)

Yeah. Okay. Thanks, Mark. We’ve been talking with Mark Feffer and you really ought to dig into workforceai.news and get a hold of his newsletter. It’s worth the time. Thanks, Mark. This was great.

Mark Feffer (45:30.818)

Thank you, John.
