AI Is Very Easy to Trust
And that's a problem
The HRExaminer email and workspace accounts are managed by Google. These days, Google is occasionally adding AI-generated summaries to the top of email threads. Yesterday, I got an email at my HRExaminer address. It let me know that my frozen dog food order would be delivered by the 19th.
At least I thought that was what it said.
We have a regular shipment scheduled every four weeks. Frozen dog food can take up a lot of freezer space. So, I have worked diligently to figure out the right timing and quantity to keep Carlos fed while minimizing freezer space. I have had to stop using some providers because their supply chain overloaded my freezer no matter what I did.
The email said, I thought, that the shipment would be two weeks later than expected.
There’s an emotional state that involves a nagging level of concern. Not enough to do something right away and not enough to look deeply. It’s the state in which you try to remember to put something on the to-do list or the needs-further-research list. Instead, it lives like a little nagging creature on your shoulder.
Ultimately, I did what you do. I waded through the inconvenience of remembering the company domain name, retrieving my user name so that I could reset my password (again), passing through the various security and identity verification steps, and finally looking at my account.
I don’t know about you but when I finally solve a problem like timing in my personal supply chain, I put the whole thing on autopilot and more or less forget it. That makes the digging all the more painful.
I took a look at my order history and, much to my delight, discovered that the shipment was on time and would be delivered as expected. What a relief. No plan ‘B’s to create, no grumpy letters to write. Just a question about how Google could have been so wrong.
I pulled the email back up. The Google-provided summary, bold text on a gray background, said ‘Your shipment should arrive by Jun 19th.’ Now that I looked a little more closely, it became clear that I had scanned the note and read Jun 19th as Jan 19th.
I don’t usually get the opportunity to confuse June and January. The email, dated late on Dec 31st, was summarized as a notification of a shipment in early summer. The mistake cost real time and attention.
I suppose that I should have double-checked the email with a more critical eye. No one is more concerned than I am about the tendency of contemporary AI to be wrong. I should have, but I just don’t have the energy to watch my machine for every one of its mistakes.
The material consequence of this AI failure was that I used ChatGPT to figure out how to turn the functionality off. Then I turned it off. I do not like having my time wasted by a machine.
What the leading AI companies don’t seem to understand is that they are eroding trust in our fundamental communications infrastructure. (It’s already suffered an enormous amount of degradation.) When I go to the effort of figuring out how to turn off an uninvited slug of functionality, I am not likely to turn it back on.
You might well be thinking, ‘That’s an awfully trivial complaint’ or ‘John seems to be over-investing in solving trite problems.’
Trust is often freely given at the start of things. No matter how loud and frequent the warnings, people want to rely on their machines. It is dreadfully difficult to get things done when you don’t trust your tools. Trust breaks easily and then has to be rebuilt through an arduous process of getting little things right. In the end, it’s the little things that matter.
I am already starting to hear stories about people getting fed up with the inconsistency, instability and error rates of the current crop of intelligent tools. (They are more sensitive to having their time wasted than I am.) It may be that we are nearing an inflection point where the technology no longer gets the benefit of the doubt.