Why AI Regulation Matters

It looks like the president is trying again to get Congress to pass a law prohibiting any regulation of AI by states. Here’s a nice explainer by Fisher Phillips.
The stated reason for prohibiting state regulation is a fear that it will diminish US companies’ ability to compete with China on AI development. At the same time, China heavily regulates AI. (Here’s a detailed summary from Latham & Watkins if you’re curious about China’s AI laws.)
My sense is that this is really about some US companies wanting unfettered ability to do what they want. The problem is that it doesn’t actually work that way.
Regulation of AI anywhere affects how AI functions everywhere.
AI depends on massive quantities of data and computing power. One of the main sources of data is the internet. Some AI companies crawl the entire internet to capture the information we post there and the data that runs through all the programs connected to it. Sure, some have more limited visibility (and more security), but it’s fair to say that AI systems regularly eat the internet.
At the same time, all sorts of AI programs are being launched onto the internet. My latest grump is Google’s AI, which has pretty much ruined search for me because it’s so damn wrong. AI-assisted search comes up with information that’s wrong more often than it’s right. But it’s very confident in its answers even when none of the source articles support its conclusion. And that’s the trouble with prediction v. thinking. Sure, they’re both wrong sometimes. But prediction is just math, while thinking involves some context and understanding.
It also turns out that there’s a lot of incorrect information on the internet. Who knew?
Then it’s worth noticing that the internet does not have political and geographical boundaries (although some governments have tried to impose them, with limited success). Data that flows through the internet does not come with tiny passports, and with most data there’s no way to know which rules apply.
So when California or Colorado passes a data privacy or AI statute, or the EU enacts the GDPR or the AI Act, it has a ripple effect. The safest path for companies subject to the new rules is to comply with them across all of their operations, so they end up in compliance everywhere.
There’s no such thing as the AI Wild West, no matter how much some AI companies would like it to be otherwise.
It’s not clear that the federal government has the power to prevent states from regulating AI.
There are two US Constitutional principles in tension here. The first is that states are their own sovereign governments and have the right to make laws affecting their citizens.
The second is the Supremacy Clause, which gives the federal government the right to exclusively govern certain areas. The classic employment law example is labor law, where the National Labor Relations Act governs all things union formation, collective bargaining, and labor relations.
But usually when the federal government occupies a legal field, it actually governs it. Like, there are laws and regulations and sometimes agencies!
I tried to find an example of an area where the federal government has ever said, in effect, we are going to occupy this area of law, and the only law we’re going to pass says nobody can make any laws. While I love Constitutional law, I got far enough along in my research on this to realize I do not love Constitutional law enough to figure this one out.
There are real harms to people at stake
The biggest thing is that there are uses of AI that are causing real harm to people.
From discrimination in employment, legal, financial, and healthcare decisions to being a causal factor in suicide, these are about as big as harms get (short of straight-up murder). And that’s before we talk about the potential environmental harms from AI’s massive use of electricity, water, and land.
We have a technology based on data and statistics that is designed not to give repeatable results.
AI does not think, understand, or know. It just produces whatever it calculates the user is most likely to want based on the instruction or activity. It’s a tool. Literally.
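To make the repeatability point concrete, here’s a tiny, purely illustrative sketch (made-up words and probabilities, not any vendor’s actual code). Generative AI picks each next word by sampling from a probability distribution, so the very same prompt can come back with different answers on different runs:

```python
# Toy illustration of why generative AI output isn't repeatable.
# The words and probabilities are made up; a real model estimates them
# from the prompt and billions of learned parameters.
import random

next_word_probs = {
    "yes": 0.45,          # most likely, but not guaranteed
    "no": 0.30,
    "it depends": 0.20,
    "purple": 0.05,       # unlikely nonsense is still on the menu
}

def generate_answer():
    # Sample one "next word" in proportion to its probability, which is
    # roughly what sampling with a nonzero temperature does in a real model.
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" run five times can give five different answers.
print([generate_answer() for _ in range(5)])
```

Notice that nothing in there checks whether “yes” is true. It’s just the statistically likely continuation, which is the whole prediction v. thinking problem.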
But it feels like it’s real, like we are understood and it’s there to help us. We get definitive answers with citations to source materials. We get chatty banter and praise. The trouble is it’s only loosely tethered to facts, much less truth.
When there are real harms to people, it makes sense to pass laws to prevent or mitigate those harms. That’s a big part of what government is supposed to do.
It’s an interesting legal and practical issue. And I’m sure I will get a chance to think and write about this some more. But in the meantime, we human users need to remember what we’re dealing with and take some steps to protect ourselves and each other.
What to do
While we watch this play out, here are some things that employers and tech companies should be considering.
• Transparency - Companies developing AI products should let users know what data and design choices the tools are based on, along with the limits of what they’re good for.
• Teaching - Both AI product companies and employers should focus on how users are likely to use their products, not just the ways they want people to use them. Build in guides to help people avoid issues and find their way through them. AI is actually good at that stuff. Employers should be training people on the right use cases and approaches for specific products, as well as cautions on using LLMs and generative AI at work.
• Governance - Are the people who need the tools using them? Are people without the right training using tools they shouldn’t be, in ways that could do damage? Consider what you want to protect (trade secrets, proprietary information, private information, business intelligence, and a bunch of different kinds of communications) and whether that information is inadvertently being uploaded or disclosed.
• Policies - Create policies about where AI can and should be used and where it can’t. Work with your friendly employment lawyer to create policies that actually work and don’t backfire. But first, think through what it is you’re trying to do, what you want to happen, and the best approach to getting there.
• Monitor outcomes - Since we don’t really understand exactly how many AI systems arrive at their outputs, we can’t prevent every problem they might create. In any place you use AI where there is a risk of liability or harm (hiring, employment decisions, and any other functions that affect people’s lives and careers), make sure you monitor what’s happening and identify anything that looks off, weird, or potentially really bad. Then figure it out (a minimal sketch of one such check follows this list).
• Investigate and fix problems - This is the figure-out-the-problems bit. Really. Go there. Find out the stuff you don’t want to know. Because ignoring it is worse, way worse. Then do something about it. Get help. You can do it!
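On the monitoring point above, here’s a minimal sketch of what one outcome check can look like for hiring. The counts are hypothetical, and the 80% threshold is just the familiar four-fifths rule of thumb used in adverse-impact screening; your own data and your own counsel should drive what you actually measure:

```python
# Illustrative only: compare selection rates across groups and flag any
# group whose rate falls below four-fifths (80%) of the highest rate,
# the classic rule-of-thumb screen for possible adverse impact.
# The counts are hypothetical; substitute your own applicant/hire data.

applicants = {"group_a": 200, "group_b": 150, "group_c": 50}
hired = {"group_a": 40, "group_b": 18, "group_c": 5}

rates = {group: hired[group] / applicants[group] for group in applicants}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, {ratio:.0%} of highest -> {flag}")
```

A screen like this doesn’t prove or disprove anything. It just tells you where to look harder, which is the point of monitoring.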
Let’s use AI for the things AI is actually good at, instead of for everything it can do. In a time of rampant misinformation, it’s worth carefully considering how and where we use a technology that makes things up.
Let’s keep humans in the process for context and understanding and to use our common sense and wisdom. Especially the wisdom.


