No Time-Out For AI: Decoding Washington’s Moratorium Flip

In the frenetic run-up to a vote on the current administration's major bill, Congress has rejected the idea of imposing a 10-year moratorium on states when it comes to regulating AI.
On Tuesday, the Senate voted to strip this provision from the so-called "One Big, Beautiful Bill." The vote was not close: reportedly 99-1, taken in an overnight session just days before an expected vote on the bill's adoption.
Tech Industry Viewpoints
Some in the tech community had argued for keeping the ban, saying it would be difficult for tech companies to comply with differing state requirements. Sam Altman of OpenAI, for one, described such a patchwork system as a challenge.
“(Those kinds of state laws) will slow us down at a time where I don’t think it’s in anyone’s interest for us to slow down,” Altman reportedly said, calling instead for a single federal framework that is “light touch.”
However, Senator Marsha Blackburn of Tennessee helped lead the effort to remove the provision, partly on the grounds that if states don't regulate and the federal government doesn't either, there will be no regulation at all.
“While I appreciate Chairman Cruz’s efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most,” Blackburn said in a statement explaining why she rejected stop-gap compromise language from Cruz before the provision was jettisoned altogether. “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives.”
The Big Bank
Here’s another detail to know about how the bill moved through Congress.
The version of the AI regulation ban that the Senate rejected had already been changed to apply to a $500 million infrastructure fund – in other words, it was no longer an outright ban, but a move to tie federal funding to states' regulatory decisions.
Blackburn, for her part, cited kids’ safety and protecting the personal data of entertainers, as mentioned above.
But of course, there are other reasons to look carefully at the industry and try to make AI safer for humans.
It's worth remembering that while tech leaders offered little public support for regulation this time around, it wasn't many years ago that figures like Elon Musk were stressing the need for AI ethics research and social guidelines for AI in general.
That seems like something we should still have as new technologies roll out in the blink of an eye.
In that sense, the striking down of the regulation moratorium gives officials in all 50 states more room to act – to protect people, to fine-tune systems, and to focus on AI safety and equity.
State Initiatives
So what are states planning?
A lot, apparently.
“State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025,” write a number of authors at Inside Privacy, a blog at Covington & Burling LLP. “As described further in this blog post, many of these AI legislative proposals fall into several key categories.”
They list:
- comprehensive consumer protection legislation
- sector-specific legislation on automated decision-making
- chatbot regulation
- generative AI transparency requirements
- AI data center and energy usage requirements
- frontier model public safety legislation
“Although these categories represent just a subset of current AI legislative activity,” the authors note, “they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.”
Here’s more from the Council of State Governments.
Look for a lot more of this to come our way. It may be complex, and hard to navigate in some respects, but most of us would probably agree it is something worth pursuing.