House Republicans have inserted a controversial clause into their broad tax-and-spending package that would effectively bar states from regulating AI technologies for an entire decade.
The provision appears as a ten-year “moratorium” on state and local AI rules, tucked inside legislation that President Trump has promoted as his “one big, beautiful” agenda. If enacted, it would override existing and future state statutes designed to protect citizens from AI-driven discrimination, privacy invasions, and other risks, potentially leaving victims without recourse.
Over 100 organizations oppose the Republican proposal
More than 100 advocacy organizations, academic centers, and employee coalitions have publicly condemned the measure, warning that it would strip states of the ability to enforce any laws governing AI models, AI-driven systems, or automated decision-making tools, even when those systems inflict demonstrable harm.
In a letter delivered Monday to congressional leaders, including Speaker Mike Johnson and Democratic Leader Hakeem Jeffries, the organizations argue that this blanket ban would grant companies a license to deploy unvetted AI technologies without accountability.
“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm, regardless of how intentional or egregious the misconduct or how devastating the consequences, the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter reads.
Among the 141 signatories are academic institutions such as Cornell University, law-and-policy centers including Georgetown Law’s Center on Privacy and Technology, civil-rights advocates such as the Southern Poverty Law Center, labor unions including the Alphabet Workers Union, and climate-focused employee groups like Amazon Employees for Climate Justice.
Their combined voice underscores how widespread and bipartisan concerns about unchecked AI deployment have become, spanning from academics and nonprofits to front-line tech workers.
Emily Peterson-Cassin, corporate power director at the non-profit Demand Progress, which helped draft the letter, calls the preemption clause “a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives.”
She urged congressional leaders to heed the public interest rather than succumb to “Big Tech campaign donations.”
States consider their own AI laws despite Republicans’ plans
The state-preemption provision arrives amid a broader rollback of federal AI safeguards. Soon after taking office in January, President Trump rescinded a sweeping Biden-era executive order that had established guardrails for AI development.
He also announced plans this month to lift export controls on advanced AI chips, moves he and his allies argue are necessary to maintain US leadership in the sector, especially as competition with China intensifies.
“Excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” Vice President J.D. Vance told attendees at the Artificial Intelligence Action Summit in February.
But many states have responded to the vacuum in federal oversight by crafting their own AI rules for high-risk applications. Colorado’s landmark 2024 AI statute requires companies to guard against algorithmic bias in hiring and lending, and to notify consumers when they are interacting with an AI system. New Jersey’s recent law creates civil and criminal penalties for the malicious dissemination of AI-generated deepfakes.
Ohio lawmakers are considering legislation mandating watermarks on AI-produced content and outlawing identity fraud via deepfake technology. Several states have also targeted AI-generated misinformation in elections.
Meanwhile, regulating some AI applications has drawn rare bipartisan agreement in Washington. Congress this month passed the Take It Down Act, which President Trump was scheduled to sign into law on May 19, 2025, making the non-consensual distribution of explicit, AI-generated images a federal crime.
The measure enjoyed support from both parties, reflecting widespread alarm over digital impersonation and online harassment.
By contrast, the House budget bill’s ten-year ban on state AI laws would halt this kind of incremental, sector-specific regulation before it could take hold, shielding algorithm developers from liability even when their products harm individuals or communities.
Notably, some leading AI executives have themselves called for more government oversight. In 2023, OpenAI CEO Sam Altman testified before a Senate subcommittee that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”