Premature Preemption?

HERE’S WHAT YOU NEED TO KNOW…

While the Musk-Trump feud grabbed most of the tech world’s attention the past few weeks, there’s a provision in the “One Big Beautiful Bill” that will have a longer-term impact on tech policy than their war of tweets: a ten-year moratorium on most state-level AI regulation.

For those not paying close attention, the provision shows how fast Washington’s thinking on AI regulation has gone from avoiding “mistakes of the past” to ensuring we can “win the future.”

When Generative AI first appeared on the scene, Big Tech was under big scrutiny. Policymakers on both sides of the aisle vowed to ensure this new technology had guardrails they believed social media lacked for too long. Now, just two and a half years later, the debate has shifted dramatically.

While the Trump Administration is leading the charge, this shift is bigger than just one administration. Here’s what you need to know to understand why this shift is happening and what it means for everyone from new AI startups to incumbent industries integrating AI into their operations.

HOW WE GOT HERE

Less than two years ago, our first public risk assessment of AI policy focused on the bipartisan eagerness to regulate AI first and ask questions later. Many policymakers lamented the hands-off approach to regulating the internet in the 1990s — and again with the emergence of social media platforms in the 2010s. Eager to avoid these “mistakes of the past,” policymakers took a more proactive stance on generative AI.

Talk of “responsible AI” and “AI safety” was everywhere, with even OpenAI CEO Sam Altman urging lawmakers to take the issue more seriously. New working groups and bipartisan interest in regulating AI signaled legislative momentum. Yet Congress remained in gridlock while the Biden Administration took (easily undone) executive actions. Even the tidal wave of AI legislation at the state level largely failed to reach enactment’s shores.

Then, DeepSeek’s breakthrough provided what many viewed as a “Sputnik moment” — a jarring signal the U.S. was falling behind in the AI race. It galvanized concern in both Washington and Silicon Valley, reframing AI for many not as a risk to be tamed but as a competitive edge to be harnessed. Suddenly, the conversation shifted from cautious oversight to national urgency that complemented the incoming Trump Administration’s America First priorities.

WHAT’S DRIVING THE SHIFT

So, is this just a result of a shift in administrations? In part, yes, but there are deeper forces at work that anyone navigating the AI policy landscape needs to understand.

The national security and economic competitiveness threat of China: Even before the recent policy shift, advocates of American competitiveness – including Vice President J.D. Vance – argued the U.S. is in an AI arms race — one it must win to retain global leadership. This sense of urgency led to growing calls for a coordinated national strategy, with a U.S. congressional commission even proposing a “Manhattan Project-style initiative to fund the development of AI systems” built to fast-track U.S. dominance over China in generative AI development. Proponents argue that only through bold, centralized efforts can America outpace China’s rapid advancements and maintain its competitive edge.

Trump’s “Tech Bro Orbit” arrived just as “Little Tech” found its voice: It’s not just that Trump brought Elon Musk, Peter Thiel, and a pantheon of their “acolytes” to Washington; “Little Tech” is also finding its voice in Washington more than ever before. Long overshadowed by Big Tech’s lobbying machines, a new wave of venture capital firms like Andreessen Horowitz and accelerators like Y Combinator is becoming more vocal about shaping a regulatory framework that encourages innovation and supports decentralization. Now that Little Tech has secured a seat at the table, it’s not likely to give it up. With allies across party lines and momentum in both Washington and state capitals, this emerging coalition is here to stay and positioned to influence AI policy well beyond the Trump era.

Responsible AI got tangled in the culture war: Many AI safety efforts leaned on familiar frameworks from the same trust and safety teams behind content moderation and equity policies. But those frameworks were already politicized, and they brought the same blind spots. As well-intentioned initiatives drew backlash, “Responsible AI” became just another front in America’s ongoing ideological battles. Voices like Elon Musk and Senator Cory Booker amplified the divide, eclipsing real concerns. Now, AI accelerationists have momentum, and the focus is on AI’s value rather than its risks.

PREPARING FOR WHAT’S NEXT

Trump began his term by rescinding Biden’s executive order on AI and preparing DOGE cuts to the AI Safety Institute (since renamed the Center for AI Standards and Innovation), while removing other guardrails viewed as hindering America’s AI advantages. All of this has culminated in the House-passed “One Big Beautiful Bill,” which includes the ten-year moratorium on most state-level AI rules.

We often like to say that in politics, as in physics, every action provokes an opposite reaction. The difference is that in politics, the reaction is rarely equal. As the Senate prepares to debate this bill, a cross-party clash is emerging between federal policymakers eager to assert national control and state officials determined to preserve local authority. This brewing standoff could test federalism in the AI age — and public affairs professionals should be watching closely.

As these opposing priorities come to a head, public affairs professionals need to be ready to navigate a fast-moving and fragmented political landscape — tracking the key players, narrative shifts, and regulatory moves as they develop. If you or your team need help navigating this shift, Delve Research is here to help you see further, act faster, and navigate a volatile landscape smarter.