Trump’s plans threaten the future of the US AI Safety Institute
AI regulators are facing a new chapter under President-elect Donald Trump, who is already eyeing a raft of policies he sees as barriers to technological progress. One target on the chopping block: the US AI Safety Institute, established last year under the Biden-Harris Administration to assess and prevent risks from advanced AI systems. Despite the institute’s focus on safety, its director Elizabeth Kelly insists that regulatory frameworks fuel, rather than stymie, AI development. “We see it as part and parcel of enabling innovation,” she asserted at the Fortune Global Forum yesterday (Nov. 11), doubling down on her view that safety and progress can coexist, provided regulators know what they’re doing.
Housed within the Department of Commerce, the safety institute brings together an unlikely group of computer scientists, academics and community activists to develop new guidelines for managing the safety of AI. Established as a cornerstone of Biden’s 2023 AI executive order, the institute’s mandate includes enforcing transparency standards and striking testing agreements for AI models, a setup that, according to the GOP’s 2024 platform, “stifles AI innovation” and is accordingly marked for elimination.
Sensing change, big AI developers aren’t waiting for the ax to fall. Last month, industry giants such as OpenAI, Google (GOOGL), Microsoft (MSFT) and Meta (META) signed a letter urging Congress to permanently authorize the institute, citing its work as “critical to advancing U.S. AI innovation, leadership and national security.” Translation: tech titans want safety nets in place before the speed of innovation overtakes them.
Kelly maintains that the institute doesn’t slow AI development; it accelerates it. Beyond enhanced R&D, talent acquisition and advanced computing resources, the institute’s mission, as Kelly puts it, is about building “trust, which enables discovery, which enables innovation.”
Earlier this year, the institute struck a deal with OpenAI and Anthropic to gain access to their new models ahead of release, putting Anthropic’s Claude 3.5 Sonnet through pre-deployment testing. Kelly hailed this as a “classic example” of leveraging expertise from places like Berkeley and MIT to assess national security risks and implications, a subtle reminder that this group is not exactly the D-list.
The institute’s ambition is straightforward: advance powerful AI responsibly. From drug discovery to carbon sequestration, the promise of AI looms large but, as Kelly warns, so do the risks. “As we see these models become more powerful and more capable, we’re getting closer to that future,” she explained. In her eyes, it’s important to make sure the brakes are as sharp as the engine before AI hits the fast track.