Wednesday, 12 February 2025

AI companies increased federal lobbying in 2024 amid regulatory uncertainty

AI companies sharply increased their U.S. federal lobbying in 2024 amid regulatory uncertainty, pushing for key legislative changes.

Artificial intelligence (AI) companies significantly ramped up their spending on federal lobbying in 2024 as regulatory uncertainty loomed in the United States. According to data from OpenSecrets, the number of companies lobbying on AI jumped to 648 last year, up from 458 in 2023, a roughly 41% year-over-year increase that highlights the growing focus on influencing AI-related legislation.

Key players push for legislative support

Leading tech firms such as Microsoft backed initiatives like the CREATE AI Act, which aims to support the benchmarking of AI systems developed in the U.S. Meanwhile, companies such as OpenAI threw their weight behind the Advancement and Reliability Act, which calls for a dedicated government centre for AI research.

Notably, specialised AI labs, businesses focused almost entirely on developing and commercialising AI technology, were among the most active lobbyists. OpenAI increased its spending from US$260,000 in 2023 to US$1.76 million in 2024. Its rival Anthropic more than doubled its lobbying budget, from US$280,000 to US$720,000, while AI startup Cohere lifted its outlay from just US$70,000 in 2023 to US$230,000 in 2024.

Together, OpenAI, Anthropic, and Cohere spent US$2.71 million on federal lobbying last year, a dramatic rise from the US$610,000 they spent in 2023. While this figure is small next to the US$61.5 million the broader tech industry allocated for lobbying over the same period, it signals growing urgency among AI companies to influence regulation.
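The totals above follow directly from the per-company figures reported by OpenSecrets; a quick back-of-the-envelope check (amounts in US$ millions, taken from the paragraphs above):

```python
# Sanity check of the lobbying figures cited above (US$ millions, per OpenSecrets).
spend_2023 = {"OpenAI": 0.26, "Anthropic": 0.28, "Cohere": 0.07}
spend_2024 = {"OpenAI": 1.76, "Anthropic": 0.72, "Cohere": 0.23}

total_2023 = sum(spend_2023.values())  # US$0.61M, matching the US$610,000 cited
total_2024 = sum(spend_2024.values())  # US$2.71M, matching the US$2.71 million cited

# Year-over-year growth in the three labs' combined spend.
growth = (total_2024 - total_2023) / total_2023

print(f"2023 total: US${total_2023:.2f}M")
print(f"2024 total: US${total_2024:.2f}M")
print(f"Combined increase: {growth:.0%}")
```

The three labs' combined spend more than quadrupled year over year, even though it remains a small slice of the wider tech industry's US$61.5 million lobbying outlay.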

To strengthen their lobbying efforts, OpenAI and Anthropic hired seasoned professionals to engage with policymakers. OpenAI hired Chris Lehane, a political veteran, as its Vice President of Policy, while Anthropic appointed Rachel Appleton, formerly of the Department of Justice, as its first in-house lobbyist. These moves reflect the industry’s determination to play an active role in shaping AI governance.

Despite these efforts, the regulatory landscape remains turbulent. According to the Brennan Center for Justice, U.S. lawmakers considered more than 90 AI-related bills in Congress during the first half of 2024 alone, while more than 700 AI-related bills were introduced at the state level.

Progress in states but federal action lags

While Congress struggled to make significant progress on AI legislation, state governments stepped in. Tennessee became the first state to protect voice artists from unauthorised AI cloning, and Colorado adopted a tiered, risk-based approach to regulating AI. Meanwhile, California Governor Gavin Newsom signed several AI-related safety bills, requiring companies to disclose details about how their AI models are trained.

However, not all proposed measures were successful. Governor Newsom vetoed SB 1047, a bill that sought to enforce stricter safety and transparency standards on AI developers, citing concerns over its broad scope. Similarly, the Texas Responsible AI Governance Act (TRAIGA) faces an uncertain future as it moves through the legislative process.

The U.S. still lags behind global counterparts like the European Union, which has already introduced a comprehensive framework in the EU AI Act.

Federal challenges and industry outlook

The federal government's approach to AI regulation remains unclear. Since returning to office, President Donald Trump has shown a preference for deregulating the industry, revoking several Biden-era policies aimed at reducing AI risks. In January, Trump signed an executive order suspending certain AI-related regulations, including export rules on advanced models.

Despite the lack of consensus at the federal level, companies continue to push for targeted regulation. In November, Anthropic called for swift legislative action within the next 18 months, warning that the opportunity for proactive risk management was closing. OpenAI echoed similar sentiments in a recent policy document, urging the government to provide clearer guidance and infrastructure support to foster AI development responsibly.

As the regulatory debate continues, the stakes for the AI industry remain high. Without a cohesive strategy, the U.S. risks falling behind international competitors while struggling to address the potential risks posed by this transformative technology.
