California Governor vetoes landmark AI safety bill, citing innovation concerns

News Desk September 30, 2024

California Governor Gavin Newsom has vetoed a landmark artificial intelligence (AI) safety bill that had drawn strong opposition from major tech companies.

The proposed legislation aimed to impose some of the first regulations on AI in the United States, but Newsom argued that it could stifle innovation and drive AI firms out of California.

Senator Scott Wiener, who authored the bill, criticized the veto, stating it allows the development of "extremely powerful technology" without any government oversight.

The legislation would have mandated safety testing for advanced AI models and required developers to build in a "kill switch" to isolate or deactivate systems that posed a threat. It also would have established official oversight of the development of "frontier models," the most advanced AI systems.

In his veto statement, Newsom said the bill failed to account for the context in which AI systems are deployed, applying stringent standards even to a large model's most basic functions.

At the same time, he announced plans to develop safeguards against AI risks and said he would seek input from leading experts.

Despite the veto, Newsom has signed 17 other bills in recent weeks, including measures targeting misinformation and deepfakes created with generative AI. Because California is home to many of the world's largest AI companies, its regulatory decisions carry significant implications for the industry.

Wiener lamented the veto, emphasizing that it leaves AI companies without binding restrictions from U.S. policymakers amid Congress's ongoing paralysis over tech regulation. Major firms like OpenAI, Google, and Meta have expressed concerns that such regulations could hinder the development of crucial technologies.
