
Silicon Valley in turmoil over California’s AI safety law

Artificial intelligence heavyweights in California are protesting a bill that would force tech companies to adhere to a strict safety framework, including creating a ‘kill switch’ to disable their powerful AI models, in a growing battle over regulatory control of the technology.

The California legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI startups, OpenAI, Anthropic and Cohere, as well as the large language models built by Big Tech companies such as Meta.

The bill, which passed the state Senate last month and would go to a vote in the State Assembly in August, would require AI groups in California to guarantee to a newly created state body that they will not develop models with “a dangerous capacity,” such as creating biological or nuclear weapons or aiding cyber attacks.

Developers would be required to report their safety testing and introduce a so-called kill switch to disable their models, according to the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act.

But the law has become the focus of a backlash from many in Silicon Valley over claims that it will force AI startups to leave the state and prevent companies such as Meta from releasing open-source models.

“If someone wants to come up with regulations to stifle innovation, they can hardly do better,” said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet’s Google and Baidu in China, and who sits on Amazon’s board. “It creates massive liabilities for science-fiction risks, stoking fear in anyone who dares to innovate.”

AI’s rapid growth and enormous potential have raised concerns about the technology’s safety, with billionaire Elon Musk, an early investor in ChatGPT maker OpenAI, last year calling it an “existential threat” to humanity. This week, a group of current and former OpenAI employees published an open letter warning that “frontier AI companies” lack sufficient government oversight and pose “serious risks” to humanity.

The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based nonprofit led by computer scientist Dan Hendrycks, the safety advisor to Musk’s AI start-up xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Democratic Senator Scott Wiener, who introduced the legislation, said: “Fundamentally, I want AI to succeed and innovation to continue, but let’s try to get ahead of any safety risks.”

He added that it was a “light-touch bill . . . that simply requires developers training massive models to perform basic safety assessments to identify major risks and take reasonable steps to mitigate those risks.”

But critics say the bill is overly restrictive and would impose a costly compliance burden on developers, especially at smaller AI companies. Opponents also argue that it focuses on hypothetical risks while creating “extreme” liability for founders.

One of the fiercest criticisms is that the bill will hurt open-source AI models – in which developers make source code freely available for the public to build on – such as Meta’s flagship LLM, Llama. The bill would potentially hold open model developers liable for bad actors who manipulate their models to cause harm.

Arun Rao, chief product manager for generative AI at Meta, said in a post on X last week that the bill was “unworkable” and would “end open source in [California].”

“The net tax impact of destroying the AI industry and driving out companies could be in the billions as both companies and well-paid workers leave,” he added.

Wiener said of the criticism: “This is the technology sector, it doesn’t like any regulation, so it’s not surprising to me at all that there would be resistance.”

Some responses were “not entirely accurate,” he said, adding that he planned to make changes to the bill that would clarify its scope.

The proposed amendments state that open-source developers are not liable for models “that undergo significant refinement,” meaning that if an open-source model is subsequently modified to a sufficient degree by a third party, it is no longer the responsibility of the group that made the original model. They also state that the kill switch requirement will not apply to open-source models, he said.

Another amendment states that the bill would only apply to large models “that cost at least $100 million to train,” and therefore would not affect most smaller startups.

“There are competitive pressures that are impacting these AI organizations and actually pushing them to cut corners on safety,” CAIS’s Hendrycks said, adding that the bill was “realistic and reasonable,” with most people wanting “some basic oversight.”

Still, a senior Silicon Valley venture capitalist said they were already fielding questions from founders asking whether they would have to leave the state as a result of the potential legislation.

“My advice to anyone who asks is that we stay and fight,” the person said. “But this will put pressure on open source and the startup ecosystem. I think some founders will choose to leave.”

Governments around the world have taken steps to regulate AI in the past year as the technology has surged in popularity.

US President Joe Biden introduced an executive order in October that aimed to set new standards for AI safety and national security, protect citizens from AI privacy risks and combat algorithmic discrimination. The UK government outlined plans in April to draw up new legislation to regulate AI.

Critics have questioned the speed with which California’s AI bill, spearheaded by CAIS, emerged and moved through the state Senate.

The majority of funding for CAIS comes from Open Philanthropy, a San Francisco-based charity with roots in the effective altruism movement. It provided grants worth approximately $9 million to CAIS between 2022 and 2023, in line with its “focus area of potential risks from advanced artificial intelligence”. The CAIS Action Fund, a division of the nonprofit founded last year, registered its first lobbyists in Washington, D.C., in 2023 and has spent about $30,000 on lobbying this year.

Wiener has received funding over several election cycles from wealthy venture capitalist Ron Conway, managing partner of SV Angel, and investors in AI startups.

Rayid Ghani, professor of AI at Carnegie Mellon University’s Heinz College, said there was “some overreaction” to the bill, adding that any legislation should focus on specific use cases of the technology rather than regulating the development of models.