California's AI safety bill, which would require 'kill switches' for large AI models, is drawing criticism that it could drive AI startups out of the state and harm open-source models.

Silicon Valley venture capitalists and tech workers are sounding the alarm that a proposed 'AI Safety Bill' under consideration in the California legislature could force some major AI companies to change the way they do business.
Silicon Valley in uproar over Californian AI safety bill
Tech Companies Challenge California AI Safety Legislation - WinBuzzer
https://winbuzzer.com/2024/06/08/tech-companies-challenge-california-ai-safety-legislation-xcxwbn/
Silicon Valley on Edge as New AI Bill Advances in California
https://www.pymnts.com/artificial-intelligence-2/2024/silicon-valley-on-edge-as-new-ai-regulation-bill-advances-in-california/
The misguided backlash against California's SB-1047
https://garymarcus.substack.com/p/the-misguided-backlash-against-californias
Uproar in California tech sector over proposed new bill
https://www.cryptopolitan.com/uproar-in-california-tech-firms-over-ai-bill/
Silicon Valley Is On Alert Over An AI Bill in California - Bloomberg
https://www.bloomberg.com/news/newsletters/2024-06-06/silicon-valley-is-on-alert-over-an-ai-bill-in-california
SB 1047, commonly known as the 'AI Safety Bill,' is an AI-related bill under consideration in California, introduced by state Senator Scott Wiener. The bill aims to establish 'common-sense safety standards' for companies that create large-scale AI models exceeding certain size and cost thresholds. It passed the California Senate in May 2024 and is now awaiting a vote in the State Assembly.
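The bill defines coverage in legal text rather than code, but the coverage test reduces to a simple predicate. The sketch below is purely illustrative: the threshold figures (roughly 10^26 training operations and $100 million in training cost, as cited in coverage of the bill's drafts), the field names, and the way the two tests combine are all assumptions, not statutory language.

```python
# Illustrative sketch only: thresholds, field names, and the AND/OR logic
# are assumptions based on figures cited in coverage of SB 1047's drafts,
# not statutory text.

from dataclasses import dataclass

TRAINING_OPS_THRESHOLD = 1e26       # ~10^26 training operations (reported)
TRAINING_COST_THRESHOLD_USD = 1e8   # ~$100 million training cost (reported)

@dataclass
class TrainingRun:
    total_operations: float  # integer/floating-point operations used in training
    cost_usd: float          # total cost of the training run

def is_covered_model(run: TrainingRun) -> bool:
    """Return True if a model would fall under the assumed coverage test.

    How the two tests combine has varied across drafts; OR is assumed here.
    """
    return (run.total_operations > TRAINING_OPS_THRESHOLD
            or run.cost_usd > TRAINING_COST_THRESHOLD_USD)

print(is_covered_model(TrainingRun(total_operations=2e25, cost_usd=9e7)))  # False
print(is_covered_model(TrainingRun(total_operations=3e26, cost_usd=2e8)))  # True
```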
The AI Safety Bill requires AI developers to implement a 'kill switch' that can reliably shut down an AI system to prevent the model from causing 'serious harm.' It would also require developers to disclose their compliance efforts to a new 'Frontier Models Division' within the California Department of Technology. Companies that fail to comply could be sued and face civil penalties.
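The bill does not prescribe how the shutdown capability must be built. As a purely illustrative sketch, assuming a hosted inference service, a 'kill switch' often reduces to an irreversible control flag checked before any request is served; every name below is hypothetical.

```python
# Purely illustrative sketch of a "kill switch" pattern; the bill does not
# prescribe an implementation, and every name here is hypothetical.

import threading

class ModelServer:
    def __init__(self) -> None:
        self._shutdown = threading.Event()  # set once, never cleared

    def kill_switch(self) -> None:
        """Irreversibly stop serving requests."""
        self._shutdown.set()
        # A real deployment would also revoke API keys, stop inference
        # workers, and halt any autonomous jobs the model is running.

    def generate(self, prompt: str) -> str:
        if self._shutdown.is_set():
            raise RuntimeError("model shut down by operator")
        return f"(model output for: {prompt!r})"  # placeholder inference

server = ModelServer()
print(server.generate("hello"))  # served normally
server.kill_switch()
# server.generate("hello")  # would now raise RuntimeError
```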

At the time of writing, there is no federal AI law in the United States, and states are increasingly pushing their own regulations, Bloomberg noted. California, where the bill is under consideration, is home to major AI companies such as OpenAI and Anthropic, so if it were enacted, 'it could have a direct impact on companies at the forefront of the AI industry,' Bloomberg said.
Wiener, the bill's author, told Bloomberg: 'It would be great if Congress would step forward and pass reasonable, strong AI innovation and safety legislation. This kind of legislation should be enforced at the federal level. But on data privacy, social media, net neutrality, and even technology issues that have strong bipartisan support, it's been very difficult, and sometimes impossible, for Congress to act.'
The bill is supported by Geoffrey Hinton and Yoshua Bengio, who have been vocal about the potential existential threats posed by AI.
Critics counter that the bill could impose an unrealistic burden on open-source developers, who make their code available for anyone to review and modify, by making them responsible for ensuring their models are not misused by malicious actors. They also worry about how much power would be concentrated in the Frontier Models Division the bill would create.

Rohan Pandey, founder of Reworkd AI, an open-source AI startup, said of the AI Safety Bill, 'No one thought this would pass. It seems pretty ridiculous. Maybe the rules will make sense in a few years' time when we know the criteria for determining whether an AI model is safe or unsafe. But GPT-4 only came out a year ago. It's way too early to jump into legislation.'
Martin Casado, a general partner at venture capital firm Andreessen Horowitz, said startup founders concerned about the bill have approached him to ask whether they should leave California.
A hot topic in the startup community is whether AI developers should be held liable when others abuse their systems.
Wiener said he is amending the bill to address some of these concerns: he has revised the requirements for covered AI models to clarify that open-source developers are not liable if their technology is misused, and that the shutdown requirement does not apply to open-source models. 'I'm a big supporter of AI. I'm a big supporter of open source. I'm not trying to stifle AI innovation. But I think it's important that people pay attention to safety as these developments occur,' Wiener said, indicating he is prepared to make further changes.
Wiener also argued that hardline AI advocates with large online followings have mobilized against the bill, 'sometimes spreading highly inflammatory and inaccurate information.' In particular, he stressed that the bill contains no provision requiring companies to obtain government permission to train AI models, and that the liability risk it creates is 'extremely limited.'

Casado also criticized the drafting process, arguing that the bill reflects the views of a few 'major outliers' who are pessimistic about AI's long-term risks to humanity and does not represent the consensus of the technology industry.
In response, Wiener noted that the bill grew out of dinners and meetings he has held with AI industry stakeholders over the past 18 months, including leading AI companies such as OpenAI, Meta, Google, and Microsoft, as well as Andreessen Horowitz, where Casado works. He asserted that the bill was 'drafted in a very open environment.'
The bill is scheduled for a vote in the state legislature by the end of August 2024, and Wiener said he is 'optimistic that there is a path to submitting it to the governor.' Meanwhile, California Governor Gavin Newsom has urged caution about excessive regulation: 'If we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position.'