The terms “innovation” and “public protection” often find themselves head-to-head in national politics. On September 29th, Governor Gavin Newsom prioritized the former when he vetoed a highly contentious AI bill. The bill, formally known as the Safe and Secure Innovation for Frontier Artificial Intelligence Act, called for stricter regulations on AI development in the state of California. No other bill debated or passed in the U.S. on AI governance has been as comprehensive, leading many to view it as a potential blueprint for future national AI policy. Citing the belief that it would stifle technological innovation, Newsom rejected the polarizing measure that has divided California’s Silicon Valley.
What did experts mean when they said the bill proposed strict and substantial changes to California’s technological landscape? SB 1047 specifically targeted artificial intelligence development in California, home to Silicon Valley, the bustling ecosystem of high-tech innovation in the United States. Its provisions called not only for equitable development but also for damage control, holding noncompliant or passive companies liable for harm their models might cause to users. The bill mandated that the Secretary of Government Operations develop a comprehensive plan to evaluate the impact of deepfakes on state agencies, businesses, and residents. Under the act, developers of powerful AI models would have been required to build in a “kill switch” and establish safety protocols before training their models. The act also prohibited developers from publicly deploying AI models posing a legitimate risk of critical harm to the general public. Safety incidents would have to be reported directly to the state Attorney General, and non-compliant developers would face civil action brought by the Attorney General. The act would further have established CalCompute, a public cloud computing cluster aimed at promoting ethical and equitable tech development. Developers would have been expected to submit a general report of their safety protocols by January 1, 2026.
It is crucial to note that, following debates, revisions, and suggestions since its introduction in February, the bill was significantly weakened in last-minute negotiations on August 18th. Under these revisions, the bill no longer allowed the Attorney General to sue companies for negligent safety practices before harm occurred. It also no longer required AI labs to certify their safety testing under penalty of perjury, nor to provide assurances that their systems would not be harmful. In short, the revised bill could only be enforced after harm had already been caused. While the proposal remained comprehensive, any legal action taken after an incident could take years to conclude. It would allow companies, as Eric Schmidt notes, to “roll the dice and let the lawyers clean up the mess after the fact.”
Since its introduction in early February 2024, the bill has been at the heart of a heated debate among tech executives, politicians, and ethics experts. The arguments of those who stood fiercely in opposition centered mainly on its cost, its targets, and the belief that it would stunt technological innovation. Opponents included lobbyists from tech giants such as Meta and Google, who contended that it would be nearly impossible to test for all the potential harms of new AI technology while simultaneously innovating and deploying new models. Democratic Representative Nancy Pelosi also opposed the bill, calling it “well-intentioned but ill informed.” Pelosi, like many other critics in Congress, believed that the bill’s measures, especially those holding AI companies accountable, might inadvertently stifle innovation at smaller AI businesses. On the question of cost, SB 1047 would have mandated safety testing for advanced AI models, those estimated to cost more than $100 million to develop, and required protective elements such as a “kill switch” that would demand significant sacrifices in both budget and labor.
Critics further contended that the bill was limited in scope, targeting large and powerful computing models rather than smaller AI systems equally capable of causing public harm. From a broader standpoint, regardless of how the bill affects tech developers and the public, many believe its aim is misdirected: it is those who use AI for malicious purposes, rather than its developers, who should be held most accountable for the consequences of their actions. In June, just four months after the bill was proposed, Y Combinator, a prominent investor in small start-ups, argued in a letter signed by over 100 start-ups that the responsibility for the misuse of large language models should rest “with those who abuse these tools, not with the developers who create them.”
Despite massive pushback from tech giants and politicians, SB 1047 remains strongly supported by AI ethics activists and smaller tech developers. Among its most prominent supporters are AI expert Yoshua Bengio and SpaceX and Tesla’s Elon Musk. Those in favor largely acknowledge that amendments should be made to further refine, specialize, and clarify its protective policies. These proponents do not claim the bill is perfect, but argue that it takes the U.S. one step closer to establishing a genuine framework to protect the public from malicious parties. Spokesperson Bob Salladay challenged the reasoning behind Newsom’s veto, arguing that prioritizing public safety does not necessarily eradicate innovation: “It’s not a binary choice. We can protect the public and foster innovation at the same time.” Regulators are expected to hold Big Tech accountable and demand genuine transparency about data usage. Peter Guagenti, president of Tabnine, argued that while the bill “may affect their cost of doing business, it will build trust in AI more broadly and ultimately help [California] build a more vibrant, more profitable ecosystem.” Even in its weakened form, if SB 1047’s policies prompt just one AI company to think critically about technological risk, that alone is a gain.
The bill gained national media traction after it was publicly backed by SAG-AFTRA, a union representing many Hollywood actors. A group called “Artists for Safe AI” further issued an open letter in favor of the bill, signed by prominent names such as J.J. Abrams, Shonda Rhimes, Mark Ruffalo, and Jane Fonda. Their decision to speak out stemmed from personal experiences with malicious and detrimental uses of AI, particularly deepfakes. Hollywood’s involvement greatly amplified recognition of and discussion around the bill, leading many other voices to weigh in.
When asked about his decision to veto the bill, Governor Newsom stated that he does not “believe that this is the best approach to protecting the public from real threats posed by the technology.” His decision was guided by the belief that SB 1047 focuses solely on “the most expensive and large-scale models,” giving the public a false sense of security. Instead, Newsom drew attention to the smaller, specialized models now emerging that may cause equal if not greater harm to the public. The governor concluded by noting that the bill does not take into account whether a system is “deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” His stance seems to indicate a “wait-and-see” approach toward broader and more targeted AI regulation. Newsom defended his veto of the controversial act by emphasizing the bill’s incompleteness and what he saw as its misplaced focus.
Newsom’s decision, however, has been theorized to be heavily influenced by the fact that California’s economy is substantially supported by its technological sector. Since its introduction, the bill faced direct opposition from highly influential and powerful tech developers who drive much of that economy, and Newsom’s final decision was likely shaped by considerations of how the bill might affect Silicon Valley. Within California’s political sphere, much of Newsom’s standing depends on the support of large tech companies. Had he signed the bill despite strong opposition from influential tech leaders, he risked irreversible damage to his position as governor.
Interestingly, prior to Newsom’s veto, a poll from the AI Policy Institute found that 60% of voters were prepared to blame the governor for future AI-related incidents if he vetoed the act. Additionally, 40% of California voters said they would be less likely to vote for Newsom in a future presidential primary if he vetoed the bill. This sentiment likely reflects growing public concern about the unchecked influence and potential risks associated with AI. Many Californians recognize the need for stringent oversight, especially with the state being home to some of the world’s most powerful AI companies. By vetoing the bill, Newsom may be seen as prioritizing tech industry interests over public safety, accountability, and consumer protections, which could appear out of touch with voters’ concerns about the rapid, unregulated development of AI.
While California is home to the reputable and influential Silicon Valley, it is not the only jurisdiction working to develop AI regulations. Despite the veto, Newsom had signed 17 other bills within the previous month touching on the deployment and development of generative AI, particularly on AI watermarking and combating the spread of AI-generated misinformation across online platforms. Colorado passed a substantial AI-focused consumer protection law in May, and other states such as Oregon, Montana, and Tennessee have enacted AI-related legislation, with more in the midst of developing proposed provisions. On a federal level, U.S. legislators have been working toward comprehensive AI legislation. As it stands, the bill will return to the legislature, where a two-thirds majority vote in both houses can override Newsom’s veto. However, veto overrides are rare and have not occurred since 1979. As Jennifer Everett, a partner in Alston & Bird’s technology and privacy group, stated, “This isn’t going to be the end of regulations coming out of California for AI.”
Regardless of the bill’s uncertainties, it is one of the first genuine attempts at regulating AI development and holding powerful parties accountable. The introduction of and discussions around the bill have likely encouraged experts to think critically about the risks inherent in the technology they develop. Beyond prompting conversation, the bill exposes the reality that, in the race for innovation, public safety and security are often sacrificed. SB 1047 would have changed that dynamic, serving as a trailblazer for stricter and more specialized legislation to come. At present, California stands at a crossroads: some AI developers warn that AI may quietly overtake humankind within the next ten years, even as they press forward at negligent speeds.
Looking globally, the stance the U.S. takes on AI development policy has far-reaching effects that extend well beyond its borders. China and the U.S., for example, have stood head-to-head in a technological arms race for the past decade, with the two superpowers perceived as competing for newer, more efficient, and more advanced technologies. Despite differences in approach, funding, and deployment, Professor Jeff Ding of George Washington University notes that AI regulation presents fertile ground for cooperation between the two nations. Both China and the U.S. have acknowledged the risks associated with powerful AI models, and avoiding conflict and harm caused by careless AI systems can serve as common ground for confronting a shared threat. The enactment of legitimate protective bills such as SB 1047 has the potential to influence other nations to follow suit, regardless of their standing with the U.S.
If Newsom’s veto was indeed grounded in the belief that the bill lacked detail and contained ambiguities, many AI policy and ethics experts may view it as a positive. Many critics were careful to point out ambiguities in the text. In particular, terms like “reasonable care” and “materially enabled” are not expanded upon, yet they are crucial to determining who is liable and what penalties may be enforced. As AI models expand rapidly, it becomes difficult to define what “reasonable care” looks like in practice. The development of a central ethics framework within the field of AI is still relatively new, and experts are still experimenting with the proper strategies and adequate terminology; the field cannot afford casualties born of ambiguity. With the rapidly changing landscape of AI, it is crucial that policy experts collaborate with ethics experts to design proposals that are clear, detailed, and comprehensive.
Despite the multi-layered politics underlying Newsom’s veto, what matters going forward is that protective policies progress at the same speed at which AI is developing. Humanity is entering a new frontier whose implications are still unknown. It is therefore crucial to ensure accountability on all fronts: developers, users, and abusers of the technology. SB 1047 marks the beginning of a movement that can only gain momentum so long as individuals continue to stand for their democratic rights in the face of authoritative systems.
Edited by Bill Lin
Megan Tan is in her third year at McGill University, currently pursuing a BA&Sc in Cognitive Science with a minor in Philosophy. As a Staff Writer at Catalyst Publications, Megan aims to bridge her background in Behavioural Science with International Development as her writing is mainly focused on the Health and Technological dimensions of global political issues. Having grown up in Singapore, Qatar, and Canada, Megan strives to use her diverse upbringing to offer a multifaceted lens through which she examines the interplay of technology, health, and cognitive science.