Commentary

A Heated California Debate Offers Lessons for AI Safety Governance

The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.

Published on October 8, 2024

In late August, the California legislature managed a feat that has eluded the U.S. Congress: passing a bipartisan bill designed to ensure the safe development of advanced artificial intelligence (AI) models. That legislation, Senate Bill (SB) 1047, aimed to regulate frontier technologies emerging from an industry closely tied to California that is now raising hundreds of billions of dollars in investment and promising to reshape work, health care, national security, and even routine tasks of daily life.

On September 29, Governor Gavin Newsom vetoed the bill. His decision—following a pitched debate exposing rifts among AI researchers, technology companies, and policymakers—was tracked by leaders around the world. In his veto message, Newsom affirmed his support for the bill’s safety objectives, announced a new effort to craft guardrails for AI deployment, and committed to continue working with the legislature, but he ultimately concluded that a different approach was needed.

The problem the bill sought to address, at least in principle, is straightforward: the upcoming generation of frontier models could benefit millions of people. However, these models could also pose serious risks of harm to California’s 40 million residents and people around the world. For example, there are worries they could be weaponized to attack critical infrastructure or to create biological or cyber weapons. Many companies have voluntarily agreed to test their models before release to reduce these risks. But no law requires them to do so.

Nothing in the governor’s veto message denies that SB 1047 was a serious and layered attempt to formalize a scaffolding of mandatory governance around the most powerful upcoming AI systems, those just beyond the horizon of today’s flagship models such as OpenAI’s GPT-4, Google’s Gemini, and Meta’s Llama 3.2. The bill would have required developers of covered models to publish and implement written safety and security protocols; endeavor to prevent misuse; and take “reasonable care” to avoid causing or enabling catastrophic harms such as the creation of chemical or biological weapons, cyber attacks on critical infrastructure, or mass casualty events. It would also have required frontier model developers to build a “shutdown capability” enabling them to disable models within their control in the event that such harms materialized.

These governance mechanisms may sound procedural, but it is telling that they ignited such a heated and political debate during the bill’s introduction, evolution, and demise. 

Opponents argued that the bill was based on alarmist fantasies. They warned it would hamstring U.S. and Californian competitiveness, devastate the open ecosystem upon which small developers and countless innovations depend, and even impede the development of AI safety approaches. Moreover, they claimed, it would encroach on the federal government’s national security responsibilities, inviting a tangle of overlapping state rules without solving more realistic, near-term problems including disinformation, discrimination, and job displacement.

Supporters argued that by mandating basic governance—such as testing and safety planning—the bill was merely formalizing voluntary commitments already agreed to by most developers, including White House–brokered commitments in 2023 and a framework announced at the AI Seoul Summit in 2024. They noted that the bill’s “reasonable care” standard echoed existing law, promoting safety in an industry where “moving fast and breaking things” could have unimaginable consequences. Worries about open source, they contended, had been substantially resolved by amendments to the bill between its introduction in February and its passage by the legislature in August. Other bills could tackle other risks. Most urgently, supporters argued, the bill was a targeted and necessary response to very real threats. AI developer Anthropic, for example, warned that its “work with biodefense experts, cyber experts, and others shows a trend towards the potential for serious misuses in the coming years – perhaps as little as 1-3 years.”

A vanguard legislative effort around a technology gripping the public imagination would have been enough to make the bill national news. However, California’s unique role in both technological innovation and policy leadership added to the bill’s significance. As the governor’s veto message highlighted, the state is home to a majority of the world’s leading AI firms. Moreover, California policymaking across a host of issues, including carbon emissions, digital privacy, and social policy, has a long history of impacting the national landscape.

Meanwhile, California and many other states have been increasingly assertive technology regulators over the past few years, flexing their broad powers to rein in perceived abuses in the tech sector. Whether at the federal or state level, the debate smoldering over AI governance seems likely to continue.

Given that, we can draw important lessons from California’s experience.

First, SB 1047 was often cast as a zero-sum tradeoff between innovation and regulation. But the central issue is actually a complementary balance of innovation and safety—both for human betterment. Supporters and opponents alike generally shared these objectives, differing on the likelihood and imminence of various harms. While some disagreement is reasonable, future debate would benefit from greater transparency and clarity around this central balance. Those opposing regulation must be explicit about the severity, likelihood, and imminence of harms they expect could result from the use or misuse of models and their derivatives. Proponents must clarify whether and how far they are prepared to burden or curtail model development and use.

Second, future efforts should preserve SB 1047’s “trust but verify” approach. Californians report a mix of worry and guarded optimism around the technology’s impacts, including on scientific progress, health care, and education. Fulfilling their hopes for AI will require allowing private companies latitude to build, refine, and iterate. It will be vitally important to inculcate a widespread internal culture of responsible development—as society has done over many years in fields as varied as medical research, automotive engineering, and education. Voluntary commitments are steps in the right direction. However, safety governance cannot be left entirely to the discretion of private industry. Establishing a mandatory baseline, with threshold obligations of transparency and diligence, facilitates experimentation while enabling policymakers, external researchers, and society itself to understand and contribute to risk management decisions that affect us all.

Third, greater awareness of existing law could enrich and contextualize future debates. Under a long-established and widespread body of law—the common law of torts—AI developers must already take reasonable care that their products do not cause harm to others. SB 1047, particularly as it was amended during the legislative process to incorporate this well-worn “reasonable care” requirement, would not have imposed a completely new standard so much as clarified the steps needed under California law to meet an existing one.

It’s true that the bill would have broadened the state’s authority—allowing California to ensure that companies faced costs and injunctive relief if they refused to test advanced models. For now, the status quo will remain the common law of torts, under which developers make private judgment calls in anticipation of retrospective liability. With this framing, future debates should turn less on whether to impose liability than on how to develop standards and practices to meet these existing legal obligations.

Fourth, SB 1047 included ideas, less central to the debate, that could aid in this effort. For example, the bill would have required annual third-party audits to assess AI developers’ safety and security controls as well as annual statements of compliance. Critics argued that this would prematurely calcify the emergent field of AI safety, hanging liability on nonexistent best practices. However, independent assessment to ensure the accuracy of developers’ disclosures and the efficacy of basic safety and security controls could promote the evolution of flexible standards, clarify developers’ existing tort law obligations, and increase public confidence as AI is introduced into ever-wider use.

Regulation is part of a healthy innovation ecosystem. SB 1047 became a lightning rod in part because its initial incarnation included arguably extraneous provisions such as perjury penalties and used comparatively unfamiliar legal language with potentially far-reaching impacts on the open model ecosystem. While the divisions the bill exposed within the AI community persist, pragmatic proponents of safety regulation can make headway by heeding the lessons of SB 1047 and tailoring their future efforts accordingly.

Carnegie India does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie India, its staff, or its trustees.