The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.
Scott Kohler is a nonresident scholar at Carnegie California. His research explores the nexus of technology, law, and public policy, with a focus on evolving approaches to regulation and structures of governance. He has spent the past decade advising technology companies on cutting-edge legal and regulatory challenges, including at Google, where he served as lead counsel for Alphabet’s privacy governance program, and Nest Labs, where he led the global public policy team and was responsible for privacy, safety, and regulatory affairs.
Kohler previously served as senior legal advisor at the U.S. Department of Energy, advising the Secretary of Energy and programs across the department on issues spanning its energy, environmental, and scientific missions. Earlier in his career, he worked at the Partnership for Public Service and in private legal practice at Cravath, Swaine & Moore LLP. Before attending law school, he worked at CNN, receiving a Peabody Award for contributions to the network’s reporting on international terrorism.
Kohler is a graduate of Columbia Law School and Duke University, where he also served as a research associate in public policy studies, co-authoring Casebook for The Foundation: A Great American Secret (PublicAffairs, 2007).
The bill has galvanized a discussion about innovation, safety, and the appropriate role of government—particularly at the subnational level—in AI regulation.