Impacts Analysis of California's SB 1047
Executive Summary
Background
Framework for evaluating AI supervisory processes:
- Is it certain? Does it have high precision and recall? (a toy scoring sketch follows this list)
- Is it efficient? Is it simple, fast, low-cost, and comprehensible to a wide range of people?
- Is it adaptable? Can it handle unknown risks and can the process itself be adapted?
- Is it accountable? Does it encourage transparency and is it accountable to the public and scientific community?
- Does it minimize unintended harms and moral hazards?
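Since "precision and recall" here borrows language from classifier evaluation, here is a minimal sketch of how those metrics could score a supervisory process, assuming hypothetical labels for which reviewed models the process flagged and which were actually risky (all counts are invented for illustration):

```python
# Toy illustration: scoring a hypothetical supervisory process on
# precision and recall. All labels below are invented for illustration.

# For each reviewed model: did the process flag it, and was it actually risky?
flagged        = [True, True, False, True, False, False]
actually_risky = [True, False, False, True, True, False]

true_pos  = sum(f and r for f, r in zip(flagged, actually_risky))
false_pos = sum(f and not r for f, r in zip(flagged, actually_risky))
false_neg = sum(not f and r for f, r in zip(flagged, actually_risky))

precision = true_pos / (true_pos + false_pos)  # of flagged models, fraction actually risky
recall    = true_pos / (true_pos + false_neg)  # of risky models, fraction flagged

print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.67 recall=0.67
```

A process can trade one metric against the other: flagging everything maximizes recall at the cost of precision, while flagging nothing that might be contested does the reverse.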
Analysis of the current SB 1047 proposal:
- While the proposal is well-intentioned, it tries to solve a complex research problem through the legal liability system, which is ill-suited to the task
- Key terms are vaguely defined, introducing moral hazard and the potential for regulatory abuse
- It may not even address the right research problems: other important risks from AI, including threats from less advanced models, are not covered
- It is unclear how the bill interacts with the scientific, open-source, and consumer communities, which already provide fast supervision with broader representation
- Concentrates power (even military power) in a small, minimally accountable Frontier Model Division, which is a highly attractive target for regulatory capture
- Allocates power to scarce intermediaries (developers of specialized economic models of AI, law, and policy) for whom no norms or competitive marketplace exist
- May incentivize geopolitical maneuvering for control of key regulatory positions
- It is inflexible compared to open scientific processes like peer review and open letters, which have a long track record as supervisory tools for research questions
Suggestions:
- Fund:
- Competitive grant programs to reduce uncertainty about problems and solutions through research and standardization. The community currently underinvests in these areas, especially in analyzing the deployment of AI models.
- Advise:
- Provide key input to ongoing community processes to develop eval sets into official standards
- Provide key input to ongoing community processes to develop responsible disclosure processes for vulnerabilities
- Legislate:
- Mandate industry adoption of standards proposed by the community which mitigate urgent, near-term risks:
- Pass narrowly-scoped bills which mandate additional context for AI-generated content (e.g. watermarking, political ads; a toy watermark-detection sketch follows this list)
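As one concrete illustration of what "additional context for AI-generated content" can mean, below is a toy sketch of statistical watermark detection in the spirit of published "green list" schemes (e.g., Kirchenbauer et al., 2023). The word-level tokens, hash-based vocabulary split, and z-score threshold are simplifying assumptions for illustration, not any standard's actual mechanism:

```python
# Toy sketch of statistical watermark detection for generated text,
# loosely in the style of "green list" schemes. Word-level tokens,
# the hash seeding, and the threshold are simplifications.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the "green" half of the
    vocabulary, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # ~50% of tokens are green at each step

def detect_watermark(text: str, z_threshold: float = 4.0) -> bool:
    """Flag text whose green-token fraction is improbably high under
    the null hypothesis of unwatermarked (50/50) text."""
    tokens = text.split()
    if len(tokens) < 2:
        return False
    n = len(tokens) - 1
    green = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    # z-score against a Binomial(n, 0.5) null
    z = (green - 0.5 * n) / math.sqrt(0.25 * n)
    return z > z_threshold

print(detect_watermark("some ordinary unwatermarked sentence here"))  # False
```

A matching generator would bias sampling toward each step's green tokens; since unwatermarked text hovers near a 50% green fraction, a high z-score is statistical evidence that the text was watermarked.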
Related
- EFF Response
- Software & Information Industry Association Response
- AI and Accountability: Policymakers Risk Halting AI Innovation - Dean W. Ball in the Orange County Register
- Regulating Frontier Models in AI - Will Rinehart in The Dispatch
- r/LocalLLaMA Response (94 Comments)
- X/Twitter Response (37 Comments)
- "It almost seems like science fiction to see an actual bill like this. I support thoughtfully considering safety implications with any AI, but I think this bill is likely naive about what is actually possible to do before a model is even trained (note it says, "before initiating training of that covered model"). I think one consequence might be that researchers feel unsafe trying out ideas because they could be held responsible for even trying to train them, which would push AI research outside California. I'm hoping there are better solutions than that."
...
"My solution would be don't use models for things they haven't been validated for. But don't criminalize the researcher who created the model."
- Kenneth O. Stanley, artificial intelligence researcher, former Professor of Computer Science, founder @ Maven, author of "Why Greatness Cannot Be Planned"
- "This bill is too broad and would likely have many unintended consequences. In particular, criminalizing model development is a step not to be taken lightly; I'm surprised to see it being seriously discussed in the California legislature. A more useful bill would be narrower in scope and focus on addressing specific near-term harms." - Ethan Fast, Co-Founder, VCreate (Foundation models for T-cell receptors)
- "SB-1047 could reduce AI safety, through reducing transparency, collaboration, diversity, and resilience." - Jeremy Howard, co-founder @ Answer.ai and Fast.ai, Digital Fellow @ Stanford, former President and Chief Scientist @ Kaggle (Full Comment)
- "California Bill 1047 is an attack on AI innovation." - Martin Casado, General Partner @ a16z (Full Comment)
- "We cannot let this bill pass." - Guillaume Verdon, Founder @ Extropic (Full Comment)
- "This is the most brazen attempt to hurt startups and open source yet." - Brian Chau, Executive Director @ Alliance for the Future (Full Comment)
- "about 12 months ago, the Center For AI Safety's "Statement on AI Risk" warned that AI could cause human extinction and stoked fears of AI taking over. This alarmed leaders in Washington. But many people in AI pointed out that this dystopian science-fiction scenario had little basis in reality." - Andrew Ng, computer science professor @ Stanford, founder @ DeepLearning.ai, AI Fund, Coursera, Landing.ai, co-founder and former head of Google Brain, former Chief Scientist at Baidu (Full Comment)
- "Policy makers should not listen to fringe AI doomers." - Yann LeCun, ACM Turing Award Laureate, Professor at NYU. Chief AI Scientist at Meta. Researcher in AI, Machine Learning, Robotics. (Full Comment)
- "Wrapping up new AI models in red tape effectively cements the biggest tech players as winners of the AI race" - Todd O'Boyle, Tech Policy Director @ Chamber of Progress ([1] [2])
- "...there's just absolutely no need for it. It looks to me like it's just going to empower people with more resources. My advice to government officials is that if they're so lax in enforcing antitrust laws, this will continue to do the opposite, making AI companies even more powerful." - Ben Recht, engineering and computer science professor @ UC Berkeley (Full Comment)