Alternative to SB 1047

A Safe Harbor for Independent AI Evaluation in California?

Hi Scott,

Just a personal thought from an investing perspective: 1047 seems likely to be about one week to six months away from an SBF 2.0-style scandal.

The bill sponsors likely aren't being honest with you or themselves about their level of support (they've cherry-picked views from the people they've funded). There's also a heavy sampling of fiction in their writing, hidden by a lack of peer review (key writings often go to blogs whose readers are funded by the same sources, not to mainstream ML conferences). That community has admitted publicly that it didn't forecast the rise of AI this soon (others did, though). Why should we trust that their views on the problems are grounded in reality, much less their views on the solutions?

Most builders and investors have been busy with other things until now, but they are catching on.

Even with the small changes (which are appreciated), the bill is still fundamentally awkward: it tries to combine a research body, a standards agency, and a prosecution agency with no walls between them. No separation of powers, no checks and balances, and no transparency. Staffed by "experts" whose MO is half-truths, and handed a blank check for "reasonable" funding, this could be a disaster for California (and the world).

It's not too late to change course (everyone makes mistakes when encountering a new domain), but it would require decisive action. Scrap the current text, take the clear ask that well-respected AI security researchers have already made in an open letter (https://sites.mit.edu/ai-safe-harbor/), and do exactly that, nothing more. The broader market and the scientific community can provide the accountability that the less-informed supporters of 1047 were promised, and much more (because of their scale). Security researchers do need access to the data, though, and legal protections against being banned from APIs, as that letter outlines.

We can support that and help connect you to the Stanford faculty behind the letter above.

There are other things to do after that which would cement your reputation as pro-tech and pro-AI-safety, for example:

1. Requiring companies deploying large models at a certain scale to transparently publish key benchmarks and network statistics online. This creates a market for accountability, similar to SEC filings for public companies. We could see the emergence of lemon-spotting investors like Hindenburg Research, but for powerful AI models (a rough sketch of what such a disclosure could look like follows this list).

2. Requiring companies deploying AI models at a certain scale to interoperate via open protocols. This addresses the monopolies, enshittification, and thickets of incompatible services we saw in the walled gardens of social media from the mid-2010s onwards. Respected commentators like Cory Doctorow have asked explicitly for interoperability in powerful technologies, and it would be a pro-civilization move, likely accelerating the positive sectors of the economy 10x or more. Open protocols leading to words, rather than secrecy leading to fear.
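To make point 1 concrete, here is a minimal sketch in Python of what a machine-readable public disclosure for a deployed model could look like. All field names, thresholds, and values here are purely illustrative assumptions, not a proposed standard; the real schema would be for the scientific community and a standards process to define.

```python
# Hypothetical sketch of a public model disclosure record (point 1 above).
# Every field name and value is an illustrative assumption, not a real standard.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ModelDisclosure:
    developer: str                       # entity deploying the model
    model_name: str                      # public identifier of the deployed model
    parameter_count: int                 # "network statistics": total parameter count
    training_compute_flop: float         # estimated training compute, in FLOPs
    benchmarks: dict = field(default_factory=dict)  # benchmark name -> reported score


# Example filing, analogous in spirit to an SEC disclosure.
filing = ModelDisclosure(
    developer="ExampleAI Inc.",
    model_name="example-model-v1",
    parameter_count=70_000_000_000,
    training_compute_flop=1e25,
    benchmarks={"MMLU": 0.79, "GSM8K": 0.83},
)

# Publishing the record as JSON would let third parties (auditors, short sellers,
# researchers) scrape and compare filings across companies.
print(json.dumps(asdict(filing), indent=2))
```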

Happy to draft open letters (with researchers) + legislation (with you all) on this.

Best, Chris

Chris Lengerich (@chrislengerich)
April 29, 2024