NTIA Open Weights Response: Towards A Secure Open Society Powered By Personal AI

Executive Summary

Strong evidence suggests that open models are safer than closed models, owing to efficiencies in science, economics, and cybersecurity. In science and cybersecurity, this safety comes from inspectability and from the ability to share a model with millions of others, distributing the burden of verification and thereby solving the expert problem. This substantially aids users acting as defenders and builders, the majority use case. Economically, open models allow extreme efficiency as well as more equitable distribution, since they can be offered at low or zero cost. They also help keep society from descending back into non-evidence-based thinking and warfare, inspire faith in transparent rule of law, and let anyone generate small-scale examples of their ideas, which is important for clear communication. Closed models are most likely to be abused by deployers, while open models can be abused by either deployers or users. However, the advantages that open models provide to users acting in defender roles outweigh the risks of availability to attackers, roughly by a factor of 100:1 when weighed by the financial surface area that must be defended. Although less secure, we assess that it is acceptable for the government to allow closed APIs to foundation models to remain legal, since they can satisfy commercial and technical considerations of deployment, for example, protection of trade secrets and engineering efficiency.

In our assessment, the government's initial support should consist of administering standardization processes and RFCs (such as this one), legislating well-scoped mandates that add transparency to models and model outputs for high-scale deployments, funding defensive research, supporting responsible disclosure programs that the private market would otherwise underfund, and participating in and administering open standards bodies.

Specific legal and technical designs can add further defensive acceleration to the AI sector. To harden deployments against spam and phishing, we recommend immediately encouraging the use of physical security keys or biometrics tied to anonymous-but-accountable-by-karma user accounts, scaling verification APIs, and promoting adversarial hardening of open models on offline data prior to deployment as a best practice. Watermarking will also improve the traceability of outputs. To harden specific deployments against adversarial distribution, per-use tainting of open model weights may be possible; however, we do not recommend this as a default or as a legal requirement. To harden deployments against financial attacks, licenses such as differentiable credit licenses can be experimented with. Together, traceability and licensing form a credible deterrent to abuse by most malicious users, while the inspectability and shareability of open models form a credible deterrent to abuse by malicious deployers and backdoored models.
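To make the traceability point concrete, the sketch below shows a toy version of a statistical "green list" watermark detector in the style of recent watermarking proposals. All names, the seeding scheme, and the 50% list fraction are illustrative assumptions for exposition, not a design endorsed by this response.

```python
import hashlib
import random


def greenlist(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a RNG from the previous token so the "green list" is reproducible
    # by any party that knows the scheme (hypothetical construction).
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that fall in the green list of their predecessor.
    # Watermarked text, whose generator favored green tokens, should score
    # well above the ~0.5 baseline expected of unmarked text.
    hits = sum(t in greenlist(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A verifier holding only the scheme (not the model weights) can score any text; a deployment would additionally apply a significance test to the score before attributing the text to a watermarked model.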