Homomorphic Encryption for LLMs Challenge: $1000 Prize

Submission Deadline: 5/25/23

Submit: challenge@context.fund

Awarded to the best open-source model and paper that answer this homomorphic security question:

Threat model: a rogue employee at a cloud AI provider like OpenAI who can read LM API requests that may include email data (or, equivalently, a hacker with access to OpenAI's logs).

Task: Given a dataset X and a fraud dataset Z, improve the data efficiency of fine-tuning a model using a transformation y with a key k such that the following hold (a sketch of the evaluation interface follows the list):

  1. Without k, y(X) is hard for a human to read
  2. With k, y(X) is easy to read
  3. A model trained on y(X) has high prediction accuracy (close to that of a model trained on X)
  4. We can train a fraud discriminator on the encoded data without knowing the underlying x, i.e. it maps y(Z) → 1 and y(X) → 0 with high probability.
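
As a point of reference, here is a minimal sketch of how a submission's y might be scored against criteria 3 and 4. All names here (KeyedTransform, evaluate_submission, and the train_lm / train_discriminator / accuracy callables) are hypothetical placeholders, not part of any official harness; criteria 1 and 2 would need human or attack-based evaluation.

    # Hypothetical evaluation sketch (names are placeholders, not an official harness).
    from typing import Callable, Sequence

    TokenSeq = Sequence[int]
    KeyedTransform = Callable[[TokenSeq, bytes], TokenSeq]   # y(x, k) -> encoded tokens

    def evaluate_submission(y: KeyedTransform, k: bytes,
                            X: list, Z: list,
                            train_lm, train_discriminator, accuracy) -> dict:
        """Score a submission against criteria 3 and 4.

        train_lm(dataset) -> model, train_discriminator(pos, neg) -> classifier,
        and accuracy(model, dataset) -> float are assumed to be provided.
        """
        yX = [y(x, k) for x in X]   # encoded benign data
        yZ = [y(z, k) for z in Z]   # encoded fraud data

        # Criterion 3: accuracy on encoded data should be close to the plaintext baseline.
        plaintext_acc = accuracy(train_lm(X), X)
        encoded_acc = accuracy(train_lm(yX), yX)

        # Criterion 4: a discriminator trained only on encoded data should separate
        # y(Z) from y(X) with high probability.
        disc = train_discriminator(yZ, yX)
        disc_acc = accuracy(disc, yZ + yX)

        # Criteria 1 and 2 (unreadable without k, readable with k) need human or
        # attack-based evaluation and are not captured here.
        return {"plaintext_acc": plaintext_acc, "encoded_acc": encoded_acc,
                "discriminator_acc": disc_acc}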

A baseline example is to take y as a substitution cipher over the token vocabulary, where k is the substitution order: if we completely retrain a model on y(X), it should reach the same accuracy as one trained on X, and y(X) is hard for a human to understand. But (a) it's vulnerable to statistical attack, and (b) it may be data-inefficient to train.
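
A minimal sketch of this baseline, assuming integer token IDs and a GPT-2-sized vocabulary (both placeholder choices): k is a secret permutation of the vocabulary and y simply relabels each token through it.

    # Substitution-cipher baseline: y permutes the token vocabulary, k is the permutation.
    import random

    VOCAB_SIZE = 50_257  # placeholder, e.g. GPT-2's vocabulary size

    def make_key(seed: int, vocab_size: int = VOCAB_SIZE) -> list:
        """k: a secret permutation of the token IDs."""
        perm = list(range(vocab_size))
        random.Random(seed).shuffle(perm)
        return perm

    def y(tokens: list, k: list) -> list:
        """Encode: map each token ID through the permutation."""
        return [k[t] for t in tokens]

    def y_inverse(tokens: list, k: list) -> list:
        """Decode with the key: apply the inverse permutation."""
        inv = [0] * len(k)
        for plain, cipher in enumerate(k):
            inv[cipher] = plain
        return [inv[t] for t in tokens]

    # A model retrained from scratch on y(X) sees a relabeled but structurally
    # identical distribution, so in principle it can match a model trained on X.
    # Token frequencies are preserved, however, which is exactly what makes the
    # scheme vulnerable to statistical (frequency-analysis) attack.
    if __name__ == "__main__":
        k = make_key(seed=42)
        x = [464, 3290, 318, 257, 1332]   # arbitrary token IDs
        assert y_inverse(y(x, k), k) == x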

Is there a better such y? What is the data efficiency of training with it?

If we can do this, we make personal data safer when using deployed AI models (preserving the economic value of the individual), and we make cloud logs less of a honeypot target for hackers.

*Partial credit may also be awarded to multiple submissions and to the open-source dependencies of the submissions.