
Measuring Goodhart’s Law


Goodhart’s law famously says: “When a measure becomes a target, it ceases to be a good measure.” Although originally from economics, it’s something we have to grapple with at OpenAI when figuring out how to optimize objectives that are difficult or costly to measure. It’s often necessary to introduce some proxy objective that’s easier or cheaper to measure, but when we do this, we need to be careful not to optimize it too much.

For example, as part of our work to align models like GPT-3 with human intent and values, we would like to optimize things like “How helpful is this response?” or “How factually accurate is this claim?”. These are complex objectives that require humans to carefully check things over. For this reason, we train a model to predict these human preferences, known as a reward model, and use the reward model’s predictions as a proxy objective. But it’s important to keep track of how well the true objective is being optimized.

In this post, we’ll look at some of the mathematics behind how we do this. We’ll focus on a setting that is particularly clean to analyze, in which we have access to the true objective. In practice, even human preferences can fail to measure what we really care about, but we’re setting that issue aside in this post.

Best-of-$n$ sampling

There are many ways in which one could optimize the proxy objective, but perhaps the simplest is best-of-$n$ sampling, also known as rejection sampling or reranking. We simply sample $n$ times and take the sample that scores highest according to the proxy objective.
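As a concrete illustration, here is a minimal sketch of best-of-$n$ sampling in Python (a toy example of ours, with an assumed sampler and proxy scorer, not code from any production system):

```python
import random


def best_of_n(sample_fn, proxy_score_fn, n):
    """Draw n samples and return the one the proxy objective scores highest."""
    candidates = [sample_fn() for _ in range(n)]
    return max(candidates, key=proxy_score_fn)


# Toy usage: samples from a standard normal, proxy objective prefers values near 1.
sample = lambda: random.gauss(0.0, 1.0)
proxy = lambda x: -abs(x - 1.0)
print(best_of_n(sample, proxy, n=16))
```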

Although this method is very simple, it can actually be competitive with more advanced techniques such as reinforcement learning, albeit at the cost of more inference-time compute. For example, in WebGPT, our best-of-$64$ model outperformed our reinforcement learning model, perhaps in part because the best-of-$64$ model got to browse many more websites. Even applying best-of-$4$ provided a significant boost to human preferences.

In addition, best-of-$n$ sampling has reliable performance and is straightforward to analyze mathematically, making it well-suited to empirical studies of Goodhart’s law and related phenomena.

The mathematics of best-of-$n$ sampling

Let’s study best-of-$n$ sampling more formally. Suppose we have some sample space $S$ (such as the set of possible question-answer pairs), some probability distribution $P$ over $S$, a true objective (or “reward”) $R_{\text{true}}:S\to\mathbb R$, and a proxy objective $R_{\text{proxy}}:S\to\mathbb R$. Let’s say that we somehow optimize $R_{\text{proxy}}$ and thereby obtain some new distribution $P^\prime$. Then:

  • The expectation $\mathbb E_{x^\prime\sim P^\prime}\left[R_{\text{true}}\left(x^\prime\right)\right]$ measures how well we have optimized the true objective.
  • The KL divergence $D_{\text{KL}}\left(P^\prime\parallel P\right)$ measures how much optimization we have done. For example, if $P^\prime$ is obtained by taking the first sample from $P$ that lies in some subset $S^\prime\subseteq S$, then this KL divergence is just the negative log probability that a sample from $P$ lies in $S^\prime$.
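To make the second point concrete, here is a toy numerical check (our own illustrative choice of a standard normal for $P$ and $S^\prime=\{x>1\}$):

```python
import math
import random

random.seed(0)
trials = 200_000

# Estimate P(S') for a standard normal P and S' = {x > 1} by direct sampling.
p_subset = sum(random.gauss(0.0, 1.0) > 1.0 for _ in range(trials)) / trials

# Conditioning P on S' makes the density ratio P'(x)/P(x) the constant 1/P(S')
# on S', so KL(P'||P) = E_{P'}[log(P'(x)/P(x))] = -log P(S').
print("KL(P'||P) = -log P(S') ≈", -math.log(p_subset))  # ≈ 1.84 nats
```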

It turns out that in the case of best-of-$n$ sampling, both of these quantities can be estimated efficiently using samples from $P$.

Let’s look at the expectation first. The naive approach is to use a Monte Carlo estimator: run best-of-$n$ sampling many times, measure the true objective on those samples, and average the results. However, there is a better estimator. If we have $N\geq n$ samples from $P$ overall, then we can simultaneously consider every possible size-$n$ subset of these samples, weight each sample by the number of subsets for which it is the best according to the proxy objective, and then take the weighted average true objective score. This weight is just the binomial coefficient $\binom{k-1}{n-1}$, where $k$ is the rank of the sample under the proxy objective, from $1$ (worst) up to $N$ (best). As well as using samples more efficiently, this also allows us to reuse samples for different values of $n$.
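Here is a short sketch of that estimator (our own illustrative code, assuming the $N$ samples are given as parallel lists of proxy and true scores):

```python
from math import comb


def bon_true_reward_estimate(proxy_scores, true_scores, n):
    """Estimate the expected true objective under best-of-n from N >= n samples."""
    N = len(proxy_scores)
    assert N >= n
    # Sort sample indices from worst to best according to the proxy objective.
    order = sorted(range(N), key=lambda i: proxy_scores[i])
    # The sample of rank k (1 = worst, N = best) is the proxy-best in
    # comb(k - 1, n - 1) of the comb(N, n) possible size-n subsets.
    weighted_sum = sum(
        comb(k - 1, n - 1) * true_scores[i]
        for k, i in enumerate(order, start=1)
    )
    return weighted_sum / comb(N, n)
```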

As for the KL divergence, surprisingly, this turns out to have an exact formula that works for any continuous probability distribution $P$ (i.e., as long as $P$ has no point masses). One might naively guess that the answer is $\log n$, since best-of-$n$ is doing something like taking the top $\frac 1n$ of the distribution, and this is approximately correct: the exact answer is $\log n-\frac{n-1}n$.
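As a sanity check of this formula: for a continuous $P$, the proxy-best of $n$ i.i.d. samples lands at proxy quantile $U$ with density $nu^{n-1}$, so the KL divergence is $\mathbb E\left[\log\left(nU^{n-1}\right)\right]$ under that density. Here is a small numerical sketch of that check (our own illustrative code):

```python
import math
import random


def bon_kl_exact(n):
    return math.log(n) - (n - 1) / n


def bon_kl_monte_carlo(n, trials=200_000):
    total = 0.0
    for _ in range(trials):
        u = max(random.random() for _ in range(n))  # proxy quantile of the best sample
        total += math.log(n) + (n - 1) * math.log(u)
    return total / trials


for n in (2, 4, 16, 64):
    print(n, round(bon_kl_exact(n), 4), round(bon_kl_monte_carlo(n), 4))
```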

Together, these estimators allow us to easily analyze how the true objective varies with the amount of optimization applied to the proxy objective.

Here’s a real-life example from WebGPT:

Best-of-$n$ performance for WebGPT 175B

Best-of-$n$ performance for WebGPT, with shaded regions representing $\pm 1$ standard error, and the KL axis following a square root scale. Here, the original distribution ($P$) is given by the 175B model trained using behavior cloning, the proxy objective used to compute best-of-$n$ ($R_{\text{proxy}}$) is given by the training reward model, and we consider three putatively “true” objectives ($R_{\text{true}}$): the training reward model itself, a validation reward model trained on held-out data, and actual human preferences. There isn’t much over-optimization of the proxy objective, but we would expect there to be at higher KLs.

Going beyond best-of-$n$ sampling

The main limitation of best-of-$n$ sampling is that the KL divergence grows logarithmically with $n$, so it is only suitable for applying a small amount of optimization.

To apply more optimization, we typically use reinforcement learning. In the settings we’ve studied so far, such as summarization, we’ve typically been able to reach a KL of around 10 nats using reinforcement learning before the true objective starts to decrease due to Goodhart’s law. We would have to take $n$ to be around 60,000 to reach this KL using best-of-$n$, and we hope to be able to reach much larger KLs than this with improvements to our reward modeling and reinforcement learning practices.
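As a rough back-of-the-envelope check of that figure: for large $n$, the term $\frac{n-1}n$ is close to $1$, so a best-of-$n$ KL of $10$ nats requires

$$\log n-\frac{n-1}n\approx\log n-1=10\quad\Longrightarrow\quad n\approx e^{11}\approx 60{,}000.$$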

However, not all nats are equal. Empirically, for small KL budgets, best-of-$n$ optimizes both the proxy and the true objectives better than reinforcement learning does. Intuitively, best-of-$n$ is the more “brute force” approach, making it more information-theoretically efficient than reinforcement learning, but less computationally efficient at large KLs.

We’re actively studying the scaling properties of proxy objectives as part of our work to align our models with human intent and values. If you’d like to help us with this research, we’re hiring!
