Poster
Theoretical guarantees on the best-of-n alignment policy
Ahmad Beirami · Alekh Agarwal · Jonathan Berant · Alexander D'Amour · Jacob Eisenstein · Chirag Nagpal · Ananda Suresh
Poster Session Room TBD
Wed 16 Jul, 11 a.m. – 1:30 p.m. PDT
Abstract:
A simple and effective method for the inference-time alignment of generative models is the best-of-$n$ policy, where $n$ samples are drawn from a reference policy, ranked by a reward function, and the highest-ranking one is selected. A commonly used analytical expression in the literature claims that the KL divergence between the best-of-$n$ policy and the reference policy is equal to $\log (n) - (n-1)/n.$ We disprove this claim and show that the expression is in fact an upper bound on the actual KL divergence. We explore the tightness of this upper bound in different regimes, propose a new estimator for the KL divergence, and empirically show that it provides a tight approximation. We also show that the win rate of the best-of-$n$ policy against the reference policy is upper bounded by $n/(n+1)$, and we derive bounds on the tightness of this characterization. We conclude by analyzing the tradeoffs between the win rate and the KL divergence of the best-of-$n$ alignment policy, demonstrating that very good tradeoffs are achievable with $n < 1000$.
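The sketch below is not code from the paper; it is a minimal illustration of the quantities named in the abstract under simplifying assumptions of my own: a small categorical reference policy (drawn arbitrarily from a Dirichlet distribution) whose outcomes are indexed in increasing reward order with no reward ties, so the exact best-of-$n$ distribution has a closed form. It compares the exact KL divergence of the best-of-$n$ policy from the reference against the commonly cited expression $\log(n) - (n-1)/n$, and prints the $n/(n+1)$ win-rate bound.

```python
# Minimal sketch (assumption: toy categorical reference policy, outcomes sorted
# by strictly increasing reward, no ties). Not the paper's estimator.
import numpy as np


def best_of_n_policy(p, n):
    """Exact best-of-n distribution for a categorical reference policy p
    whose outcomes are indexed in increasing reward order.

    The selected sample is the one with the largest reward among n i.i.d.
    draws from p, so pi_n(x_k) = F(k)^n - F(k-1)^n, where F is the CDF of p.
    """
    F = np.concatenate(([0.0], np.cumsum(p)))
    return F[1:] ** n - F[:-1] ** n


def kl(q, p):
    """KL(q || p) for categorical distributions (p assumed fully supported)."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(50))  # arbitrary toy reference policy

    for n in (2, 4, 16, 128, 1000):
        pi_n = best_of_n_policy(p, n)
        actual_kl = kl(pi_n, p)
        claimed = np.log(n) - (n - 1) / n  # commonly cited expression; an upper bound per the abstract
        win_rate_bound = n / (n + 1)       # upper bound on win rate vs. the reference policy
        print(f"n={n:5d}  KL(pi_n || p)={actual_kl:.4f}  "
              f"log(n)-(n-1)/n={claimed:.4f}  win-rate bound={win_rate_bound:.4f}")
```

On a discrete toy example like this, the computed KL stays below $\log(n) - (n-1)/n$, which is consistent with the abstract's statement that the commonly cited expression is an upper bound rather than an equality.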