

Poster

Provable In-Context Vector Arithmetic via Retrieving Task Concepts

Dake Bu · Wei Huang · Andi Han · Atsushi Nitanda · Qingfu Zhang · Hau-San Wong · Taiji Suzuki

Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

In-context learning (ICL) has garnered significant attention for its ability to grasp functions/tasks from demonstrations. Recent studies suggest the presence of a latent task/function vector in LLMs during ICL. Merullo et al. (2024) showed that LLMs leverage this vector alongside the residual stream for Word2Vec-like vector arithmetic when solving factual-recall ICL tasks. Additionally, recent work has empirically highlighted the key role of Question-Answer data in enhancing factual-recall capabilities. Despite these insights, a theoretical explanation remains elusive. To move one step forward, we propose a theoretical framework building on empirically grounded hierarchical concept modeling. We develop an optimization theory showing how nonlinear residual transformers trained via gradient descent on cross-entropy loss perform factual-recall ICL tasks via vector arithmetic. We prove 0-1 loss convergence and show strong generalization, including robustness to concept recombination and distribution shifts. These results elucidate the advantages of transformers over static embedding predecessors. Empirical simulations corroborate our theoretical insights.
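As a rough sketch of the vector-arithmetic mechanism the abstract alludes to (the notation below is illustrative, not taken from the paper): the final-position residual stream is read as the query embedding shifted by a latent task vector, and the answer is decoded by a nearest-neighbor readout over the vocabulary,

\[
h_{\mathrm{query}} \;\approx\; E(x_{\mathrm{query}}) + \theta_{\mathrm{task}},
\qquad
\hat{y} \;=\; \arg\max_{w \in \mathcal{V}} \big\langle U(w),\, h_{\mathrm{query}} \big\rangle ,
\]

where \(E\) and \(U\) denote embedding and unembedding maps and \(\theta_{\mathrm{task}}\) is the latent task/function vector. In the Word2Vec spirit of Merullo et al. (2024), something like \(E(\text{"Poland"}) + \theta_{\text{get-capital}}\) would then land near \(U(\text{"Warsaw"})\); the paper's theory concerns how gradient descent on cross-entropy trains a nonlinear residual transformer to realize this kind of retrieval.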
