

Spotlight Poster

STAIR: Improving Safety Alignment with Introspective Reasoning

Yichi Zhang · Siyuan Zhang · Yao Huang · Zeyu Xia · Zhengwei Fang · Xiao Yang · Ranjie Duan · Dong Yan · Yinpeng Dong · Jun Zhu

Poster: Thu 17 Jul, 11 a.m. — 1:30 p.m. PDT

Oral presentation: Oral 5A Safety and Security
Thu 17 Jul, 10 a.m. — 11 a.m. PDT

Abstract:

Ensuring the safety and harmlessness of Large Language Models (LLMs) has become as critical as their performance in applications. However, existing safety alignment methods typically suffer from safety-performance trade-offs and susceptibility to jailbreak attacks, primarily due to their reliance on direct refusals for malicious queries. In this paper, we propose STAIR, a novel framework that integrates SafeTy Alignment with Introspective Reasoning. We enable LLMs to identify safety risks through step-by-step analysis by self-improving chain-of-thought (CoT) reasoning with safety awareness. STAIR first equips the model with a structured reasoning capability and then advances safety alignment via iterative preference optimization on step-level reasoning data generated using our newly proposed Safety-Informed Monte Carlo Tree Search (SI-MCTS). We further train a process reward model on this data to guide test-time searches for improved responses. Extensive experiments show that STAIR effectively mitigates harmful outputs while better preserving helpfulness, compared to instinctive alignment strategies. With test-time scaling, STAIR achieves safety performance comparable to that of Claude-3.5 against popular jailbreak attacks.
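
The abstract describes a process reward model (PRM) that guides test-time search over step-level reasoning traces. The sketch below illustrates one plausible form of such a search, a PRM-guided step-level beam search; the names `generate_steps` and `prm_score` are hypothetical placeholders, not the paper's actual interfaces, and the scoring shown is a stand-in for a trained PRM. The paper's training-time SI-MCTS likewise searches over step-level reasoning trees, but its specific value aggregation is not reproduced here.

    # Minimal sketch: step-level beam search guided by a process reward
    # model (PRM). `generate_steps` and `prm_score` are hypothetical
    # placeholders, not the paper's actual interfaces.
    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        steps: list = field(default_factory=list)  # partial reasoning trace
        score: float = 0.0                         # PRM score of the trace

    def generate_steps(prompt, trace, k):
        """Hypothetical: sample k candidate next steps from the policy LLM."""
        return [f"step-{len(trace)}-option-{i}" for i in range(k)]

    def prm_score(prompt, trace):
        """Hypothetical: score a partial trace for safety and helpfulness.
        A real PRM would be the trained reward model from the paper."""
        return float(len(trace))  # placeholder score

    def prm_guided_search(prompt, beam_width=4, expand=4, max_steps=8):
        beam = [Candidate()]
        for _ in range(max_steps):
            pool = []
            for cand in beam:
                for step in generate_steps(prompt, cand.steps, expand):
                    trace = cand.steps + [step]
                    pool.append(Candidate(trace, prm_score(prompt, trace)))
            # Keep only the highest-scoring partial traces.
            beam = sorted(pool, key=lambda c: c.score, reverse=True)[:beam_width]
        return beam[0]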
