Poster
NegMerge: Sign-Consensual Weight Merging for Machine Unlearning
Hyo Seo Kim · Dongyoon Han · Junsuk Choe
Machine unlearning aims to selectively remove specific knowledge from a trained model. Existing approaches, such as task arithmetic, fine-tune the model on the forget set to create a task vector (i.e., a direction in weight space) that is subtracted from the original weights. However, their effectiveness is highly sensitive to hyperparameter selection, requiring extensive validation to identify the optimal vector among many fine-tuned candidates. In this paper, we propose a novel method that leverages all fine-tuned models trained with varying hyperparameters rather than selecting a single one. Specifically, we aggregate the computed task vectors by retaining only the elements whose signs agree across all vectors. The merged task vector is then negated to induce unlearning on the original model. Evaluations on zero-shot and standard image recognition tasks across ten datasets and three backbone architectures show that our approach achieves superior unlearning performance. It outperforms state-of-the-art methods while requiring similar or fewer computational resources. Our code will be open-sourced.
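The merging rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the function name `negmerge`, the choice to average the sign-consensual elements, and the scaling coefficient `alpha` are assumptions for exposition.

```python
import numpy as np

def negmerge(original, task_vectors, alpha=1.0):
    """Sign-consensual merging of task vectors for unlearning (sketch).

    original:     original weight array
    task_vectors: list of task vectors (fine-tuned weights minus original),
                  one per hyperparameter setting
    alpha:        scaling coefficient (assumed; not given in the abstract)
    """
    stacked = np.stack(task_vectors)  # shape: (num_candidates, ...)
    signs = np.sign(stacked)
    # Keep only elements where every candidate vector agrees on a nonzero sign.
    consensus = np.all(signs == signs[0], axis=0) & (signs[0] != 0)
    # Merge consensual elements (here: by averaging); zero out the rest.
    merged = np.where(consensus, stacked.mean(axis=0), 0.0)
    # Negate the merged task vector to induce unlearning.
    return original - alpha * merged
```

For example, with two candidate vectors that agree in sign on some coordinates and disagree on others, only the agreeing coordinates contribute to the subtraction; the rest of the original weights are left untouched.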