Poster

ARS: Adaptive Reward Scaling for Multi-Task Reinforcement Learning

Myung-Sik Cho · Jong Eui Park · Jeonghye Kim · Youngchul Sung

Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Multi-task reinforcement learning (RL) faces significant challenges because tasks vary both in complexity and in the reward distributions they induce. To address these issues, we propose Adaptive Reward Scaling (ARS), a novel framework that dynamically adjusts reward magnitudes and leverages a periodic network reset mechanism. ARS introduces a history-based reward scaling strategy that ensures balanced reward distributions across tasks, enabling stable and efficient training. The reset mechanism complements this approach by mitigating overfitting and ensuring robust convergence. Empirical evaluations on the Meta-World benchmark demonstrate that ARS significantly outperforms baseline methods, achieving superior performance on challenging tasks while maintaining overall learning efficiency. These results validate the effectiveness of ARS in tackling diverse multi-task RL problems, paving the way for scalable solutions in complex real-world applications.
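
The abstract does not spell out the scaling rule, so the following is a minimal, hypothetical Python sketch of what history-based per-task reward scaling with a periodic reset hook could look like. The names HistoryRewardScaler, maybe_reset, history_len, reset_interval, and agent.reinitialize_networks() are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of history-based per-task reward scaling with a
# periodic network reset, illustrating the idea described in the abstract.
from collections import deque

import numpy as np


class HistoryRewardScaler:
    """Scale each task's rewards by a running statistic of recent magnitudes."""

    def __init__(self, num_tasks: int, history_len: int = 1000, eps: float = 1e-8):
        self.eps = eps
        # One fixed-length history of absolute rewards per task.
        self.histories = [deque(maxlen=history_len) for _ in range(num_tasks)]

    def scale(self, task_id: int, reward: float) -> float:
        self.histories[task_id].append(abs(reward))
        # Normalize by the mean absolute reward observed recently for this
        # task, so tasks with large raw rewards do not dominate the shared
        # learner.
        denom = float(np.mean(self.histories[task_id])) + self.eps
        return reward / denom


def maybe_reset(agent, step: int, reset_interval: int = 200_000) -> None:
    """Periodically reinitialize the agent's networks (assumed agent API)."""
    if step > 0 and step % reset_interval == 0:
        agent.reinitialize_networks()  # hypothetical method name
```

Dividing by a running mean of absolute rewards keeps each task's scaled rewards near unit magnitude, which is one plausible way to realize the "balanced reward distributions across tasks" the abstract describes.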
