Poster

UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent

Jianke Zhang · Yanjiang Guo · Yucheng Hu · Xiaoyu Chen · Xiang Zhu · Jianyu Chen

Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Recent advancements in Vision-Language-Action (VLA) models have leveraged pre-trained Vision-Language Models (VLMs) to improve generalization capabilities. VLMs, typically pre-trained on vision-language understanding tasks, provide rich semantic knowledge and reasoning abilities. However, prior research has shown that VLMs often focus on high-level semantic content and neglect low-level features, limiting their ability to capture detailed spatial information and understand physical dynamics. These aspects, which are crucial for embodied control tasks, remain underexplored in existing pre-training paradigms. In this paper, we investigate the training paradigm for VLAs and introduce UP-VLA, a Unified VLA model trained with both multi-modal Understanding and future Prediction objectives, enhancing both high-level semantic comprehension and low-level spatial understanding. Experimental results show that UP-VLA achieves a 33% improvement on the CALVIN ABC-D benchmark compared to the previous state-of-the-art method. Additionally, UP-VLA demonstrates improved success rates in real-world manipulation tasks, particularly those requiring precise spatial information.
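The abstract describes a single model trained jointly on a multi-modal understanding objective, a future-prediction objective, and an action objective. The sketch below is a minimal, hypothetical illustration of what such a combined loss could look like in PyTorch; the UnifiedVLA class, its heads, dimensions, pooling, and loss weights are illustrative assumptions and are not taken from the UP-VLA paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a shared backbone with three heads, trained with a
# combined understanding / future-prediction / action loss. All names and
# hyperparameters here are hypothetical, not the authors' implementation.

class UnifiedVLA(nn.Module):
    def __init__(self, d_model=256, vocab_size=1000, image_dim=512, action_dim=7):
        super().__init__()
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.understanding_head = nn.Linear(d_model, vocab_size)  # text-token logits
        self.prediction_head = nn.Linear(d_model, image_dim)      # future visual features
        self.action_head = nn.Linear(d_model, action_dim)         # low-level robot actions

    def forward(self, tokens):
        h = self.backbone(tokens)        # (B, T, d_model)
        pooled = h.mean(dim=1)           # simple pooling, for the sketch only
        return (self.understanding_head(h),
                self.prediction_head(pooled),
                self.action_head(pooled))


def training_step(model, batch, w_understand=1.0, w_predict=1.0, w_action=1.0):
    """Combine the understanding, future-prediction, and action losses."""
    logits, pred_future, pred_action = model(batch["tokens"])
    loss_u = nn.functional.cross_entropy(
        logits.flatten(0, 1), batch["text_targets"].flatten()
    )
    loss_p = nn.functional.mse_loss(pred_future, batch["future_features"])
    loss_a = nn.functional.mse_loss(pred_action, batch["actions"])
    return w_understand * loss_u + w_predict * loss_p + w_action * loss_a


if __name__ == "__main__":
    model = UnifiedVLA()
    batch = {
        "tokens": torch.randn(2, 10, 256),          # fused vision-language embeddings
        "text_targets": torch.randint(0, 1000, (2, 10)),
        "future_features": torch.randn(2, 512),     # target future-frame features
        "actions": torch.randn(2, 7),
    }
    loss = training_step(model, batch)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```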
