

Poster

OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction

Huang Huang · Fangchen Liu · Letian Fu · Tingfan Wu · Mustafa Mukadam · Jitendra Malik · Ken Goldberg · Pieter Abbeel

Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Vision-Language-Action (VLA) models aim to predict robotic actions from visual observations and language instructions. Existing approaches require fine-tuning pre-trained vision-language models (VLMs) because visual and language features are fed independently into downstream policies, which degrades the pre-trained semantic alignments. We propose OTTER, a novel VLA architecture that leverages these existing alignments through explicit, text-aware visual feature extraction. Instead of processing all visual features, OTTER selectively extracts and passes to the policy transformer only the task-relevant visual features that are semantically aligned with the language instruction. This allows OTTER to keep the pre-trained vision-language encoders frozen, preserving and utilizing the rich semantic understanding learned from large-scale pre-training and enabling strong zero-shot generalization. In simulation and real-world experiments, OTTER significantly outperforms existing VLA models, demonstrating strong zero-shot generalization to novel objects and environments.
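
To make the "text-aware visual feature extraction" idea concrete, the sketch below (not the authors' code) shows one plausible reading: language tokens from a frozen text encoder act as queries that cross-attend over frozen per-patch visual features, so only instruction-relevant visual information is forwarded to the policy transformer. Module names, dimensions, and the use of `torch.nn.MultiheadAttention` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch under the assumptions stated above: text-conditioned
# selection of visual features via cross-attention, with both encoders
# assumed frozen (their outputs arrive here as plain tensors).
import torch
import torch.nn as nn


class TextAwareVisualExtractor(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention: text tokens query the visual patch tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor, visual_patches: torch.Tensor) -> torch.Tensor:
        # text_tokens:    (B, T, dim) from a frozen language encoder
        # visual_patches: (B, P, dim) from a frozen vision encoder
        # Returns (B, T, dim): visual features weighted by their relevance
        # to each language token, ready to feed a policy transformer.
        selected, _ = self.cross_attn(
            query=text_tokens, key=visual_patches, value=visual_patches
        )
        return self.norm(selected)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for frozen encoder outputs.
    B, T, P, D = 2, 12, 196, 512
    extractor = TextAwareVisualExtractor(dim=D)
    text = torch.randn(B, T, D)
    patches = torch.randn(B, P, D)
    task_relevant = extractor(text, patches)
    print(task_relevant.shape)  # torch.Size([2, 12, 512])
```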
