Spotlight Poster

Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG

Wenbin Wang · Yongcheng Jing · Liang Ding · Yingjie Wang · Li Shen · Yong Luo · Bo Du · Dacheng Tao

Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT
Oral presentation: Oral 6B Deep Learning Architectures
Thu 17 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract: High-resolution (HR) image perception remains a key challenge in multimodal large language models (MLLMs). To overcome the limitations of existing methods, this paper moves away from prior dedicated heuristic approaches and revisits the most fundamental route to HR perception: enhancing the long-context capability of MLLMs, motivated by recent advances in long-context techniques such as retrieval-augmented generation (RAG) for general LLMs. To this end, this paper presents the first study exploring the use of RAG to address HR perception challenges. Specifically, we propose Retrieval-Augmented Perception (RAP), a training-free framework that retrieves and fuses relevant image crops while preserving spatial context via the proposed Spatial-Awareness Layout. To accommodate different tasks, the proposed Retrieved-Exploration Search (RE-Search) dynamically selects the optimal number of crops based on model confidence and retrieval scores. Experimental results on HR benchmarks demonstrate the effectiveness of RAP: LLaVA-v1.5-13B achieves a 43% improvement on V* Bench and 19% on HR-Bench. Code can be found in the supplementary material.
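To make the abstract's pipeline concrete, the minimal Python sketch below illustrates the general shape of the idea: retrieve query-relevant crops, choose how many to keep by trading off model confidence against retrieval scores (the role RE-Search plays), and arrange the survivors so their relative positions in the source image are preserved (the role of the Spatial-Awareness Layout). This is not the authors' code: the `Crop` record, the `candidate_ks` scan, the blended `alpha`-weighted criterion, and the caller-supplied `confidence_fn` are all simplifying assumptions made for illustration.

```python
# A minimal, self-contained sketch of the RAP idea from the abstract.
# All names below (Crop, re_search, confidence_fn, alpha) are
# hypothetical stand-ins, not the paper's actual interfaces.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Crop:
    row: int      # grid row of the crop within the original HR image
    col: int      # grid column of the crop within the original HR image
    score: float  # retrieval relevance to the query (higher is better)


def spatial_awareness_layout(crops: List[Crop]) -> List[List[Optional[Crop]]]:
    """Arrange retrieved crops on a compact grid that preserves their
    relative positions in the source image (a stand-in for the paper's
    Spatial-Awareness Layout)."""
    rows = sorted({c.row for c in crops})
    cols = sorted({c.col for c in crops})
    grid: List[List[Optional[Crop]]] = [[None] * len(cols) for _ in rows]
    for c in crops:
        grid[rows.index(c.row)][cols.index(c.col)] = c
    return grid


def re_search(
    crops: List[Crop],
    candidate_ks: List[int],
    confidence_fn: Callable[[List[Crop]], float],
    alpha: float = 0.5,
) -> List[List[Optional[Crop]]]:
    """Greedy stand-in for RE-Search: pick the crop count k whose
    top-k crops maximize a blend of model confidence and mean
    retrieval score, then lay the winners out spatially."""
    ranked = sorted(crops, key=lambda c: c.score, reverse=True)
    best_subset, best_val = ranked[: candidate_ks[0]], float("-inf")
    for k in candidate_ks:
        subset = ranked[:k]
        mean_score = sum(c.score for c in subset) / len(subset)
        val = alpha * confidence_fn(subset) + (1 - alpha) * mean_score
        if val > best_val:
            best_subset, best_val = subset, val
    return spatial_awareness_layout(best_subset)


if __name__ == "__main__":
    crops = [Crop(0, 0, 0.9), Crop(0, 2, 0.7), Crop(1, 1, 0.4), Crop(2, 2, 0.2)]
    # Dummy confidence preferring smaller contexts; in the paper this
    # role is played by the MLLM's confidence on the fused image.
    layout = re_search(crops, candidate_ks=[1, 2, 3],
                       confidence_fn=lambda s: 1.0 / len(s))
    for row in layout:
        print([f"({c.row},{c.col})" if c else "  .  " for c in row])
```

The key design point the sketch tries to capture is that the number of retained crops is not fixed: different tasks need different amounts of visual context, so the selection criterion weighs how confident the model is against how relevant the retrieved crops are.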
