Poster
DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications
Ibrahim Fayad · Max Zimmer · Martin Schwartz · Philippe Ciais · Fabian Gieseke · Aurélien de Truchis · Sarah Brood · Gabriel Belouze · Alexandre d'Aspremont
In recent years, significant efforts have been directed towards adapting self-supervised multimodal learning for Earth observation. However, existing methods produce coarse patch-sized embeddings, limiting their effectiveness and integration with other modalities like LiDAR. To close this gap, we present DUNIA, an approach to learn pixel-sized embeddings through cross-modal alignment between images and full-waveform LiDAR data. As the model is trained in a contrastive manner, the embeddings can be directly leveraged for a variety of environmental monitoring tasks in a zero-shot setting. In our experimental evaluation, we demonstrate the effectiveness of the embeddings on seven such tasks (canopy height mapping, fractional canopy cover, land cover mapping, tree species identification, plant area index, crop type classification, and per-pixel waveform-based vertical structure mapping). The results show that the embeddings, combined with zero-shot classifiers, often outperform specialized supervised models both quantitatively and qualitatively, even in low-labeled data regimes. In the fine-tuning setting, we show strong low-shot capabilities, with performance near or better than the state of the art on five out of six tasks.
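The abstract describes training pixel-level embeddings via contrastive cross-modal alignment between image pixels and co-located LiDAR waveforms. As a rough illustration only (the paper's actual architecture, loss, and hyperparameters are not specified here), a symmetric InfoNCE objective over matched pixel/waveform embedding pairs could be sketched as follows; the function names, the temperature value, and the use of plain NumPy are all assumptions for exposition:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def pixel_waveform_infonce(img_emb, lidar_emb, temperature=0.07):
    """Hypothetical symmetric InfoNCE loss for N matched pairs.

    img_emb:   [N, D] pixel embeddings from the image encoder
    lidar_emb: [N, D] embeddings of the co-located full waveforms
    Row i of each array is assumed to describe the same location,
    so the i-th pair is the positive and all others are negatives.
    """
    z_i = l2_normalize(img_emb)
    z_l = l2_normalize(lidar_emb)
    logits = z_i @ z_l.T / temperature  # [N, N] scaled cosine similarities
    idx = np.arange(len(logits))

    def xent(lg):
        # numerically stable cross-entropy with the diagonal as targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->LiDAR and LiDAR->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

After training with such an objective, a zero-shot prediction for a pixel can be obtained by retrieving the nearest waveform (or label prototype) in the shared embedding space, which is what makes the per-pixel granularity useful.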