Mind Your Augmentation: The Key to Decoupling Dense Self-supervised Learning

ICLR 2024

Xi'an Jiaotong University   EPFL

Abstract

Dense Self-Supervised Learning (SSL) constructs positive pairs at the level of regions or points, aiming to preserve local features, for example those of individual objects. However, when the paired views have limited overlap, existing approaches tend to couple objects by leaking information from neighboring contextual regions. In this paper, we first quantitatively identify and confirm the existence of this coupling phenomenon. We then address it with a remarkably simple yet highly effective solution comprising a novel augmentation method, Region Collaborative Cutout (RCC), and a corresponding decoupling branch. Importantly, our design is versatile and can be seamlessly integrated into existing SSL frameworks, whether based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs). We conduct extensive experiments, incorporating our solution into two CNN-based and two ViT-based methods, and the results confirm the effectiveness of our approach. Moreover, we provide quantitative and qualitative evidence that our method significantly improves the disentanglement of feature representations among objects.

Empirical Study

We identify that recent dense-level pre-trained models tend to use contextual information as an optimization shortcut, a phenomenon we term the coupling issue.

Method Overview

We introduce a mixture-based decoupling branch that cooperates with dense-level SSL models.
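To make the idea of a region-aware cutout augmentation concrete, the sketch below masks a random patch inside each specified region of an image. This is an illustrative simplification, not the paper's actual RCC algorithm: the function name `region_cutout`, the box format, and the `mask_frac` parameter are all assumptions for this example.

```python
import numpy as np

def region_cutout(image, region_boxes, mask_frac=0.3, rng=None):
    """Cutout-style masking applied independently inside each region.

    NOTE: an illustrative sketch, not the RCC method from the paper.
    image: (H, W, C) array; region_boxes: list of (y0, x0, y1, x1) boxes.
    mask_frac sets the masked patch size relative to the region size.
    """
    rng = np.random.default_rng(rng)
    out = image.copy()
    for (y0, x0, y1, x1) in region_boxes:
        h, w = y1 - y0, x1 - x0
        mh = max(1, int(h * mask_frac))
        mw = max(1, int(w * mask_frac))
        # Pick a top-left corner so the patch stays inside the region.
        cy = rng.integers(y0, max(y0 + 1, y1 - mh))
        cx = rng.integers(x0, max(x0 + 1, x1 - mw))
        out[cy:cy + mh, cx:cx + mw] = 0  # zero out the patch
    return out
```

Masking within each region, rather than anywhere in the image, suppresses the contextual cues that the paper identifies as the source of the coupling shortcut.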

Poster

BibTeX

@inproceedings{qiu2024mind,
title={Mind Your Augmentation: The Key to Decoupling Dense Self-Supervised Learning},
author={Congpei Qiu and Tong Zhang and Yanhao Wu and Wei Ke and Mathieu Salzmann and Sabine S{\"u}sstrunk},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=WQYHbr36Fo}
}