Representation Learning of Remote Sensing Images
We investigate better representations of remote sensing images using contrastive pre-training on data from multiple sensors.
We leverage large-scale training data that combine remote sensing imagery from multiple sensors with globally distributed scenes to build a sensor-based, location-invariant momentum contrast model for representation learning. Building on the MoCo framework, we propose two methods to evaluate the learned representations. In our transfer learning tasks, the results align well with our expectations and help us characterize the quality of the learned representations. Overall, we demonstrate that natural augmentation and data fusion help bridge the gap between self-supervised and supervised learning models.
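To make the contrastive setup concrete, here is a minimal NumPy sketch of the two core MoCo ingredients: the InfoNCE loss, where a query is contrasted against its positive key and a queue of negatives, and the momentum (EMA) update of the key encoder. In the sensor-based, location-invariant setting described above, the positive pair would be two views of the same location captured by different sensors; here the positives are simulated as mild perturbations, and all function names and sizes are illustrative, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """InfoNCE: each query is matched against its positive key (index 0)
    and a queue of negative keys, then scored with cross-entropy."""
    q, k_pos, queue = l2_normalize(q), l2_normalize(k_pos), l2_normalize(queue)
    l_pos = np.sum(q * k_pos, axis=1, keepdims=True)   # (N, 1) positive logits
    l_neg = q @ queue.T                                # (N, K) negative logits
    logits = np.concatenate([l_pos, l_neg], axis=1) / temperature
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()                      # positives sit at index 0

def momentum_update(theta_q, theta_k, m=0.999):
    """Key-encoder weights track the query encoder via an exponential
    moving average, as in MoCo."""
    return m * theta_k + (1.0 - m) * theta_q

# Toy example: 8 query/key pairs in a 16-dim embedding space,
# with a queue of 64 negatives.
q = rng.normal(size=(8, 16))
k = q + 0.05 * rng.normal(size=(8, 16))  # stand-in for a second-sensor view
queue = rng.normal(size=(64, 16))
loss = info_nce_loss(q, k, queue)
```

Because the positive keys are close to their queries while queue entries are random, the loss rewards embeddings that are invariant to the cross-sensor perturbation, which is the intuition behind using multi-sensor views as natural augmentations.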
We continue to explore this direction and are pleased with the solutions accomplished so far in remote sensing applications.