Change detection in remote sensing images is an essential tool for analyzing a region at different points in time. It has wide-ranging applications in monitoring environmental and man-made changes, as well as in the associated decision-making and prediction of future trends. Deep learning methods such as Convolutional Neural Networks (CNNs) and Transformers have achieved remarkable success in detecting significant changes between two images captured at different times. In this paper, we propose a Mamba-based Change Detector (M-CD) that segments the regions of interest more accurately. Mamba-based architectures offer linear-time training and an improved effective receptive field compared to Transformers. Our experiments on four widely used change detection datasets demonstrate significant improvements over existing state-of-the-art (SOTA) methods.
|
M-CD consists of three main components: the Siamese Image Encoder (SIE), the Difference Module (DM), and the Mask Decoder (MD). The two input images are passed through the two branches of the encoder to generate image features. Since both branches operate on the same image modality, their weights are shared, which also reduces computational complexity. The SIE extracts features at multiple scales through a cascade of four Visual State Space (VSS) blocks interleaved with downsampling operations. The DM jointly analyzes features from both images at each scale and produces combined multi-scale features. These are further transformed by the Mask Decoder using Channel-Averaged VSS blocks and upsampling operations. The transformed features are finally passed through a classifier that segments out the regions of significant change.
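The data flow above — a shared-weight (Siamese) encoder, a per-scale difference module, and a decoder that upsamples and classifies — can be sketched in a few lines. This is a minimal NumPy illustration of the pipeline's structure only, not the paper's model: the real M-CD uses VSS blocks and learned layers, whereas here a single shared weight matrix, absolute difference, and a fixed threshold are stand-ins, and all shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, weights):
    """Shared-weight encoder branch: features at two scales (stand-in for the SIE)."""
    f1 = np.tanh(img @ weights)  # fine-scale features
    f2 = f1[:, ::2]              # crude "downsampled" coarse scale
    return [f1, f2]

def difference_module(feats_a, feats_b):
    """Combine per-scale features from both images (absolute difference here)."""
    return [np.abs(a - b) for a, b in zip(feats_a, feats_b)]

def decode_and_classify(diffs, threshold=0.5):
    """Upsample the coarse scale, fuse with the fine scale, threshold into a mask."""
    fine, coarse = diffs
    up = np.repeat(coarse, 2, axis=1)[:, :fine.shape[1]]  # nearest-neighbor upsample
    fused = (fine + up) / 2
    return (fused.mean(axis=1) > threshold).astype(int)   # 1 = changed region

H, C = 8, 16                      # toy "image": H pixel rows, C channels
W = rng.normal(size=(C, C))       # one weight matrix shared by both branches
img_t1 = rng.normal(size=(H, C))
img_t2 = img_t1.copy()
img_t2[:4] += 3.0                 # simulate a change in the first half

mask = decode_and_classify(difference_module(encode(img_t1, W),
                                             encode(img_t2, W)))
```

Because the encoder weights are shared, identical inputs yield identical features, so unchanged rows produce an exactly zero difference and are never flagged — the same property that makes weight sharing natural for same-modality change detection.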
|
Qualitative results on four public datasets. White represents true positives, black represents true negatives, green represents false positives and red represents false negatives. |
|
Comparison of M-CD with SOTA CD methods. F1 denotes the F1 score, IoU denotes the Intersection-over-Union metric, and OA denotes overall pixel accuracy. IN1k denotes training data from the ImageNet-1k dataset. The best result is indicated in bold and the second-best result is underlined. Our method outperforms existing methods on all datasets.
A Mamba-based Siamese Network for Remote Sensing Change Detection (hosted on arXiv)
Acknowledgements |