Deep learning models for change detection of remote sensing images

Overview

Change Detection Models (Remote Sensing)

Python library with Neural Networks for Change Detection based on PyTorch.

⚡ ⚡ ⚡ I am still building this project; if you are interested, don't hesitate to join us!

👯 👯 👯 Contact me at [email protected] or open a pull request directly.


This project is inspired by segmentation_models.pytorch and built on top of it. 😄

🌱 How to use

For now, please refer to local_test.py; a minimal sketch of the workflow is shown below.
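The sketch below summarizes the typical workflow that local_test.py follows. It assumes the segmentation_models.pytorch-style API this project is built on; the import name, the cdp.Unet model class, the CrossEntropyLoss wrapper, and TrainEpoch are assumptions inferred from that API and from the issue logs further down, so double-check them against local_test.py.

```python
# A minimal training sketch (assumed API, modeled on segmentation_models.pytorch).
import torch

import change_detection_pytorch as cdp   # assumed import name for this package

# Build a change-detection model; any encoder from the tables below can be used.
model = cdp.Unet(                         # assumed architecture class (smp-style API)
    encoder_name='resnet34',              # choose an encoder (encoder_name)
    encoder_weights='imagenet',           # and its pre-trained weights (encoder_weights)
    in_channels=3,
    classes=2,                            # change / no-change
)

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
loss = cdp.utils.losses.CrossEntropyLoss()        # assumed loss wrapper (log key: cross_entropy_loss)
metrics = [cdp.utils.metrics.Fscore(activation='argmax2d')]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

train_epoch = cdp.utils.train.TrainEpoch(         # assumed; mirrors ValidEpoch below
    model, loss=loss, metrics=metrics, optimizer=optimizer, device=DEVICE, verbose=True,
)
valid_epoch = cdp.utils.train.ValidEpoch(
    model, loss=loss, metrics=metrics, device=DEVICE, verbose=True,
)

# train_loader / valid_loader are ordinary PyTorch DataLoaders yielding image pairs
# and change masks; see local_test.py for how the datasets are wired up.
# for epoch in range(60):
#     train_logs = train_epoch.run(train_loader)
#     valid_logs = valid_epoch.run(valid_loader)
#     torch.save(model.state_dict(), './best_model.pth')
```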


🔭 Models

Architectures

Encoders

The following is a list of the encoders supported in CDP. Select the appropriate encoder family, then choose a specific encoder and its pre-trained weights (the encoder_name and encoder_weights parameters); a short example follows the tables.

ResNet

| Encoder | Weights | Params, M |
|---|---|---|
| resnet18 | imagenet / ssl / swsl | 11M |
| resnet34 | imagenet | 21M |
| resnet50 | imagenet / ssl / swsl | 23M |
| resnet101 | imagenet | 42M |
| resnet152 | imagenet | 58M |

ResNeXt

| Encoder | Weights | Params, M |
|---|---|---|
| resnext50_32x4d | imagenet / ssl / swsl | 22M |
| resnext101_32x4d | ssl / swsl | 42M |
| resnext101_32x8d | imagenet / instagram / ssl / swsl | 86M |
| resnext101_32x16d | instagram / ssl / swsl | 191M |
| resnext101_32x32d | instagram | 466M |
| resnext101_32x48d | instagram | 826M |

ResNeSt

| Encoder | Weights | Params, M |
|---|---|---|
| timm-resnest14d | imagenet | 8M |
| timm-resnest26d | imagenet | 15M |
| timm-resnest50d | imagenet | 25M |
| timm-resnest101e | imagenet | 46M |
| timm-resnest200e | imagenet | 68M |
| timm-resnest269e | imagenet | 108M |
| timm-resnest50d_4s2x40d | imagenet | 28M |
| timm-resnest50d_1s4x24d | imagenet | 23M |

Res2Ne(X)t

| Encoder | Weights | Params, M |
|---|---|---|
| timm-res2net50_26w_4s | imagenet | 23M |
| timm-res2net101_26w_4s | imagenet | 43M |
| timm-res2net50_26w_6s | imagenet | 35M |
| timm-res2net50_26w_8s | imagenet | 46M |
| timm-res2net50_48w_2s | imagenet | 23M |
| timm-res2net50_14w_8s | imagenet | 23M |
| timm-res2next50 | imagenet | 22M |

RegNet(x/y)

| Encoder | Weights | Params, M |
|---|---|---|
| timm-regnetx_002 | imagenet | 2M |
| timm-regnetx_004 | imagenet | 4M |
| timm-regnetx_006 | imagenet | 5M |
| timm-regnetx_008 | imagenet | 6M |
| timm-regnetx_016 | imagenet | 8M |
| timm-regnetx_032 | imagenet | 14M |
| timm-regnetx_040 | imagenet | 20M |
| timm-regnetx_064 | imagenet | 24M |
| timm-regnetx_080 | imagenet | 37M |
| timm-regnetx_120 | imagenet | 43M |
| timm-regnetx_160 | imagenet | 52M |
| timm-regnetx_320 | imagenet | 105M |
| timm-regnety_002 | imagenet | 2M |
| timm-regnety_004 | imagenet | 3M |
| timm-regnety_006 | imagenet | 5M |
| timm-regnety_008 | imagenet | 5M |
| timm-regnety_016 | imagenet | 10M |
| timm-regnety_032 | imagenet | 17M |
| timm-regnety_040 | imagenet | 19M |
| timm-regnety_064 | imagenet | 29M |
| timm-regnety_080 | imagenet | 37M |
| timm-regnety_120 | imagenet | 49M |
| timm-regnety_160 | imagenet | 80M |
| timm-regnety_320 | imagenet | 141M |

GERNet

| Encoder | Weights | Params, M |
|---|---|---|
| timm-gernet_s | imagenet | 6M |
| timm-gernet_m | imagenet | 18M |
| timm-gernet_l | imagenet | 28M |

SE-Net

| Encoder | Weights | Params, M |
|---|---|---|
| senet154 | imagenet | 113M |
| se_resnet50 | imagenet | 26M |
| se_resnet101 | imagenet | 47M |
| se_resnet152 | imagenet | 64M |
| se_resnext50_32x4d | imagenet | 25M |
| se_resnext101_32x4d | imagenet | 46M |

SK-ResNe(X)t

| Encoder | Weights | Params, M |
|---|---|---|
| timm-skresnet18 | imagenet | 11M |
| timm-skresnet34 | imagenet | 21M |
| timm-skresnext50_32x4d | imagenet | 25M |

DenseNet

| Encoder | Weights | Params, M |
|---|---|---|
| densenet121 | imagenet | 6M |
| densenet169 | imagenet | 12M |
| densenet201 | imagenet | 18M |
| densenet161 | imagenet | 26M |

Inception

| Encoder | Weights | Params, M |
|---|---|---|
| inceptionresnetv2 | imagenet / imagenet+background | 54M |
| inceptionv4 | imagenet / imagenet+background | 41M |
| xception | imagenet | 22M |

EfficientNet

| Encoder | Weights | Params, M |
|---|---|---|
| efficientnet-b0 | imagenet | 4M |
| efficientnet-b1 | imagenet | 6M |
| efficientnet-b2 | imagenet | 7M |
| efficientnet-b3 | imagenet | 10M |
| efficientnet-b4 | imagenet | 17M |
| efficientnet-b5 | imagenet | 28M |
| efficientnet-b6 | imagenet | 40M |
| efficientnet-b7 | imagenet | 63M |
| timm-efficientnet-b0 | imagenet / advprop / noisy-student | 4M |
| timm-efficientnet-b1 | imagenet / advprop / noisy-student | 6M |
| timm-efficientnet-b2 | imagenet / advprop / noisy-student | 7M |
| timm-efficientnet-b3 | imagenet / advprop / noisy-student | 10M |
| timm-efficientnet-b4 | imagenet / advprop / noisy-student | 17M |
| timm-efficientnet-b5 | imagenet / advprop / noisy-student | 28M |
| timm-efficientnet-b6 | imagenet / advprop / noisy-student | 40M |
| timm-efficientnet-b7 | imagenet / advprop / noisy-student | 63M |
| timm-efficientnet-b8 | imagenet / advprop | 84M |
| timm-efficientnet-l2 | noisy-student | 474M |
| timm-efficientnet-lite0 | imagenet | 4M |
| timm-efficientnet-lite1 | imagenet | 5M |
| timm-efficientnet-lite2 | imagenet | 6M |
| timm-efficientnet-lite3 | imagenet | 8M |
| timm-efficientnet-lite4 | imagenet | 13M |

MobileNet

| Encoder | Weights | Params, M |
|---|---|---|
| mobilenet_v2 | imagenet | 2M |
| timm-mobilenetv3_large_075 | imagenet | 1.78M |
| timm-mobilenetv3_large_100 | imagenet | 2.97M |
| timm-mobilenetv3_large_minimal_100 | imagenet | 1.41M |
| timm-mobilenetv3_small_075 | imagenet | 0.57M |
| timm-mobilenetv3_small_100 | imagenet | 0.93M |
| timm-mobilenetv3_small_minimal_100 | imagenet | 0.43M |

DPN

| Encoder | Weights | Params, M |
|---|---|---|
| dpn68 | imagenet | 11M |
| dpn68b | imagenet+5k | 11M |
| dpn92 | imagenet+5k | 34M |
| dpn98 | imagenet | 58M |
| dpn107 | imagenet+5k | 84M |
| dpn131 | imagenet | 76M |

VGG

| Encoder | Weights | Params, M |
|---|---|---|
| vgg11 | imagenet | 9M |
| vgg11_bn | imagenet | 9M |
| vgg13 | imagenet | 9M |
| vgg13_bn | imagenet | 9M |
| vgg16 | imagenet | 14M |
| vgg16_bn | imagenet | 14M |
| vgg19 | imagenet | 20M |
| vgg19_bn | imagenet | 20M |
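Any encoder above can be swapped in by passing its name and one of its listed weight sets; for example (again assuming the cdp.Unet class from the sketch in the usage section):

```python
import change_detection_pytorch as cdp  # assumed import name

# Same architecture, different backbone: a timm ResNeSt encoder with ImageNet weights.
model = cdp.Unet(
    encoder_name='timm-resnest26d',   # any Encoder name from the tables above
    encoder_weights='imagenet',       # one of the weight sets listed for that encoder
    in_channels=3,
    classes=2,
)
```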

🚚 Dataset

📃 Citing

@misc{likyoocdp:2021,
  Author = {Kaiyu Li and Fulin Sun},
  Title = {Change Detection Pytorch},
  Year = {2021},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/likyoo/change_detection.pytorch}}
}

📚 Reference

Comments
  • Suggest to loosen the dependency on albumentations

    Hi, your project change_detection.pytorch (commit id: 0a86d51b31276d9c413798ab3fb332889f02d8aa) requires "albumentations==1.0.3" in its dependencies. After analyzing the source code, we found that the following versions of albumentations are also suitable, i.e., albumentations 1.0.0, 1.0.1, and 1.0.2, since none of the functions you use directly (8 APIs: albumentations.core.transforms_interface.BasicTransform.__init__, albumentations.augmentations.geometric.resize.Resize.__init__, albumentations.core.composition.Compose.__init__, albumentations.pytorch.transforms.ToTensorV2.__init__, albumentations.augmentations.crops.functional.random_crop, albumentations.core.transforms_interface.DualTransform.__init__, albumentations.augmentations.crops.transforms.RandomCrop.__init__, albumentations.augmentations.transforms.Normalize.__init__) or indirectly (propagating to 11 of albumentations' internal APIs and 0 external APIs) has changed in these versions, so your usage is not affected.

    Therefore, we believe it is quite safe to loosen your dependency on albumentations from "albumentations==1.0.3" to "albumentations>=1.0.0,<=1.0.3". This will improve the applicability of change_detection.pytorch and reduce the possibility of dependency conflicts with other projects.

    May I open a pull request to loosen the dependency on albumentations?

    By the way, could you please tell us whether such an automatic dependency-analysis tool might help make maintaining dependencies easier during your development?

    opened by Agnes-U 3
  • Dimensional error

    Hello, I get an error when running local_test.py that I have not been able to resolve: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 256, 256] instead. I would like to know what [64, 3, 7, 7] represents. The error occurs in the validation part: during epoch 1 the training step reads the images and runs normally, but validation fails. I hope to get your advice.

    opened by 18339185538 0
  • Evaluation with different thresholds gives the same results

    This piece of code:

    for x in np.arange(0.6, 0.9, 0.1):
        print('Eval with TH:', x)
        metrics = [
            cdp.utils.metrics.Fscore(activation='argmax2d', threshold=x),
            cdp.utils.metrics.Precision(activation='argmax2d', threshold=x),
            cdp.utils.metrics.Recall(activation='argmax2d', threshold=x),
        ]
    
        valid_epoch = cdp.utils.train.ValidEpoch(
            model,
            loss=loss,
            metrics=metrics,
            device=DEVICE,
            verbose=True,
        )
    
        valid_logs = valid_epoch.run(valid_loader)
        print(valid_logs)
    

    This gives me the following result:

    Eval with TH: 0.6
    valid: 100%|██████████| 505/505 [01:12<00:00,  6.98it/s, cross_entropy_loss - 0.08708, fscore - 0.8799, precision - 0.8946, recall - 0.8789]
    {'cross_entropy_loss': 0.0870812193864016, 'fscore': 0.8798528309538921, 'precision': 0.8946225793644936, 'recall': 0.8789094516579565}
    
    Eval with TH: 0.7
    valid: 100%|██████████| 505/505 [01:12<00:00,  6.99it/s, cross_entropy_loss - 0.08708, fscore - 0.8799, precision - 0.8946, recall - 0.8789]
    {'cross_entropy_loss': 0.08708121913835626, 'fscore': 0.8798528309538921, 'precision': 0.8946225793644936, 'recall': 0.8789094516579565}
    
    Eval with TH: 0.7999999999999999
    valid: 100%|██████████| 505/505 [01:11<00:00,  7.02it/s, cross_entropy_loss - 0.08708, fscore - 0.8799, precision - 0.8946, recall - 0.8789]
    {'cross_entropy_loss': 0.08708121978843793, 'fscore': 0.8798528309538921, 'precision': 0.8946225793644936, 'recall': 0.8789094516579565}
    
    opened by mikel-brostrom 0
  • Load trained model weights

    Hi @likyoo ,

    I am studying your repo for my project. I have added some new features to it, and I'll share them with you when I'm done.

    But I have an important question:

    How can I load the weights after training? (A minimal sketch follows this comments list.)

    opened by ozanpkr 1
  • How to test on new images?

    Dear @likyoo, thanks for your open-source project. I have trained models and saved the best one. Now, how can I test the model on new images (not the validation set)?

    opened by manapshymyr-OB 0
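The last two comments ask how to reload trained weights and how to run the model on new images. Both are plain PyTorch rather than anything specific to this library; here is a minimal sketch, assuming the model was created as in the usage example above, that its forward pass takes the two temporal images as separate arguments, and that best_model.pth holds a state_dict saved with torch.save(model.state_dict(), ...):

```python
import torch

import change_detection_pytorch as cdp  # assumed import name

# Rebuild the same architecture that was trained, then load the saved weights.
model = cdp.Unet(encoder_name='resnet34', encoder_weights=None,  # assumed class
                 in_channels=3, classes=2)
model.load_state_dict(torch.load('best_model.pth', map_location='cpu'))
model.eval()

# Two pre-processed temporal images of the same scene (stand-in tensors here).
img_t1 = torch.rand(3, 256, 256)
img_t2 = torch.rand(3, 256, 256)

with torch.no_grad():
    # unsqueeze(0) adds the batch dimension; feeding a bare [3, 256, 256] tensor is
    # what triggers the "Expected 4-dimensional input ..." error reported above, since
    # the encoder's first convolution weight has shape [64, 3, 7, 7]
    # (64 output channels, 3 input channels, 7x7 kernel) and expects NCHW input.
    pred = model(img_t1.unsqueeze(0), img_t2.unsqueeze(0))
    change_mask = pred.argmax(dim=1)  # per-pixel class indices (0 = no change)
```

The exact forward signature and output layout can vary between architectures, so verify against local_test.py before reusing this.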