Code for paper "Context-self contrastive pretraining for crop type semantic segmentation"

Overview

This repository contains the code for the paper "Context-self contrastive pretraining for crop type semantic segmentation".

Setting up a python environment

  • Follow the instructions at https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html to download and install Miniconda

  • Open a terminal in the code directory

  • Create an environment using the .yml file:

    conda env create -f deepsatmodels_env.yml

  • Activate the environment:

    source activate deepsatmodels

  • Install the required version of PyTorch:

    conda install pytorch torchvision torchaudio cudatoolkit=10.1 -c pytorch-nightly
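
To confirm that the environment and the GPU-enabled PyTorch build work, an optional quick check (not part of the original instructions) is to print the installed version and CUDA availability:

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"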

Datasets

MTLCC dataset (Germany)

Download the dataset (.tfrecords)

The data for Germany can be downloaded from: https://github.com/TUM-LMF/MTLCC

  • clone the repository in a separate directory:

    git clone https://github.com/TUM-LMF/MTLCC

  • move to the MTLCC root directory:

    cd MTLCC

  • download the data (40 GB):

    bash download.sh full
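
As an optional sanity check after the download, the tfrecord files can be counted from the MTLCC root directory. The directory below follows the paths referenced in the next section; the exact file extension is an assumption based on the MTLCC repository:

    find data_IJGI18/datasets/full -name "*.tfrecord*" | wc -l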

Transform the dataset (.tfrecords -> .pkl)

  • go to the "CSCL_code" home directory:

    cd <.../CSCL_code>

  • activate the "cscl" python environment:

    conda activate cscl

  • add "CSCL_code" home directory to PYTHONPATH:

    export PYTHONPATH="<.../CSCL_code>:$PYTHONPATH"

  • Run the "data/MTLCC/make_pkl_dataset.py" script. The numworkers parameter defines the number of parallel processes employed (a filled-in example follows this list):

    python data/MTLCC/make_pkl_dataset.py --rootdir <.../MTLCC> --numworkers <numworkers>

  • Running the above script will:

    • create a paths file for the .tfrecords files at ".../MTLCC/data_IJGI18/datasets/full/tfrecords240_paths.csv"
    • create a new directory ".../MTLCC/data_IJGI18/datasets/full/240pkl" and save the transformed data there
    • save relative paths for all data, train data and eval data in ".../MTLCC/data_IJGI18/datasets/full/240pkl"
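
For illustration, a run with a hypothetical MTLCC location and 4 worker processes (both values are examples only, not values prescribed by the paper) would look like:

    python data/MTLCC/make_pkl_dataset.py --rootdir /data/MTLCC --numworkers 4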

T31TFM_1618 dataset (France)

Download the dataset

The T31TFM_1618 dataset can be downloaded from Google Drive here. Unzipping will create the following folder tree.

T31TFM_1618
├── 2016
│   └── pkl_timeseries
│       ├── W799943_N6568107_E827372_S6540681
│       │   ├── 6541426_800224_2016.pickle
│       │   └── ...
│       └── ...
├── 2017
│   └── pkl_timeseries
│       ├── W854602_N6650582_E882428_S6622759
│       │   ├── 6623702_854602_2017.pickle
│       │   └── ...
│       └── ...
├── 2018
│   └── pkl_timeseries
│       ├── W882228_N6595532_E909657_S6568107
│       │   ├── 6568846_888751_2018.pickle
│       │   └── ...
│       └── ...
├── deepsatdata
│   ├── T31TFM_16_products.csv
│   ├── ...
│   ├── T31TFM_16_parcels.csv
│   └── ...
└── paths
    ├── train_paths.csv
    └── eval_paths.csv
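
To verify the download, one of the timeseries pickles can be inspected. The snippet below is only an optional check: the path is taken from the tree above and the contents are printed generically, without assuming a particular sample format:

    python - <<'EOF'
    # Load one sample and report its type and, if it is a dict, the shape/type of each entry.
    import pickle
    path = "T31TFM_1618/2016/pkl_timeseries/W799943_N6568107_E827372_S6540681/6541426_800224_2016.pickle"
    with open(path, "rb") as f:
        sample = pickle.load(f)
    print(type(sample))
    if isinstance(sample, dict):
        for k, v in sample.items():
            print(k, getattr(v, "shape", type(v)))
    EOF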

Recreate the dataset from scratch

To recreate the dataset use the DeepSatData data generation pipeline.

  • Clone and move to the DeepSatData base directory:

    git clone https://github.com/michaeltrs/DeepSatData
    cd .../DeepSatData

  • Download the Sentinel-2 products:

    sh download/download.sh .../T31TFM_16_parcels.csv,.../T31TFM_17_parcels.csv,.../T31TFM_18_parcels.csv

  • Generate a labelled dataset (use case 1) for each year (a filled-in example follows this list):

    sh dataset/labelled_dense/make_labelled_dataset.sh ground_truths_file=<1:ground_truths_file> products_dir=<2:products_dir> labels_dir=<3:labels_dir> windows_dir=<4:windows_dir> timeseries_dir=<5:timeseries_dir> res=<6:res> sample_size=<7:sample_size> num_processes=<8:num_processes> bands=<9:bands (optional)>
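
As an illustration, the call below fills in hypothetical paths and values; every concrete value here is an example only, and the meaning of each argument is defined by the DeepSatData pipeline:

    sh dataset/labelled_dense/make_labelled_dataset.sh ground_truths_file=.../T31TFM_16_parcels.csv products_dir=.../products/2016 labels_dir=.../labels/2016 windows_dir=.../windows/2016 timeseries_dir=.../timeseries/2016 res=10 sample_size=24 num_processes=8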

Experiments

Initial steps

  • Add the base directory and the paths to the train and evaluation path files in "data/datasets.yaml".

  • For each experiment we use a separate ".yaml" configuration file. Example files are provided in "configs". The default values in these files correspond to the parameters used for the experiments presented in the paper.

  • activate the "deepsatmodels" python environment:

    conda activate deepsatmodels

Model training

Modify the respective .yaml config files to define the save directory or to load a pre-trained model from saved checkpoints.

Randomly initialized "UNet3D" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet3D.yaml --gpu_ids 0,1`

Randomly initialized "UNet2D-CLSTM" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet2D_CLSTM.yaml --gpu_ids 0,1`

CSCL-pretrained "UNet2D-CLSTM" model

  • model pre-training

     python train_and_eval/segmentation_cscl_training.py --config_file configs/**/UNet2D_CLSTM_CSCL.yaml --gpu_ids 0,1
  • model fine-tuning: copy the path to the pre-training save directory into CHECKPOINT.load_from_checkpoint. This loads the latest saved model; to load a specific checkpoint, copy the path to the corresponding .pth file instead (a small helper sketch follows this block).

     python train_and_eval/segmentation_training.py --config_file configs/**/UNet2D_CLSTM.yaml --gpu_ids 0,1
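
If needed, the most recently saved checkpoint in a pre-training save directory can be listed from the shell before copying its path into the config; the directory name below is a placeholder, and the only assumption is that checkpoints are stored as .pth files, as stated above:

    ls -t <pretraining_save_dir>/*.pth | head -n 1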

Randomly initialized "UNet3Df" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet3Df.yaml --gpu_ids 0,1`

CSCL-pretrained "UNet3Df" model

  • model pre-training

     python train_and_eval/segmentation_cscl_training.py --config_file configs/**/UNet3Df_CSCL.yaml --gpu_ids 0,1
  • model fine-tuning: copy the path to the pre-training save directory into CHECKPOINT.load_from_checkpoint. This loads the latest saved model; to load a specific checkpoint, copy the path to the corresponding .pth file instead.

     python train_and_eval/segmentation_training.py --config_file configs/**/UNet3Df.yaml --gpu_ids 0,1