Code for paper "Context-self contrastive pretraining for crop type semantic segmentation"

Overview

This repository contains the code for the paper "Context-self contrastive pretraining for crop type semantic segmentation".

Setting up a python environment

  • Follow the instructions at https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html to download and install Miniconda

  • Open a terminal in the code directory

  • Create an environment using the .yml file:

    conda env create -f deepsatmodels_env.yml

  • Activate the environment:

    source activate deepsatmodels

  • Install required version of torch:

    conda install pytorch torchvision torchaudio cudatoolkit=10.1 -c pytorch-nightly
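
  • To confirm the environment works, a quick check prints the installed torch version and whether CUDA is visible:

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"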

Datasets

MTLCC dataset (Germany)

Download the dataset (.tfrecords)

The data for Germany can be downloaded from: https://github.com/TUM-LMF/MTLCC

  • clone the repository in a separate directory:

    git clone https://github.com/TUM-LMF/MTLCC

  • move to the MTLCC root directory:

    cd MTLCC

  • download the data (40 GB):

    bash download.sh full
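
  • Once the download completes, a size check from the MTLCC root should report roughly 40 GB (the data_IJGI18 directory name is taken from the dataset paths referenced below):

    du -sh data_IJGI18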

Transform the dataset (.tfrecords -> .pkl)

  • go to the "CSCL_code" home directory:

    cd <.../CSCL_code>

  • activate the "cssl" python environment:

    conda activate cscl

  • add "CSCL_code" home directory to PYTHONPATH:

    export PYTHONPATH="<.../CSCL_code>:$PYTHONPATH"

  • Run the "data/MTLCC/make_pkl_dataset.py" script. Parameter numworkers defines the number of parallel processes employed:

    python data/MTLCC/make_pkl_dataset.py --rootdir <.../MTLCC> --numworkers

  • Running the script will:

    • create a paths file for the .tfrecords files at ".../MTLCC/data_IJGI18/datasets/full/tfrecords240_paths.csv"
    • create a new directory ".../MTLCC/data_IJGI18/datasets/full/240pkl" and save the transformed data there
    • save relative paths for all data, train data and eval data in ".../MTLCC/data_IJGI18/datasets/full/240pkl"
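
  • After the script finishes, a quick sanity check from the shell can confirm the outputs:

    # inspect the paths file created for the tfrecords
    head .../MTLCC/data_IJGI18/datasets/full/tfrecords240_paths.csv
    # count the transformed samples saved under 240pkl
    find .../MTLCC/data_IJGI18/datasets/full/240pkl -type f | wc -l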

T31TFM_1618 dataset (France)

Download the dataset

The T31TFM_1618 dataset can be downloaded from Google Drive here. Unzipping will create the following folder tree:

T31TFM_1618
├── 2016
│   └── pkl_timeseries
│       ├── W799943_N6568107_E827372_S6540681
│       │   ├── 6541426_800224_2016.pickle
│       │   └── ...
│       └── ...
├── 2017
│   └── pkl_timeseries
│       ├── W854602_N6650582_E882428_S6622759
│       │   ├── 6623702_854602_2017.pickle
│       │   └── ...
│       └── ...
├── 2018
│   └── pkl_timeseries
│       ├── W882228_N6595532_E909657_S6568107
│       │   ├── 6568846_888751_2018.pickle
│       │   └── ...
│       └── ...
├── deepsatdata
│   ├── T31TFM_16_products.csv
│   ├── ...
│   ├── T31TFM_16_parcels.csv
│   └── ...
└── paths
    ├── train_paths.csv
    └── eval_paths.csv
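
A quick way to inspect a downloaded sample is sketched below. It assumes the files in "paths" are plain CSVs of relative paths to the .pickle time series and that each file is a standard Python pickle; adjust to the actual contents:

    python - <<'EOF'
    import csv, os, pickle

    base = "T31TFM_1618"  # dataset root created by unzipping
    # read the first relative path listed in the train split
    with open(os.path.join(base, "paths", "train_paths.csv")) as f:
        first = next(csv.reader(f))[0]
    # load the pickled time series and report what it contains
    with open(os.path.join(base, first), "rb") as f:
        sample = pickle.load(f)
    print(type(sample))
    if isinstance(sample, dict):
        print(list(sample.keys()))
    EOF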

Recreate the dataset from scratch

To recreate the dataset, use the DeepSatData data generation pipeline.

  • Clone and move to the DeepSatData base directory:

    git clone https://github.com/michaeltrs/DeepSatData
    cd .../DeepSatData

  • Download the Sentinel-2 products:

    sh download/download.sh .../T31TFM_16_parcels.csv,.../T31TFM_17_parcels.csv,.../T31TFM_18_parcels.csv

  • Generate a labelled dataset (use case 1) for each year; a hedged example invocation follows this list:

    sh dataset/labelled_dense/make_labelled_dataset.sh ground_truths_file=<1:ground_truths_file> products_dir=<2:products_dir> labels_dir=<3:labels_dir> windows_dir=<4:windows_dir> timeseries_dir=<5:timeseries_dir> res=<6:res> sample_size=<7:sample_size> num_processes=<8:num_processes> bands=<9:bands (optional)>
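
For illustration only, a single-year invocation might look like the sketch below. All paths are hypothetical placeholders, and the res, sample_size and num_processes values are assumptions rather than the settings used in the paper:

    sh dataset/labelled_dense/make_labelled_dataset.sh \
        ground_truths_file=.../T31TFM_16_parcels.csv \
        products_dir=.../products/2016 \
        labels_dir=.../labels/2016 \
        windows_dir=.../windows/2016 \
        timeseries_dir=.../timeseries/2016 \
        res=10 sample_size=24 num_processes=8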

Experiments

Initial steps

  • Set the dataset base directory and the paths to the train and evaluation path files in "data/datasets.yaml".

  • Each experiment uses a separate ".yaml" configuration file. Example files are provided in "configs"; the default values in these files correspond to the parameters used in the experiments presented in the paper.

  • activate "deepsatmodels" python environment:

    conda activate deepsatmodels

Model training

Modify the respective .yaml config files to define the save directory or to load a pre-trained model from saved checkpoints.

Randomly initialized "UNet3D" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet3D.yaml --gpu_ids 0,1`

Randomly initialized "UNet2D-CLSTM" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet2D_CLSTM.yaml --gpu_ids 0,1`

CSCL-pretrained "UNet2D-CLSTM" model

  • model pre-training

     python train_and_eval/segmentation_cscl_training.py --config_file configs/**/UNet2D_CLSTM_CSCL.yaml --gpu_ids 0,1
  • copy the path of the pre-training save directory into CHECKPOINT.load_from_checkpoint; this loads the latest saved model. To load a specific checkpoint, use the path of the corresponding .pth file instead. Then fine-tune:

     python train_and_eval/segmentation_training.py --config_file configs/**/UNet2D_CLSTM.yaml --gpu_ids 0,1
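
In the .yaml config this setting corresponds to a fragment like the one below; the nesting is inferred from the dotted name, so check the example files in "configs" for the exact structure:

    CHECKPOINT:
      load_from_checkpoint: <path to save directory or .pth checkpoint>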

Randomly initialized "UNet3Df" model

`python train_and_eval/segmentation_training.py --config_file configs/**/UNet3Df.yaml --gpu_ids 0,1`

CSCL-pretrained "UNet3Df" model

  • model pre-training

     python train_and_eval/segmentation_cscl_training.py --config_file configs/**/UNet3Df_CSCL.yaml --gpu_ids 0,1
  • copy the path of the pre-training save directory into CHECKPOINT.load_from_checkpoint (see the config fragment above); this loads the latest saved model. To load a specific checkpoint, use the path of the corresponding .pth file instead. Then fine-tune:

     python train_and_eval/segmentation_training.py --config_file configs/**/UNet3Df.yaml --gpu_ids 0,1