sar_vessel_detect

Sentinel-1 vessel detection model used in the xView3 challenge

Overview

Code for the AI2 Skylight team's submission in the xView3 competition (https://iuu.xview.us) for vessel detection in Sentinel-1 SAR images. See whitepaper.pdf for a summary of our approach.

Dependencies

Install dependencies using conda:

cd sar_vessel_detect/
conda env create -f environment.yml

Pre-processing

First, ensure that training and validation scenes are extracted to the same directory, e.g. /xview3/all/images/. The training and validation labels should be concatenated and written to a CSV file like /xview3/all/labels.csv.
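For example, assuming the downloaded label files sit at the hypothetical paths /xview3/downloads/train.csv and /xview3/downloads/validation.csv, they can be concatenated with pandas:

# Hypothetical input paths; adjust to wherever the xView3 label CSVs were downloaded.
import pandas as pd

train = pd.read_csv("/xview3/downloads/train.csv")
val = pd.read_csv("/xview3/downloads/validation.csv")
pd.concat([train, val], ignore_index=True).to_csv("/xview3/all/labels.csv", index=False)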

Prior to training, the large scenes must be split up into 800x800 windows (chips). Set paths and parameters in data/configs/chipping_config.txt, and then run:

cd sar_vessel_detect/src/
python -m xview3.processing.preprocessing ../data/configs/chipping_config.txt
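For reference, the chipping idea itself is simple: each scene raster is tiled into 800x800 windows, and each window is written out as a separate chip. A minimal sketch of that tiling (illustrative only; the actual logic lives in xview3.processing.preprocessing and also handles multiple channels and label assignment):

# Minimal chipping sketch, assuming the scene is already loaded as a 2D numpy array.
import numpy as np

def chip_scene(scene: np.ndarray, chip_size: int = 800):
    # Yield (row, col, chip) for each full chip-sized window of the scene.
    rows, cols = scene.shape
    for r in range(0, rows - chip_size + 1, chip_size):
        for c in range(0, cols - chip_size + 1, chip_size):
            yield r, c, scene[r:r + chip_size, c:c + chip_size]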

Initial Training

We first train a model on the 50 xView3-Validation scenes only. We then apply this model to the xView3-Train scenes and incorporate its high-confidence predictions as additional labels. This step is needed because the xView3-Train scenes are not comprehensively labeled: most of their labels are derived automatically from AIS tracks.

To train, set paths and parameters in data/configs/initial.txt, and then run:

python -m xview3.training.train ../data/configs/initial.txt

Apply the trained model to the xView3-Train scenes, and incorporate its high-confidence predictions as additional labels:

python -m xview3.infer.inference --image_folder /xview3/all/images/ --weights ../data/models/initial/best.pth --output out.csv --config_path ../data/configs/initial.txt --padding 400 --window_size 3072 --overlap 20 --scene_path ../data/splits/xview-train.txt
python -m xview3.eval.prune --in_path out.csv --out_path out-conf80.csv --conf 0.8
python -m xview3.misc.pred2label out-conf80.csv /xview3/all/chips/ out-conf80-tolabel.csv
python -m xview3.misc.pred2label_concat /xview3/all/chips/chip_annotations.csv out-conf80-tolabel.csv out-conf80-tolabel-concat.csv
python -m xview3.eval.prune --in_path out-conf80-tolabel-concat.csv --out_path out-conf80-tolabel-concat-prune.csv --nms 10
python -m xview3.misc.pred2label_fixlow out-conf80-tolabel-concat-prune.csv
python -m xview3.misc.pred2label_drop out-conf80-tolabel-concat-prune.csv out.csv out-conf80-tolabel-concat-prune-drop.csv
mv out-conf80-tolabel-concat-prune-drop.csv ../data/xval1b-conf80-concat-prune-drop.csv
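The key step above is keeping only detections with confidence of at least 0.8 before converting them to labels. As a rough sketch of what the --conf 0.8 pruning amounts to (the score column name is an assumption; the real logic is in xview3.eval.prune):

# Rough equivalent of the --conf 0.8 pruning step; the 'score' column name is an assumption.
import pandas as pd

preds = pd.read_csv("out.csv")
preds[preds["score"] >= 0.8].to_csv("out-conf80.csv", index=False)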

Final Training

Now we can train the final object detection model. Set paths and parameters in data/configs/final.txt, and then run:

python -m xview3.training.train ../data/configs/final.txt

Attribute Prediction

We use a separate model to predict is_vessel, is_fishing, and vessel length.

python -m xview3.postprocess.v2.make_csv /xview3/all/chips/chip_annotations.csv out.csv ../data/splits/our-train.txt /xview3/postprocess/labels.csv
python -m xview3.postprocess.v2.get_boxes /xview3/postprocess/labels.csv /xview3/all/chips/ /xview3/postprocess/boxes/
python -m xview3.postprocess.v2.train /xview3/postprocess/model.pth /xview3/postprocess/labels.csv /xview3/postprocess/boxes/
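The actual architecture is defined in xview3.postprocess.v2.train; conceptually, the attribute model takes a crop around each detection and produces three outputs: a binary is_vessel score, a binary is_fishing score, and a scalar length estimate. A minimal PyTorch sketch of such a multi-task head (illustrative only, not the code in this repository):

# Illustrative multi-task attribute head: two binary classification outputs and one
# length regression output on a shared backbone. Assumes 3-channel input crops.
import torch.nn as nn
import torchvision

class AttributeModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18()
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        self.is_vessel = nn.Linear(512, 1)
        self.is_fishing = nn.Linear(512, 1)
        self.length = nn.Linear(512, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "is_vessel": self.is_vessel(feats),
            "is_fishing": self.is_fishing(feats),
            "length": self.length(feats),
        }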

Inference

Suppose that test images are in a directory like /xview3/test/images/. First, apply the object detector:

python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20
python -m xview3.eval.prune --in_path out.csv --out_path out-prune.csv --nms 10
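Here --nms 10 removes duplicate detections that fall within 10 pixels of a higher-scoring detection. A greedy point-based NMS along those lines looks roughly like the sketch below (the column names detect_scene_row, detect_scene_column, and score are assumptions, and in practice the suppression is applied per scene; the implementation used here is xview3.eval.prune):

# Greedy point NMS sketch: keep the highest-scoring detection and drop any other
# detection within `radius` pixels of one already kept.
import numpy as np
import pandas as pd

def point_nms(df: pd.DataFrame, radius: float = 10.0) -> pd.DataFrame:
    df = df.sort_values("score", ascending=False)
    kept_rows, kept_pts = [], []
    for _, row in df.iterrows():
        pt = np.array([row["detect_scene_row"], row["detect_scene_column"]], dtype=float)
        if all(np.linalg.norm(pt - k) > radius for k in kept_pts):
            kept_rows.append(row)
            kept_pts.append(pt)
    return pd.DataFrame(kept_rows)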

Now apply the attribute prediction model:

python -m xview3.postprocess.v2.infer /xview3/postprocess/model.pth out-prune.csv /xview3/test/chips/ out-prune-attribute.csv attribute

Test-time Augmentation

We employ test-time augmentation in our final submission; we find it provides a small improvement of roughly 0.5%.

python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-1.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-2.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --fliplr True
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-3.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --flipud True
python -m xview3.infer.inference --image_folder /xview3/test/images/ --weights ../data/models/final/best.pth --output out-4.csv --config_path ../data/configs/final.txt --padding 400 --window_size 3072 --overlap 20 --fliplr True --flipud True
python -m xview3.eval.ensemble out-1.csv out-2.csv out-3.csv out-4.csv out-tta.csv
python -m xview3.eval.prune --in_path out-tta.csv --out_path out-tta-prune.csv --nms 10
python -m xview3.postprocess.v2.infer /xview3/postprocess/model.pth out-tta-prune.csv /xview3/test/chips/ out-tta-prune-attribute.csv attribute
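For the ensemble to make sense, the flipped runs must report detections in the coordinate frame of the original scene, so the four CSVs can be merged directly. As a small illustration of how a detection found in a flipped window maps back to original coordinates (H and W denote the window height and width; illustrative only):

# Map a detection from a flipped window back to the original orientation.
def unflip(row, col, H, W, fliplr=False, flipud=False):
    if fliplr:
        col = W - 1 - col
    if flipud:
        row = H - 1 - row
    return row, col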

Confidence Threshold

We tune the confidence threshold on the validation set. Repeat the inference steps with test-time augmentation on the our-validation.txt split to get out-validation-tta-prune-attribute.csv. Then:

python -m xview3.eval.metric --label_file /xview3/all/chips/chip_annotations.csv --scene_path ../data/splits/our-validation.txt --costly_dist --drop_low_detect --inference_file out-validation-tta-prune-attribute.csv --threshold -1
python -m xview3.eval.prune --in_path out-tta-prune-attribute.csv --out_path submit.csv --conf 0.3 # Change to the best confidence threshold.
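A simple way to choose the cutoff is to sweep candidate thresholds on the validation predictions and keep the one with the best aggregate score. A sketch, where score_at_threshold is a hypothetical helper that prunes out-validation-tta-prune-attribute.csv at a given cutoff and runs xview3.eval.metric on the result:

# Sweep candidate confidence thresholds and return the best-scoring one.
# `score_at_threshold` is a hypothetical callable, not part of this repository.
import numpy as np

def pick_threshold(score_at_threshold, candidates=np.arange(0.05, 0.95, 0.05)):
    scores = [(t, score_at_threshold(t)) for t in candidates]
    return max(scores, key=lambda pair: pair[1])[0]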

Inquiries

For inquiries, please open a GitHub issue.
