This repository contains the code for our fast pipeline for polygonal building extraction from overhead images.

Overview

Polygonal Building Segmentation by Frame Field Learning

We add a frame field output to an image segmentation neural network to improve segmentation quality and provide structural information for the subsequent polygonization step.


Figure 1: Close-up of our additional frame field output on a test image.



Figure 2: Given an overhead image, the model outputs an edge mask, an interior mask, and a frame field for buildings. The total loss includes terms that align the masks and frame field to ground truth data as well as regularizers to enforce smoothness of the frame field and consistency between the outputs.



Figure 3: Given classification maps and a frame field as input, we optimize skeleton polylines to align to the frame field using an Active Skeleton Model (ASM) and detect corners using the frame field, simplifying non-corner vertices.

This repository contains the official code for the paper:

Polygonal Building Segmentation by Frame Field Learning
Nicolas Girard, Dmitriy Smirnov, Justin Solomon, Yuliya Tarabalka
Pre-print
[paper, video]

A short version of this paper has been published as:

Regularized Building Segmentation by Frame Field Learning
Nicolas Girard, Dmitriy Smirnov, Justin Solomon, Yuliya Tarabalka
IGARSS 2020

Setup

Git submodules

This project uses several git submodules that should be cloned too.

To clone a repository including its submodules, execute:

git clone --recursive --jobs 8 <URL to Git repo>

If you have already cloned the repository and now want to load its submodules, execute:

git submodule update --init --recursive --jobs 8

or:

git submodule update --recursive

For more explanations about using submodules with git, see SUBMODULES.md.

Docker

The easiest way to set up the environment is to use the Docker image provided in the docker folder (see the README inside that folder).

Once the Docker container is built and launched, execute the setup.sh script inside it to install the required packages.

The environment in the container is now ready for use.
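
As a minimal sketch of that workflow, assuming a hypothetical image name frame-field-learning (the actual build and run commands, image name, and setup.sh location are given in the docker folder's README; --gpus all assumes the NVIDIA container toolkit is installed):

# build the image from the docker folder (image name is illustrative)
docker build -t frame-field-learning docker/
# run the container, mounting your local data folder to /data
docker run --gpus all -it -v /path/to/your/data:/data frame-field-learning
# then, inside the container:
./setup.sh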

Conda environment

Alternatively, you can install all dependencies in a conda environment. I provide my environment specifications in environment.yml, which you can use to create your own environment with:

conda env create -f environment.yml
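
Then activate it before running any scripts (the environment name is the one set in environment.yml):

conda activate <environment name from environment.yml>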

Data

Several datasets are used in this work. We typically put all datasets in a "data" folder, which we link to the "/data" folder in the container (with the -v argument when running the container). Each dataset has its own sub-folder, usually named with a short version of that dataset's name. Each dataset sub-folder should have a "raw" folder inside containing all the original folders and files of the dataset. When pre-processing data, "processed" folders will be created alongside the "raw" folder.

For example, here is a working file structure inside the container:

/data 
|-- AerialImageDataset
     |-- raw
         |-- train
         |   |-- aligned_gt_polygons_2
         |   |-- gt
         |   |-- gt_polygonized
         |   |-- images
         `-- test
             |-- aligned_gt_polygons_2
             |-- images
`-- mapping_challenge_dataset
     |-- raw
         |-- train
         |   |-- images
         |   |-- annotation.json
         |   `-- annotation-small.json
         `-- val
              `-- ...

If, however, you would like to use a different folder for the datasets (for example, when not using Docker), you can change the dataset paths in the config files. Modify the "data_dir_candidates" list in the config so that it only includes your path. The training script checks the paths in this list one at a time and picks the first one that exists. It then appends the "data_root_partial_dirpath" directory to obtain the dataset path.
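
For illustration, a minimal sketch of these two config entries (the values are placeholders; only the key names are taken from the description above):

"data_dir_candidates": [
    "/my/custom/data/folder",
    "/data"
],
"data_root_partial_dirpath": "AerialImageDataset"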

You can find some of the data we used in this shared "data" folder: https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing.

Inria Aerial Image Labeling Dataset

Link to the dataset: https://project.inria.fr/aerialimagelabeling/

For the Inria dataset, the original ground truth is just a collection of raster masks. As our method requires polygon annotations in order to compute the ground truth angle for the frame field, we made two versions of the dataset:

The Inria OSM dataset has aligned annotations pulled from OpenStreetMap.

The Inria Polygonized dataset has polygon annotations obtained by applying our frame field polygonization algorithm to the original raster masks. This was done by running the polygonize_mask.py script like so:

python polygonize_mask.py --run_name inria_dataset_osm_mask_only.unet16 --filepath ~/data/AerialImageDataset/raw/train/gt/*.tif

You can find this new ground truth for both cases in the shared "data" folder (https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing).

Running the main.py script

Execute the main.py script to train a model, test a model, or run a model on your own image. See the help of the main script with:

python main.py --help

The script can be launched on multiple GPUs for multi-GPU training and evaluation. Simply set the --gpus argument to the number of GPUs you want to use. However, for the first launch of the script on a particular dataset (when it will pre-process the data), it is best to leave it at 1, as I did not implement multi-GPU synchronization for dataset pre-processing.
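
For example, a 4-GPU launch might look like this (after a first single-GPU run has pre-processed the dataset; the config name is a placeholder):

python main.py --config <config name> --gpus 4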

An example use is training a model with a certain config file, like so:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained

which will train the Unet-Resnet101 on the CrowdAI Mapping Challenge dataset. The batch size can be adjusted like so:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained -b <new batch size>

When training is done, the script can be launched in eval mode to evaluate the trained model:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval

Depending on the eval parameters of the config file, running this will output results on the test dataset.

Finally, if you wish to compute AP and AR metrics with the COCO API, you can run:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval_coco

Launch inference on one image

Make sure the run folder has the correct structure:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- <run_name> | <yyyy-mm-dd hh:mm:ss>
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Execute the main.py script like so (filling in values for the run_name and in_filepath arguments):

python main.py --run_name <run_name> --in_filepath <your_image_filepath>

The outputs will be saved next to the input image.
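
For example, using one of the pre-trained runs listed in the next section (assuming --run_name takes the name part of the run folder without the timestamp; the image path here is hypothetical):

python main.py --run_name mapping_dataset.unet_resnet101_pretrained.train_val --in_filepath ~/data/my_image.tif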

Download trained models

We provide already-trained models so you can run inference right away. Download them here: https://drive.google.com/drive/folders/1poTQbpCz12ra22CsucF_hd_8dSQ1T3eT?usp=sharing. Each model was trained in a "run", whose folder (named with the format <run_name> | <yyyy-mm-dd hh:mm:ss>) you can download at the provided link. You should then place those runs in a folder named "runs" inside the "frame_field_learning" folder, like so:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31
|   |   |-- mapping_dataset.unet_resnet101_pretrained.field_off.train_val | 2020-09-07 11:54:48
|   |   |-- mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Because Google Drive reformats folder names, you have to rename the run folders as above.
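
For example, for one of the runs above (the downloaded name depends on how Google Drive rewrote it, so the source name below is a placeholder):

mv "runs/<downloaded folder name>" "runs/mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51"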

Cite

If you use this code for your own research, please cite:

@InProceedings{Girard_2020_IGARSS,
  title = {{Regularized Building Segmentation by Frame Field Learning}},
  author = {Girard, Nicolas and Smirnov, Dmitriy and Solomon, Justin and Tarabalka, Yuliya},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  address = {Waikoloa, Hawaii},
  year = {2020},
  month = Jul,
}

@misc{girard2020polygonal,
    title={Polygonal Building Segmentation by Frame Field Learning},
    author={Nicolas Girard and Dmitriy Smirnov and Justin Solomon and Yuliya Tarabalka},
    year={2020},
    eprint={2004.14875},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}