This repository contains the code for our fast polygonal building extraction pipeline for overhead images.

Overview

Polygonal Building Segmentation by Frame Field Learning

We add a frame field output to an image segmentation neural network to improve segmentation quality and provide structural information for the subsequent polygonization step.


Figure 1: Close-up of our additional frame field output on a test image.



Figure 2: Given an overhead image, the model outputs an edge mask, an interior mask, and a frame field for buildings. The total loss includes terms that align the masks and frame field to ground truth data as well as regularizers to enforce smoothness of the frame field and consistency between the outputs.



Figure 3: Given classification maps and a frame field as input, we optimize skeleton polylines to align to the frame field using an Active Skeleton Model (ASM) and detect corners using the frame field, simplifying non-corner vertices.

This repository contains the official code for the paper:

Polygonal Building Segmentation by Frame Field Learning
Nicolas Girard, Dmitriy Smirnov, Justin Solomon, Yuliya Tarabalka
Pre-print
[paper, video]

A shorter version of this work was published as:

Regularized Building Segmentation by Frame Field Learning
Nicolas Girard, Dmitriy Smirnov, Justin Solomon, Yuliya Tarabalka
IGARSS 2020

Setup

Git submodules

This project uses several git submodules that must be cloned as well.

To clone the repository including its submodules, execute:

git clone --recursive --jobs 8 <URL to Git repo>

If you have already cloned the repository and now want to load its submodules, execute:

git submodule update --init --recursive --jobs 8

or:

git submodule update --recursive

For more explanation on using git submodules, see SUBMODULES.md.

Docker

The easiest way to set up the environment is to use the Docker image provided in the docker folder (see the README inside that folder).
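
For illustration, launching the container with your data folder mounted typically looks like the following (the image name here is a placeholder; the docker folder's README gives the actual build and run commands):

docker run --gpus all -it -v /path/to/data:/data <docker_image_name>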

Once the docker container is built and launched, execute the setup.sh script inside it to install the required packages.

The environment in the container is now ready for use.

Conda environment

Alternatively, you can install all dependencies in a conda environment. I provide my environment specifications in environment.yml, which you can use to create your own environment with:

conda env create -f environment.yml
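
Then activate it (the environment name is defined in environment.yml):

conda activate <environment name>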

Data

Several datasets are used in this work. We typically put all datasets in a "data" folder which we link to the "/data" folder in the container (with the -v argument when running the container). Each dataset has its own sub-folder, usually named with a short version of that dataset's name. Each dataset sub-folder should have a "raw" folder inside containing all the original folders and files of the dataset. When pre-processing data, "processed" folders will be created alongside the "raw" folder.

For example, here is a working file structure inside the container:

/data 
|-- AerialImageDataset
     |-- raw
         |-- train
         |   |-- aligned_gt_polygons_2
         |   |-- gt
         |   |-- gt_polygonized
         |   |-- images
         `-- test
             |-- aligned_gt_polygons_2
             |-- images
`-- mapping_challenge_dataset
     |-- raw
         |-- train
         |   |-- images
         |   |-- annotation.json
         |   `-- annotation-small.json
         `-- val
              `-- ...

If, however, you would like to use a different folder for the datasets (for example, when not using Docker), you can change the dataset paths in the config files: modify the "data_dir_candidates" list in the config to only include your path. The training script checks the paths in this list one at a time and picks the first one that exists. It then appends the "data_root_partial_dirpath" directory to get to the dataset.
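
For instance, a config entry along these lines (the values here are illustrative, not the repository's defaults) would make the script pick up your own folder first:

"data_dir_candidates": ["/home/user/data", "/data"],
"data_root_partial_dirpath": "AerialImageDataset"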

You can find some of the data we used in this shared "data" folder: https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing.

Inria Aerial Image Labeling Dataset

Link to the dataset: https://project.inria.fr/aerialimagelabeling/

For the Inria dataset, the original ground truth is just a collection of raster masks. As our method requires annotations to be polygons in order to compute the ground truth angle for the frame field, we created two versions of the dataset:

The Inria OSM dataset has aligned annotations pulled from OpenStreetMap.

The Inria Polygonized dataset has polygon annotations obtained by running our frame field polygonization algorithm on the original raster masks. This was done with the polygonize_mask.py script like so:

python polygonize_mask.py --run_name inria_dataset_osm_mask_only.unet16 --filepath ~/data/AerialImageDataset/raw/train/gt/*.tif

You can find this new ground truth for both cases in the shared "data" folder (https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing).

Running the main.py script

Execute the main.py script to train a model, test a model, or run a model on your own images. See the script's help with:

python main.py --help

The script can be launched on multiple GPUs for multi-GPU training and evaluation: simply set the --gpus argument to the number of GPUs you want to use. However, for the first launch of the script on a particular dataset (when it pre-processes the data), it is best to leave it at 1, as I did not implement multi-GPU synchronization for dataset pre-processing.
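
For example, to train on 4 GPUs once the dataset has already been pre-processed:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --gpus 4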

An example use is training a model with a certain config file, like so:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained

which will train the UNet-ResNet101 on the CrowdAI Mapping Challenge dataset. The batch size can be adjusted like so:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained -b <new batch size>

When training is done, the script can be launched in eval mode to evaluate the trained model:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval

Depending on the eval parameters of the config file, running this will output results on the test dataset.

Finally, if you wish to compute AP and AR metrics with the COCO API, you can run:

python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval_coco

Launch inference on one image

Make sure the run folder has the correct structure:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- <run_name> | <yyyy-mm-dd hh:mm:ss>
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Execute the main.py script like so (filling in values for the run_name and in_filepath arguments):

python main.py --run_name <run_name> --in_filepath <your_image_filepath>

The outputs will be saved next to the input image.

Download trained models

We provide already-trained models so you can run inference right away. Download here: https://drive.google.com/drive/folders/1poTQbpCz12ra22CsucF_hd_8dSQ1T3eT?usp=sharing. Each model was trained in a "run", whose folder (named with the format <run_name> | <yyyy-mm-dd hh:mm:ss>) you can download at the provided link. You should then place those runs in a folder named "runs" inside the "frame_field_learning" folder like so:

Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31
|   |   |-- mapping_dataset.unet_resnet101_pretrained.field_off.train_val | 2020-09-07 11:54:48
|   |   |-- mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...

Because Google Drive reformats folder names, you have to rename the run folders as above.

Cite:

If you use this code for your own research, please cite:

@InProceedings{Girard_2020_IGARSS,
  title = {{Regularized Building Segmentation by Frame Field Learning}},
  author = {Girard, Nicolas and Smirnov, Dmitriy and Solomon, Justin and Tarabalka, Yuliya},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  address = {Waikoloa, Hawaii},
  year = {2020},
  month = Jul,
}

@misc{girard2020polygonal,
    title={Polygonal Building Segmentation by Frame Field Learning},
    author={Nicolas Girard and Dmitriy Smirnov and Justin Solomon and Yuliya Tarabalka},
    year={2020},
    eprint={2004.14875},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}