Code accompanying our submission to the NeurIPS 2021 Traffic4cast challenge

Overview

Traffic forecasting on traffic movie snippets

This repo contains all code needed to reproduce our approach to the IARAI Traffic4cast 2021 challenge. In the challenge, traffic data is provided in movie format, i.e. as a rasterised map with volume and average speed values evolving over time. The code is based on (and forked from) the code provided by the competition organizers, which can be found here. For further information on the data and the challenge, see the competition website or GitHub.

Installation and setup

To install the repository and all required packages, run

git clone https://github.com/NinaWie/NeurIPS2021-traffic4cast.git
cd NeurIPS2021-traffic4cast

conda env update -f environment.yaml
conda activate t4c

export PYTHONPATH="$PYTHONPATH:$PWD"

Instructions for installation with GPU support can be found in the environment.yaml file.

To reproduce the results and train or test on the original data, download the data and extract it to the subfolder data/raw.
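
As a quick sanity check after extracting, the following minimal Python sketch loads one training file with h5py. The example path and the single-dataset assumption follow the competition's data layout; adapt the city and date to a file you actually downloaded:

import h5py
import numpy as np

path = "data/raw/ANTWERP/training/2019-01-02_ANTWERP_8ch.h5"  # hypothetical example file
with h5py.File(path, "r") as f:
    key = list(f.keys())[0]    # the t4c files contain a single dataset
    data = np.asarray(f[key])
print(data.shape)  # one day of data: (time steps, height, width, channels)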

Test model

Download the weights of our best model here and place them in a new folder named trained_models in the main directory. The path to the checkpoint should then be NeurIPS2021-traffic4cast/trained_models/ckpt_upp_patch_d100.pt.

To create a submission on the test data, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"
STRIDE=10

python baselines/baselines_cli.py --model_str=up_patch --resume_checkpoint='trained_models/ckpt_upp_patch_d100.pt' --radius=50 --stride=$STRIDE --epochs=0 --batch_size=1 --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE --submit

Notes:

  • For our best submission (score 59.93), a stride of 10 is used. This means that patches are extracted from the test data in a densely overlapping manner. However, many more patches per sample have to be predicted, so the runtime increases significantly. We therefore recommend using a stride of 50 for testing (score 60.13 on the leaderboard); see the sketch after these notes for the patch counts.
  • In our paper, we define d as the side length of each patch. In this codebase we set a radius instead. The best performing model was trained with radius 50 corresponding to d=100.
  • Pass the --submit flag whenever a submission file should be created.
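
To get a feeling for the stride/runtime trade-off, here is a back-of-the-envelope Python sketch counting the patches extracted from one 495 x 436 frame. The exact cropping and padding in the code may differ, so treat the numbers as estimates:

def patches_per_frame(h=495, w=436, d=100, stride=10):
    # number of patch positions along each axis for patch side length d
    ny = (h - d) // stride + 1
    nx = (w - d) // stride + 1
    return ny * nx

print(patches_per_frame(stride=10))  # 1360 patches per frame
print(patches_per_frame(stride=50))  # 56 patches per frame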

Train

To train a model from scratch with our approach, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"

python baselines/baselines_cli.py --model_str=up_patch --radius=50 --epochs=1000 --limit=100 --val_limit=10 --batch_size=8 --checkpoint_name='_upp_50_retrained' --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE

Notes:

  • The model will be saved in a folder called ckpt_upp_50_retrained, as specified with the checkpoint_name argument. Checkpoints are saved every 50 epochs and whenever a better validation score is achieved (best.pt). Training can later be resumed (or the model tested) by setting --resume_checkpoint='ckpt_upp_50_retrained/best.pt'.
  • No submission will be created after the run; add the --submit flag to create one.
  • The stride argument is not necessary for training, since it is only relevant for the test data. The validation MSE is computed on the patches, not on a full city.
  • When using our dataset, the number of workers must be set to 0; otherwise the random seeding causes the same files to be loaded in every epoch. This is due to the design of PatchT4CDataset, which randomly loads a set of files each epoch and then keeps them in memory.

Reproduce experiments

In our short paper, further experiments comparing model architectures and different strides are shown. To reproduce the experiment on stride values, execute the following steps:

  • Run python baselines/naive_shifted_stats.py to create artificial test data from the city of Antwerp
  • Adapt the paths in the script to your setup
  • Run python test_script.py
  • Analyse the output csv file results_test_script.csv (see the sketch below)
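
As a starting point for the analysis, here is a minimal pandas sketch; the column names in results_test_script.csv are assumptions, so inspect the header first:

import pandas as pd

df = pd.read_csv("results_test_script.csv")
print(df.columns.tolist())  # check the actual column names first
# assuming columns named "stride" and "mse" exist, average the score per stride:
# print(df.groupby("stride")["mse"].mean())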

For the other experiments, we regularly write the training and validation losses to a file results.json during training (the file is stored in the same folder as the checkpoints).
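
A minimal sketch for inspecting these logs; the exact structure of results.json is an assumption here, so print the keys first and adapt:

import json

with open("ckpt_upp_50_retrained/results.json") as f:  # hypothetical checkpoint folder
    results = json.load(f)
print(list(results))  # inspect the available keys (training / validation losses)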

Other approaches

  • In naive_shifted_stats we implemented a naive approach to the temporal challenge: using averages from the previous year and adapting the values to 2020 with a simple factor that depends on the shift of the input hour. Note that the statistics first have to be computed for each city. A sketch of the idea is given after this list.
  • Further options were added to the configs file, for example u_patch, which is the normal U-Net with patching, and models from the segmentation_models_pytorch (smp) PyPI package. For the latter, smp must be installed with pip install segmentation_models_pytorch.
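
The following is a hypothetical Python sketch of the idea behind the naive approach, not the actual implementation in naive_shifted_stats: the historical average is rescaled by a factor derived from the observed input.

import numpy as np

def naive_prediction(stats_prev_year, current_input, eps=1e-6):
    # stats_prev_year: average traffic movie for the same weekday/time slot in the previous year
    # current_input: the observed input hour in 2020
    # rescale the historical average by how much the 2020 input deviates from it
    factor = current_input.mean() / max(stats_prev_year.mean(), eps)
    return stats_prev_year * factor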