Code accompanying our submission to the NeurIPS 2021 Traffic4cast challenge

Overview

Traffic forecasting on traffic movie snippets

This repo contains all code to reproduce our approach to the IARAI Traffic4cast 2021 challenge. In the challenge, traffic data is provided in movie format, i.e. a rasterised map with volume and average speed values evolving over time. The code is based on (and forked from) the code provided by the competition organizers, which can be found here. For further information on the data and the challenge, we also refer to the competition website or GitHub.
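
Each day of data for one city comes as an HDF5 tensor of shape (288, 495, 436, 8): 288 five-minute time steps over a 495 x 436 grid, with volume and average speed channels per heading. A minimal loading sketch (the file name and the HDF5 key "array" below are assumptions; check them against your download):

import h5py
import numpy as np

# Hypothetical example file: one day of raw data for Antwerp.
with h5py.File("data/raw/ANTWERP/training/2019-01-02_ANTWERP_8ch.h5", "r") as f:
    movie = np.array(f["array"])  # dataset key "array" is an assumption

print(movie.shape, movie.dtype)  # expected: (288, 495, 436, 8) uint8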

Installation and setup

To install the repository and all required packages, run

git clone https://github.com/NinaWie/NeurIPS2021-traffic4cast.git
cd NeurIPS2021-traffic4cast

conda env update -f environment.yaml
conda activate t4c

export PYTHONPATH="$PYTHONPATH:$PWD"

Instructions on installation with GPU support can be found in the yaml file.

To reproduce the results and train or test on the original data, download the data and extract it to the subfolder data/raw.

Test model

Download the weights of our best model here and put them in a new folder named trained_models in the main directory. The path to the checkpoint should then be NeurIPS2021-traffic4cast/trained_models/ckpt_upp_patch_d100.pt.

To create a submission on the test data, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"
STRIDE=10

python baselines/baselines_cli.py --model_str=up_patch --resume_checkpoint='trained_models/ckpt_upp_patch_d100.pt' --radius=50 --stride=$STRIDE --epochs=0 --batch_size=1 --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE --submit

Notes:

  • For our best submission (score 59.93), a stride of 10 is used. This means that patches are extracted from the test data in a very densely overlapping manner. However, many more patches per sample have to be predicted, so the runtime increases significantly. We therefore recommend using a stride of 50 for testing (score 60.13 on the leaderboard); see the sketch after these notes.
  • In our paper, we define d as the side length of each patch. In this codebase, we set a radius instead; the best-performing model was trained with radius 50, corresponding to d=100.
  • The --submit flag must be added to the arguments whenever a submission should be created.
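
How strongly the stride drives the runtime can be estimated with a quick patch count per 495 x 436 city map (a back-of-the-envelope sketch assuming plain sliding-window extraction; the border handling in the codebase may differ):

def num_patches(height=495, width=436, d=100, stride=10):
    # Count d x d patches when sliding a window with the given stride
    # over the city raster, ignoring border handling.
    rows = (height - d) // stride + 1
    cols = (width - d) // stride + 1
    return rows * cols

print(num_patches(stride=10))  # 40 * 34 = 1360 patches per sample
print(num_patches(stride=50))  # 8 * 7 = 56 patches per sample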

Train

To train a model from scratch with our approach, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"

python baselines/baselines_cli.py --model_str=up_patch --radius=50 --epochs=1000 --limit=100 --val_limit=10 --batch_size=8 --checkpoint_name='_upp_50_retrained' --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE

Notes:

  • The model will be saved in a folder called ckpt_upp_50_retrained, as specified with the checkpoint_name argument. The checkpoints will be saved every 50 epochs and whenever a better validation score is achieved (best.pt). Later, training can be resumed (or the model can be tested) by setting --resume_checkpoint='ckpt_upp_50_retrained/best.pt'.
  • No submission will be created after the run; add the --submit flag to create one.
  • The stride argument is not necessary for training, since it is only relevant for the test data. The validation MSE is computed on the patches, not on a full city.
  • In order to use our dataset, the number of workers must be set to 0; otherwise, the random seed is set such that the same files are loaded every epoch. This is due to the design of PatchT4CDataset, which randomly loads a subset of files each epoch and keeps them in memory (see the sketch below).
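
The num_workers note boils down to the following pattern (a hypothetical, heavily simplified sketch of the idea behind PatchT4CDataset, not the actual class):

import random
from torch.utils.data import Dataset

class InMemoryPatchDataset(Dataset):
    # Sketch: a random subset of files is loaded into memory and re-drawn
    # between epochs. With num_workers > 0, each worker process would get
    # its own copy and RNG state, so the subset would stop changing.

    def __init__(self, files, files_per_epoch=100):
        self.files = files
        self.files_per_epoch = min(files_per_epoch, len(files))
        self.resample()

    def resample(self):
        # Call once per epoch to draw and cache a fresh random subset.
        chosen = random.sample(self.files, self.files_per_epoch)
        self.cache = [self.load(f) for f in chosen]

    def load(self, path):
        # Stub loader for the sketch; the real dataset reads h5 movies.
        return path

    def __len__(self):
        return len(self.cache)

    def __getitem__(self, idx):
        return self.cache[idx]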

Reproduce experiments

In our short paper, further experiments comparing model architectures and different strides are shown. To reproduce the experiment on stride values, execute the following steps:

  • Run python baselines/naive_shifted_stats.py to create artificial test data from the city of Antwerp
  • Adapt the paths in test_script.py
  • Run python test_script.py
  • Analyse the output CSV file results_test_script.csv

For the other experiments, we regularly write training and validation losses to a file results.json during training (the file is stored in the same folder as the checkpoints).
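
To inspect those losses after a run, something like the following can be used (the key names "train_loss" and "val_loss" are assumptions; adapt them to the actual structure of your results.json):

import json
import matplotlib.pyplot as plt

# Hypothetical checkpoint folder from the training command above.
with open("ckpt_upp_50_retrained/results.json") as f:
    results = json.load(f)

plt.plot(results["train_loss"], label="train")  # assumed key
plt.plot(results["val_loss"], label="validation")  # assumed key
plt.xlabel("epoch")
plt.ylabel("MSE")
plt.legend()
plt.show()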

Other approaches

  • In naive_shifted_stats, we implement a naive approach to the temporal challenge: using averages of the previous year and adapting the values to 2020 with a simple factor dependent on the shift of the input hour (see the sketch after this list). However, the statistics first have to be computed for each city.
  • Further options were added to the configs file, for example u_patch, which is the normal U-Net with patching, and models from the segmentation_models_pytorch (smp) PyPI package. For the latter, smp must be installed with pip install segmentation_models_pytorch.
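
As a rough illustration of the naive temporal baseline (a hedged sketch only; function and argument names are made up, and the actual scaling in naive_shifted_stats.py may differ):

import numpy as np

def naive_temporal_prediction(input_2020, input_stats_2019, output_stats_2019):
    # Scale last year's average output by how much the current (shifted)
    # 2020 input deviates from last year's input at the same hour.
    eps = 1e-6
    factor = (np.mean(input_2020) + eps) / (np.mean(input_stats_2019) + eps)
    return output_stats_2019 * factor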