Kaggle Lyft Motion Prediction for Autonomous Vehicles 4th place solution

Overview

Lyft Motion Prediction for Autonomous Vehicles

Code for the 4th place solution of Lyft Motion Prediction for Autonomous Vehicles on Kaggle.

Directory structure

input               --- Please locate data here
src
|-ensemble          --- For 4. Ensemble scripts
|-lib               --- Library codes
|-modeling          --- For 1. training, 2. prediction and 3. evaluation scripts
  |-results         --- Training, prediction and evaluation results will be stored here
README.md           --- This instruction file
requirements.txt    --- For python library versions

Hardware (The following specs were used to create the original solution)

  • Ubuntu 18.04 LTS
  • 32 CPUs
  • 128GB RAM
  • 8 x NVIDIA Tesla V100 GPUs

Software (Python packages are detailed separately in requirements.txt):

Python 3.8.5, CUDA 10.1.243, cuDNN 7.6.5, NVIDIA drivers v.55.23.0. An equivalent Dockerfile for the GPU installs can use nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 as the base image.

We also installed OpenMPI 4.0.4 for running PyTorch distributed training.

Python Library

Deep learning framework, base library

  • torch==1.6.0+cu101
  • torchvision==0.7.0
  • l5kit==1.1.0
  • cupy-cuda101==7.0.0
  • pytorch-ignite==0.4.1
  • pytorch-pfn-extras==0.3.1

CNN models

Data processing/augmentation

  • albumentations==0.4.3
  • scikit-learn==0.22.2.post1

We also installed NVIDIA apex: https://github.com/nvidia/apex

Please refer to requirements.txt for more details.

Environment Variable

We recommend setting the following environment variables for better performance.

export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1

Data setup

Please download the competition data:

For the lyft-motion-prediction-autonomous-vehicles dataset, extract it under the input/lyft-motion-prediction-autonomous-vehicles directory.

For the lyft-full-training-set data, which contains only train_full.zarr, place it under input/lyft-motion-prediction-autonomous-vehicles/scenes as follows:

input
|-lyft-motion-prediction-autonomous-vehicles
  |-scenes
    |-train_full.zarr (Place here!)
    |-train.zarr
    |-validate.zarr
    |-test.zarr
    |-... (other data)
  |-... (other data)
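
After placing the data, you can quickly verify that l5kit can find it. The following is a minimal sanity-check sketch (not part of this repository's scripts), assuming l5kit==1.1.0 and that it is run from the repository root:

# Illustrative check only; not used by the training pipeline.
import os
from l5kit.data import LocalDataManager, ChunkedDataset

# l5kit resolves relative zarr paths against L5KIT_DATA_FOLDER.
os.environ["L5KIT_DATA_FOLDER"] = "input/lyft-motion-prediction-autonomous-vehicles"
dm = LocalDataManager(None)
zarr_dataset = ChunkedDataset(dm.require("scenes/train.zarr")).open()
print(len(zarr_dataset.scenes), len(zarr_dataset.frames), len(zarr_dataset.agents))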

Pipeline

Our submission pipeline consists of 1. Training, 2. Prediction, 3. Ensemble.

Training with training/validation dataset

The training script is located under src/modeling.

train_lyft.py is the training script; the training configuration is specified by a YAML file under the flags directory.

[Note] If you want to run training from scratch, please remove the results folder first. The training script tries to resume from the results folder when resume_if_possible=True is set.

[Note] The first training run creates a cache so that training runs efficiently. This cache creation must be done in a single process, so please run single-GPU training until the training loop starts. The cache is created directly under the input directory.

Once the cache is created, we can run multi-GPU training using the same train_lyft.py script with the mpiexec command.

$ cd src/modeling

# Single GPU training (please run this first, to create the input data cache)
$ python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml

# Multi GPU training (-n 8 for 8 GPU training)
$ mpiexec -x MASTER_ADDR=localhost -x MASTER_PORT=8899 -n 8 \
  python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml

We trained 9 different models for the final submission. Each training configuration can be found in src/modeling/flags, and the training results are stored in src/modeling/results.

Prediction for test dataset

predict_lyft.py under src/modeling executes the prediction for test data.

Specify --out as the training result directory; the script uses the trained model in that directory for inference. Please set --convert_world_from_agent true when using l5kit==1.1.0 or later.
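
For context, this flag presumably converts the predicted trajectories from the agent's local frame into world coordinates (the frame expected by the submission format). A rough sketch of such a conversion (illustrative only, not the repository's implementation; it assumes the "world_from_agent" and "centroid" arrays returned by l5kit's AgentDataset):

# Sketch of an agent-frame -> world-frame conversion; illustrative only.
import numpy as np
from l5kit.geometry import transform_points

def agent_to_world(pred_agent, world_from_agent, centroid):
    # pred_agent: (future_len, 2) trajectory in the agent frame.
    # world_from_agent: (3, 3) transform matrix for this agent.
    # centroid: world position of the agent at the prediction frame.
    world = transform_points(pred_agent, world_from_agent)
    # The submission expects displacements relative to the agent's centroid.
    return world - centroid[:2]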

$ cd src/modeling
$ python predict_lyft.py --out results/20201104_cosine_aug --use_ema true --convert_world_from_agent true

Predicted results are stored under the --out directory. For example, results/20201104_cosine_aug/prediction_ema/submission.csv is created with the above setting.

We executed this prediction for all 9 trained models. This submission.csv file can be submitted as a single-model prediction.

(Optional) Evaluation with validation dataset

eval_lyft.py under src/modeling executes the evaluation on the validation data (chopped data).

$ cd src/modeling
$ python eval_lyft.py --out results/20201104_cosine_aug --use_ema true

The script reports the validation error, which is useful for local evaluation of model performance.
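
The validation error presumably corresponds to the competition metric, the multi-modal negative log-likelihood. For reference, l5kit exposes this metric directly; below is a small illustrative example with placeholder arrays (not how eval_lyft.py is invoked):

# Placeholder example of the per-agent competition metric; shapes are assumptions.
import numpy as np
from l5kit.evaluation.metrics import neg_multi_log_likelihood

ground_truth = np.zeros((50, 2))        # true future positions (50 timesteps)
pred = np.zeros((3, 50, 2))             # 3 predicted modes
confidences = np.full(3, 1.0 / 3.0)     # per-mode confidences, summing to 1
avails = np.ones(50)                    # availability mask for the ground truth
print(neg_multi_log_likelihood(ground_truth, pred, confidences, avails))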

Ensemble

Finally, all trained models' predictions are ensembled using GMM (Gaussian Mixture Model) fitting.
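
Roughly, the idea is to pool every model's predicted modes for each agent and fit a 3-component Gaussian mixture over them; the component means become the ensembled trajectories and the mixture weights become the confidences. A minimal sketch of this idea (not the actual ensemble_test.py implementation; the shapes and the handling of per-mode confidences are simplifying assumptions):

# Minimal, illustrative GMM ensembling sketch for a single agent.
import numpy as np
from sklearn.mixture import GaussianMixture

def ensemble_one_agent(trajs, n_modes=3):
    # trajs: (num_models * num_modes, future_len, 2), every predicted
    # trajectory from every model for this agent, pooled together.
    n, future_len, _ = trajs.shape
    X = trajs.reshape(n, future_len * 2)      # one flat sample per trajectory
    gmm = GaussianMixture(n_components=n_modes, covariance_type="diag",
                          random_state=0).fit(X)
    ensembled = gmm.means_.reshape(n_modes, future_len, 2)
    confidences = gmm.weights_                # mixture weights as mode confidences
    return ensembled, confidences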

The ensemble script is located under src/ensemble.

# Please execute from root of this repository.
$ python src/ensemble/ensemble_test.py --yaml_filepath src/ensemble/flags/20201126_ensemble.yaml

The location of the final ensembled submission.csv is specified in the YAML file. You can submit this submission.csv by uploading it as a Kaggle dataset and submitting via a Kaggle kernel. Please follow "Save your time, submit without kernel inference" for the submission procedure.
