Personalized Trajectory Prediction via Distribution Discrimination (DisDis)

The official PyTorch code implementation of "Personalized Trajectory Prediction via Distribution Discrimination" (ICCV 2021), arXiv.

Introduction

The motivation of DisDis is to learn a latent distribution that represents different motion patterns, where each person's motion pattern is personalized by his/her habits. We learn the distribution discriminator in a self-supervised manner, encouraging the latent variable distributions of the same motion pattern to be similar while pushing those of different motion patterns apart. DisDis is a plug-and-play module that can be integrated with existing multi-modal stochastic predictive models to enhance the discriminative ability of the latent distribution.

Besides, we propose a new evaluation metric for stochastic trajectory prediction methods: the probability cumulative minimum distance (PCMD) curve, which comprehensively and stably evaluates the learned model and latent distribution. It cumulatively selects the minimum distance between the sampled trajectories and the ground-truth trajectory, going from high probability to low probability. PCMD considers predictions together with their corresponding probabilities and evaluates the prediction model under the whole latent distribution.
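For readers who want to adapt the metric, below is a rough, illustrative NumPy sketch of how a PCMD-style curve can be computed for a single agent; the actual metric is implemented in evaluate_pcmd.py, and every name here (pcmd_curve, samples, probs, gt) is made up for the example.

import numpy as np

def pcmd_curve(samples, probs, gt):
    # samples: (K, T, 2) sampled future trajectories
    # probs:   (K,) probability of each sample under the latent distribution
    # gt:      (T, 2) ground-truth future trajectory
    order = np.argsort(-probs)                                  # most probable samples first
    ade = np.linalg.norm(samples - gt, axis=-1).mean(axis=-1)   # (K,) average displacement error per sample
    return np.minimum.accumulate(ade[order])                    # cumulative minimum from high to low probability

Averaging such curves over all test agents gives one value per number of retained samples; an FDE-based curve can be obtained analogously by replacing the per-sample distance.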

Figure 1. Training process of the DisDis method. DisDis regards the latent distribution as the motion pattern and optimizes trajectories with the same motion pattern to be close while pushing those with different patterns away; the same latent distributions are shown in the same color. For a given history trajectory, DisDis predicts a latent distribution as the motion pattern and uses it as the discrimination signal to jointly optimize the embeddings of trajectories and latent distributions.
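As a rough sketch of this discrimination idea (not the repository's exact loss), the snippet below scores every trajectory embedding against every predicted latent Gaussian in a batch and applies a contrastive cross-entropy so that each embedding is most likely under its own distribution; all names (distribution_discrimination_loss, mu, log_sigma, traj_emb) are illustrative.

import torch
import torch.nn.functional as F

def distribution_discrimination_loss(mu, log_sigma, traj_emb):
    # mu, log_sigma: (B, D) parameters of each person's predicted latent Gaussian
    # traj_emb:      (B, D) embedding of each person's future trajectory
    diff = traj_emb.unsqueeze(1) - mu.unsqueeze(0)               # (B, B, D) every embedding vs. every distribution
    inv_var = torch.exp(-2.0 * log_sigma).unsqueeze(0)           # (1, B, D)
    log_density = -0.5 * (diff ** 2 * inv_var).sum(-1) - log_sigma.sum(-1).unsqueeze(0)  # (B, B), up to a constant
    target = torch.arange(mu.size(0), device=mu.device)          # matching pairs lie on the diagonal
    return F.cross_entropy(log_density, target)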

Requirements

  • Python 3.6+
  • PyTorch 1.4

To install all the dependencies, you can run the command below.

pip install -r requirements.txt

Our code is based on Trajectron++; please cite it as well if you find it useful.

Dataset

The preprocessed data splits for the ETH and UCY datasets are in experiments/pedestrians/raw/. Before training and evaluation, execute the following to process the data. This will generate .pkl files in experiments/processed.

cd experiments/pedestrians
python process_data.py

The train/validation/test splits are the same as those used in Social GAN.

Model training

You can train the model on the zara1 dataset as follows:

python train.py --eval_every 10 --vis_every 1 --train_data_dict zara1_train.pkl --eval_data_dict zara1_val.pkl --offline_scene_graph yes --preprocess_workers 2 --log_dir ../experiments/pedestrians/models --log_tag _zara1_disdis --train_epochs 100 --augment --conf ../experiments/pedestrians/models/config/config_zara1.json --device cuda:0

The pre-trained models can be found in experiments/pedestrians/models/, and the model configurations are in experiments/pedestrians/models/config/.

Model evaluation

To reproduce the PCMD results in Table 1, you can use

python evaluate_pcmd.py --node_type PEDESTRIAN --data ../processed/zara1_test.pkl --model models/zara1_pretrain --checkpoint 100

To use the most-likely strategy, you can use

python evaluate_mostlikely_z.py --node_type PEDESTRIAN --data ../processed/zara1_test.pkl --model models/zara1_pretrain --checkpoint 100

You are welcome to use our PCMD evaluation metric in your experiments; it provides a more comprehensive and stable evaluation of stochastic trajectory prediction methods.

Citation

The BibTeX entry for our paper "Personalized Trajectory Prediction via Distribution Discrimination" is provided below:

@inproceedings{Disdis,
  title={Personalized Trajectory Prediction via Distribution Discrimination},
  author={Chen, Guangyi and Li, Junlong and Zhou, Nuoxing and Ren, Liangliang and Lu, Jiwen},
  booktitle={ICCV},
  year={2021}
}