Baselines for TrajNet++

Overview

TrajNet++: The Trajectory Forecasting Framework

PyTorch implementation of Human Trajectory Forecasting in Crowds: A Deep Learning Perspective

docs/train/cover.png

TrajNet++ is a large-scale, interaction-centric trajectory forecasting benchmark comprising explicit agent-agent scenarios. Our framework provides proper indexing of trajectories by defining a hierarchy of trajectory categorization. In addition, we provide an extensive evaluation system so that the gathered methods can be compared fairly. In our evaluation, we go beyond the standard distance-based metrics and introduce novel metrics that measure a model's ability to emulate pedestrian behavior in crowds. Finally, we provide code implementations of more than 10 popular human trajectory forecasting baselines.

Data Setup

The detailed step-by-step procedure for setting up the TrajNet++ framework can be found here

Converting External Datasets

To convert external datasets into the TrajNet++ framework, refer to this guide

Training Models

LSTM

The training script and its help menu: python -m trajnetbaselines.lstm.trainer --help

Run Example

## Our Proposed D-LSTM
python -m trajnetbaselines.lstm.trainer --type directional --augment

## Social LSTM
python -m trajnetbaselines.lstm.trainer --type social --augment --n 16 --embedding_arch two_layer --layer_dims 1024
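
As a minimal sketch of the idea behind the directional pooling used by D-LSTM (selected with --type directional): a grid is centred on the primary pedestrian and each cell stores the relative velocity of the neighbour that falls into it. The shapes and the grid_size/cell_side values below are illustrative assumptions, not the exact trajnetbaselines implementation (which differs in details such as how multiple neighbours per cell are handled).

import torch

def directional_grid(primary_pos, primary_vel, neigh_pos, neigh_vel,
                     grid_size=16, cell_side=0.6):
    # primary_pos, primary_vel: (2,); neigh_pos, neigh_vel: (N, 2)
    # Returns a (grid_size, grid_size, 2) tensor of relative velocities.
    grid = torch.zeros(grid_size, grid_size, 2)
    half = grid_size * cell_side / 2.0
    rel_pos = neigh_pos - primary_pos      # neighbour positions w.r.t. the primary
    rel_vel = neigh_vel - primary_vel      # relative velocities (the directional cue)
    cells = ((rel_pos + half) / cell_side).long()
    for i in range(rel_pos.size(0)):
        x, y = cells[i].tolist()
        if 0 <= x < grid_size and 0 <= y < grid_size:
            grid[x, y] = rel_vel[i]        # empty cells stay zero
    return grid

In the model, such a grid is typically flattened, embedded, and fed to the LSTM at every time step alongside the primary pedestrian's own motion.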

GAN

The training script and its help menu: python -m trajnetbaselines.sgan.trainer --help

Run Example

## Social GAN (L2 Loss + Adversarial Loss)
python -m trajnetbaselines.sgan.trainer --type directional --augment

## Social GAN (Variety Loss only)
python -m trajnetbaselines.sgan.trainer --type directional --augment --d_steps 0 --k 3
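
With --d_steps 0 the discriminator is not updated, so training with --k 3 relies on the variety (best-of-k) loss from Social GAN: draw k samples from the generator and back-propagate only through the sample closest to the ground truth. The sketch below assumes a hypothetical generator(obs) that returns one stochastic (pred_len, 2) trajectory per call; it is not the repository's actual interface.

import torch

def variety_loss(generator, obs, gt_future, k=3):
    # Best-of-k L2 loss: only the closest of the k samples receives gradient.
    losses = []
    for _ in range(k):
        pred = generator(obs)                                    # one stochastic sample
        losses.append(((pred - gt_future) ** 2).sum(dim=-1).mean())
    return torch.stack(losses).min()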

Evaluation

The evaluation script and its help menu: python -m evaluator.trajnet_evaluator --help

Run Example

## TrajNet++ evaluator (saves model predictions. Useful for submission to TrajNet++ benchmark)
python -m evaluator.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

## Fast Evaluator (does not save model predictions)
python -m evaluator.fast_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

More details regarding TrajNet++ evaluator are provided here

Evaluation on the data splits is based on the trajectory categorization defined by the TrajNet++ framework.
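
As a rough illustration of how such a categorization can single out trivial scenes, the toy check below marks a scene as linear when a constant-velocity extrapolation of the observed part already explains the future well. This is a simplifying assumption for illustration only, not the official categorization code shipped with the trajnetplusplusdataset tools.

import numpy as np

def is_linear_scene(primary_traj, obs_len=9, tol=0.5):
    # primary_traj: (T, 2) array in metres; the first obs_len points are observed.
    # A scene is treated as 'linear' if constant-velocity extrapolation of the
    # observation stays within `tol` metres (ADE) of the ground-truth future.
    obs, future = primary_traj[:obs_len], primary_traj[obs_len:]
    vel = obs[-1] - obs[-2]                              # last observed velocity
    steps = np.arange(1, len(future) + 1)[:, None]
    cv_pred = obs[-1] + steps * vel                      # constant-velocity forecast
    return np.linalg.norm(cv_pred - future, axis=-1).mean() < tol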

Results

Unimodal comparison of interaction-encoder designs on the interacting trajectories of the TrajNet++ real-world dataset. Errors are reported as ADE/FDE in meters; collisions are reported as mean % (std. dev. %) over 5 independent runs. Our goal is to reduce collisions in model predictions without compromising the distance-based metrics.

Method          ADE / FDE (m)   Collisions % (std.)
LSTM            0.60 / 1.30     13.6 (0.2)
S-LSTM          0.53 / 1.14      6.7 (0.2)
S-Attn          0.56 / 1.21      9.0 (0.3)
S-GAN           0.64 / 1.40      6.9 (0.5)
D-LSTM (ours)   0.56 / 1.22      5.4 (0.3)
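
For clarity, here is a minimal sketch of how these metrics can be computed for a single scene: ADE/FDE over the primary pedestrian's prediction and a prediction-collision check against the ground-truth neighbours. The 0.1 m collision threshold and the absence of sub-step interpolation are simplifying assumptions; the official evaluator defines the exact procedure.

import numpy as np

def ade_fde(pred, gt):
    # pred, gt: (pred_len, 2) positions of the primary pedestrian in metres.
    dists = np.linalg.norm(pred - gt, axis=-1)
    return dists.mean(), dists[-1]           # ADE, FDE

def collides(primary_pred, neigh_gt, threshold=0.1):
    # primary_pred: (pred_len, 2); neigh_gt: list of (pred_len, 2) arrays
    # (NaN where a neighbour is not observed). Returns True on a collision.
    for neigh in neigh_gt:
        d = np.linalg.norm(primary_pred - neigh, axis=-1)
        d = d[~np.isnan(d)]
        if d.size and d.min() < threshold:
            return True
    return False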

Interpreting Forecasting Models

docs/train/LRP.gif

Visualizations of the decision-making of social interaction modules using layer-wise relevance propagation (LRP). The darker a yellow circle, the higher the weight that the primary pedestrian (blue) assigns to the corresponding neighbour (yellow).

Code implementation for explaining trajectory forecasting models using LRP can be found here
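
As background, the sketch below shows the LRP epsilon rule for a single linear layer, redistributing the relevance of each output neuron to the inputs in proportion to their contributions. This is a generic illustration of the rule, not the repository's LRP code, which propagates relevance through the full interaction module.

import torch

def lrp_linear(a, weight, bias, relevance_out, eps=1e-6):
    # Epsilon-rule LRP for y = weight @ a + bias.
    # a: (in,), weight: (out, in), bias: (out,), relevance_out: (out,).
    z = a * weight                                       # contributions z_ji = W_ji * a_i
    zs = z.sum(dim=1) + bias                             # pre-activations, shape (out,)
    denom = zs + eps * torch.where(zs >= 0, torch.ones_like(zs), -torch.ones_like(zs))
    return (z * (relevance_out / denom).unsqueeze(1)).sum(dim=0)   # input relevance (in,)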

Benchmarking Models

We host the TrajNet++ Challenge on AICrowd, allowing researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. We rely on the spirit of crowdsourcing and encourage researchers to submit their predicted sequences to our benchmark, so that the quality of trajectory forecasting models keeps improving on increasingly challenging scenarios.

Citation

If you find this code useful in your research, please cite:

@article{Kothari2020HumanTF,
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective},
  author={Parth Kothari and S. Kreiss and Alexandre Alahi},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.03639}
}
Comments
  • Problem training lstm

    Problem training lstm

    Hi, while trying to train Social LSTM I encountered this error: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate

    Which is weird because the older version of the repo works fine with the same dataset.

    Also, I tried switching to PyTorch 1.0.0, but that doesn't work either because of Flatten: AttributeError: module 'torch.nn' has no attribute 'Flatten'

    Can you please tell me what's going wrong? Thanks

    opened by sanmoh99 8
  • Data normalization? Minor errors and minor suggestions

    Data normalization? Minor errors and minor suggestions

    Hi,

    First of all congratulations on this fruitful work.

    Then, I have a technical question. It seems that you don't normalize the data in any of the steps. Why, then, did you choose standard Gaussian noise? It should produce samples with high variance w.r.t. k.

    After downloading and installing the social force simulator, I ran the trainer and it threw an error: ModuleNotFoundError: No module named 'socialforce.fieldofview'

    After changing to: from socialforce.field_of_view import FieldOfView

    Everything worked fine.

    opened by tmralmeida 2
  • Can't compute collision percentages for Kalman Filter baseline

    Can't compute collision percentages for Kalman Filter baseline

    Hello. Hope everyone that is reading this is doing well.

    I was trying to run the trajnet evaluation code for the Kalman filter implementation, but I get "-1" for the Col-I metric.

    From what I read in #15 , this is because the number of predicted tracks for the neighbours is not equal to the number of ground-truth tracks. Upon closer inspection, I was obtaining additional elements in the list of tracks that corresponded to empty lists (no actual positions).

    While I'm not sure why this happened, I think it might be related to this issue, where the start and end frames of different scenes are not completely separate in the data converted with the Trajnet++ dataset code (https://github.com/vita-epfl/trajnetplusplusdataset).

    Can someone confirm that this is the case? I'm assuming I'm not the only one to have come across this issue. I could make a script to perform such a separation and see if that is the actual problem. If I don't find any existing code to do so, I suppose that's my best option.

    opened by pedro-mgb 2
  • Issue about plot_log.py

    Issue about plot_log.py

    Dear Author, when I use plot_log.py, only the resulting accuracy picture is blank. Its name is xx.val.png, as shown in the figure below. What should I do to make the accuracy show up correctly? Thank you for your reply.

    opened by xieyunjiao 2
  • Issue about fast_evaluator and trajnet_evaluator

    Issue about fast_evaluator and trajnet_evaluator

    Hello, I've been using Trajnet++ to evaluate trained models recently. Whether I use fast_evaluator or trajnet_evaluator, my Col-I is always -1. I read that part of the code, and the condition for Col-I to be computed is num_gt_neigh == num_predicted_neigh, but I don't know how to modify the code so that Col-I is computed. Thank you very much for answering my questions.

    opened by xieyunjiao 2
  • RuntimeError: CUDA error: out of memory

    RuntimeError: CUDA error: out of memory

    Hi, when I run trajnet_evaluator.py after training with CUDA, I get: RuntimeError: CUDA error: out of memory

    Is this a problem on my end, or can this code only be run on the CPU?

    opened by 396559551 2
  • No module named 'socialforce' ??

    No module named 'socialforce' ??

    Hi, first of all, thank you for sharing this great work.

    "python -m trajnetbaselines.lstm.trainer --type directional --augment" I just ran this command but I have faced the below error. No module named 'socialforce'

    Is there something I should do install or include? Thank you,

    opened by moonsh 2
  • Problem running Sgan model

    Problem running Sgan model

    Hello, I've tried to run the code and encountered an error regarding the layer_dims parameter. The help section says to give it as an array [--layer_dims [LAYER_DIMS [LAYER_DIMS ...]]], but I still can't train the model.

    I run the following command: python -m trajnetbaselines.sgan.trainer --batch_size 1 --lr 1e-3 --obs_length 9 --pred_length 12 --type 'social' --norm_pool --layer_dims 10 10

    and get this error:

    Traceback (most recent call last):
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 533, in <module>
        main()
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 529, in main
        trainer.loop(train_scenes, val_scenes, train_goals, val_goals, args.output, epochs=args.epochs, start_epoch=start_epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 73, in loop
        self.train(train_scenes, train_goals, epoch)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 141, in train
        loss, _ = self.train_batch(scene, scene_goal, step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 210, in train_batch
        rel_output_list, outputs, scores_real, scores_fake = self.model(observed, goals, prediction_truth, step_type=step_type)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 77, in forward
        rel_pred_scene, pred_scene = self.generator(observed, goals, prediction_truth, n_predict)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 283, in forward
        hidden_cell_state = self.adding_noise(hidden_cell_state)
      File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 154, in adding_noise
        noise = torch.zeros(self.noise_dim, device=hidden_cell_state.device)
    AttributeError: 'tuple' object has no attribute 'device'

    I appreciate it if you can tell me where i went wrong or give an example command that trains the model.

    Thanks in advance

    opened by sanmoh99 2
  • Challenge submission: "No more retries left"

    Challenge submission: "No more retries left"

    I just submitted a zip file with my predictions to the AIcrowd challenge. However, the submission failed with the message: "No more retries left". What does this mean?

    opened by S-Hauri 1
  •  FDE score of 1.14 with social LSTM

    FDE score of 1.14 with social LSTM

    Hi! I am trying to get the FDE score of 1.14 with social LSTM.

    Did you train on the whole (with cff) training dataset? How many epochs? And with which parameters?

    Thanks in advance Many greetings

    opened by Mirorrn 1
  • Fix for Kalman filter to also output trajectories of neighbours

    Fix for Kalman filter to also output trajectories of neighbours

    Summary

    Minor but important fix in the Kalman filter model, so that it also outputs the trajectories of the neighbours.

    Content

    The variable that contained the neighbour predictions (neighbour_tracks - see line 8 of the changed file for its initialization) was being overwritten, so those tracks ended up being lost. This PR removes the line in which the variable is overwritten.

    Effect

    This was causing the KF to output only the trajectory of the primary pedestrian, which made the computation of the Col-I metric impossible.

    Related PRs/Issues

    This (partially) addresses #19.

    opened by pedro-mgb 1
  • Generative loss stuck

    Generative loss stuck

    Hi,

    Regarding the Social GAN model and while playing with your code, I found something that I couldn't understand.

    E.g while running:

    python -m trajnetbaselines.sgan.trainer --k 1
    

    This means we are running a vanilla GAN where the generator outputs one sample (the most common GAN setting, without the L2 loss). In this setting, the GAN loss stays at 1.38 throughout training, so the vanilla GAN (with only the adversarial loss) does not seem capable of modeling the data.

    My question is: to what extent are we taking advantage of the GAN framework? It seems that we are only training an LSTM predictor (when running under the aforementioned conditions).

    opened by tmralmeida 0
  • Inclusion of Social Anchor model as baseline?

    Inclusion of Social Anchor model as baseline?

    Hello!

    I just saw the release of a recent paper, Interpretable Social Anchors for Human Trajectory Forecasting in Crowds, and it seems like a very intuitive idea for modelling crowd behaviour.

    I was wondering whether an open-source version of the model will be available in the future, and whether it might be added to this repository as one of the baselines?

    Thank you!

    opened by pedro-mgb 2