PyTorch code for the paper "FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras"

Overview

FIERY

This is the PyTorch implementation for inference and training of the future prediction bird's-eye view network as described in:

FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras

Anthony Hu, Zak Murez, Nikhil Mohan, Sofía Dudas, Jeffrey Hawke, Vijay Badrinarayanan, Roberto Cipolla and Alex Kendall

preprint (2021)
Blog post

FIERY future prediction
Multimodal future predictions by our bird’s-eye view network.
Top two rows: RGB camera inputs. The predicted future trajectories and segmentations are projected to the ground plane in the images.
Bottom row: future instance prediction in bird’s-eye view in a 100m×100m capture size around the ego-vehicle, which is indicated by a black rectangle in the center.

If you find our work useful, please consider citing:

@inproceedings{fiery2021,
  title     = {{FIERY}: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras},
  author    = {Anthony Hu and Zak Murez and Nikhil Mohan and Sofía Dudas and 
               Jeffrey Hawke and Vijay Badrinarayanan and Roberto Cipolla and Alex Kendall},
  booktitle = {arXiv preprint},
  year = {2021}
}

Setup

  • Create the conda environment by running conda env create.

🏄 Prediction

🔥 Pre-trained models

All the configs are in the fiery/configs folder.

Config | Dataset | Past context | Future horizon | BEV size | IoU | VPQ
baseline.yml | NuScenes | 1.0s | 2.0s | 100m×100m (50cm res.) | 37.0 | 29.5
lyft/baseline.yml | Lyft | 0.8s | 2.0s | 100m×100m (50cm res.) | 36.6 | 29.5
literature/pon_setting.yml | NuScenes | 0.0s | 0.0s | 100m×50m (25cm res.) | 39.9 | -
literature/lift_splat_setting.yml | NuScenes | 0.0s | 0.0s | 100m×100m (50cm res.) | 36.7 | -
literature/fishing_setting.yml | NuScenes | 1.0s | 2.0s | 32.0m×19.2m (10cm res.) | 58.5 | -

🏊 Training

To train the model from scratch on NuScenes:

  • Run python train.py --config fiery/configs/baseline.yml DATASET.DATAROOT ${NUSCENES_DATAROOT}
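
The trailing DATASET.DATAROOT ${NUSCENES_DATAROOT} pair follows the yacs-style KEY VALUE override convention. Below is a minimal sketch of how such command-line overrides are typically merged into a config; it is illustrative only and not necessarily the exact config code used in this repository.

    from yacs.config import CfgNode as CN

    # Hypothetical minimal default config, only to illustrate the override syntax.
    cfg = CN()
    cfg.DATASET = CN()
    cfg.DATASET.DATAROOT = './nuscenes'

    # Trailing "KEY VALUE" pairs from the command line are merged like this:
    cfg.merge_from_list(['DATASET.DATAROOT', '/data/nuscenes'])
    print(cfg.DATASET.DATAROOT)  # -> /data/nuscenes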

🙌 Credits

Big thanks to Piotr Sokólski (@pyetras) for the panoptic metric implementation, and to Hannes Liik (@hannesliik) for the awesome future trajectory visualisation on the ground plane.

Comments
  • loss < 0

    Hi, thanks for your great work. I have a question about the loss. When I trained the model on my data, the loss was < 0 at epoch 0. Is this normal? Config: baseline.yml from the project. [screenshot]

    opened by YiJiangYue 6
  • All losses become NaN after about 1 epoch of training

    Hi,

    Thank you for sharing this great work!

    When I ran the training code, I got NaN for all losses after about one epoch of training. The problem reproduces every time I run the training code (I have tested it three times).

    I used the same anaconda environment setup and the same hyper-parameters. (The only difference is that my PyTorch version is 1.7.1 and yours is 1.7.0; all other modules are the same as yours.)

    Please share your idea about this problem, if you have any. Thanks!

    opened by jwookyoo 6
  • Question about the projection_to_birds_eye_view function

    Congratulations on your great work!

    I want to follow your work for future research and I have some questions about your released code below:

    In the fiery.py file of your code, can you provide more details about the get_geometry function and the projection_to_birds_eye_view function? I'm confused about how they actually work, especially the code shown in the red box below. [screenshot]

    Thank you very much. Looking forward to your reply!

    opened by taylover-pei 5
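
    As a rough reading of that part of the model (a minimal sketch, not the repository's exact implementation): get_geometry lifts every camera feature to a 3D point in the ego frame using the depth bins, intrinsics and extrinsics, and projection_to_birds_eye_view then discretises those points into BEV cells and sum-pools the features that fall into the same cell, roughly along these lines (all names below are illustrative):

        import torch

        def pool_to_bev(points, feats, bev_start, bev_resolution, bev_dim):
            # points: (N, 3) 3D locations in the ego frame; feats: (N, C) camera features.
            # Discretise x/y into integer BEV cell indices.
            idx = ((points[:, :2] - bev_start) / bev_resolution).long()   # (N, 2)
            valid = ((idx >= 0) & (idx < bev_dim)).all(dim=1)
            idx, feats = idx[valid], feats[valid]
            flat = idx[:, 0] * bev_dim + idx[:, 1]                        # one flat index per cell
            bev = feats.new_zeros(bev_dim * bev_dim, feats.shape[1])
            bev.index_add_(0, flat, feats)                                # sum-pool features per cell
            return bev.view(bev_dim, bev_dim, -1)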
  • AttributeError: 'FigureCanvasTkAgg' object has no attribute 'renderer'

    Hello, I recently found your great work and wanted to try the "Visualisation" part locally to check the results, but after running python visualise.py --checkpoint ${CHECKPOINT_PATH} my terminal shows the following error: [screenshot]

    I tried to solve it by searching on Google, but that did not help. Could you help me if you know how to solve it? Many thanks.

    opened by Ianpengg 3
  • May I know where is the checkpoint getting saved?

    I don't see where the checkpoint is saved, and while resuming training I get an error: "size mismatch for model.temporal_model.model.1.aggregation.0.conv.weight".

    opened by pranavi77 2
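
    For context: PyTorch Lightning writes checkpoints under the trainer's default root directory (typically lightning_logs/version_*/checkpoints/) unless a ModelCheckpoint callback points somewhere else, and a size mismatch when resuming usually means the checkpoint was produced with a different model configuration. A hedged sketch of pinning the save location with standard Lightning APIs (not necessarily how this repository configures it):

        import pytorch_lightning as pl
        from pytorch_lightning.callbacks import ModelCheckpoint

        # Save all epoch checkpoints into an explicit, known directory.
        checkpoint_callback = ModelCheckpoint(dirpath='checkpoints/fiery_run', save_top_k=-1)
        trainer = pl.Trainer(default_root_dir='outputs/fiery_run', callbacks=[checkpoint_callback])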
  • The result of fiery static

    If I want to get the FIERY Static result of Setting 2 in Table I of your paper, should I use the config in configs/single_timeframe.yml? When I train the network from scratch using this config file, the IoU is 39.2 when I use evaluate.py. However, in the paper the result is 35.8. Is there another parameter that needs to be modified when I want the network to take one frame as input and output the segmentation of the present frame?

    opened by DFLyan 2
  • Question about panoptic_metrics function

    Hi,

    Would you be able to explain how the panoptic_metrics function works? (Code linked here: https://github.com/wayveai/fiery/blob/master/fiery/metrics.py#L137) In particular, I wonder why 'void' is included in 'combine_mask', and why 'background' should be changed from 0 to 1.

    Also, it is hard to understand the code under the comment "# hack for bincounting 2 arrays together". (Code linked here: https://github.com/wayveai/fiery/blob/master/fiery/metrics.py#L168)

    Thank you!

    opened by jwookyoo 2
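
    On the second point, "bincounting 2 arrays together" refers to a standard trick for building a joint histogram (for example a confusion matrix) with a single bincount call: encode each pair of labels as one integer, count, then reshape. A minimal illustrative sketch of the general technique (not the repository's exact code):

        import torch

        def joint_bincount(a, b, n_a, n_b):
            # Count co-occurrences of (a[i], b[i]) pairs in one pass:
            # each pair maps to the unique integer a * n_b + b.
            pairs = a.long() * n_b + b.long()
            counts = torch.bincount(pairs, minlength=n_a * n_b)
            return counts.view(n_a, n_b)   # (n_a, n_b) co-occurrence matrix

        pred = torch.tensor([0, 1, 1, 2])
        gt   = torch.tensor([0, 1, 2, 2])
        print(joint_bincount(pred, gt, 3, 3))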
  • Is future_egopose necessary for inference?

    Thanks for your great work. I have a small question about the future ego pose during inference. It seems a little tricky to me, because flow prediction is a module that runs before motion planning, so in a real deployment the flow prediction module has no way of getting the future ego pose. However, the code seems to treat the future ego pose as irreplaceable at inference time: when I set it to None, inference doesn't work.

    opened by synsin0 1
  • clarification evaluation

    Hello and many thanks for your work and sharing your code.

    I have a question regarding the way you compute your IoU metric and how it compares against Lift-splat.

    You use stat_scores_multiple_classes from the PyTorch Lightning metrics to compute the IoU. Correct me if I am wrong, but by default the threshold of this method is 0.5.

    On the other hand, in get_batch_iou of Lift-Splat-Shoot they use a threshold of 0: pred = (preds > 0) (https://github.com/nv-tlabs/lift-splat-shoot/blob/master/src/tools.py).

    Wouldn't this have an impact on the evaluation results, and thus on how you compare to them?

    opened by F-Barto 1
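
    One detail worth noting when comparing the two: if the 0.5 threshold is applied to probabilities after a sigmoid, it selects exactly the same pixels as thresholding the raw logits at 0, since sigmoid(0) = 0.5. A small check of that equivalence (illustrative only; whether each codebase applies its threshold to probabilities or to logits should be verified in the respective implementations):

        import torch

        logits = torch.tensor([-1.2, 0.3, 2.0, -0.1])
        probs = torch.sigmoid(logits)

        # sigmoid is monotonic and sigmoid(0) == 0.5, so the two masks are identical.
        assert torch.equal(probs > 0.5, logits > 0)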
  • Question on deleting unused layers and self.downsample

    Hi, I couldn't understand how the self.downsample parameter was set (why 8 and 16, and how it affects upsampling_in_channels) and why delete_unused_layers is required in the encoder model. I searched the efficientnet-pytorch implementation and couldn't find any reference to this operation. Could you explain briefly why this is required? Thank you!

    opened by benhgm 1
  • question about instance_flow

    Thanks for your excellent work! I have some questions about instance_flow.

        warped_instance_seg = {}
        # t0,f01-->t1; t1,f12-->t2; t2,f23-->t3
        # t1,f10-->t0; t2,f21-->t1
        for t in range(1, seq_len):
            warped_inst_t = warp_features(instance_img[t].unsqueeze(0).unsqueeze(1).float(),  # 1, 1, 200, 200
                                          future_egomotion_inv[t - 1].unsqueeze(0),
                                          mode='nearest',
                                          spatial_extent=spatial_extent)
            warped_instance_seg[t] = warped_inst_t[0, 0]

    In your paper, "Finally, we obtain feature flow labels by comparing the position of the instance centers of gravity between two consecutive timesteps." I think the code should convert t to t-1, not t-1 to t. How can it get the feature flow this way? I'm really confused about it. I'm looking forward to your reply.

    opened by qfwysw 1
  • Bad results when evaluating pretrained checkpoints

    Hi. Thanks for your great work. I followed your instructions in README.md to extract the nuscenes dataset. I ran evaluate.py with the official pretrained checkpoint (https://github.com/wayveai/fiery/releases/download/v1.0/fiery.ckpt) but got the following output:

        iou 53.5 & 28.6
        pq  39.8 & 18.0
        sq  69.4 & 66.3
        rq  57.4 & 27.1

    Is there something wrong? The numbers seem much lower than the results you report.

    opened by huangzhengxiang 1
  • Dear author, the total loss value < 0, is it normal?

    Dear author, I just ran the code without any changes, and during training I got a total loss value < 0.

    It looks weird. Is that caused by the "uncertainty" setting? Is that normal? Many thanks.

    opened by emilyemliyM 0
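
    A negative total is expected if an uncertainty-weighted multi-task loss in the style of Kendall et al. (2018) is in play, which the "uncertainty" setting mentioned above suggests: each task loss L_i is combined as exp(-s_i) * L_i + s_i (up to constant factors) with a learned log-variance s_i, and the additive s_i term can push the sum below zero while training remains perfectly healthy. A minimal illustrative sketch (numbers and exact form are assumptions, not the repository's code):

        import torch

        task_losses = torch.tensor([0.20, 0.05, 0.10])   # illustrative per-task losses
        log_vars = torch.tensor([-1.6, -3.0, -2.3])      # learned log-variances s_i

        total = (torch.exp(-log_vars) * task_losses + log_vars).sum()
        print(total)  # roughly -3.9: negative, even though every task loss is positive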
  • Pytorch Lightning stuck the computer and finally killed

    Thanks for your great work. I'd like to reproduce the training process, but I ran into a problem: when I use multi-GPU distributed training, the logging information looks normal at first, but then the remote server hangs, the connection resets, and the process is finally killed. My remote server is a standalone machine with 4x RTX 3090. Are there any known issues with PyTorch Lightning distributed training that could cause this failure?

    opened by synsin0 1