Model Zoo of BDD100K Dataset

Overview

BDD100K Model Zoo

In this repository, we provide popular models for each task in the BDD100K dataset.

teaser

For each task in the BDD100K dataset, we make publicly available the model weights, evaluation results, predictions, and visualizations, as well as scripts for evaluation and visualization. The goal is to provide a set of competitive baselines that facilitate research and serve as a common benchmark for comparison.

The number of pre-trained models in this zoo is 115. You can include your models in this repo as well! See contribution instructions.

This repository currently supports the tasks listed below. For more information about each task, click on the task name. We plan to support all tasks in the BDD100K dataset eventually; see the roadmap for our plan and progress.

If you have any questions, please go to the BDD100K discussions.

Roadmap

  • Lane marking
  • Panoptic segmentation
  • Pose estimation

Dataset

Please refer to the dataset preparation instructions for how to prepare and use the BDD100K dataset with the models.

Maintainers

Citation

To cite the BDD100K dataset in your paper,

@InProceedings{bdd100k,
    author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
              Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
    title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}
Comments
  • Using the models to predict on other Images

    Hi,

    Can I use the models under "bdd100k-models/det/" to make predictions on other images?

    When I followed the "Usage" section, it seemed that the models can only be used to evaluate the test/val images.

    opened by askppp 5
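    A minimal sketch of how inference on arbitrary images might look, assuming the det configs follow standard MMDetection conventions; the config and checkpoint paths below are placeholders that would need to point at actual files from the det model zoo:

    # Hypothetical example: run a BDD100K detection model on an arbitrary image
    # with MMDetection's Python API (config/checkpoint paths are placeholders).
    from mmdet.apis import init_detector, inference_detector

    config_file = 'configs/det/faster_rcnn_r50_fpn_1x_det_bdd100k.py'  # placeholder
    checkpoint_file = 'faster_rcnn_r50_fpn_1x_det_bdd100k.pth'         # placeholder

    # Build the model and load the pre-trained weights.
    model = init_detector(config_file, checkpoint_file, device='cuda:0')

    # Run inference on any image file; the result is a list of per-class arrays
    # of bounding boxes (x1, y1, x2, y2, score).
    result = inference_detector(model, 'my_image.jpg')

    # Draw the detections and save a visualization.
    model.show_result('my_image.jpg', result, out_file='my_image_det.jpg')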
  • Drivable Segmentation Model inference stuck

    When I run the DeepLabv3+ model with:

    python ./test.py configs/drivable/deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.py --format-only --format-dir output

    it just gets stuck at around step 1490. I have tried several different configs; they all have the same issue.

    opened by danielzhangau 4
  • Generate semantic segmentation output as png

    Hello,

    I'm generating semantic segmentation using the following command.

    python ./test.py ~/config.py --show-dir ~/Documents/bdd100k-models/data/bdd100k/labels/seg_track_20/val --opacity 1
    

    This generates the colormaps for the images; however, the output is in .jpg format, which results in blur within the labels. How can I update the script so that it generates the labels in .png format? My input images are from the MOTS 2020 Images dataset, which are in .jpg format.

    opened by digvijayad 2
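    One possible workaround, assuming the configs follow standard MMSegmentation conventions: run inference through the Python API and write each predicted label map as a lossless PNG directly, instead of relying on --show-dir (paths below are placeholders):

    # Hypothetical example: save semantic segmentation predictions as lossless
    # PNGs via MMSegmentation's Python API (paths are placeholders).
    import numpy as np
    from PIL import Image
    from mmseg.apis import init_segmentor, inference_segmentor

    config_file = 'configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py'
    checkpoint_file = 'deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.pth'
    model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

    # inference_segmentor returns a list with one label map (H x W) per input image.
    result = inference_segmentor(model, 'my_image.jpg')
    label_map = result[0].astype(np.uint8)

    # PNG keeps the per-pixel class ids exact; JPEG compression would blur them.
    Image.fromarray(label_map).save('my_image_labels.png')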
  • Sem_Seg Inference Error - RuntimeError: DataLoader worker is killed by signal: Segmentation fault.

    Error when running the sem_seg model inference. Command run:

    python ./test.py ./configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py --format-only --format-dir ./outputs

    ERROR:

    workers per gpu=2
    /home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
      warnings.warn(
    load checkpoint from http path: https://dl.cv.ethz.ch/bdd100k/sem_seg/models/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.pth
    'CLASSES' not found in meta, use dataset.CLASSES instead
    'PALETTE' not found in meta, use dataset.PALETTE instead
    [                                                  ] 0/1000, elapsed: 0s, ETA:ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    Traceback (most recent call last):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/queue.py", line 179, in get
        self.not_empty.wait(remaining)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/threading.py", line 306, in wait
        gotit = waiter.acquire(True, timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
        _error_if_any_worker_fails()
    RuntimeError: DataLoader worker (pid 15796) is killed by signal: Segmentation fault. 
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "./test.py", line 174, in <module>
        main()
      File "./test.py", line 150, in main
        outputs = single_gpu_test(
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/apis/test.py", line 89, in single_gpu_test
        for batch_indices, data in zip(loader_indices, data_loader):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
        data = self._next_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data
        idx, data = self._get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1163, in _get_data
        success, data = self._try_get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1024, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
    RuntimeError: DataLoader worker (pid(s) 15796) exited unexpectedly
    
    opened by digvijayad 2
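    Worker segfaults like this often go away when data loading runs in the main process. A hedged sketch of one way to try that, by dumping a copy of the config with workers_per_gpu set to 0 and pointing test.py at the copy (this assumes the configs are plain MMCV configs):

    # Hypothetical workaround: disable multi-process data loading, which often
    # avoids "DataLoader worker is killed by signal" crashes.
    import mmcv

    cfg = mmcv.Config.fromfile(
        './configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py')
    cfg.data.workers_per_gpu = 0  # load batches in the main process
    cfg.dump('./deeplabv3+_sem_seg_bdd100k_single_process.py')
    # Then: python ./test.py ./deeplabv3+_sem_seg_bdd100k_single_process.py --format-only --format-dir ./outputs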
  • red traffic lights

    Hello, thanks for your marvelous contribution. The category of red traffic lights is not available in BDD100K; have you re-labeled it on the BDD dataset?

    opened by liluxing153 1
  • tagging:finetune possibilities

    Hi, thanks for your marvelous contribution; I am very impressed. I want to apply this pre-trained model (tagging road type and weather) on my own dataset. Do you have any codebase for fine-tuning?

    opened by anran1231 1
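    This repo does not ship a dedicated fine-tuning codebase; below is a generic PyTorch sketch of what fine-tuning a released classification checkpoint could look like, assuming the checkpoint is (or can be mapped to) a standard state_dict for a ResNet-style classifier. The paths, key names, and class count are placeholder assumptions:

    # Hypothetical fine-tuning sketch (not an official script): load a released
    # tagging checkpoint into a torchvision ResNet-50, replace the classifier
    # head, and continue training on your own labels.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    num_classes = 6  # placeholder: number of classes in your own tagging task
    model = resnet50()
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Depending on how the checkpoint was exported, key prefixes (e.g. 'backbone.')
    # may need to be stripped first; strict=False skips keys that do not match.
    state = torch.load('tagging_weather_checkpoint.pth', map_location='cpu')  # placeholder path
    model.load_state_dict(state.get('state_dict', state), strict=False)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    # ... build a DataLoader over your own images/labels and run a standard training loop.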
  • Semantic segmentation: common settings MMSegmentation link not working

    https://github.com/open-mmlab/mmsegmentation/blob/master/docs/model_zoo.md#common-settings The above link is not working

    I would like to know the settings under which the segmentation models were trained, so that I can replicate the results. Thank you.

    opened by 100daggers 1
  • Issue in converting the instance segmentation mask encoding from bdd100k to coco

    Hello,

    I am trying to convert the bdd100k instance segmentation using this command: python3 -m bdd100k.label.to_coco -m ins_seg --only-mask -i ./bdd100k/labels/ins_seg/bitmasks/val -o ./ins_seg_val_cocofmt_v2.json

    Also, tried this: python3 -m bdd100k.label.to_coco -m ins_seg -i ./bdd100k/labels/ins_seg/polygons/ins_seg_val.json -o ./ins_seg_val_cocofmt_v3.json -mb ./bdd100k/labels/ins_seg/bitmasks/val

    The conversion is successful in both cases, and the annotation looks like this:

    [screenshot of the converted annotation] That's not how COCO annotations normally look.

    If you look at the segmentation field above, the masks are stored as a string encoding. I am unsure whether that is expected or not.

    Further, assuming it's correct, I tried to load the annotations using loader from DETR https://github.com/facebookresearch/detr/blob/091a817eca74b8b97e35e4531c1c39f89fbe38eb/datasets/coco.py#L36

    The line I mentioned above is supposed to do the conversion, but I am getting an error from pycocotools saying it does not expect a string in the mask. [screenshot of the error]

    So I am unsure where the problem is. If the conversion to COCO is correct, shouldn't the loader work? Note: I converted the detections and they worked fine.

    Thank you for any help you can provide.

    opened by sfarkya04 1
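    For reference, the string in the segmentation field looks like compressed COCO RLE, and pycocotools expects the counts value as bytes rather than str. A hedged sketch of decoding one converted annotation directly (the file name and field layout follow the commands above and the standard COCO format):

    # Hypothetical check: decode one converted annotation with pycocotools,
    # assuming the segmentation field holds compressed COCO RLE.
    import json
    from pycocotools import mask as mask_utils

    with open('ins_seg_val_cocofmt_v2.json') as f:
        coco = json.load(f)

    ann = coco['annotations'][0]
    rle = dict(ann['segmentation'])       # {'size': [h, w], 'counts': '...'}
    if isinstance(rle['counts'], str):
        rle['counts'] = rle['counts'].encode('utf-8')  # pycocotools wants bytes
    binary_mask = mask_utils.decode(rle)  # H x W uint8 array, 1 inside the mask
    print(binary_mask.shape, binary_mask.sum())

    # DETR's ConvertCocoPolysToMask assumes polygon segmentations, so RLE-style
    # annotations need a decoding step like the one above instead.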
  • How to train on my own gpu?

    Hello! Thank you for your work. I wonder if I could train these models on my own GPU? Are there any instructions or usage examples? Thank you!

    opened by StefanYz 1
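    The configs in this repo are written for the MMDetection/MMSegmentation family, so one plausible route (not covered by this repo's own docs) is to train with those toolkits' standard tools/train.py after preparing the BDD100K data as described in the dataset instructions. The config path below is a placeholder:

    # Hypothetical example (run from an MMDetection checkout, single GPU):
    python tools/train.py configs/det/faster_rcnn_r50_fpn_1x_det_bdd100k.py --work-dir ./work_dirs/faster_rcnn_bdd100k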
Releases (v1.1.0)
  • v1.1.0(Dec 2, 2021)

    BDD100K Models 1.1.0 Release

    teaser

    • Highlights
    • New Task: Pose Estimation
    • New Models

    Highlights

    In this release, we provide over 20 pre-trained models for the new pose estimation task in BDD100K, along with evaluation and visualization tools. We also provide over 30 new models for object detection, instance segmentation, semantic segmentation, and drivable area.

    New Task: Pose Estimation

    With the release of 2D human pose estimation data in BDD100K, we provide pre-trained models in this repo.

    • Pose estimation
      • ResNet, MobileNetV2, HRNet, and more.

    New Models

    We provide additional models for the previously supported tasks:

    • Object detection
      • Libra R-CNN, HRNet.
    • Instance segmentation
      • GCNet, HRNet.
    • Semantic segmentation / drivable area
      • NLNet, PointRend.
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Oct 29, 2021)

    BDD100K Models 1.0.0 Release

    teaser

    • Highlights
    • Tasks
    • Models
    • Contribution

    Highlights

    The model zoo for BDD100K, the largest driving video dataset, is open for business! It contains more than 100 pre-trained models for 7 tasks. Each model also comes with results and visualization on val and test sets. We also provide documentation for community contribution so that everyone can include their models in this repo.

    Tasks

    We currently support 7 tasks:

    • Image Tagging
    • Object Detection
    • Instance Segmentation
    • Semantic Segmentation
    • Drivable Area
    • Multiple Object Tracking (MOT)
    • Multiple Object Tracking and Segmentation (MOTS)

    Each task includes

    • Official evaluation results, model weights, predictions, and visualizations.
    • Detailed instructions for evaluation and visualization.

    Models

    We include popular network models for each task:

    • Image tagging
      • VGG, ResNet, and DLA.
    • Object detection
      • Cascade R-CNN, Sparse R-CNN, Deformable ConvNets v2, and more.
    • Instance segmentation
      • Mask R-CNN, Cascade Mask R-CNN, HRNet, and more.
    • Semantic segmentation / drivable area
      • Deeplabv3+, CCNet, DNLNet, and more.
    • Multiple object tracking (MOT)
      • QDTrack.
    • Multiple object tracking and segmentation (MOTS)
      • PCAN.

    Contribution

    We encourage BDD100K dataset users to contribute their models to this repo, so that all the information can be used for result reproduction and further analysis. The detailed instructions and model submission template are on the contribution page.

    Source code(tar.gz)
    Source code(zip)
Owner
ETH VIS Group
Visual Intelligence and Systems Group at ETH Zürich