Model Zoo of BDD100K Dataset

Overview

BDD100K Model Zoo

In this repository, we provide popular models for each task in the BDD100K dataset.

teaser

For each task in the BDD100K dataset, we make publicly available the model weights, evaluation results, predictions, and visualizations, as well as scripts for evaluation and visualization. The goal is to provide a set of competitive baselines to facilitate research and provide a common benchmark for comparison.

This zoo currently contains 115 pre-trained models. You can include your models in this repo as well! See the contribution instructions.

This repository currently supports the tasks listed below. For more information about each task, click on the task name. We plan to support all tasks in the BDD100K dataset eventually; see the roadmap for our plan and progress.

  • Image tagging
  • Object detection
  • Instance segmentation
  • Semantic segmentation
  • Drivable area
  • Multiple object tracking (MOT)
  • Multiple object tracking and segmentation (MOTS)
  • Pose estimation

If you have any questions, please go to the BDD100K discussions.

Roadmap

  • Lane marking
  • Panoptic segmentation
  • Pose estimation

Dataset

Please refer to the dataset preparation instructions for how to prepare and use the BDD100K dataset with the models.

Maintainers

Citation

To cite the BDD100K dataset in your paper, use:

@InProceedings{bdd100k,
    author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
              Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
    title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}
Comments
  • Using the models to predict on other Images

    Hi,

    Can I use the models under "bdd100k-models/det/" to make predictions on other images?

    When I followed the "Usage" section, it seemed that the models could only be used to evaluate the test/val images.
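
    A minimal inference sketch (not an official script from this repo), assuming the det/ configs follow the standard MMDetection API; the config and checkpoint paths are placeholders for whichever model you downloaded:

    # Hedged sketch: single-image inference with an MMDetection-style config.
    from mmdet.apis import init_detector, inference_detector, show_result_pyplot

    config = "configs/det/faster_rcnn_r50_fpn_1x_det_bdd100k.py"      # placeholder config path
    checkpoint = "faster_rcnn_r50_fpn_1x_det_bdd100k.pth"             # downloaded weights
    model = init_detector(config, checkpoint, device="cuda:0")

    result = inference_detector(model, "my_image.jpg")                # any image on disk
    show_result_pyplot(model, "my_image.jpg", result, score_thr=0.3)  # visualize the boxes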

    opened by askppp 5
  • Drivable Segmentation Model inference stuck

    When I run the DeepLabv3+ model with: python ./test.py configs/drivable/deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.py --format-only --format-dir output, it gets stuck at around step 1490 (see the attached screenshot). I have tried several different configs, and they all have the same issue.
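
    One way to narrow this down (a hedged sketch, assuming the drivable configs are MMSegmentation-style and the weights are available locally) is to run inference image by image, so the last printed path shows whether a specific file causes the hang:

    # Hedged debugging sketch, not an official tool from this repo.
    import glob
    from mmseg.apis import init_segmentor, inference_segmentor

    config = "configs/drivable/deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.py"
    checkpoint = "deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.pth"   # local weights
    model = init_segmentor(config, checkpoint, device="cuda:0")

    for idx, path in enumerate(sorted(glob.glob("bdd100k/images/100k/val/*.jpg"))):
        print(idx, path, flush=True)          # the last printed path is the suspect image
        inference_segmentor(model, path)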

    opened by danielzhangau 4
  • Generate semantic segmentation output as png

    Hello,

    I'm generating semantic segmentation using the following command.

    python ./test.py ~/config.py --show-dir ~/Documents/bdd100k-models/data/bdd100k/labels/seg_track_20/val --opacity 1
    

    This generates the colormaps for the images; however, the output is in .jpg format, which results in blur within the labels (as shown below). How can I update the script so that it generates the labels in .png format? My input images are from the MOTS 2020 images, which are in .jpg format.

    [attached screenshot: blurred label boundaries in the JPEG output]
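
    A workaround sketch (assuming the config is MMSegmentation-style; the file names below are placeholders): run inference directly and save the predicted label map as a lossless PNG with Pillow, instead of relying on the JPEG visualizations written by --show-dir.

    # Hedged workaround sketch: write per-pixel class IDs as a lossless PNG.
    import numpy as np
    from PIL import Image
    from mmseg.apis import init_segmentor, inference_segmentor

    model = init_segmentor("config.py", "checkpoint.pth", device="cuda:0")  # placeholder paths
    result = inference_segmentor(model, "input.jpg")     # list with one H x W array of class IDs

    label_map = result[0].astype(np.uint8)               # class IDs, unaffected by compression
    Image.fromarray(label_map).save("input_labels.png")  # colorize afterwards with any palette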

    opened by digvijayad 2
  • Sem_Seg Inference Error - RuntimeError: DataLoader worker is killed by signal: Segmentation fault.

    Error when running sem_seg model inference. Command run: python ./test.py ./configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py --format-only --format-dir ./outputs

    ERROR:

    workers per gpu=2
    /home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
      warnings.warn(
    load checkpoint from http path: https://dl.cv.ethz.ch/bdd100k/sem_seg/models/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.pth
    'CLASSES' not found in meta, use dataset.CLASSES instead
    'PALETTE' not found in meta, use dataset.PALETTE instead
    [                                                  ] 0/1000, elapsed: 0s, ETA:ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    Traceback (most recent call last):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/queue.py", line 179, in get
        self.not_empty.wait(remaining)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/threading.py", line 306, in wait
        gotit = waiter.acquire(True, timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
        _error_if_any_worker_fails()
    RuntimeError: DataLoader worker (pid 15796) is killed by signal: Segmentation fault. 
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "./test.py", line 174, in <module>
        main()
      File "./test.py", line 150, in main
        outputs = single_gpu_test(
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/apis/test.py", line 89, in single_gpu_test
        for batch_indices, data in zip(loader_indices, data_loader):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
        data = self._next_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data
        idx, data = self._get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1163, in _get_data
        success, data = self._try_get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1024, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
    RuntimeError: DataLoader worker (pid(s) 15796) exited unexpectedly
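
    A common workaround for DataLoader worker segfaults (a hedged sketch, not a confirmed fix for this particular crash) is to disable multiprocessing workers. Assuming the MMSegmentation-style config layout used here, the config can be rewritten before being passed to ./test.py:

    # Hedged workaround sketch: load batches in the main process (no worker subprocesses).
    from mmcv import Config

    cfg = Config.fromfile("configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py")
    cfg.data.workers_per_gpu = 0                    # avoid the crashing worker processes
    cfg.dump("deeplabv3+_sem_seg_no_workers.py")    # pass this file to ./test.py instead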
    
    opened by digvijayad 2
  • red traffic lights

    Hello, thanks for your marvelous contribution. The category of red traffic lights is not available in BDD100K; have you re-labeled it on the BDD dataset?

    opened by liluxing153 1
  • tagging:finetune possibilities

    Hi, thanks for your marvelous contribution. I am very impressed. Now I want to apply the pre-trained tagging models (road type and weather) to my own dataset; do you have any codebase for fine-tuning?
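
    In the absence of a dedicated fine-tuning script, a generic PyTorch sketch would look like the following. How the tagging checkpoint maps onto a torchvision backbone is an assumption here (the key layout and class counts are illustrative), so some key remapping may be needed:

    # Generic fine-tuning sketch, not code from this repo; the checkpoint layout is assumed.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50(num_classes=6)                       # e.g. 6 weather classes (assumed)
    state = torch.load("tagging_checkpoint.pth", map_location="cpu")
    model.load_state_dict(state.get("state_dict", state), strict=False)  # assumed ResNet-style keys

    for p in model.parameters():                          # freeze the backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 4)         # new head for your own 4 labels
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)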

    opened by anran1231 1
  • Semantic segmentation common settings: MMSegmentation link not working

    https://github.com/open-mmlab/mmsegmentation/blob/master/docs/model_zoo.md#common-settings The above link is not working

    I would like to know the settings under which the segmentation models are trained so that I can replicate the results. Thank you.

    opened by 100daggers 1
  • Issue in converting the instance segmentation mask encoding from bdd100k to coco

    Hello,

    I am trying to convert the BDD100K instance segmentation labels using this command: python3 -m bdd100k.label.to_coco -m ins_seg --only-mask -i ./bdd100k/labels/ins_seg/bitmasks/val -o ./ins_seg_val_cocofmt_v2.json

    I also tried this: python3 -m bdd100k.label.to_coco -m ins_seg -i ./bdd100k/labels/ins_seg/polygons/ins_seg_val.json -o ./ins_seg_val_cocofmt_v3.json -mb ./bdd100k/labels/ins_seg/bitmasks/val

    The conversion is successful in both cases, and the annotation looks like this:

    [screenshot of the converted annotation] (that's not how COCO annotations normally look)

    If you look at the segmentation field above, there is a string encoding of the masks, and I am unsure whether that is expected.

    Further, assuming it is correct, I tried to load the annotations using the loader from DETR: https://github.com/facebookresearch/detr/blob/091a817eca74b8b97e35e4531c1c39f89fbe38eb/datasets/coco.py#L36

    The line I mentioned above is supposed to do the conversion, but I am getting an error from pycocotools that it is not expecting a string in the mask. [screenshot of the pycocotools error]

    So I am unsure where the problem is: if the conversion to COCO is correct, then the loader should work. Note: I converted the detections and they worked fine.

    Thank you for any help you can provide.
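
    One thing worth checking (a hedged sketch, assuming the segmentation field holds a COCO-style RLE with a string counts value): pycocotools expects compressed RLE counts as bytes under Python 3, so converting the string before decoding often resolves this kind of error.

    # Hedged sketch: decode an RLE segmentation whose 'counts' arrived as a str.
    from pycocotools import mask as mask_utils

    def decode_segmentation(ann):
        rle = dict(ann["segmentation"])              # {'size': [h, w], 'counts': ...}
        if isinstance(rle["counts"], str):
            rle["counts"] = rle["counts"].encode("ascii")
        return mask_utils.decode(rle)                # H x W uint8 binary mask

    Note that the DETR loader linked above assumes polygon segmentations, so RLE-encoded masks may need their own handling regardless of the conversion above.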

    opened by sfarkya04 1
  • How to train on my own gpu?

    Hello! Thank you for your work, but I wonder whether I could train these models on my own GPU. Are there any instructions or usage examples? Thank you!

    opened by StefanYz 1
Releases (v1.1.0)
  • v1.1.0 (Dec 2, 2021)

    BDD100K Models 1.1.0 Release

    teaser

    • Highlights
    • New Task: Pose Estimation
    • New Models

    Highlights

    In this release, we provide over 20 pre-trained models for the new pose estimation task in BDD100K, along with evaluation and visualization tools. We also provide over 30 new models for object detection, instance segmentation, semantic segmentation, and drivable area.

    New Task: Pose Estimation

    With the release of 2D human pose estimation data in BDD100K, we provide pre-trained models in this repo.

    • Pose estimation
      • ResNet, MobileNetV2, HRNet, and more.

    New Models

    We provide additional models for previously supported tasks:

    • Object detection
      • Libra R-CNN, HRNet.
    • Instance segmentation
      • GCNet, HRNet.
    • Semantic segmentation / drivable area
      • NLNet, PointRend.
  • v1.0.0 (Oct 29, 2021)

    BDD100K Models 1.0.0 Release

    teaser

    • Highlights
    • Tasks
    • Models
    • Contribution

    Highlights

    The model zoo for BDD100K, the largest driving video dataset, is open for business! It contains more than 100 pre-trained models for 7 tasks. Each model also comes with results and visualization on val and test sets. We also provide documentation for community contribution so that everyone can include their models in this repo.

    Tasks

    We currently support 7 tasks:

    • Image Tagging
    • Object Detection
    • Instance Segmentation
    • Semantic Segmentation
    • Drivable Area
    • Multiple Object Tracking (MOT)
    • Multiple Object Tracking and Segmentation (MOTS)

    Each task includes

    • Official evaluation results, model weights, predictions, and visualizations.
    • Detailed instructions for evaluation and visualization.

    Models

    We include popular network models for each task:

    • Image tagging
      • VGG, ResNet, and DLA.
    • Object detection
      • Cascade R-CNN, Sparse R-CNN, Deformable ConvNets v2, and more.
    • Instance segmentation
      • Mask R-CNN, Cascade Mask R-CNN, HRNet, and more.
    • Semantic segmentation / drivable area
      • Deeplabv3+, CCNet, DNLNet, and more.
    • Multiple object tracking (MOT)
      • QDTrack.
    • Multiple object tracking and segmentation (MOTS)
      • PCAN.

    Contribution

    We encourage BDD100K dataset users to contribute their models to this repo so that all the information can be used for result reproduction and further analysis. The detailed instructions and model submission template are on the contribution page.

Owner
ETH VIS Group
Visual Intelligence and Systems Group at ETH Zürich