Real-time Object Detection for Streaming Perception, CVPR 2022

Overview

StreamYOLO

Real-time Object Detection for Streaming Perception

Jinrong Yang, Songtao Liu, Zeming Li, Xiaoping Li, Jian Sun
Real-time Object Detection for Streaming Perception, CVPR 2022 (Oral)
Paper

Benchmark

| Model | size | velocity | sAP 0.5:0.95 | sAP50 | sAP75 | weights | COCO pretrained weights |
|---|---|---|---|---|---|---|---|
| StreamYOLO-s | 600×960 | 1x | 29.8 | 50.3 | 29.8 | github | github |
| StreamYOLO-m | 600×960 | 1x | 33.7 | 54.5 | 34.0 | github | github |
| StreamYOLO-l | 600×960 | 1x | 36.9 | 58.1 | 37.5 | github | github |
| StreamYOLO-l | 600×960 | 2x | 34.6 | 56.3 | 34.7 | github | github |
| StreamYOLO-l | 600×960 | still | 39.4 | 60.0 | 40.2 | github | github |

Quick Start

Dataset preparation

You can download the Argoverse-1.1 full dataset and its annotations from HERE and unzip them.

The folder structure should be organized as follows before our processing.

StreamYOLO
├── exps
├── tools
├── yolox
├── data
│   ├── Argoverse-1.1
│   │   ├── tracking
│   │       ├── train
│   │       ├── val
│   │       ├── test
│   ├── Argoverse-HD
│   │   ├── annotations
│   │       ├── test-meta.json
│   │       ├── train.json
│   │       ├── val.json

The hash strings represent different video sequences in Argoverse, and ring_front_center is one of the sensors for each sequence. The Argoverse-HD annotations correspond to images from this sensor. Information from other sensors (other ring cameras or LiDAR) is not used, but our framework can also be extended to these modalities or to a multi-modality setting.
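
The Argoverse-HD annotations follow the COCO format, so a quick sanity check after unpacking can catch path mistakes early. A minimal sketch, assuming the layout above and the standard COCO json keys:

# check that the annotations are where the training code expects them
import json

with open("data/Argoverse-HD/annotations/val.json") as f:
    ann = json.load(f)

print(len(ann["images"]), "images,", len(ann["annotations"]), "annotations")
print("categories:", [c["name"] for c in ann["categories"]])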

Installation
# basic python libraries
conda create --name streamyolo python=3.7
conda activate streamyolo

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

pip3 install yolox==0.3
git clone [email protected]:yancie-yjr/StreamYOLO.git

cd StreamYOLO/

# add StreamYOLO to PYTHONPATH and add this line to ~/.bashrc or ~/.zshrc (change the file accordingly)
ADDPATH=$(pwd)
echo export PYTHONPATH=$PYTHONPATH:$ADDPATH >> ~/.bashrc
source ~/.bashrc

# Installing `mmcv` for the official sAP evaluation:
# Please replace `{cu_version}` and `{torch_version}` with the versions you are currently using.
# You will get import or runtime errors if the versions are incorrect.
pip install mmcv-full==1.1.5 -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
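
Version mismatches between torch, CUDA, and mmcv are the most common installation failure, so it is worth verifying the environment before training. A small sketch that only checks versions and CUDA visibility (expected values follow the pins above):

# verify the pinned versions and CUDA availability
import torch, torchvision, mmcv, yolox

print("torch:", torch.__version__)              # expect 1.7.1+cu110
print("torchvision:", torchvision.__version__)  # expect 0.8.2+cu110
print("mmcv:", mmcv.__version__)                # expect 1.1.5
print("yolox:", yolox.__version__)              # expect 0.3.x
print("CUDA available:", torch.cuda.is_available())
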
Reproduce our results on Argoverse-HD

Step 1. Prepare the Argoverse dataset

cd <StreamYOLO_HOME>
ln -s /path/to/your/Argoverse-1.1 ./data/Argoverse-1.1
ln -s /path/to/your/Argoverse-HD ./data/Argoverse-HD

Step 2. Reproduce our results on Argoverse:

python tools/train.py -f cfgs/m_s50_onex_dfp_tal_flip.py -d 8 -b 32 -c [/path/to/your/coco_pretrained_path] -o --fp16
  • -d: number of GPU devices.
  • -b: total batch size; the recommended value is num_gpus * 8.
  • --fp16: mixed precision training (see the sketch after this list).
  • -c: path to the COCO-pretrained checkpoint.
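
The --fp16 flag enables mixed precision training. As a generic sketch of what a mixed-precision step looks like in PyTorch (the usual autocast/GradScaler pattern; this is not the repo's trainer and it needs a CUDA device):

# generic mixed-precision training step, illustrating what --fp16 enables
import torch

model = torch.nn.Linear(16, 4).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x, y = torch.randn(8, 16).cuda(), torch.randn(8, 4).cuda()
with torch.cuda.amp.autocast():    # run the forward pass in fp16 where safe
    loss = torch.nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()      # scale the loss to avoid fp16 gradient underflow
scaler.step(opt)                   # unscale gradients, then take the optimizer step
scaler.update()
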
Offline Evaluation

We support batch testing for fast evaluation:

python tools/eval.py -f cfgs/l_s50_onex_dfp_tal_flip.py -c [/path/to/your/model_path] -b 64 -d 8 --conf 0.01 [--fp16] [--fuse]
  • --fuse: fuse conv and bn.
  • -d: number of GPUs used for evaluation. DEFAULT: all available GPUs are used.
  • -b: total batch size across all GPUs.
  • -c: model checkpoint path.
  • --conf: confidence threshold. Using 0.001 instead of 0.01 further improves performance by 0.2~0.3 sAP (see the sketch after this list).
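
A lower --conf keeps more low-confidence boxes around for matching, which is where the extra 0.2~0.3 sAP comes from. A toy sketch of the gating idea (illustrative names and dummy data, not the repo's internals):

# illustrative: a score threshold gates raw detections before NMS
import torch

def gate_by_score(boxes, scores, conf_thresh):
    """Keep only detections whose confidence exceeds conf_thresh."""
    keep = scores > conf_thresh
    return boxes[keep], scores[keep]

boxes = torch.rand(100, 4)       # dummy (x1, y1, x2, y2) boxes
scores = torch.rand(100) ** 4    # skewed toward low confidence, like real detections
for t in (0.01, 0.001):
    kept = gate_by_score(boxes, scores, t)[1].numel()
    print(f"conf_thresh={t}: {kept} boxes kept")
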
Online Evaluation

We adapt the online evaluation code from sAP.

Please use a single V100 GPU for this test: GPUs with lower compute will produce non-real-time results.

cd sAP/streamyolo
bash streamyolo.sh
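
Streaming evaluation differs from offline evaluation in one key way: each frame's ground truth is matched against the most recent prediction that has already finished by the time that frame arrives, so a slow detector is scored on stale outputs. A toy sketch of this matching rule (illustrative timings, not the sAP toolkit's API):

# illustrative: streaming metrics pair each frame's ground truth with the
# latest prediction that completed before the frame arrived
def latest_available(pred_done_times, frame_time):
    """Index of the last prediction finished by frame_time, or None."""
    idx = None
    for i, t in enumerate(pred_done_times):
        if t <= frame_time:
            idx = i
    return idx

frame_times = [0.0, 33.3, 66.7, 100.0]   # frame arrivals at 30 FPS (ms)
pred_done = [40.0, 75.0, 110.0]          # completion times of y0, y1, y2 (ms)
for ft in frame_times:
    print(f"frame @ {ft:5.1f} ms -> prediction {latest_available(pred_done, ft)}")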

Citation

Please cite the following paper if this repo helps your research:

@InProceedings{streamyolo,
    author    = {Yang, Jinrong and Liu, Songtao and Li, Zeming and Li, Xiaoping and Sun, Jian},
    title     = {Real-time Object Detection for Streaming Perception},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year      = {2022}
}

License

This repo is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Comments
  • When will the README document be completed?

    Hi, @GOATmessi7 @yancie-yjr, great work. Can you enrich the README with dataset preparation, training & validation instructions, and so on? Hope it can be finished soon. Thanks.

    opened by SmallMunich 1
  • ModuleNotFoundError: No module named 'exps'

    Hi everyone, I got this issue:

        File "cfgs/m_s50_onex_dfp_tal_flip.py", line 189, in get_trainer
          from exps.train_utils.double_trainer import Trainer
        ModuleNotFoundError: No module named 'exps'

    Actually, when I ran the code locally I got this error, but after trying "echo export PYTHONPATH=$PYTHONPATH:$ADDPATH >> " it worked. But as you can guess, my local GPU wasn't enough for training. So I set everything up on Colab, but this time the "echo export..." trick didn't save me.

    opened by Tezcan98 3
  • A small bug in README about Dataset Prep.

    For Developers

    Hi! When reproducing your results on Argoverse-HD, I found that the directory structure you provided in the Quick Start - Dataset preparation section doesn't match the original directory structure of the Argoverse-HD dataset, nor what your code requires. The directory structure in the Quick Start - Dataset preparation section:

    StreamYOLO
    ├── exps
    ├── tools
    ├── yolox
    ├── data
    │   ├── Argoverse-1.1
    │   │   ├── annotations
    │   │       ├── tracking
    │   │           ├── train
    │   │           ├── val
    │   │           ├── test
    │   ├── Argoverse-HD
    │   │   ├── annotations
    │   │       ├── test-meta.json
    │   │       ├── train.json
    │   │       ├── val.json
    

    should be edited as:

    StreamYOLO
    ├── exps
    ├── tools
    ├── yolox
    ├── data
    │   ├── Argoverse-1.1
    │   │   ├── tracking
    │   │       ├── train
    │   │       ├── val
    │   │       ├── test
    │   ├── Argoverse-HD
    │   │   ├── annotations
    │   │       ├── test-meta.json
    │   │       ├── train.json
    │   │       ├── val.json
    

    which matches the directory structure of the Argoverse-HD dataset.

    For Stargazers

    BTW, if anyone manually modifies the directory structure to fit the one provided in the README, an AssertionError will occur (some parts of the file paths were edited):

    AssertionError: Caught AssertionError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "%HOME%\anaconda3\envs\streamyolo\lib\site-packages\torch\utils\data\_utils\worker.py", line 198, in _worker_loop
        data = fetcher.fetch(index)
      File "%HOME%\anaconda3\envs\streamyolo\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "%HOME%\anaconda3\envs\streamyolo\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "%HOME%\anaconda3\envs\streamyolo\lib\site-packages\yolox\data\datasets\datasets_wrapper.py", line 110, in wrapper
        ret_val = getitem_fn(self, index)
      File "%WORKSPACE%\StreamYOLO\exps\data\tal_flip_mosaicdetection.py", line 255, in __getitem__
        img, support_img, label, support_label, img_info, id_ = self._dataset.pull_item(idx)
      File "%WORKSPACE%\StreamYOLO\exps\dataset\tal_flip_one_future_argoversedataset.py", line 227, in pull_item
        img = self.load_resized_img(index)
      File "%WORKSPACE%\StreamYOLO\exps\dataset\tal_flip_one_future_argoversedataset.py", line 180, in load_resized_img
        img = self.load_image(index)
      File "%WORKSPACE%\StreamYOLO\exps\dataset\tal_flip_one_future_argoversedataset.py", line 196, in load_image
        assert img is not None
    AssertionError
    

    If anyone gets a similar error message, the content in For Developers may be helpful.

    opened by jingwenchong 6
  • Figure 2 in the paper

    Hi, I have read your paper.

    I have a question about Figure 2.

    On page 3 of the paper, about Figure 2, you wrote "the output y1 of the frame F1 is matched and evaluated with the ground truth of F3 and the result of F2 is missed".

    I understood that expression to mean that y1 is the output of the non-real-time detector on frame F1.

    But before frame F3 is received, frame F2 arrives first.

    So I can't understand that point, and I also want to ask when the output of frame F0 comes out.

    opened by wpdlatm1452 1
  • How can I save the detection results?

    Hi, thank you for sharing your nice code.

    I trained the model on the Argoverse dataset following your README.

    I want to run the demo and save the detection results (as images or a video); how can I do that?

    Thank you.

    opened by daminlee1 0
Owner
Jinrong Yang
Research: Object detection, Deep learning