MegEngine implementation of YOLOX

Overview

Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on arXiv.

This repo is the MegEngine implementation of YOLOX; a PyTorch implementation is also available.

Updates!!

  • 【2021/08/05】 We released the MegEngine version of YOLOX.

Coming soon

  • Faster YOLOX training speed.
  • More models for the MegEngine version.
  • AMP training with MegEngine.

Benchmark

Light Models.

Model        size   mAP val (0.5:0.95)   Params (M)   FLOPs (G)   weights
YOLOX-Tiny   416    32.2                 5.06         6.45        github

Standard Models.

Coming soon!

Quick Start

Installation

Step1. Install YOLOX.

git clone git@github.com:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or  python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
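
To verify the installation, a quick sanity check (a minimal sketch; it only assumes the package can be imported as yolox and that MegEngine was installed via requirements.txt) is:

# sanity check: both packages import; MegEngine reports its version,
# yolox.__file__ confirms the editable install is on the path (package name assumed)
import megengine as mge
import yolox

print("MegEngine version:", mge.__version__)
print("YOLOX package location:", yolox.__file__)
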
Demo

Step1. Download a pretrained model from the benchmark table.

Step2. Use either -n or -f to specify your detector's config. For example:

python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

or

python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

Demo for video:

python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
Reproduce our results on COCO

Step1. Prepare COCO dataset

cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
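
The symlinked directory is expected to follow the standard COCO2017 layout (the file names below are the usual COCO conventions, listed here as an assumption rather than copied from this repo):

datasets/COCO/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/   # training images
└── val2017/     # validation images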

Step2. Reproduce our results on COCO by specifying -n:

python tools/train.py -n yolox-tiny -d 8 -b 128
  • -d: number of GPU devices
  • -b: total batch size; the recommended value is num-gpu * 8 (see the single-GPU example below)

When using -f, the above commands are equivalent to:

python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
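
If you train on fewer GPUs, scale the batch size by the same rule (total batch size = num-gpu * 8); for example, a single-GPU run might look like:

python tools/train.py -n yolox-tiny -d 1 -b 8
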
Evaluation

We support batch testing for fast evaluation:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
  • --fuse: fuse conv and bn layers
  • -d: number of GPUs used for evaluation (default: all available GPUs)
  • -b: total batch size across all GPUs (see the single-GPU example below)
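
For example, to run the same evaluation on a single GPU with a smaller total batch size (a usage sketch following the flags above; adjust -b to your GPU memory):

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 8 -d 1 --conf 0.001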

To reproduce the speed test, we use the following command:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
Tutorials

MegEngine Deployment

MegEngine in C++

Dump mge file

NOTE: the resulting model is dumped with optimize_for_inference and enable_fuse_conv_bias_nonlinearity enabled.

python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
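
Conceptually, the export script traces the model with a fixed input shape and dumps an inference-optimized graph. A minimal sketch of that idea is below (hypothetical code, not the actual tools/export_mge.py; the yolox.exp helpers and the 416x416 input size are assumptions):

import numpy as np
import megengine as mge
from megengine.jit import trace

from yolox.exp import get_exp  # assumed helper, mirroring the YOLOX exp system

exp = get_exp(None, "yolox-tiny")
model = exp.get_model()
model.load_state_dict(mge.load("yolox_tiny.pkl", map_location="cpu")["model"])
model.eval()

@trace(symbolic=True, capture_as_const=True)
def infer(data):
    return model(data)

# run once on a dummy input to record the graph, then dump it with the
# optimizations mentioned in the NOTE above
infer(mge.tensor(np.random.rand(1, 3, 416, 416).astype("float32")))
infer.dump(
    "yolox_tiny.mge",
    arg_names=["data"],
    optimize_for_inference=True,
    enable_fuse_conv_bias_nonlinearity=True,
)
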

Benchmark

  • Model Info: yolox-s @ input(1,3,640,640)

  • Testing Devices

    • x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    • AArch64 -- Xiaomi Mi 9 phone
    • CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
+fastrun +weight_preprocess (msec)   1 thread   2 thread   4 thread   8 thread
x86_64(fp32)                         516.245    318.29     253.273    222.534
x86_64(fp32+chw88)                   362.020    NONE       NONE       NONE
aarch64(fp32+chw44)                  555.877    351.371    242.044    NONE
aarch64(fp16+chw)                    439.606    327.356    255.531    NONE

CUDA @ CUDA (msec)     1 batch   2 batch   4 batch   8 batch   16 batch   32 batch   64 batch
megengine(fp32+chw)    8.137     13.2893   23.6633   44.470    86.491     168.95     334.248

Third-party resources

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
Comments
  • Why the yolox_tiny can not load the pretrain model correctly?

    When I used this repo on MegStudio and tried to train yolox_tiny with the pretrained model, an error occurred. The detailed log is as follows.

    2021-09-15 13:11:11 | INFO | yolox.core.trainer:247 - loading checkpoint for fine tuning
    2021-09-15 13:11:11 | ERROR | main:93 - An error has been caught in function '', process 'MainProcess' (359), thread 'MainThread' (139974572922688):
    Traceback (most recent call last):
      File "tools/train.py", line 93, in <module>
        main(exp, args)
      File "tools/train.py", line 73, in main
        trainer.train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 46, in train
        self.before_train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 107, in before_train
        model = self.resume_train(model)
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 249, in resume_train
        ckpt = mge.load(ckpt_file, map_location="cpu")["model"]
    KeyError: 'model'
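
    One way to narrow this down is to check what the checkpoint file actually contains (a hedged sketch based only on the traceback above; the wrapped file name is made up):

    import megengine as mge

    ckpt = mge.load("yolox_tiny.pkl", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))  # resume_train expects a "model" key
    # if the file stores a bare state dict instead, wrap it so fine-tuning can load it
    if not (isinstance(ckpt, dict) and "model" in ckpt):
        mge.save({"model": ckpt}, "yolox_tiny_wrapped.pkl")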

    opened by qunyuanchen 4
  • AssertionError: Torch not compiled with CUDA enabled

     python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device gpu
    2021-09-07 18:45:49.600 | INFO     | __main__:main:250 - Args: Namespace(camid=0, ckpt='/path/to/your/yolox_tiny.pkl', conf=0.25, demo='image', device='gpu', exp_file=None, experiment_name='yolox_tiny', fp16=False, fuse=False, legacy=False, name='yolox-tiny', nms=0.45, path='assets/dog.jpg', save_result=True, trt=False, tsize=416)
    E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  ..\c10/core/TensorImpl.h:1156.)
      return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    2021-09-07 18:45:49.791 | INFO     | __main__:main:260 - Model Summary: Params: 5.06M, Gflops: 6.45
    Traceback (most recent call last):
      File "tools/demo.py", line 306, in <module>
        main(exp, args)
      File "tools/demo.py", line 263, in main
        model.cuda()
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      [Previous line repeated 2 more times]
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 552, in _apply
        param_applied = fn(param)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    
    

    Environment: CUDA Version 11.2, no problem there.

    Following the official tutorial, the error above occurs.

    opened by monkeycc 4
  • Shouldn't it be Xiaomi instead of "xiamo" in the Benchmark -- Testing Devices section?

    Testing Devices

    x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    AArch64 -- xiamo phone mi9
    CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

    Shouldn't it be Xiaomi phone mi9?

    opened by Matt-Kou 2
  • fix bugs

    1. The img_info for the VOC dataset is wrong.
    2. The grid for yolo_head is wrong (similar to https://github.com/MegEngine/YOLOX/issues/9). It is fine when the image height and width are equal, but wrong when height != width.
    opened by LZHgrla 2
  • RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()'

    The error occurs when the input width and height are not equal; it happens during training at a seemingly random point:

    yolo_head.py", line 351, in get_assignments
        bboxes_preds_per_image = bboxes_preds_per_image[fg_mask]
    RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()' failed at ../../../../../../dnn/src/common/elemwise/opr_impl.cpp:281: void megdnn::ElemwiseForward::check_layout_and_broadcast(const TensorLayoutPtrArray&, const megdnn::TensorLayout&)

    opened by amazingzby 1
Releases(0.0.1)
Owner
旷视天元 MegEngine