Official code of the paper "Expanding Low-Density Latent Regions for Open-Set Object Detection" (CVPR 2022)

Overview

OpenDet

Expanding Low-Density Latent Regions for Open-Set Object Detection (CVPR 2022)
Jiaming Han, Yuqiang Ren, Jian Ding, Xingjia Pan, Ke Yan, Gui-Song Xia.
arXiv preprint.

OpenDet2: OpenDet is implemented based on detectron2.

Setup

The code is based on detectron2 v0.5.

  • Installation

Here is a from-scratch setup script.

conda create -n opendet2 python=3.8 -y
conda activate opendet2

conda install pytorch=1.8.1 torchvision cudatoolkit=10.1 -c pytorch -y
pip install detectron2==0.5 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
git clone https://github.com/csuhan/opendet2.git
cd opendet2
pip install -v -e .
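
A quick way to confirm the environment is usable is to import the main dependencies and check that CUDA is visible. This is a minimal sanity check of our own, not part of the official setup:

# Minimal sanity check (not part of the official setup): verifies that torch
# and detectron2 import correctly and that a CUDA device is visible.
import torch
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)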
  • Prepare datasets

Please follow datasets/README.md for dataset preparation. Then generate the VOC-COCO datasets:

bash datasets/opendet2_utils/prepare_openset_voc_coco.sh
# using data splits provided by us.
cp datasets/voc_coco_ann datasets/voc_coco -rf
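
If the script finishes without errors, the generated splits should be under datasets/voc_coco. Below is a minimal check of our own, assuming the default layout produced by the commands above:

# Minimal sanity check (not part of the official pipeline): list a few of the
# generated VOC-COCO split files to confirm the directory was created.
from pathlib import Path

split_dir = Path("datasets/voc_coco")
assert split_dir.is_dir(), f"{split_dir} not found; re-run the preparation script."
for path in sorted(split_dir.iterdir())[:10]:
    print(path.name)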

Model Zoo

We report the results on VOC and VOC-COCO-20, and provide pretrained models. Please refer to the corresponding log file for full results.

  • Faster R-CNN
Method | Backbone | mAP_K↑ (VOC) | WI↓ | AOSE↓ | mAP_K↑ | AP_U↑ | Download
FR-CNN | R-50 | 80.06 | 19.50 | 16518 | 58.36 | 0 | config / model
PROSER | R-50 | 79.42 | 20.44 | 14266 | 56.72 | 16.99 | config / model
ORE | R-50 | 79.80 | 18.18 | 12811 | 58.25 | 2.60 | config / model
DS | R-50 | 79.70 | 16.76 | 13062 | 58.46 | 8.75 | config / model
OpenDet | R-50 | 80.02 | 12.50 | 10758 | 58.64 | 14.38 | config / model
OpenDet | Swin-T | 83.29 | 10.76 | 9149 | 63.42 | 16.35 | config / model
  • RetinaNet
Method | mAP_K↑ (VOC) | WI↓ | AOSE↓ | mAP_K↑ | AP_U↑ | Download
RetinaNet | 79.63 | 14.16 | 36531 | 57.32 | 0 | config / model
Open-RetinaNet | 79.64 | 10.74 | 17208 | 57.32 | 10.55 | config / model

Note:

  • You can also download the pretrained models from the GitHub release or from BaiduYun (extraction code: ABCD).
  • The above results come from our reimplementation, so they differ slightly from the numbers in the paper.
  • The official code of ORE is available at OWOD, so we do not plan to include ORE in this repository.

Online Demo

Try our online demo on Hugging Face Spaces.

Train and Test

  • Testing

First, download the pretrained weights from the model zoo, e.g., OpenDet.

Then, run the following command:

python tools/train_net.py --num-gpus 8 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
        --eval-only MODEL.WEIGHTS output/faster_rcnn_R_50_FPN_3x_opendet/model_final.pth
  • Training

The training process is the same as in standard detectron2:

python tools/train_net.py --num-gpus 8 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml

To train with the Swin-T backbone, download swin_tiny_patch4_window7_224.pth and convert it to detectron2's format with tools/convert_swin_to_d2.py:

wget https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
python tools/convert_swin_to_d2.py swin_tiny_patch4_window7_224.pth swin_tiny_patch4_window7_224_d2.pth
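
If you want to double-check the conversion, the sketch below prints the top-level structure of the converted file. It assumes the output of tools/convert_swin_to_d2.py can be loaded with torch.load; if the script writes the checkpoint in another format, adjust accordingly:

# Illustrative check only: inspect the converted checkpoint's top-level keys
# to confirm it looks like a detectron2-style state dict.
import torch

ckpt = torch.load("swin_tiny_patch4_window7_224_d2.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])
else:
    print(type(ckpt))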

Citation

If you find our work useful for your research, please consider citing:

@InProceedings{han2022opendet,
    title     = {Expanding Low-Density Latent Regions for Open-Set Object Detection},
    author    = {Han, Jiaming and Ren, Yuqiang and Ding, Jian and Pan, Xingjia and Yan, Ke and Xia, Gui-Song},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2022}
}
Comments
  • Question about the t-SNE visualization in Figure 2 of the paper

    @csuhan Hi, I would like to ask about one point in the paper. Figure 2 shows a t-SNE visualization of latent features, where the colored points are VOC categories (known) and the black triangles are non-VOC categories (unknown, taken from COCO).

    In your work, the unknown is modeled as a single class (rather than several classes), so it is natural to expect that a model trained this way clusters the unknowns into a single cluster, as in Figure 2(b), with some scattered points distributed among the known-class clusters.

    In practice, however, that single unknown class should contain many latent categories, e.g., the number of COCO classes minus the number of VOC classes = 80 - 20 = 60, so one unknown class may cover 60 potential categories. Isn't it a bit strange that the features of 60 categories are pulled together into one cluster, as in Figure 2(b)? In other words, is a single class center for the unknown really reasonable?

    I would like to hear your thoughts, thank you!

    opened by ChibisukeDragon 4
  • Question about loss_cls_ic

    Nice job! I am trying to reproduce your work, but I find that loss_cls_ic is 0 most of the time after training starts. Is this normal? (I set batch_size=4 because of limited computational resources.) Thanks.

    opened by Yifei-Y 3
  • error: Multiple top-level packages discovered in a flat-layout: ['demo', 'configs', 'opendet2', 'datasets', 'detectron2'].

    When I followed the README to install opendet2, I ran into trouble. Here are my commands (I have several RTX 3090 GPUs):

    # CUDA V11.1
    # torch 1.9.0
    # python 3.8
    conda create -n opendet2 python=3.8 -y
    conda activate opendet2
    # get pytorch 1.9.0. I got RuntimeError: CUDA error: device-side assert triggered when using torch 1.8.1
    pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
    # build opencv
    pip install opencv-python
    pip install opencv-contrib-python
    # build detectron2. DO NOT build detectron2 from latest SOURCE. In the latest version, some named methods have been removed.
    pip install detectron2==0.5 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
    # build opendet2
    cd opendet2
    pip install -v -e .
    

    When I ran the last command, pip install -v -e ., I got this error message:

    (opendet2) [email protected]:~/opendet/opendet2$ pip install -v -e .
    Using pip 21.2.4 from /home/yupeng/anaconda3/envs/opendet2/lib/python3.8/site-packages/pip (python 3.8)
    Looking in indexes: https://mirrors.bfsu.edu.cn/pypi/web/simple/
    Obtaining file:///home/yupeng/opendet/opendet2
        Running command python setup.py egg_info
        error: Multiple top-level packages discovered in a flat-layout: ['demo', 'configs', 'opendet2', 'datasets', 'detectron2'].
    
        To avoid accidental inclusion of unwanted files or directories,
        setuptools will not proceed with this build.
    
        If you are trying to create a single distribution with multiple packages
        on purpose, you should not rely on automatic discovery.
        Instead, consider the following options:
    
        1. set up custom discovery (`find` directive with `include` or `exclude`)
        2. use a `src-layout`
        3. explicitly set `py_modules` or `packages` with a list of names
    
        To find more information, look for "package discovery" on setuptools docs.
    WARNING: Discarding file:///home/yupeng/opendet/opendet2. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    (opendet2) [email protected]:~/opendet/opendet2$
    

    I was able to run pip install setuptools==58.2.0 and retry pip install -v -e ., and then everything worked. It seems there are some problems with the latest setuptools>=61.0. You can find more information in the links below: https://github.com/pypa/setuptools/issues/3197 https://github.com/pypa/setuptools/issues/3227 https://github.com/facebookresearch/detectron2/issues/3943 https://github.com/facebookresearch/detectron2/issues/3811 Good luck.

    opened by ChibisukeDragon 2
  • Deadlock when training with 8 GPUs

    Faster R-CNN with the basic ResNet backbone deadlocks during training. I simply replaced Base_RCNN_FPN.yaml with detectron2's Base_RCNN_C4.yaml. When training with the example command from the README, it gets stuck on the first batch: GPU utilization is 100%, but only about 2400 MB of GPU memory is used, and after 14 hours overnight it was still stuck at the same place with no output or error. Training on a single GPU works fine. Could you offer some help?

    opened by buaali 0
  • CUDA error: device-side assert triggered

    When I run train_net.py, I get this error after loading R-50.pkl. How can I solve it? Thanks a lot. My environment: CUDA 11.1, Python 3.7, torch 1.8.1.

    opened by Millielele 0
  • Handling of weight_decay for bias parameters

    In opendet2/solver/build.py, line 39, the comment and the code do not match: the comment says the bias weight_decay keeps the default value, but the code actually sets it to None, which leads to the following error: TypeError: add(): argument 'alpha' must be Number, not NoneType

    opened by sunxuhao 2
  • Reproducibility issue

    Hi,

    Amazing work on open-set detection! I trained the model after following the dataset preparation steps you suggest, with exactly the same configs. The only difference is that I used 1 GPU instead of 8 GPUs, and these are the results I obtained. Interestingly, the WI and AOSE metrics are worse, but AP is better. Do you think this much difference is expected just from using fewer GPUs, or is there some other issue I should look for? Thanks in advance.

    VOC-COCO-20:

    Result | WI↓ | AOSE↓ | AP_U↑
    Paper | 14.95 | 11286 | 14.93
    Reproduced | 20.68 | 13370 | 21.36

    VOC-COCO-0.5n:

    Result | WI↓ | AOSE↓ | AP_U↑
    Paper | 6.44 | 3944 | 9.05
    Reproduced | 55 | 5369 | 18.09

    opened by misraya 3
  • Running command python setup.py egg_info error: Multiple top-level packages discovered in a flat-layout: ['data', 'engine', 'solver', 'config', 'modeling', 'evaluation'].

    opened by roywang021 1