ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection

Overview

This repository contains an implementation of ImVoxelNet, a monocular/multi-view 3D object detector introduced in our paper:

ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2106.01178

[image: ImVoxelNet overview]

Installation

For convenience, we provide a Dockerfile. Alternatively, you can install all required packages manually.
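
A minimal sketch of building and entering the container, assuming the Dockerfile sits at the repository root; the image tag and data mount are illustrative, and the /mmdetection3d working directory matches the container paths seen in the issues below:

    docker build -t imvoxelnet .
    docker run --gpus all -it -v $(pwd)/data:/mmdetection3d/data imvoxelnet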

This implementation is based on the mmdetection3d framework. Please refer to the original installation guide, install.md. Also, rotated_iou should be installed.

Most of the ImVoxelNet-related code is located in the following files: detectors/imvoxelnet.py, necks/imvoxelnet.py, dense_heads/imvoxel_head.py, and pipelines/multi_view.py.

Datasets

We support three benchmarks based on the SUN RGB-D dataset.

  • For the VoteNet benchmark with 10 object categories, you should follow the instructions in sunrgbd.
  • For the PerspectiveNet benchmark with 30 object categories, the same instructions apply; you only need to pass --dataset sunrgbd_monocular when running create_data.py (see the example after this list).
  • The Total3DUnderstanding benchmark implies detecting objects of 37 categories along with camera pose and room layout estimation. Download the preprocessed data as train.json and val.json, put them into ./data/sunrgbd, and run:
    python tools/data_converter/sunrgbd_total.py
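
A hedged example of generating the monocular annotations mentioned above; the positional dataset argument and the --root-path/--out-dir/--extra-tag flags follow the standard mmdetection3d create_data.py interface, while --dataset is the fork-specific flag:

    python tools/create_data.py sunrgbd --root-path ./data/sunrgbd \
        --out-dir ./data/sunrgbd --extra-tag sunrgbd --dataset sunrgbd_monocular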

ScanNet. Please follow the instructions in scannet. Note that create_data.py works with point clouds, not RGB images; thus, you should do some preprocessing before running it.

  1. First, you should obtain the RGB images. We recommend using a script from SensReader (see the sketch after the directory layout below).
  2. Then, put the camera poses and JPG images in the folder with other ScanNet data:
scannet
├── sens_reader
│   ├── scans
│   │   ├── scene0000_00
│   │   │   ├── out
│   │   │   │   ├── frame-000001.color.jpg
│   │   │   │   ├── frame-000001.pose.txt
│   │   │   │   ├── frame-000002.color.jpg
│   │   │   │   ├── ....
│   │   ├── ...
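
A hedged invocation of ScanNet's SensReader python exporter that produces the layout above; the flag names follow the ScanNet repository's reader.py and should be double-checked against your copy:

    python reader.py --filename scans/scene0000_00/scene0000_00.sens \
        --output_path sens_reader/scans/scene0000_00/out \
        --export_color_images --export_poses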

Now, you may run create_data.py with --dataset scannet_monocular.
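
For example (hedged; the flags mirror the standard mmdetection3d create_data.py interface, plus the fork-specific --dataset flag):

    python tools/create_data.py scannet --root-path ./data/scannet \
        --out-dir ./data/scannet --extra-tag scannet --dataset scannet_monocular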

For KITTI and nuScenes, please follow the instructions in getting_started.md. For nuScenes, set --dataset nuscenes_monocular.

Getting Started

Please see getting_started.md for basic usage examples.

Training

To start training, run tools/dist_train.sh with an ImVoxelNet config:

bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_kitti.py 8
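
For single-GPU runs, the non-distributed entry point should also work (a standard mmdetection convention; treat it as an assumption for this fork):

    python tools/train.py configs/imvoxelnet/imvoxelnet_kitti.py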

Testing

Test a pre-trained model using tools/dist_test.sh with an ImVoxelNet config:

bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth 8 --eval mAP

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the config to 0.15 or higher:

python tools/test.py configs/imvoxelnet/imvoxelnet_kitti.py \
    work_dirs/imvoxelnet_kitti/latest.pth --show
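
A sketch of where score_thr typically lives, assuming an mmdetection-style test_cfg block; every field below except score_thr is illustrative:

    # in configs/imvoxelnet/imvoxelnet_kitti.py (exact structure may differ)
    test_cfg = dict(
        score_thr=0.15,  # raise the score threshold for cleaner visualizations
        nms_pre=1000)    # illustrative; keep your config's own fields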

Models

| Dataset   | Object Classes                | Download Link            | Log                      |
|-----------|-------------------------------|--------------------------|--------------------------|
| SUN RGB-D | 37, from Total3DUnderstanding | total_sunrgbd.pth        | total_sunrgbd.log        |
| SUN RGB-D | 30, from PerspectiveNet       | perspective_sunrgbd.pth  | perspective_sunrgbd.log  |
| SUN RGB-D | 10, from VoteNet              | sunrgbd.pth              | sunrgbd.log              |
| ScanNet   | 18, from VoteNet              | scannet.pth              | scannet.log              |
| KITTI     | Car                           | kitti.pth                | kitti.log                |
| nuScenes  | Car                           | nuscenes.pth             | nuscenes.log             |

Example Detections

[image: example detections]

Citation

If you find this work useful for your research, please cite our paper:

@article{rukhovich2021imvoxelnet,
  title={ImVoxelNet: Image to Voxels Projection for Monocular and Multi-View General-Purpose 3D Object Detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  journal={arXiv preprint arXiv:2106.01178},
  year={2021}
}
Comments
  • Why do I need to download and use KITTI velodyne data?

    Hello @filaPro

    I was reading your paper and trying to implement your method on an RGB dataset that I have collected.

    While trying to test your code, it looks like KITTI velodyne data also needs to be downloaded. Does your method use the lidar point cloud, or is the point-cloud data used for some other purpose?

    Thank you for sharing the code and your help.

    bug 
    opened by chetanmreddy 13
  • Question about MMCV

    When I installed mmcv-full==1.3.8, the error was:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 26, in <module>
        f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    AssertionError: MMCV==1.3.8 is used but incompatible. Please install mmcv>=1.1.5, <=1.3.0.

    When I installed mmcv-full==1.3.1, the error was:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/newnfs/zzwu/08_3d_code/imvoxelnet03/mmdet3d/__init__.py", line 3, in <module>
        import mmdet
      File "/home/CN/zizhang.wu/anaconda3/envs/imvoxelnet03/lib/python3.7/site-packages/mmdet/__init__.py", line 25, in <module>
        f'MMCV=={mmcv.__version__} is used but incompatible. ' \
    AssertionError: MMCV==1.3.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.4.0.

    opened by rockywind 9
  • General Questions - ImVoxelNet with custom dataset

    Hello,

    Thank you for your work. I have a few questions I wish to have clarified. Context: I am creating a dataset in SUN-RGBD format, and so I would like to understand the format structure.

    1. It looks like the "calib" file (generated once you run the MATLAB scripts in the SUN-RGBD folder) contains two rows. The first is the camera extrinsics; however, it is named "Rt", which in my mind should be a 3x4 matrix, yet it is stored as a column-major 3x3 matrix. Which coordinate system does this extrinsic transform? From what I understand, it rotates from the depth coordinate system to the camera coordinate system. Then, in the ground-truth labeling, the translation and yaw angle take care of the bounding-box position and orientation. Is this understanding correct?

    2. In MMDetection3D there is a "browse_dataset" file that allows you to view the ground truths of your dataset to confirm they are correct before training. I was wondering if there is one for SUN RGB-D in ImVoxelNet, as it would be helpful to see whether my custom labels in SUN-RGBD format are correct.

    3. I am trying to use the Dockerfile provided; however, my machine runs CUDA 11.1 (an RTX 3090, so from my understanding I cannot downgrade to 10.1), which means pytorch>=1.8.0. I changed mmcv-full and mmdet to compatible, most recent versions, but I run into the runtime error "... is not compiled with GPU support". Any suggestions to make the Dockerfile compatible with CUDA 11.1? (Running with the provided Dockerfile gives "CUDA error: no kernel image is available for execution on the device".)

    Again, thank you for your time!

    opened by Steven-m2ai 8
  • Can not visualize the output results based on SUN RGB-D!

    I cannot visualize the output results on SUN RGB-D:

    python tools/test.py configs/imvoxelnet/imvoxelnet_sunrgbd.py work_dirs/epoch_12.pth --show --show-dir work_dirs/imvoxelnet_sunrgbd/results

    The output is just the original image.

    opened by lihua213 8
  • about create_data.py script?

    Thanks for sharing the code. I met this error without modifying anything. The script is:

    python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

    The result is:

    Traceback (most recent call last):
      File "tools/create_data.py", line 4, in <module>
        from tools.data_converter import indoor_converter as indoor
    ModuleNotFoundError: No module named 'tools.data_converter'

    opened by rockywind 8
  • 0 Loss and AP

    Hi. Thank you for your great work! I have successfully trained and tested your model on the KITTI dataset and it works.

    I am currently trying to train on a custom dataset (not cars), but somehow the loss and AP are 0. I have correctly set the image size, and the dataset was already validated and is good. Do you have any suggestions? I might have missed some config.

    opened by alfinnurhalim 7
  • transformation in 'create_nuscenes_monocular_infos'

    opened by Jiayi719 7
  • KeyError when running test.py on the ScanNet dataset: "Caught KeyError in DataLoader worker process 0. KeyError: 'image_paths'"

    Hi, when I run test.py on the ScanNet dataset, I met this error. Please help, thanks very much.

    [email protected]:/mmdetection3d# python tools/test.py configs/imvoxelnet/imvoxelnet_scannet.py ./data/checkpoints/scannet.pth --show --show-dir ./data/scannet/show-dir/
    Use load_from_local loader
    [                ] 0/6, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/test.py", line 153, in <module>
        main()
      File "tools/test.py", line 129, in main
        outputs = single_gpu_test(model, data_loader, args.show, args.show_dir)
      File "/mmdetection3d/mmdet3d/apis/test.py", line 27, in single_gpu_test
        for i, data in enumerate(data_loader):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
        data = self._next_data()
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
        return self._process_data(data)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
        data.reraise()
      File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)

    Original Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
        data = fetcher.fetch(index)
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 292, in __getitem__
        return self.prepare_test_data(idx)
      File "/mmdetection3d/mmdet3d/datasets/custom_3d.py", line 166, in prepare_test_data
        input_dict = self.get_data_info(index)
      File "/mmdetection3d/mmdet3d/datasets/scannet_monocular_dataset.py", line 19, in get_data_info
        for i in range(len(info['image_paths'])):
    KeyError: 'image_paths'

    bug 
    opened by jasmine202106 7
  • About the train/val splits for SUN RGB-D dataset

    Hello, thanks for your excellent work! I noticed that you have processed the SUN RGB-D annotations into COCO format. Could you please tell me your data-processing method and the basis of the splits? I have generated visualizations for the val part, but I cannot find the samples shown in the Total3D paper. Is it because you divided the dataset differently?

    Best, Harvey

    opened by Harvey-Mei 6
  • subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.

    I run:

    $ bash tools/dist_test.sh configs/imvoxelnet/imvoxelnet_kitti.py work_dirs/20210503_214214.pth 1 --eval mAP
    

    and it reports:

    Traceback (most recent call last):
      File "tools/test.py", line 9, in <module>
        from mmdet3d.apis import single_gpu_test
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/__init__.py", line 1, in <module>
        from .inference import inference_detector, init_detector, show_result_meshlab
      File "/home/xxx/imvoxelnet-master/mmdet3d/apis/inference.py", line 8, in <module>
        from mmdet3d.core import Box3DMode, show_result
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/__init__.py", line 2, in <module>
        from .bbox import *  # noqa: F401, F403
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/__init__.py", line 4, in <module>
        from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/__init__.py", line 1, in <module>
        from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 5, in <module>
        from ..structures import get_box_type
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/__init__.py", line 1, in <module>
        from .base_box3d import BaseInstance3DBoxes
      File "/home/xxx/imvoxelnet-master/mmdet3d/core/bbox/structures/base_box3d.py", line 5, in <module>
        from mmdet3d.ops.iou3d import iou3d_cuda
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/__init__.py", line 5, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
        from .ball_query import ball_query
      File "/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
        from . import ball_query_ext
    ImportError: cannot import name 'ball_query_ext' from 'mmdet3d.ops.ball_query' (/home/xxx/imvoxelnet-master/mmdet3d/ops/ball_query/__init__.py)
    Traceback (most recent call last):
      File "/home/xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/xxx/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
        main()
      File "/home/xxx/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/xxx/bin/python3', '-u', 'tools/test.py', '--local_rank=0', 'configs/imvoxelnet/imvoxelnet_kitti.py', 'work_dirs/20210503_214214.pth', '--launcher', 'pytorch', '--eval', 'mAP']' returned non-zero exit status 1.
    
    opened by Light-- 6
  • How to make imVoxelNet support multi-classes in nuScenes dataset?

    Hi @filaPro, thanks for sharing the code. I noticed that the original paper only reports nuScenes results for "car". I want to see how it performs with multiple classes, so I modified this line to make the network output support 10 classes: https://github.com/saic-vul/imvoxelnet/blob/3512e89ca98e48aebb21a4c9e9fbe5037220b3a4/configs/imvoxelnet/imvoxelnet_nuscenes.py#L26

    I set num_classes=10, but I still only get results for the single class "car"; the other classes all have 0 mAP. Did you try this before? Can you help me?

    opened by XinchaoGou 6
  • Directly install saic-vul/imvoxelnet or first install open-mmlab/mmdetection3d? How to replace?

    Hi, I have a question about the installation. You mentioned "replacing open-mmlab/mmdetection3d with saic-vul/imvoxelnet". What does this mean? Should we first install mmdetection3d, or just install saic-vul/imvoxelnet?

    Thanks!

    opened by gyhandy 5
  • Some problems about val

    Hello, I found a problem with 'val': the code at line 144, val_dataset.pipeline = cfg.data.train.pipeline, needs to be changed to val_dataset.pipeline = cfg.data.train.dataset.pipeline. Right?

    opened by wuwangzhuanwan 3
  • How can I train on a dataset without lidar2cam matrix?

    Hi filaPro, thank you for your brilliant work on 3D detection. I'm trying to train ImVoxelNet on my own dataset, which only has a world2cam matrix and no lidar2cam matrix, unlike the KITTI dataset. Is the lidar2cam matrix necessary for training? If so, can I train with the world2cam matrix instead? Thank you!

    opened by Italian-PAO 3
  • Voxel Size

    Hello,

    I am experimenting with a custom dataset on ImVoxelNet. My dataset is ~2000 images, and I am running into extreme overfitting issues. For example, the predictions on validation images follow the same pattern as some of the training-image predictions.

    I was looking into what could be the cause; I guess I could try playing with the lr and scheduler. However, I was also looking into the voxel size and number. Do you think these could have any effect on the outcome? Any other advice? Thanks!

    opened by Steven-m2ai 8
  • How to use outputs of layout / angles from a pretrained model?

    I'm playing with the SUN RGB-D model (v3 | [email protected]: 43.7), which uses 20211007_105247.pth and imvoxelnet_total_sunrgbd_fast.py.

    For each image I'm testing, I only have the RGB and a 3x3 intrinsic matrix, which maps from camera space to screen space.

    I've been able to follow the demo code in general so far! Perhaps I'm missing it, but the flow and pipeline of the available demos appear not to use the outputs for layout / angles. However, the visualized images elsewhere seem to have layout or room-tilt predictions applied along with the per-object yaw angles.

    I want to make sure that I'm using the SUN RGB-D model correctly. Are there any examples I can follow to make sure I can apply the room tilts to the objects? E.g., if my end goal is an 8-vertex mesh per object in camera coordinates?

    For instance, show_result and _write_oriented_bbox seem to only use the yaw angle. It seems like those are the two main functions for visualizing (unless I'm missing some code).

    To be clear, the predictions are definitely being made as expected. It's only the exact steps for applying them that are ambiguous to me.

    opened by garrickbrazil 5
Releases: v1.2
Owner: Visual Understanding Lab @ Samsung AI Center Moscow