SceneCollisionNet

This repo contains the code for "Object Rearrangement Using Learned Implicit Collision Functions", an ICRA 2021 paper. For more information, please visit the project website.

License

This repo is released under the NVIDIA Source Code License. For business inquiries, please contact [email protected]. For press and other inquiries, please contact Hector Marinez at [email protected].

Install and Setup

Clone and install the repo (we recommend a virtual environment, especially if training or benchmarking, to avoid dependency conflicts):

git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .

These commands install the minimum dependencies needed for generating a mesh dataset and then training/benchmarking using Docker. If you instead wish to train or benchmark without using Docker, please first install an appropriate version of PyTorch and the corresponding version of PyTorch Scatter for your system (an example sketch follows the commands below). Then, execute these commands:

git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .[train]

If benchmarking, replace train in the last command with bench.
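If installing PyTorch and PyTorch Scatter yourself for the non-Docker route, a minimal sketch for a CUDA 10.2 system looks like the following (the version numbers are illustrative assumptions; choose the builds that match your driver and CUDA install):

pip install torch==1.8.0  # the default 1.8.0 wheel targets CUDA 10.2
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.8.0+cu102.html  # matching PyTorch Scatter wheels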

To roll out the object rearrangement MPPI policy in a simulated tabletop environment, first download Isaac Gym and place it in the extern folder within this repo. Next, follow the previous installation instructions for training, but replace the train option with policy.
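For example, assuming Isaac Gym was downloaded and extracted to ~/Downloads/isaacgym (a hypothetical path), the setup might look like:

mv ~/Downloads/isaacgym extern/
pip install -e .[policy]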

To download the pretrained weights for benchmarking or policy rollout, run bash scripts/download_weights.sh.

Generating a Mesh Dataset

To save time during training/benchmarking, meshes are preprocessed and mesh stable poses are calculated offline. SceneCollisionNet was trained using the ACRONYM dataset. To use this dataset for training or benchmarking, download the ShapeNetSem meshes here (note: you must first register for an account) and the ACRONYM grasps here. Next, build Manifold (an external library included as a submodule):

./scripts/install_manifold.sh

Then, use the following script to generate a preprocessed version of the ACRONYM dataset:

python tools/generate_acronym_dataset.py /path/to/shapenetsem/meshes /path/to/acronym datasets/shapenet

If you have your own set of meshes, run:

python tools/generate_mesh_dataset.py /path/to/meshes datasets/your_dataset_name

Note that this dataset will not include grasp data; grasps are not needed for training or benchmarking SceneCollisionNet, but are used when rolling out the MPPI policy.

Training/Benchmarking with Docker

First, install Docker and nvidia-docker2 following the instructions here. Pull the SceneCollisionNet docker image from DockerHub (tag scenecollisionnet) or build locally using the provided Dockerfile (docker build -t scenecollisionnet .). Then, use the appropriate configuration .yaml file in cfg to set training or benchmarking parameters (note that cfg file paths are relative to the Docker container, not the local machine) and run one of the commands below (replacing paths with your local paths as needed; -v requires absolute paths).

Train a SceneCollisionNet

Edit cfg/train_scenecollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_scenecollisionnet_docker.sh

Train a RobotCollisionNet

Edit cfg/train_robotcollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_robotcollisionnet_docker.sh

Benchmark a SceneCollisionNet

Edit cfg/benchmark_scenecollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:ro -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_scenecollisionnet_docker.sh

Benchmark a RobotCollisionNet

Edit cfg/benchmark_robotcollisionnet.yaml, then run:

docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_robotcollisionnet_docker.sh

Loss Plots

To get loss plots while training, run:

docker exec -d <container_name> python3 tools/loss_plots.py /models/<model_name>/log.csv

Benchmark FCL or SDF Baselines

Edit cfg/benchmark_baseline.yaml, then run:

docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/benchmark_results:/benchmark:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/benchmark_baseline_docker.sh

Training/Benchmarking without Docker

First, install system dependencies. The system dependencies listed assume an Ubuntu 18.04 install with NVIDIA drivers >= 450.80.02 and CUDA 10.2. You can adjust the dependencies accordingly for different driver/CUDA versions. Note that the NVIDIA drivers come packaged with EGL, which is used during training and benchmarking for headless rendering on the GPU.

System Dependencies

See the Dockerfile for a full list. For training/benchmarking, you will need the following packages (an example apt-get command is shown after the list):

python3-dev
python3-pip
ninja-build
libcudnn8=8.1.1.33-1+cuda10.2
libcudnn8-dev=8.1.1.33-1+cuda10.2
libsm6
libxext6
libxrender-dev
freeglut3-dev
liboctomap-dev
libfcl-dev
gifsicle
libfreetype6-dev
libpng-dev
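
On Ubuntu 18.04, these can typically be installed with apt (a sketch; the libcudnn8 packages assume the NVIDIA CUDA apt repository is configured):

sudo apt-get update
sudo apt-get install -y python3-dev python3-pip ninja-build \
  libcudnn8=8.1.1.33-1+cuda10.2 libcudnn8-dev=8.1.1.33-1+cuda10.2 \
  libsm6 libxext6 libxrender-dev freeglut3-dev liboctomap-dev \
  libfcl-dev gifsicle libfreetype6-dev libpng-dev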

Python Dependencies

Follow the instructions above to install the necessary dependencies for your use case (either the train, bench, or policy options).
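For example, from the repo root, run the variant matching your use case:

pip install -e .[train]
pip install -e .[bench]
pip install -e .[policy]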

Train a SceneCollisionNet

Edit cfg/train_scenecollisionnet.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/train_scenecollisionnet.py

Train a RobotCollisionNet

Edit cfg/train_robotcollisionnet.yaml, then run:

python tools/train_robotcollisionnet.py

Benchmark a SceneCollisionNet

Edit cfg/benchmark_scenecollisionnet.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/benchmark_scenecollisionnet.py

Benchmark a RobotCollisionNet

Edit cfg/benchmark_robotcollisionnet.yaml, then run:

python tools/benchmark_robotcollisionnet.py

Benchmark FCL or SDF Baselines

Edit cfg/benchmark_baseline.yaml, then run:

PYOPENGL_PLATFORM=egl python tools/benchmark_baseline.py

Policy Rollout

To view a rearrangement MPPI policy rollout in a simulated Isaac Gym tabletop environment, run the following command (note that this requires a local machine with an available GPU and display):

python tools/rollout_policy.py --self-coll-nn weights/self_coll_nn --scene-coll-nn weights/scene_coll_nn --control-frequency 1

There are many other options for this command; they can be listed with the --help flag and set with the corresponding arguments. If you get RuntimeError: CUDA out of memory, try reducing the horizon (--mppi-horizon, default 40), the number of trajectories (--mppi-num-rollouts, default 200), or the number of collision steps (--mppi-collision-steps, default 10). Note that this may affect policy performance.
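For example, a lower-memory rollout might look like the following (the reduced values are illustrative, not tuned):

python tools/rollout_policy.py --self-coll-nn weights/self_coll_nn --scene-coll-nn weights/scene_coll_nn --control-frequency 1 --mppi-horizon 20 --mppi-num-rollouts 100 --mppi-collision-steps 5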

Citation

If you use this code in your own research, please consider citing:

@inproceedings{danielczuk2021object,
  title={Object Rearrangement Using Learned Implicit Collision Functions},
  author={Danielczuk, Michael and Mousavian, Arsalan and Eppner, Clemens and Fox, Dieter},
  booktitle={Proc. IEEE Int. Conf. Robotics and Automation (ICRA)},
  year={2021}
}