DiSECt: Differentiable Simulator for Robotic Cutting


Website | Paper | Dataset | Video | Blog post

Figure: potato slicing demonstration.

DiSECt is a simulator for the cutting of deformable materials. It uses the Finite Element Method (FEM) to simulate the deformation of the material, and leverages a virtual node algorithm to introduce springs between the two halves of the mesh being cut. These cutting springs are weakened in proportion to the knife forces acting on the material, yielding a continuous model of deformation and crack propagation. By leveraging source code transformation, the back-end of DiSECt automatically generates CUDA-accelerated kernels for the forward simulation and the gradients of the simulation inputs. Such gradient information can be used to optimize the simulation parameters to achieve accurate knife force predictions, optimize cutting actions, and more.
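
To make the spring-weakening idea concrete, here is a highly simplified, hypothetical PyTorch sketch; it is not DiSECt's actual implementation (which lives in the auto-generated dflex kernels), only an illustration of the two ingredients described above: spring forces between paired nodes on either side of the cut, and stiffness that decays in proportion to the knife contact force.

    import torch

    def cut_spring_forces(x, spring_indices, rest_lengths, stiffness):
        # Linear springs between paired nodes on the two halves of the mesh.
        # x: (N, 3) node positions; spring_indices: (S, 2); rest_lengths, stiffness: (S,)
        xa, xb = x[spring_indices[:, 0]], x[spring_indices[:, 1]]
        d = xb - xa
        length = d.norm(dim=1, keepdim=True)
        direction = d / length.clamp(min=1e-9)
        f = stiffness.unsqueeze(1) * (length - rest_lengths.unsqueeze(1)) * direction
        return f  # force on the first node of each pair; the second node receives -f

    def weaken_springs(stiffness, knife_force, softness, dt):
        # Reduce stiffness in proportion to the knife force acting on the material.
        return torch.clamp(stiffness - softness * knife_force * dt, min=0.0)

Because such a model is built entirely from differentiable operations, gradients of quantities like the knife force with respect to material parameters can flow through the whole rollout; DiSECt obtains the same property through source code transformation and CUDA-accelerated kernels rather than eager PyTorch.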

Prerequisites

  • Python 3.6 or higher
  • PyTorch 1.4.0 or higher
  • Pixar USD lib (for visualization)

Pre-built USD Python libraries can be downloaded from https://developer.nvidia.com/usd. Once they are downloaded, follow the instructions to add them to your PYTHONPATH environment variable. Besides the provided basic visualizer implemented with pyvista, DiSECt can generate USD files for rendering, e.g. in NVIDIA Omniverse™ or usdview.

Using the built-in backend

By default, the simulation back-end uses the built-in PyTorch cpp-extensions mechanism to compile auto-generated simulation kernels.

  • Windows users should ensure they have Visual Studio 2019 installed
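
For context, the cpp-extensions mechanism compiles C++/CUDA source at import time and loads the result as a Python module. The toy kernel below is a minimal sketch of that mechanism only, not one of DiSECt's auto-generated simulation kernels, and it needs a working C++ compiler and ninja just like the real kernels do.

    import torch
    from torch.utils.cpp_extension import load_inline

    cpp_source = r"""
    #include <torch/extension.h>
    torch::Tensor scale_add(torch::Tensor x, double a, double b) {
        return x * a + b;   // toy stand-in for a generated simulation kernel
    }
    """

    # JIT-compiles the source and exposes the listed functions to Python.
    toy = load_inline(name="toy_kernels", cpp_sources=cpp_source, functions=["scale_add"])
    print(toy.scale_add(torch.arange(4.0), 2.0, 1.0))  # tensor([1., 3., 5., 7.])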

Installation

Dataset

To set up our dataset of meshes, simulated knife forces, and nodal motion fields recorded in the ANSYS LS-DYNA simulator, download this zip file (96 MB) and extract it in the project folder so that the dataset folder ends up at the top level.

We provide a README.md file with more details on the contents of this dataset in the dataset folder. The dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License.

Python dependencies

Next, set up the Python dependencies listed in requirements.txt via

pip install -r requirements.txt

Mesh processing library

See meshing/README.md for instructions on how to install the recommended C++-based mesh cutting library that DiSECt relies on to process meshes.

Mesh discretization

For mesh discretization, we provide an example script in cutting/tetrahedralization.py based on the Wildmeshing Python API. It generates a tetrahedral mesh from a triangle surface mesh so that the geometry can be used in the FEM simulator.
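
As a rough sketch of what that script does, the Wildmeshing bindings can be driven roughly as follows; the input file name and the envelope parameter below are placeholders, and cutting/tetrahedralization.py should be consulted for the settings actually used by DiSECt.

    import wildmeshing as wm

    # Placeholder input mesh and envelope size (illustrative values only).
    tetra = wm.Tetrahedralizer(epsilon=0.005)
    tetra.load_mesh("surface.obj")
    tetra.tetrahedralize()
    vertices, tets = tetra.get_tet_mesh()   # (N, 3) float array, (M, 4) int array
    print(f"{len(vertices)} vertices, {len(tets)} tetrahedra")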

Examples

The following demos are provided and can be executed via python examples/<name>.py.

  • basic_cutting: Cutting a prism shape with a knife following a slicing motion, running in the interactive pyvista 3D visualizer
  • render_usd: Demonstrates how to generate a USD file from the simulation
  • optimize_slicing: Constrained optimization via MDMM to find a slicing motion of the knife that minimizes force while adhering to blade length and knife height constraints (a generic sketch of the MDMM pattern follows this list)
  • parameter_inference: Optimizes simulation parameters to match a knife force profile from one of the measurements in our dataset
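
The MDMM (Modified Differential Method of Multipliers) pattern used by optimize_slicing can be illustrated on a generic toy problem; the objective and constraint below are placeholders, not the knife-force objective or the blade-length and knife-height constraints of the actual example. The idea is gradient descent on the parameters and gradient ascent on a Lagrange multiplier of an augmented Lagrangian:

    import torch

    x = torch.randn(2, requires_grad=True)      # parameters being optimized (toy)
    lam = torch.zeros(1, requires_grad=True)    # Lagrange multiplier
    damping = 10.0                              # quadratic penalty weight
    opt = torch.optim.SGD([x, lam], lr=2e-2)

    def objective(x):                           # toy stand-in for the knife-force cost
        return (x ** 2).sum()

    def constraint(x):                          # toy equality constraint g(x) = 0
        return x.sum() - 1.0

    for _ in range(5000):
        opt.zero_grad()
        g = constraint(x)
        lagrangian = objective(x) + lam * g + 0.5 * damping * g ** 2
        lagrangian.backward()
        lam.grad.neg_()                         # ascend on the multiplier, descend on x
        opt.step()

    print(x.detach(), constraint(x).item())     # x converges near [0.5, 0.5], constraint near 0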

Citation

@INPROCEEDINGS{heiden2021disect,
    AUTHOR    = {Eric Heiden AND Miles Macklin AND Yashraj S Narang AND Dieter Fox AND Animesh Garg AND Fabio Ramos},
    TITLE     = {{DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting}},
    BOOKTITLE = {Proceedings of Robotics: Science and Systems},
    YEAR      = {2021},
    ADDRESS   = {Virtual},
    MONTH     = {July},
    DOI       = {10.15607/RSS.2021.XVII.067}
}

License

Copyright © 2021, NVIDIA Corporation. All rights reserved.

This work is made available under the NVIDIA Source Code License.


Comments
  • Error when reproduce demo on DiSECt

    Hi Heiden, thank you for sharing such awesome work on DiSECt; the codebase is solid and cool. But I run into some bugs when trying to run python examples/basic_cutting.py.

    The following is my bug info. It seems that it cannot find kernels.so:

    python examples/basic_cutting.py
    Rebuilding kernels
    /home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py:298: UserWarning:

                                   !! WARNING !!

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Your compiler (clang++-9) is not compatible with the compiler Pytorch was
    built with for this platform, which is g++ on linux. Please use g++ to
    compile your extension. Alternatively, you may compile PyTorch from source
    using clang++-9, and then you can also use clang++-9 to compile your extension.

    See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
    with compiling PyTorch from source.
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                                   !! WARNING !!

      platform=sys.platform))
    Detected CUDA files, patching ldflags
    Emitting ninja build file /media/anabur/E/robot_similation/DiSECt/dflex/kernels/build.ninja...
    Building extension module kernels...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    1.10.2.git.kitware.jobserver-1
    Loading extension module kernels...
    Traceback (most recent call last):
      File "examples/basic_cutting.py", line 20, in <module>
        from cutting import load_settings, SlicingMotion, CuttingSim
      File "/media/anabur/E/robot_similation/DiSECt/cutting/__init__.py", line 10, in <module>
        from .cutting_sim import *
      File "/media/anabur/E/robot_similation/DiSECt/cutting/cutting_sim.py", line 25, in <module>
        from cutting.urdf_loader import load_urdf
      File "/media/anabur/E/robot_similation/DiSECt/cutting/urdf_loader.py", line 12, in <module>
        import dflex as df
      File "/media/anabur/E/robot_similation/DiSECt/dflex/__init__.py", line 15, in <module>
        kernel_init()
      File "/media/anabur/E/robot_similation/DiSECt/dflex/sim.py", line 47, in kernel_init
        kernels = df.compile()
      File "/media/anabur/E/robot_similation/DiSECt/dflex/adjoint.py", line 1934, in compile
        with_pytorch_error_handling=False)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1285, in load_inline
        keep_intermediates=keep_intermediates)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1362, in _jit_compile
        return _import_module_from_library(name, build_directory, is_python_module)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1752, in _import_module_from_library
        module = importlib.util.module_from_spec(spec)
    ImportError: /media/anabur/E/robot_similation/DiSECt/dflex/kernels/kernels.so: cannot open shared object file: No such file or directory

    opened by ANABUR920 2
  • Error on `examples/render_usd.py` with leaf `Variable` and gradients

    I have installed DiSECt on my system:

    • Ubuntu 18.04
    • NVIDIA GeForce RTX 3090 GPU
    • Conda environment with Python 3.7 (find the conda list here)
    • nvcc --version gives me CUDA 11.2.
    • The dataset/ folder is located in the home directory.

    Three of the example scripts seem to be running without errors or warnings.

    python examples/basic_cutting.py
    python examples/optimize_slicing.py
    python examples/parameter_inference.py
    

    The exception is this fourth example:

    (disect) [email protected]:~/DiSECt (main) $ python examples/render_usd.py 
    Using cached kernels
    Using log folder at "/home/seita/DiSECt/log".
    Converted Young's modulus 43000.0 and Poisson's ratio 0.49 to Lame parameters mu = 14429.530201342282 and lambda = 707046.9798657711
    PyANSYS MAPDL Result file object
    Title       : Cutting_v5--Static Structural (B5)
    Units       : User Defined
    Version     : 20.2
    Cyclic      : False
    Result Sets : 1
    Nodes       : 797
    Elements    : 3562
    
    
    Available Results:
    ENS : Nodal stresses
    ENG : Element energies and volume
    EEL : Nodal elastic strains
    EUL : Element euler angles
    EPT : Nodal temperatures
    NSL : Nodal displacements
    RF  : Nodal reaction forces
    
    ANSYS Mesh
      Number of Nodes:              797
      Number of Elements:           3562
      Number of Element Types:      2
      Number of Node Components:    1
      Number of Element Components: 0
    
    Loaded mesh with 797 vertices and 3472 tets.
    Creating free-floating knife
    cut_meshing_cpp took 2.58 ms
    224 cut springs have been inserted.
    /home/seita/DiSECt/dflex/model.py:2223: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272178570/work/torch/csrc/utils/tensor_new.cpp:201.)
      m.shape_transform = torch.tensor(transform_flatten_list(self.shape_transform), dtype=torch.float32, device=adapter)
    self.cut_edge_indices: (448, 2)
    self.cut_spring_indices: (224, 2)
    self.cut_virtual_tri_indices: (790, 3)
    self.cut_edge_indices: (448, 2)
    self.cut_spring_indices: (224, 2)
    self.cut_virtual_tri_indices: (790, 3)
    render_demo:   0%|                                                                             | 0/40 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "examples/render_usd.py", line 54, in <module>
        sim.simulate(render=True)
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 708, in simulate
        self.simulation_step()
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 650, in simulation_step
        update_mass_matrix=False)
      File "/home/seita/DiSECt/dflex/sim.py", line 2912, in forward
        state_in.joint_qdd.zero_()
    RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
    (disect) [email protected]:~/DiSECt (main) $ 
    

    This seems to be a PyTorch error (e.g., https://discuss.pytorch.org/t/leaf-variable-was-used-in-an-inplace-operation/308) but are we supposed to have other information stored or loaded?
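
    For reference, the error itself can be reproduced in isolation: PyTorch refuses in-place operations on leaf tensors that require gradients unless they are wrapped in torch.no_grad(). The variable name below merely echoes the traceback and is illustrative only.

    import torch

    joint_qdd = torch.zeros(3, requires_grad=True)   # a leaf tensor tracked by autograd
    try:
        joint_qdd.zero_()                             # in-place op on a leaf raises RuntimeError
    except RuntimeError as e:
        print(e)  # "a leaf Variable that requires grad is being used in an in-place operation."

    with torch.no_grad():
        joint_qdd.zero_()                             # the usual workaround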

    opened by DanielTakeshi 1
  • minor installation clarifications / tweaks

    Hi @eric-heiden

    I added in a minor installation change. I was running a Python 3.7 conda env on Ubuntu 18, and was running your basic example after the installation steps (including pip install -r requirements.txt):

    (disect) [email protected]:~/DiSECt (main) $ python examples/basic_cutting.py 
    Rebuilding kernels
    Traceback (most recent call last):
      File "examples/basic_cutting.py", line 20, in <module>
        from cutting import load_settings, SlicingMotion, CuttingSim
      File "/home/seita/DiSECt/cutting/__init__.py", line 10, in <module>
        from .cutting_sim import *
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 25, in <module>
        from cutting.urdf_loader import load_urdf
      File "/home/seita/DiSECt/cutting/urdf_loader.py", line 12, in <module>
        import dflex as df
      File "/home/seita/DiSECt/dflex/__init__.py", line 15, in <module>
        kernel_init()
      File "/home/seita/DiSECt/dflex/sim.py", line 47, in kernel_init
        kernels = df.compile()
      File "/home/seita/DiSECt/dflex/adjoint.py", line 1934, in compile
        with_pytorch_error_handling=False)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1285, in load_inline
        keep_intermediates=keep_intermediates)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
        is_standalone=is_standalone)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1418, in _write_ninja_file_and_build_library
        verify_ninja_availability()
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1474, in verify_ninja_availability
        raise RuntimeError("Ninja is required to load C++ extensions")
    RuntimeError: Ninja is required to load C++ extensions
    (disect) [email protected]:~/DiSECt (main) $
    

    The fix is to do a simple pip install ninja. I've put this in the requirements.txt. (If it would help, I can also write a more detailed overview of how I installed this to make it reproducible in case you don't experience this on your end.)

    I've also put a slight clarification into where the instructions are for installing USD libraries in README.md. It might not be as clear on a first glance.

    opened by DanielTakeshi 1
Releases: v1.1
Owner: NVIDIA Research Projects