Isaac ROS Pose Estimation

Deep learned, hardware-accelerated 3D object pose estimation

Overview
This repository provides NVIDIA GPU-accelerated packages for 3D object pose estimation. Using a deep learned pose estimation model and a monocular camera, the isaac_ros_dope and isaac_ros_centerpose packages can estimate the 6DOF pose of a target object.

Packages in this repository rely on accelerated DNN model inference using Triton or TensorRT from Isaac ROS DNN Inference.

System Requirements

This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on Jetson hardware, as well as on x86_64 systems with an NVIDIA GPU. On x86_64 systems, packages are only supported when run in the provided Isaac ROS Dev Docker container.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64 (in Isaac ROS Dev Docker Container)

  • CUDA 11.1+ supported discrete GPU
  • VPI 1.1.11
  • Ubuntu 20.04+

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).
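
For example, on many Jetson modules the maximum-performance profile can be selected with nvpmodel and the clocks locked with jetson_clocks; profile numbering varies by device, so treat the following as an illustrative sketch and check your device's power management documentation:

    sudo nvpmodel -q     # inspect the current power model
    sudo nvpmodel -m 0   # example: select the maximum-performance profile on many modules
    sudo jetson_clocks   # lock clocks at their maximums for the active profile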

Docker

You need to use the Isaac ROS development Docker image from Isaac ROS Common, based on the version 21.08 image from Deep Learning Frameworks Containers.

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json to include the following:

    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"

Then restart Docker:

    sudo systemctl daemon-reload && sudo systemctl restart docker
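
For reference, if daemon.json contains no other customizations, the complete file would look like the following (otherwise, merge these keys into your existing configuration):

    {
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia"
    }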

Run the following script in isaac_ros_common to build the image and launch the container on x86_64 or Jetson:

$ scripts/run_dev.sh <optional_path>

Dependencies

Setup

  1. Create a ROS2 workspace if one is not already prepared:

    mkdir -p your_ws/src
    

    Note: The workspace can have any name; this guide assumes you name it your_ws.

  2. Clone the Isaac ROS Pose Estimation, Isaac ROS DNN Inference, and Isaac ROS Common package repositories to your_ws/src. Check that you have Git LFS installed before cloning to pull down all large files:

    sudo apt-get install git-lfs
    
    cd your_ws/src   
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    
  3. Start the Docker interactive workspace:

    isaac_ros_common/scripts/run_dev.sh your_ws
    

    After this command, you will be inside the container at /workspaces/isaac_ros-dev. Running this command in different terminals will attach to the same container.

    Note: The rest of this README assumes that you are inside this container.

  4. Build and source the workspace:

    cd /workspaces/isaac_ros-dev
    colcon build && . install/setup.bash
    

    Note: We recommend rebuilding the workspace each time source files are edited. To rebuild, first clean the workspace by running rm -r build install log.
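
    For example, a full clean rebuild inside the container combines the commands above:

    cd /workspaces/isaac_ros-dev
    rm -r build install log
    colcon build && . install/setup.bash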

  5. (Optional) Run tests to verify complete and correct installation:

    colcon test --executor sequential
    

Package Reference

isaac_ros_dope

Overview

The isaac_ros_dope package offers functionality for detecting objects of a specific type in images and estimating their 6 DOF (degrees of freedom) poses using a trained DOPE (Deep Object Pose Estimation) model. This package sets up pre-processing using the DNN Image Encoder node, performs inference on images with the TensorRT node, and provides a decoder that converts the DOPE network's output into an array of 6 DOF poses.
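
At a high level, the launch files in this package chain the following components (a simplified sketch using the topic names that appear elsewhere in this README, not an exhaustive node graph):

    /image --> DNN Image Encoder --> TensorRT node --> DopeDecoderNode --> dope/pose_array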

The model provided is taken from the official DOPE GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch DOPE model collection here, and use the script under isaac_ros_dope/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node (this script can only be executed on an x86 machine). However, the package should also work if you train your own DOPE model with an input image size of [480, 640]. For instructions on training your own DOPE model, see the README in the official DOPE GitHub repository.

Package Dependencies

Available Components

Component: DopeDecoderNode

Topics Subscribed:
  belief_map_array: The tensor that represents the belief maps, which are outputs from the DOPE network.

Topics Published:
  dope/pose_array: An array of poses of the objects detected by the DOPE network and interpreted by the DOPE decoder node.

Parameters:
  queue_size: The length of the subscription queues, which is rmw_qos_profile_default.depth by default.
  frame_id: The frame ID that the DOPE decoder node will write to the header of its output messages.
  configuration_file: The name of the configuration file to parse. Note: The node will look for that file name under isaac_ros_dope/config. By default there is a configuration file under that directory named dope_config.yaml.
  object_name: The object class the DOPE network is detecting and the DOPE decoder is interpreting. This name should be listed in the configuration file along with its corresponding cuboid dimensions.

Configuration

You will need to specify an object type for the DopeDecoderNode that is listed in the dope_config.yaml file, so that the DOPE decoder node picks the right parameters to transform the belief maps from the inference node into object poses. The dope_config.yaml file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field with the new camera intrinsics, scaled to the 640x480 input resolution.
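
The decoder needs two pieces of information from this file: the camera intrinsics and the cuboid dimensions of each object class it should interpret. The snippet below is an illustrative sketch only; the field layout and values are assumptions modeled on the descriptions above, so consult the dope_config.yaml shipped under isaac_ros_dope/config for the authoritative schema:

    # Illustrative sketch only: layout and values are assumptions, not copied
    # from the shipped dope_config.yaml.
    camera_matrix: [616.0,   0.0, 320.0,
                      0.0, 616.0, 240.0,
                      0.0,   0.0,   1.0]   # row-major 3x3 intrinsics, scaled to 640x480
    Ketchup:
      cuboid_dimensions: [5.0, 19.5, 7.5]  # hypothetical cuboid dimensions for the object class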

isaac_ros_centerpose

Overview

The isaac_ros_centerpose package offers functionality for detecting objects of a specific class in images and estimating their 6 DOF (degrees of freedom) poses using a trained CenterPose model. Just like DOPE, this package sets up pre-processing using the DNN Image Encoder node, performs inference on images with an inference node (either the TensorRT or Triton node), and provides a decoder that converts the CenterPose network's output into an array of 6 DOF poses.

The model provided is taken from the official CenterPose GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch CenterPose model collection here, and use the script under isaac_ros_centerpose/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node. However, the package should also work if you train your own CenterPose model with an input image size of [512, 512]. For instructions on training your own CenterPose model, see the README in the official CenterPose GitHub repository.

Package Dependencies

Available Components

Component: CenterPoseDecoderNode

Topics Subscribed:
  tensor_sub: The TensorList that contains the outputs of the CenterPose network.

Topics Published:
  object_poses: A MarkerArray representing the poses of objects detected by the CenterPose network and interpreted by the CenterPose decoder node.

Parameters:
  camera_matrix: A row-major array of 9 floats representing the camera intrinsics matrix K.
  original_image_size: An array of two floats representing the size of the original image passed into the image encoder. The first element is the width and the second is the height.
  output_field_size: An array of two integers representing the size of the 2D keypoint decoding from the network output. The default value is [128, 128].
  height: Scales the cuboid used for calculating the size of the detected objects.
  frame_id: The frame ID that the CenterPose decoder node will write to the header of its output messages. The default value is centerpose.
  marker_color: An array of 4 floats (RGBA) defining the color that RViz uses to visualize the marker. Each value should be between 0.0 and 1.0. The default value is (1.0, 0.0, 0.0, 1.0), which is red.

Configuration

The default parameters for the CenterPoseDecoderNode are defined in the decoders_param.yaml file under isaac_ros_centerpose/config. This file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field.
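
The parameter names below are the ones documented in the component table above; the values are placeholders that only illustrate the expected shapes (a row-major 3x3 intrinsics matrix, [width, height] pairs, and an RGBA color), not the shipped defaults:

    # Placeholder values; parameter names taken from the component table above
    camera_matrix: [616.0, 0.0, 320.0, 0.0, 616.0, 240.0, 0.0, 0.0, 1.0]  # row-major K
    original_image_size: [640.0, 480.0]   # [width, height] of the image given to the encoder
    output_field_size: [128, 128]         # 2D keypoint decoding size (documented default)
    height: 0.1                           # scales the cuboid used for object size
    frame_id: 'centerpose'                # frame written to output message headers
    marker_color: [1.0, 0.0, 0.0, 1.0]    # RGBA; red by default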

Network Outputs

The CenterPose network has 7 different outputs:

Output Name   Meaning
hm            Object center heatmap
wh            2D bounding box size
hps           Keypoint displacements
reg           Sub-pixel offset
hm_hp         Keypoint heatmaps
hp_offset     Sub-pixel offsets for keypoints
scale         Relative cuboid dimensions

For more context and explanation, see the corresponding outputs in Figure 2 of the CenterPose paper.

Walkthroughs

Inference on DOPE using TensorRT

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models.

  2. In order to run PyTorch models with TensorRT, one option is to export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup.pth
    

    The output ONNX file will be located at /tmp/models/Ketchup.onnx.

    Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages.

  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py:

    'model_file_path': '/tmp/models/Ketchup.onnx'
    'object_name': 'Ketchup'
    

    Note: Modify the object_name and model_file_path parameters in the launch file if you are using another model. object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py
    
  6. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src 
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images to the /image topic (the topic that the encoder subscribes to) using image_publisher.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file.
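
    If you are curious how that remapping is expressed, ROS2 launch files typically pass it through the remappings argument of a composable node description. The sketch below is illustrative only; the package and plugin strings are assumptions rather than values copied from isaac_ros_dope_tensor_rt.launch.py:

      # Illustrative sketch: package/plugin strings are assumptions, not copied
      # from the actual launch file.
      from launch_ros.descriptions import ComposableNode

      dope_decoder_node = ComposableNode(
          package='isaac_ros_dope',
          plugin='isaac_ros::dope::DopeDecoderNode',
          name='dope_decoder',
          remappings=[('dope/pose_array', 'poses')],  # publish on /poses instead
      )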

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.

Inference on DOPE using Triton

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models/Ketchup.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/Ketchup/1
    

    Create a configuration file for this model at path /tmp/models/Ketchup/config.pbtxt. Note that name has to match the model repository name. (An example repository layout is shown after the backend options below.)

    name: "Ketchup"
    platform: "onnxruntime_onnx"
    max_batch_size: 0
    input [
      {
        name: "INPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 3, 480, 640 ]
      }
    ]
    output [
      {
        name: "OUTPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 25, 60, 80 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    
    • To run ONNX models with Triton, export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
      

      Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages. The model name has to be model.onnx.

    • To run a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

      /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "tensorrt_plan"
      
    • To run a PyTorch model with Triton (PyTorch model inference is supported on the x86_64 platform only), the model needs to be saved using torch.jit.save(). The downloaded DOPE model is saved with torch.save(). Export the DOPE model using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "pytorch_libtorch"
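
    Whichever backend you choose, Triton expects the standard model repository layout: a directory named after the model containing config.pbtxt and a numbered version subdirectory holding the model file for the configured platform. For the Ketchup example above, that looks like:

      /tmp/models
      └── Ketchup
          ├── config.pbtxt
          └── 1
              ├── model.onnx    # onnxruntime_onnx
              ├── model.plan    # tensorrt_plan
              └── model.pt      # pytorch_libtorch

    Only the model file matching the platform set in config.pbtxt needs to be present.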
      
  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py:

    'model_name': 'Ketchup'
    'model_repository_paths': ['/tmp/models']
    'input_binding_names': ['INPUT__0']
    'output_binding_names': ['OUTPUT__0']
    'object_name': 'Ketchup'
    

    Note: object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py
    
  6. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images to the /image topic (the topic that the encoder subscribes to) using image_publisher.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file.

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.

Inference on CenterPose using Triton

  1. Select a CenterPose model by visiting the CenterPose model collection available on the official CenterPose GitHub repository here. For example, download shoe_resnet_140.pth into /tmp/models/centerpose_shoe.

Note: The models in the root directory of the model collection listed above will NOT WORK with our inference nodes because they have custom layers not supported by either TensorRT or Triton. Make sure to use the PyTorch weights that have the string resnet in their file names.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/centerpose_shoe/1
    
  3. Create a configuration file for this model at path /tmp/models/centerpose_shoe/config.pbtxt. Note that name has to be the same as the model repository name. Take a look at the example at isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt and copy that file to /tmp/models/centerpose_shoe/config.pbtxt.

    cp /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt /tmp/models/centerpose_shoe/config.pbtxt
    
  4. To run the TensorRT engine plan, convert the PyTorch model to ONNX first. Export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py --input /tmp/models/centerpose_shoe/shoe_resnet_140.pth --output /tmp/models/centerpose_shoe/1/model.onnx
    
  5. To get a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

    /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/centerpose_shoe/1/model.onnx --saveEngine=/tmp/models/centerpose_shoe/1/model.plan
    
  6. Modify the isaac_ros_centerpose launch file located in /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/launch/isaac_ros_centerpose.launch.py. You will need to update the following lines:

    'model_name': 'centerpose_shoe',
    'model_repository_paths': ['/tmp/models'],
    

    Rebuild and source isaac_ros_centerpose:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_centerpose && . install/setup.bash
    

    Start isaac_ros_centerpose using the launch file:

    ros2 launch isaac_ros_centerpose isaac_ros_centerpose.launch.py
    
  7. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  8. Start publishing images to the /image topic (the topic that the encoder subscribes to) using image_publisher.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/shoe.jpg --ros-args -r image_raw:=image
    
  9. Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /object_poses
    
  10. Launch rviz2. Click the Add button, select "By topic", and choose MarkerArray under /object_poses. Set the fixed frame to centerpose. You should see the cuboid marker representing the detected object's pose.

Troubleshooting

Nodes crashed on initial launch reporting shared libraries have a file format not recognized

Many dependent shared library binary files are stored in git-lfs. These files need to be fetched in order for Isaac ROS nodes to function correctly.

Symptoms

/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2

Solution

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.
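
For example, from inside the container you can fetch the LFS objects for the repositories cloned in the Setup section above:

    cd /workspaces/isaac_ros-dev/src
    for repo in isaac_ros_common isaac_ros_dnn_inference isaac_ros_pose_estimation; do
        (cd "$repo" && git lfs pull)
    done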

Updates

Date         Changes
2021-10-20   Initial release