A modular, open, and non-proprietary toolkit providing core robotic functionalities through deep learning

About

The aim of the OpenDR project is to develop a modular, open, and non-proprietary toolkit for core robotic functionalities by harnessing deep learning to provide advanced perception and cognition capabilities, thereby meeting the general requirements of robotics applications in the application areas of healthcare, agri-food, and agile production. OpenDR provides the means to link robotics applications both to software libraries (deep learning frameworks, e.g., PyTorch and TensorFlow) and to the operating environment (ROS). OpenDR focuses on the AI and Cognition core technology in order to provide tools that make robotic systems cognitive, giving them the ability to:

  1. interact with people and environments by developing deep learning methods for human-centric and environment active perception and cognition,
  2. learn and categorize by developing deep learning tools for training and inference in common robotics settings, and
  3. make decisions and derive knowledge by developing deep learning tools for cognitive robot action and decision making.

As a result, the developed OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics, where sensing and actuation are closely coupled with cognitive systems, thus contributing to two further core technologies beyond AI and Cognition. OpenDR aims to develop, train, deploy and evaluate deep learning models that improve the technical capabilities of the core technologies beyond the current state of the art.

Installing OpenDR Toolkit

OpenDR can be installed in the following ways:

  1. By cloning this repository (CPU/GPU support)
  2. Using pip (CPU only)
  3. Using docker (CPU/GPU support)

You can find detailed installation instructions in the documentation.
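
A quick way to confirm a successful installation is to import the toolkit from Python. This is a minimal sketch, assuming the package is importable as opendr and exposes a __version__ variable (as added by the install-scripts PR listed below):

```python
# Minimal installation sanity check.
# Assumption: the installed package is importable as `opendr`
# and defines `__version__` (see the install-scripts PR below).
import opendr

print(opendr.__version__)
```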

Using OpenDR toolkit

OpenDR provides an intuitive and easy-to-use Python interface, a C API for performance-critical applications, a wealth of usage examples and supporting tools, as well as ready-to-use ROS nodes. OpenDR is built to support the Webots open-source robot simulator, and it closely follows industry standards such as the ONNX model format and the OpenAI Gym interface. You can find detailed documentation in the OpenDR wiki, as well as in the tools index.
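
To give a flavor of the Python interface, the sketch below loads a pretrained pose estimator and runs inference on one image. The class names follow OpenDR's documented conventions, but treat the exact import paths, method signatures, and file names as assumptions rather than a definitive reference:

```python
# Illustrative sketch of the OpenDR Python interface; import paths and
# method signatures are assumptions based on the documented conventions.
from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

learner = LightweightOpenPoseLearner(device="cpu")  # or device="cuda"
learner.download(path=".")         # fetch pretrained weights (assumed API)
learner.load("openpose_default")   # load the downloaded model

img = Image.open("input.jpg")      # wrap a local image in the OpenDR data type
poses = learner.infer(img)         # returns a list of detected poses
for pose in poses:
    print(pose)                    # OpenDR targets offer human-friendly printing
```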

Roadmap

OpenDR has the following roadmap:

  • v1.0 (2021): Baseline deep learning tools for core robotic functionalities
  • v2.0 (2022): Optimized lightweight and high-resolution deep learning tools for robotics
  • v3.0 (2023): Active perception-enabled deep learning tools for improved robotic perception

How to contribute

Please follow the instructions provided in the wiki.

Acknowledgments

The OpenDR project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871449.

Comments
  • Install scripts, bdist_wheel, x86 docker and instructions

    This PR adds the following:

    • [x] Scripts to install the OpenDR toolkit on clean Ubuntu 20.04 systems (even when running from a minimal image, e.g., docker ones)
    • [x] setup.py to correctly install OpenDR package
    • [x] Corrected scripts to activate OpenDR venv environment
    • [x] Scripts to create bdist wheels for cpu only usage
    • [x] Dockerfile for assembling cpu-only OpenDR inference
    • [x] Readme listing different installation options
    • [x] Update wiki to reflect the changes made in this PR

This PR also adds a missing __init__.py in the toolkit and a __version__ variable, in line with typical Python usage.

    test sources test tools 
    opened by passalis 38
  • Upgrade to CUDA 11.2 and improve GPU support

This PR upgrades the toolkit to CUDA 11.2, which also ensures that the toolkit will be compatible with the NVIDIA 30xx GPUs. For PyTorch we are using precompiled packages that bundle CUDA 11.1; this does not affect the system-wide CUDA version.

This PR also improves testing on GPUs, and fixes some documentation issues regarding the use of the OPENDR_DEVICE variable.
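
For reference, a minimal sketch of how a script might consume that variable; the assumption here is that OPENDR_DEVICE is set to "gpu" on GPU runners and "cpu" otherwise:

```python
import os

# Assumption: the test suite exports OPENDR_DEVICE as "gpu" or "cpu".
device = "cuda" if os.environ.get("OPENDR_DEVICE", "cpu") == "gpu" else "cpu"
print(f"Learners under test will run on device: {device}")
```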

    Tasks to be performed

    • [x] Change Dockerfile to use CUDA 11.2
    • [x] Update PyTorch and mxnet
    • [x] Update detectron
    • [x] Update DCNv2
    • [x] Make sure that pip installation does not need any kind of update
    • [x] Update the documentation if needed

We need to restore the GitHub branch in the Dockerfile prior to merging.

    test sources test tools test release 
    opened by passalis 33
  • Synthetic multi view facial generator

This is a PR for the synthetic multi-view facial image generator, which will be a standalone OpenDR tool for generating data (facial images) for procedures such as training.

    test sources test tools 
    opened by ekakalet 33
  • Mobile rl

    Hi everyone,

This is an initial version of our approach to mobile manipulation, based on our paper (https://arxiv.org/abs/2101.05325). It's not completely ready to be merged yet, but it should already include all the main parts.

    • It implements the LearnerRL interface (see the usage sketch after this list)
    • Formatted according to PEP-8, .clang-format
    • Most unnecessary functionality should already be removed
    • Includes a first version of the documentation, including examples to train and to evaluate provided checkpoints
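
For readers unfamiliar with the LearnerRL interface, here is a hedged usage sketch; the class name, import path, and constructor arguments are illustrative assumptions, not this PR's actual API:

```python
# Hypothetical sketch of the LearnerRL-style workflow; the class name,
# import path, and arguments are illustrative assumptions, not the PR's API.
from opendr.control.mobile_manipulation import MobileRLLearner

learner = MobileRLLearner(env="pr2", device="cuda")  # assumed constructor

learner.fit()                      # train the policy
learner.save("./checkpoints/pr2")  # persist a checkpoint
learner.load("./checkpoints/pr2")  # restore for evaluation
results = learner.eval()           # evaluate, e.g., the provided checkpoints
```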

But there are also a few questions from my side, mainly because this is a project that relies on a Python 3 based interface for the user and an environment implemented in C++, which additionally draws on functionality from ROS (mainly MoveIt).

    • At the moment I am keeping the C++ source and header files within the module, combined with its own CMakeLists.txt (i.e., within src/control/mobile_manipulation/). Is that appropriate?
    • How should I define ROS and C++ dependencies? The module provides environments for several robotic platforms (PR2, Tiago, HSR). This means that to compile or run the module the user needs (i) a ROS installation (developed and tested for Melodic), (ii) a separate catkin_ws for each robot, and (iii) to launch a launchfile before running the Python scripts. Due to this, I feel it makes sense not to require every user of other OpenDR modules to install these, but rather to specify them per module. Some of the robot-specific dependencies should furthermore be compiled in separate catkin workspaces. The model checkpoints are tiny (3x 3 MB) and are currently located directly in the git repo. Is that ok for such small files?
    • Licenses: this module includes slightly modified launchfiles from openly available ROS packages (~/robots_worlds/[pr2/hsr/tiago]). Do these have to be marked or treated specially somehow?
    • This was developed as part of WP5.2 Deep Navigation. As there is no navigation folder and this approach can be seen as a combination of navigation and control, I have located it within control for now. Let me know in case I should move it elsewhere.

Any help on the above would be much appreciated. Other comments on what is already in here are welcome as well.

Some remaining todos for myself to remember:

    • update checkpoints
    • test that gazebo evaluation works
    • test that examples in readme work
    test sources test tools 
    opened by dHonerkamp 31
  • Skeleton based action recognition

    This PR adds two learners (which train and evaluate a baseline model and three proposed models) for skeleton-based human action recognition.

    • A new data type named SkeletonSequence is added to engine/data.py, and a new target class named ActionCategory is added to engine/target.py.

    • The learners' implementation follows the provided template, and sufficient tests are provided for all the functions that will be directly called by the user, including fit(), eval(), infer(), save(), load(), optimize(), multi_stream_eval() and network_builder().
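
A hedged sketch of how such a learner is typically exercised through those functions; the class name and arguments are assumptions made for illustration (the PR's actual names may differ):

```python
# Hypothetical usage sketch; the learner class name and arguments are
# assumptions, but the listed methods follow the PR description above.
from opendr.perception.skeleton_based_action_recognition import (
    SpatioTemporalGCNLearner,
)

learner = SpatioTemporalGCNLearner(device="cpu")  # assumed constructor

# fit()/eval() consume OpenDR dataset iterators (placeholders shown):
# learner.fit(dataset=train_set, val_dataset=val_set)
# metrics = learner.eval(test_set)
# category = learner.infer(seq)   # SkeletonSequence -> ActionCategory

learner.optimize()                # e.g., export an optimized (ONNX) model
learner.save("./stgcn_checkpoint")
```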

    test sources test tools 
    opened by negarhdr 28
  • ROS2 workspace and example nodes

This PR contains a new ROS2 (Foxy Fitzroy) workspace located in the projects directory and, for now, it serves to ~~test and finalize the structure, naming, etc.,~~ gather and finalize all ROS2 nodes in a unified PR. Right now there are no ~~docstrings~~ (docstrings added), documentation or READMEs. This description will get updated with any additions.

    Contents:

    1. opendr_perception python package
      • Contains a pose estimation node
      • Contains fall detection node
      • Contains object detection 2d centernet/detr/ssd/yolov3 nodes
      • Contains face detection retinaface node
      • Contains face recognition node
      • Contains semantic segmentation bisenet node
      • ~~Contains a subscriber tester node (tester) that subscribes to the messages published by the pose estimation node for testing~~ Testing can be performed as described in steps 9 and 10 of Building and Running below.
    2. opendr_ros2_bridge python package
      • Contains bridge.py which includes a class with methods to convert images, poses, etc. from and to ROS2 messages
      • This uses cv_bridge which is included in the vision_opencv package
    3. opendr_ros2_messages CMake package

    The logic behind the structuring of the packages and nodes is similar to OpenDR's ROS1 packages/nodes.
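
To illustrate the intended flow, below is a minimal rclpy node that uses the bridge to convert an incoming ROS2 image into the OpenDR type; the ROS2Bridge class and from_ros_image() method names are assumptions based on the package description above:

```python
# Minimal rclpy subscriber sketch built around opendr_ros2_bridge.
# Assumptions: the bridge class is named ROS2Bridge and exposes from_ros_image().
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image as ROSImage
from opendr_ros2_bridge import ROS2Bridge


class ImageListener(Node):
    def __init__(self):
        super().__init__("opendr_image_listener")
        self.bridge = ROS2Bridge()
        # /image_raw is the default topic of the usb_cam node used below.
        self.create_subscription(ROSImage, "/image_raw", self.callback, 1)

    def callback(self, msg):
        image = self.bridge.from_ros_image(msg)  # ROS2 Image -> OpenDR Image
        self.get_logger().info("Received image: %s" % str(image.data.shape))


def main():
    rclpy.init()
    rclpy.spin(ImageListener())


if __name__ == "__main__":
    main()
```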

Below you can find instructions to install, build and run the nodes for testing. Note that I did everything on a system with ROS1 already installed.

I faced many issues along the way that might reappear in a fresh install of ROS2, so if any problems/errors occur while following the instructions, please get in touch with me to possibly save you some time.

    Installation

    • To install ROS2 I followed this tutorial (section 2), which installs the 'foxy' release of ROS2. (Note that in '(7) configure environment variables', you need to replace dashing with foxy.)
    • Edit: At this point you might need to run sudo apt-get install ros-foxy-vision-msgs as discussed below
    • Install colcon, basically just sudo apt install python3-colcon-common-extensions
    • Install ros2 usb cam to test with a local webcam. In my case I use ros2 run usb_cam usb_cam_node_exe to run it after installation, which seems to work fine.

    Building and Running

    1. Navigate to your OpenDR installation and activate it as usual
    2. Navigate to workspace root, opendr_ws_2 directory
    3. Install cv_bridge via the instructions in its README, excluding the last step (build). There seems to be no need to build it, as it will get built along with the rest of the packages later.
    4. Navigate to the workspace root (opendr_ws_2) as the previous step leaves you inside vision_opencv dir
    5. Run colcon build
    6. Run . install/setup.bash
    7. Run ros2 run opendr_perception pose_estimation to start the pose estimation node (or any other existing node)
    8. In a new terminal run ros2 run usb_cam usb_cam_node_exe to grab images from a webcam
    9. In a new terminal run ros2 run rqt_image_view rqt_image_view and select the corresponding topic to view the image result
    10. In a new terminal run ros2 topic echo opendr/poses to view the pose message. Note that it is not really human-readable in that form; it should be read in another node and converted into an OpenDR pose object to have access to human-friendly print methods.

    * If you are using conda, check out Illia's comment down below. Thanks @iliiliiliili !

    To be added

    ROS2 nodes to be added according to what ROS1 nodes exist already:


    Perception package:

    • [x] Object detection 2D detr (update from original author) (#296)
    • [x] Video activity recognition (#323)
    • [x] RGBD hand gesture recognition (#341)
    • [x] Panoptic segmentation EfficientPS (#270)
    • [x] Heart anomaly detection (#337)
    • [x] Speech command recognition (#340)
    • [x] Audiovisual emotion recognition (#342)
    • [x] Skeleton based action recognition (#344)
    • [x] Landmark-based facial expression recognition (#345)
    • [x] Image-based facial emotion estimation (new tool #264, #346)
    • [x] Object detection 2D gem (#295)
    • [x] Object detection 2D YOLOv5 (added in #360, I will directly add the ROS2 node on ros2 branch)
    • [x] Object detection 2D Nanodet (added in #278, I will directly add the ROS2 node on ros2 branch)
    • [x] Object tracking 2D SiamRPN (added in #367, ~~I will directly add the~~ WIP ROS2 node on ros2 branch)
    • [x] High resolution pose estimation (added in #356, I will directly add the ROS2 node on ros2 branch)
    • [x] Image dataset (#319)
    • [x] Point cloud dataset (#319)
    • [x] Object detection 3D voxel (#319)
    • [x] Object tracking 2D deep sort (#319)
    • [x] Object tracking 2D fair mot (#319)
    • [x] Object tracking 3D ab3dmot (#319)

    Data generation package:

    • [x] Synthetic facial recognition (#288)

    Simulation package:

    • [x] Human model generation client/service (#291)

    Planning package:

    • [x] End to end planner (this is new for ROS1 too) (~~#286, new PR will be opened for ROS2~~ #358)

Edit 1: Updated the last steps of the instructions as well as the contents list as per the latest changes. Edit 2: Added information in the contents list about the new opendr_ros2_messages package and added a TODO list for remaining nodes.

    enhancement test sources test release 
    opened by tsampazk 24
  • Fer va estimation

This PR adds image-based facial expression recognition and valence-arousal estimation. It includes a learner, unit tests, a demo, documentation, and a ROS node. This is a replacement for a previous PR which had conflicts with other tools.

    test sources test tools 
    opened by negarhdr 23
  • Panoptic segmentation

    Hi, this PR adds the EfficientPS network. The original repo can be found here.

    ~~Please also check issue #90.~~

    Todos:

    • [x] Upload pre-trained models to OpenDR server and adjust the URLs in efficient_ps_learner.py.
    • [x] Add unit tests
    • [x] Add documentation to /docs/reference
    • [x] Merge Heatmap implementation with version proposed in #100 ~~and updates pending in #98~~ ~~Install CUDA in GitHub CI~~ --> will not be resolvable since the code requires GPUs. See comment.

    Known issues:

    • ~~reason for failing tests: 3rd party dependencies assume an existing pytorch installation since they attempt to load torch in their setup.py~~
    test sources test tools 
    opened by vniclas 21
  • End to end planning

    Hi All,

    This is an initial version of our method on end-to-end local planning. It's not completely ready to be merged yet, but should already include all the main parts.

    • It implements the LearnerRL interface
    • Formatted according to PEP-8
    • Includes a first version of the documentation

Remaining todos for myself:

    • Tests for code
    test sources test tools 
    opened by halil93ibrahim 21
  • Mobile rl 2

Creating a new PR due to a force push. See #68 for the initial discussion. To recap the open points from the initial PR:

    • the license on the tiago urdf -> contacted PAL
    • the unittests -> currently blocked by the missing linting support for typing
    test sources test tools 
    opened by dHonerkamp 19
  • rosnode - rgbd_hand_gesture_recognition.py - parameter

Parameters need to be consistent with other tools using argparse.

I am using OpenDR installed on my computer from the develop branch, and I am feeding the RGB camera topic and the depth_image topic.

I cannot get an output from the /opendr/gestures topic. Is the depth_image topic different from the one that you are using?

    opened by thomaspeyrucain 16
  • Fix package creator

The creation was successful; however, the root package wasn't uploaded due to a missing newline in packages.txt. I've uploaded the missing one manually, this is just to ensure everything is fine.

    test sources test release 
    opened by ad-daniel 1
  • C api implementations

The following PR contains:

    1. More tools in the C API.
    2. New data structures for tensor manipulation in C.
    3. A better JSON parser with arrays and floats.
    4. Docs.
    5. Small changes in face_recognition and nanodet_jit (naming parameters as described in the wiki).
    6. Tests for the new data structures and tools.
    7. Python API bug fixes in OpenPose and FairMOT for ONNX optimizations.
    enhancement test sources 
    opened by ManosMpampis 1
  • Several tools have deprecation warnings, especially those relying on numpy

As emerged in https://github.com/opendr-eu/opendr/pull/381, without an upper restriction on numpy, version 1.24.0 might be installed, in which several deprecation warnings have expired. Even when things work, several deprecation warnings are printed when running the tests. Both issues should be addressed.
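
For context, NumPy 1.24.0 removed the builtin-type aliases (np.float, np.int, np.bool, np.object, np.str) that had been deprecated since NumPy 1.20, turning the old warnings into hard AttributeErrors. The typical fix is a mechanical substitution, sketched below:

```python
import numpy as np

# Before (raises AttributeError on numpy >= 1.24):
#     x = np.zeros(3, dtype=np.float)
# After: use the builtin type or an explicit-width dtype instead.
x = np.zeros(3, dtype=np.float64)
print(x.dtype)  # float64
```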

    bug 
    opened by ad-daniel 0
  • ROS1 Object Tracking 2D DeepSort error with input from webcam

I was unable to find it documented, so I am opening a new issue with the following error for the DeepSort ROS1 node:

    [ERROR] [1671020211.255272]: bad callback: <bound method ObjectTracking2DDeepSortNode.callback of <__main__.ObjectTracking2DDeepSortNode object at 0x7edf5c2f28>>
    Traceback (most recent call last):
      File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
        cb(msg)
      File "/opendr/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py", line 105, in callback
        tracking_boxes = self.learner.infer(image_with_detections, swap_left_top=True)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py", line 289, in infer
        result = self.tracker.infer(image, frame_id, swap_left_top=swap_left_top)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/algorithm/deep_sort_tracker.py", line 81, in infer
        bbox_xywh[:, 3:] *= 1.2
    IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
    

I found that the node works properly when provided with images from the image_dataset_node, but throws this error when taking input from a webcam.
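
The traceback suggests bbox_xywh arrives as a 1-D array on the webcam path (e.g., a single detection), while the dataset node always yields a 2-D array. A hedged sketch of the usual guard, with the variable name taken from the traceback:

```python
import numpy as np

# A single detection can arrive as a 1-D box, e.g. shape (4,).
bbox_xywh = np.array([10.0, 20.0, 50.0, 80.0])

# Promoting to 2-D makes column slicing work for any number of detections.
bbox_xywh = np.atleast_2d(bbox_xywh)
bbox_xywh[:, 3:] *= 1.2  # the line that failed in deep_sort_tracker.py
print(bbox_xywh)
```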

    bug 
    opened by tsampazk 2
  • ROS2 Node for EfficientLPS

    Hi all,

This PR adds a ROS2 node for EfficientLPS and should be merged after #359. It includes the EfficientLPS node and a PointCloud2 publisher node. It will remain a draft until #359 is merged.

    test sources test tools 
    opened by aselimc 0
Releases (v2.0.0)
  • v2.0.0 (Dec 30, 2022)

Released on December 31st, 2022.

    New Features:

    • Added YOLOv5 as an inference-only tool (#360).
    • Added Continual Transformer Encoders (#317).
    • Added Continual Spatio-Temporal Graph Convolutional Networks tool (#370).
    • Added AmbiguityMeasure utility tool (#361).
    • Added SiamRPN 2D tracking tool (#367).
    • Added Facial Emotion Estimation tool (#264).
    • Added High resolution pose estimation tool (#356).
    • Added ROS2 nodes for all included tools (#256).
    • Added missing ROS nodes and homogenized the interface across the tools (#305).

    Bug Fixes:

    • Fixed BoundingBoxList, TrackingAnnotationList, BoundingBoxList3D and TrackingAnnotationList3D confidence warnings (#365).
    • Fixed undefined image_id and segmentation for COCO BoundingBoxList (#365).
    • Fixed Continual X3D ONNX support (#372).
    • Fixed several issues with ROS nodes and improved performance (#305).
  • v1.1.1 (Jun 30, 2022)

  • v1.1 (Jun 14, 2022)

Released on June 14th, 2022.

    New Features:

    • Added end-to-end planning tool (https://github.com/opendr-eu/opendr/pull/223).
    • Added the seq2seq-NMS module, along with other custom NMS implementations for 2D object detection (https://github.com/opendr-eu/opendr/pull/232).

    Enhancements:

    • Added support for modular pip packages allowing tools to be installed separately (https://github.com/opendr-eu/opendr/pull/201).
    • Simplified the installation process for pip by including the appropriate post-installation scripts (https://github.com/opendr-eu/opendr/pull/201).
    • Improved the structure of the toolkit by moving io from utils to engine.helper (https://github.com/opendr-eu/opendr/pull/201).
    • Added support for post-install scripts and opendr dependencies in .ini files (https://github.com/opendr-eu/opendr/pull/201).
    • Updated toolkit to support CUDA 11.2 and improved GPU support (https://github.com/opendr-eu/opendr/pull/215).
    • Added a standalone pose-based fall detection tool (https://github.com/opendr-eu/opendr/pull/237).

    Bug Fixes:

    • Updated the wheel building pipeline to include missing files and removed unnecessary dependencies (https://github.com/opendr-eu/opendr/pull/200).
    • panoptic_segmentation/efficient_ps: updated dataset preparation scripts to create correct validation ground truth (https://github.com/opendr-eu/opendr/pull/221).
    • panoptic_segmentation/efficient_ps: added specific configuration files for the provided pretrained models (https://github.com/opendr-eu/opendr/pull/221).
    • c_api/face_recognition: pass key by const reference in json_get_key_string() (https://github.com/opendr-eu/opendr/pull/221).
    • pose_estimation/lightweight_open_pose: fixed height check on transformations.py according to original tool repo (https://github.com/opendr-eu/opendr/pull/242).
    • pose_estimation/lightweight_open_pose: fixed two bugs where ONNX optimization failed on specific learner parameterization (https://github.com/opendr-eu/opendr/pull/242).

    Dependency Updates:

    • heart anomaly detection: upgraded scikit-learn runtime dependency from 0.21.3 to 0.22 (https://github.com/opendr-eu/opendr/pull/198).
    • Relaxed all dependencies to allow future versions of non-critical tools to be used (https://github.com/opendr-eu/opendr/pull/201).
  • v1.0 (Dec 31, 2021)

This is the first public version of the OpenDR toolkit, which provides baseline deep learning tools for core robotic functionalities. The first version includes (among others):

    • an intuitive and easy-to-use Python interface
    • a wealth of usage examples and supporting tools
    • ready-to-use ROS nodes
    • a partial C API

You can find detailed installation instructions in the OpenDR repository, while detailed documentation can be found in the OpenDR wiki.
