Python inverse kinematics for your robot model based on Pinocchio.

Related tags

Deep Learning, pink
Overview

Pink


Python inverse kinematics for your robot model based on Pinocchio.

Upcoming changes

Pink's API is not stable. Expect the following upcoming changes:

  • Import task template from pymanoid
  • Reformulate task gains as time constants

Installation

First, install Pinocchio, for instance with pip install pin.

Then install Pink by:

pip install pin-pink

Usage

Under construction...
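
While this section is under construction, the release notes and issues below already sketch the main entry points: a Configuration wrapping a Pinocchio model, a set of tasks (body task, posture task), and solve_ik, whose solver argument selects the QP backend. The following is a minimal sketch pieced together from those notes, not official documentation: apart from solve_ik and Configuration(model, data, q), the class and method names, keyword arguments, body name and URDF path below are assumptions or placeholders.

import pinocchio as pin

from pink import Configuration, solve_ik      # names taken from the changelog; import paths assumed
from pink.tasks import BodyTask, PostureTask  # task class names assumed from the release notes

# Placeholder model: load your own URDF as a Pinocchio RobotWrapper.
robot = pin.RobotWrapper.BuildFromURDF("my_robot.urdf")
q = robot.q0.copy()
configuration = Configuration(robot.model, robot.data, q)

# One end-effector task plus a low-weight posture task to regularize the QP.
# "end_effector" is a placeholder body name; costs and method names are assumptions.
end_effector_task = BodyTask("end_effector", position_cost=1.0, orientation_cost=1.0)
posture_task = PostureTask(cost=1e-3)
target = configuration.get_transform_body_to_world("end_effector")
target.translation[2] += 0.1  # ask the end effector to move 10 cm up
end_effector_task.set_target(target)
posture_task.set_target_from_configuration(configuration)

# Differential IK loop: solve a QP for a joint velocity, then integrate it on the
# configuration manifold with Pinocchio.
dt = 1e-2  # [s]
for _ in range(100):
    velocity = solve_ik(configuration, [end_effector_task, posture_task], dt, solver="quadprog")
    q = pin.integrate(robot.model, q, velocity * dt)
    configuration = Configuration(robot.model, robot.data, q)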

Example

Under construction...

History

Pink implements the same task-based inverse kinematics as pymanoid, but it is much simpler to install and runs faster thanks to Pinocchio. Its internal math is summarized in this note. If you find yourself needing to read that note to use the library, it means the API is leaking abstractions; please open an issue :-)

Comments
  • pink installation on mac

    Hello Stephan,

    Thank you for your effort in maintaining this nice repo!

    While using pink, I get the following two questions for you.

    1. I've installed pink on my mac, which is an Intel machine running OSX Monterey 12.5.1, and I am using an anaconda virtual environment (Python 3.8). When I tried to run the upkie_crouching.py example, it kept complaining that there is no module named pink.models. So, instead of running the script, I manually opened the Python interpreter (Python 3.8) in the same anaconda environment and typed the code from upkie_crouching.py line by line, and it successfully imported all the modules. I don't know how this could be possible. Do you have anything in mind?

    2. Other than the aforementioned software issue, I have a question regarding the inverse kinematics solver interface (API). I have a 7-DoF robotic manipulator with a holonomic constraint (q_1 = q_2), so it has 6 active joints and one passive joint. Given any Cartesian task, I would like to solve the inverse geometry problem for joint positions that satisfy the holonomic constraint. One way to solve this is to set the holonomic constraint as a task in the cost function and give it a larger task gain than the Cartesian task. Another way is to use a projected Jacobian (J_cartesian_task * N_holonomic_constraint) with N = I - J_pseudo_inverse * J. Do you think those two methods sound okay for obtaining the solution that I want? If so, can you point out which API in pink I should use to set the holonomic constraint as a cost in the QP (I think I could try the latter method by myself)?

    Thank you, Seung Hyeon

    opened by shbang91 2
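
    As a side note on the second approach above: the joint-space projector for a linear holonomic constraint such as q_1 = q_2 can be written down with plain NumPy, independently of any Pink API. A minimal sketch, with the 7-DoF dimension matching the arm from the comment:

    import numpy as np

    # Holonomic constraint q_1 = q_2 written as A q = 0, with A its (constant) Jacobian.
    A = np.zeros((1, 7))
    A[0, 0], A[0, 1] = 1.0, -1.0

    # Joint-space nullspace projector: N = I - A^+ A.
    N = np.eye(7) - np.linalg.pinv(A) @ A

    # Velocities projected through N leave q_1 - q_2 unchanged, so a task Jacobian
    # J_task can be right-multiplied by N before solving for joint velocities.
    v = np.random.randn(7)
    assert np.allclose(A @ N @ v, 0.0)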
  • Display a TF tree using pinocchio model

    Dear Caron: I found this repo through Pinocchio while trying to learn more about MeshCat, and thanks a lot for your code; it gave me some inspiration for drawing a TF tree for a robot model. My understanding of this code in pink

    meshcat_shapes.frame(viewer["left_contact_target"], opacity=0.5)
    

    is that it replaces the old object with a new frame. My question is: could we just add the frame by using addGeometryObject?

    Thanks for your help! heaty

    opened by whtqh 1
  • Posture task doesn't work with continuous joints

    Continuous joints have nq=2, whereas the posture task assumes nq=1 for revolute joints so that the tangent twist between two joint configurations is simply their difference. This will need to be generalized.

    • Example: see WIP_kinova_gen2_arm.py in the examples folder.
    • Related: https://github.com/stack-of-tasks/pinocchio/issues/1751
    • Related: https://github.com/stack-of-tasks/pinocchio/issues/794
    opened by stephane-caron 1
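
    For reference, the nq/nv mismatch described above can be reproduced with Pinocchio alone; the generalization the issue calls for amounts to using the Lie-group difference instead of plain subtraction. A minimal sketch on a toy single-joint model:

    import numpy as np
    import pinocchio as pin

    # Toy model with a single continuous joint (revolute, unbounded): nq = 2, nv = 1.
    model = pin.Model()
    model.addJoint(0, pin.JointModelRUBX(), pin.SE3.Identity(), "continuous_joint")
    print(model.nq, model.nv)  # 2 1

    q0 = pin.neutral(model)                          # [cos, sin] = [1, 0]
    q1 = pin.integrate(model, q0, np.array([0.5]))   # rotate the joint by 0.5 rad

    # Plain subtraction lives in configuration space (2 entries) and is meaningless for
    # the unit-circle representation, while pin.difference returns the tangent-space
    # displacement (1 entry) that the posture task error actually needs.
    print(q1 - q0)                        # shape (2,)
    print(pin.difference(model, q0, q1))  # [0.5]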
  • CVXOPT does not handle infinity

    When there is no velocity bound on a joint, Pink currently sets inequality constraints as $-\infty < v_i < \infty$. But with CVXOPT this approach yields ValueError: domain error.

    Possible solutions:

    • Trim large values (might not generalize well)
    • Add some post-processing to remove redundant inequalities for CVXOPT specifically
    • Avoid such inequalities altogether
    opened by stephane-caron 0
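
    To picture the "remove redundant inequalities" option above: one can build the velocity-bound rows of the QP only for joints whose limits are finite, so that CVXOPT never receives infinite bounds. A hypothetical sketch (not Pink's internals):

    import numpy as np

    def finite_velocity_inequalities(v_max):
        """Stack v <= v_max and -v <= v_max, skipping joints with no velocity limit."""
        identity = np.eye(v_max.size)
        keep = np.isfinite(v_max)  # joints that actually have a bound
        G = np.vstack([identity[keep], -identity[keep]])
        h = np.hstack([v_max[keep], v_max[keep]])
        return G, h

    # Example: the second joint is unbounded (continuous), so its rows are dropped.
    G, h = finite_velocity_inequalities(np.array([1.0, np.inf, 2.0]))
    print(G.shape, h)  # (4, 3) [1. 2. 1. 2.]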
  • Joint limits for planar joints

    The omnidirectional three-wheeled robot added by https://github.com/tasts-robots/pink/pull/14 does not work yet because of joint limits for its root planar joint.

    This issue will be fixed by https://github.com/tasts-robots/pink/pull/12.

    bug 
    opened by stephane-caron 0
  • Improve joint limit computations

    Performance increase is 5x as of 3f2feae3396bbc847a843b34c9ce162f75e55596 (on Upkie model):

    In [1]: from pink.limits import compute_velocity_limits_2, compute_velocity_limits             
    
    In [2]: %timeit compute_velocity_limits(configuration, dt)                                     
    68.1 µs ± 5.7 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
    
    In [3]: %timeit compute_velocity_limits_2(configuration, dt)                                   
    13.4 µs ± 596 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
    
    opened by stephane-caron 0
Releases (v0.6.0)
  • v0.6.0 (Dec 1, 2022)

    This release makes the solver argument mandatory for all calls to solve_ik.

    Note that the project is still in beta, so don't expect proper deprecation paths or advance warnings for API changes before it hits v1.0.0 :wink:

  • v0.5.0 (Sep 26, 2022)

    With this release, Pink handles more general joint types, including fixed or free flyer root joints, unbounded joints (called continuous in URDF), etc. New examples showcase this on both arms :mechanical_arm: and legged :mechanical_leg: robots.

    Banner for Pink v0.5.0

    Under the hood, this release also improves on various points of the QP formulation (joint limits, posture task, ...) so that it works nicely with more solvers (e.g. CVXOPT), beyond quadprog and OSQP which were the two main solvers so far.

    Added

    • Body task targets can be read directly from a robot configuration
    • Example: double pendulum
    • Example: Kinova Gen2 arm
    • Example: loading a custom URDF description
    • Example: visualization in MeshCat
    • Example: visualization in yourdfpy
    • Generalize configuration limits to any root joint
    • Handle descriptions that have no velocity limit
    • Handle general root joint in configuration limits
    • Handle general root joint in posture task
    • Handle unbounded velocity limits in QP formulation
    • Posture task targets can be read directly from a configuration
    • Simple rate limiter in pink.utils

    Changed

    • Raise an error when querying a body that doesn't exist
    • Transition from pink.models to robot_descriptions
    • Update reference posture in Upkie wheeled biped example
    • Warn when the backend QP solver is not explicitly selected

    Fixed

    • Unbounded velocities when the backend solver is CVXOPT
  • v0.4.0 (Jun 21, 2022)

    This release brings documentation and full test coverage, and handles robot models installed from PyPI.

    Also, it indulges in a project icon :wink:

    Added

    • Coveralls for continuous coverage testing
    • Document differential inverse kinematics and task targets
    • Single-task test on task target translations mapped to IK output translations

    Changed

    • Argument to build_from_urdf functions is now the path to the URDF file
    • Bumped status to beta
    • Examples use the jvrc_description and upkie_description packages
    • Use jvrc_description and upkie_description packages from PyPI
    • Task is now an abstract base class

    Fixed

    • Unit tests for robot models
  • v0.3.0 (Mar 30, 2022)

    This release adds proper handling of joint position and velocity limits.

    Added

    • Joint velocity limits
    • Configuration limits

    Changed

    • Bumped status to alpha
    • Configuration limit check now has a tolerance argument
  • v0.2.0 (Mar 29, 2022)

    This pre-release adds the regularizing posture task and corresponding unit tests.

    Added

    • Check configuration limits against model
    • Mock configuration type for unit testing
    • Tangent member of a configuration
    • Unit test the body task

    Changed

    • Specify path when loading a model description
    • Switch to the Apache 2.0 license
    • build_jvrc_model is now build_from_urdf

    Fixed

    • Don't distribute robot models with the library
    • IK unit test that used robot instead of configuration
  • v0.1.0 (Mar 17, 2022)

    This is a first working version of the library with a humanoid example that can be run and tweaked. Keep in mind that 0.x versions mean the library is still under active development, with the goal that 1.0 is the first stable version. So, this is still the very beginning :wink:

    Added

    • Body task
    • Humanoid example

    Changed

    • ConfiguredRobot(model, data) type is now Configuration(model, data, q)

    Fixed

    • Add floating base joint when loading JVRC model
Owner
Stéphane Caron
Roboticist who enjoys teaching things to balance and walk.