A tight inclusion function for continuous collision detection

Overview

Tight-Inclusion Continuous Collision Detection

A conservative Continuous Collision Detection (CCD) method with support for minimum separation.

You can read more about this work in our ACM Transactions on Graphics paper:

"A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection"

@article{Wang:2021:Benchmark,
    title   = {A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection},
    author  = {Bolun Wang and Zachary Ferguson and Teseo Schneider and Xin Jiang and Marco Attene and Daniele Panozzo},
    year    = 2021,
    journal = {ACM Transactions on Graphics}
}

Compiling Instructions

To compile the code, first make sure CMake is installed.

To build the library on Linux or macOS:

mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
make

Then you can run a CCD example:

./Tight_Inclusion_bin 

We also provide an example that runs the Sample Queries using our CCD method. You may need to install GMP before compiling the code. Then set the CMake option TIGHT_INCLUSION_WITH_TESTS to ON when compiling:

cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_TESTS=ON
make

Then you can run ./Tight_Inclusion_bin to test the handcrafted queries in the Sample Queries.

Usage

Include the header: #include <tight_inclusion/inclusion_ccd.hpp>

To check edge-edge CCD, use bool inclusion_ccd::edgeEdgeCCD_double();

To check vertex-face CCD, use bool inclusion_ccd::vertexFaceCCD_double();

💡 If a collision is detected, the CCD function returns true; otherwise, it returns false. Since our method is CONSERVATIVE, if the result is false, we guarantee that no collision happens. If the result is true, it is possible that there is no collision and we falsely report one, but we guarantee that this happens only if the minimal distance between the two primitives in this time step is no larger than tolerance + ms + err. We explain these parameters below.

For both edge-edge CCD and vertex-face CCD, the input CCD query is given by 8 vertices, each in the format of Eigen::Vector3d. Please read our code in tight_inclusion/inclusion_ccd.hpp for the correct input order of the vertices.

Besides the input vertices, there are some input and output parameters that let users tune the performance or obtain more CCD information. The parameters are explained below; a sketch of a typical call follows the list.

input:
    err                 The numerical filters of the x, y, and z coordinates. It measures the error introduced by floating-point calculation when solving the inclusion functions.
    ms                  Minimum separation distance, no less than 0. It guarantees that a collision will be reported if the distance between the two primitives is less than ms.
    tolerance           User-specified solving precision. It is the target maximal x, y, and z length of the inclusion function. We suggest setting it to 1e-6.
    t_max               The time range [0, t_max] in which we detect collisions. Since the input query describes the motion in the time range [0, 1], t_max should be no larger than 1.
    max_itr             The parameter that enables early termination of the algorithm. If you set max_itr < 0, early termination is disabled, but this may lead to longer runtimes. We suggest setting max_itr = 1e6.
    CCD_TYPE            The parameter that selects the collision detection algorithm. By default CCD_TYPE = 1. If set to CCD_TYPE = 0, the code switches to a naive conservative CCD algorithm that lacks our advanced features.

output:
    toi                 Time of impact. If multiple collisions happen in this time step, the earliest collision time is returned. If there is no collision, the returned toi is std::numeric_limits<double>::infinity().
    output_tolerance    The actual solving precision. If early termination is enabled, the solving precision may not reach the target precision. This parameter returns the actual solving precision reached when the code terminates.
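
The following is a minimal sketch of an edge-edge query that puts these parameters together. The variable names are illustrative and the argument order is assumed from the parameter list above; the authoritative signature and the vertex ordering are in tight_inclusion/inclusion_ccd.hpp.

#include <array>
#include <tight_inclusion/inclusion_ccd.hpp>

// Sketch: query whether two moving edges (given at t = 0 and t = 1) collide.
bool edge_edge_query_example(
    const Eigen::Vector3d &ea0_t0, const Eigen::Vector3d &ea1_t0, // edge A at t = 0
    const Eigen::Vector3d &eb0_t0, const Eigen::Vector3d &eb1_t0, // edge B at t = 0
    const Eigen::Vector3d &ea0_t1, const Eigen::Vector3d &ea1_t1, // edge A at t = 1
    const Eigen::Vector3d &eb0_t1, const Eigen::Vector3d &eb1_t1) // edge B at t = 1
{
    std::array<double, 3> err = {{-1, -1, -1}}; // compute the numerical filters internally
    double ms = 0;                              // no minimum separation
    double toi;                                 // output: earliest time of impact
    double tolerance = 1e-6;                    // suggested solving precision
    double t_max = 1;                           // search the whole time step [0, 1]
    int max_itr = 1000000;                      // the suggested 1e6; enables early termination
    double output_tolerance;                    // output: precision actually reached

    bool hit = inclusion_ccd::edgeEdgeCCD_double(
        ea0_t0, ea1_t0, eb0_t0, eb1_t0,
        ea0_t1, ea1_t1, eb0_t1, eb1_t1,
        err, ms, toi, tolerance, t_max, max_itr, output_tolerance,
        1 /* CCD_TYPE */);

    // hit == false guarantees no collision in [0, t_max];
    // hit == true conservatively reports a possible collision at time toi.
    return hit;
}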

Tips

💡 The input parameter err is crucial to guaranteeing that our algorithm is a conservative method unaffected by floating-point rounding errors. To run a single query, you can set err = {{-1, -1, -1}} to enable a sub-function that calculates the actual numerical filters when solving the CCD. If you are integrating our CCD into a simulator, you need to:

  • Include the header: #include <tight_inclusion/interval_root_finder.hpp>.
  • Call std::array<double, 3> err_vf = inclusion_ccd::get_numerical_error() and std::array<double, 3> err_ee = inclusion_ccd::get_numerical_error().
  • Pass err_ee each time you call bool inclusion_ccd::edgeEdgeCCD_double() and err_vf each time you call bool inclusion_ccd::vertexFaceCCD_double().

The parameters of the function inclusion_ccd::get_numerical_error() are explained below; a usage sketch follows the list.

input:
    vertices            Vertices of the axis-aligned bounding box (AABB) of the simulation scene. Before you run the simulation, you need to conservatively estimate the AABB in which the meshes will be located during the whole simulation, and the vertices should be the corners of that AABB.
    check_vf            A boolean indicating whether you are checking vertex-face CCD or edge-edge CCD.
    using_minimum_separation    A boolean. If you are using minimum-separation CCD (the input parameter ms > 0), set it to true.
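
As a sketch of this workflow, the filters can be computed once before the simulation and reused for every query. The box_min, box_max, and ms variables below are placeholders, and the argument order of get_numerical_error() is assumed from the list above; check tight_inclusion/interval_root_finder.hpp for the exact signature.

#include <array>
#include <vector>
#include <Eigen/Core>
#include <tight_inclusion/interval_root_finder.hpp>

// Corners of a conservative AABB of everything the meshes can reach during the simulation.
std::vector<Eigen::Vector3d> aabb_corners(
    const Eigen::Vector3d &box_min, const Eigen::Vector3d &box_max)
{
    std::vector<Eigen::Vector3d> corners;
    for (int i = 0; i < 8; ++i) {
        corners.emplace_back(
            (i & 1) ? box_max.x() : box_min.x(),
            (i & 2) ? box_max.y() : box_min.y(),
            (i & 4) ? box_max.z() : box_min.z());
    }
    return corners;
}

void compute_filters(
    const Eigen::Vector3d &box_min, const Eigen::Vector3d &box_max, double ms,
    std::array<double, 3> &err_vf, std::array<double, 3> &err_ee)
{
    const std::vector<Eigen::Vector3d> corners = aabb_corners(box_min, box_max);
    const bool using_minimum_separation = ms > 0;
    err_vf = inclusion_ccd::get_numerical_error(corners, true, using_minimum_separation);
    err_ee = inclusion_ccd::get_numerical_error(corners, false, using_minimum_separation);
    // Pass err_vf to every vertexFaceCCD_double() call and err_ee to every
    // edgeEdgeCCD_double() call during this simulation.
}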

💡 For simulators that use a non-zero minimum separation distance (ms > 0) to keep each time step intersection-free, we provide the CMake option TIGHT_INCLUSION_WITH_NO_ZERO_TOI to avoid a returned collision time toi of 0. Set TIGHT_INCLUSION_WITH_NO_ZERO_TOI to ON when compiling:

cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_NO_ZERO_TOI=ON

Then, when you use the CCD functions, the code will continue the refinement at a higher precision if the output toi is 0 under the given tolerance, so the eventual toi will not be 0.

For a better understanding of our Tight-Inclusion CCD algorithm, or for more details, please refer to our paper.

Comments
  • Improved No Zero ToI

    • I changed the options for avoiding a zero time of impact by making the option a parameter to the CCD functions.
    • I also changed from a recursive algorithm to an iterative one with a maximum number of iterations.
    opened by zfergus 2
  • a case returned 0 TOI

    For the PT CCD case below, TICCD returned 0 TOI.

    3.5000000000e-01 0.0000000000e+00 3.5000000000e-01
    3.4604643638e-01 2.7847887896e-03 3.4658121160e-01
    3.5000819263e-01 -1.5763377304e-06 3.5001214242e-01
    3.4903882032e-01 -1.5866575093e-03 3.4560164356e-01
    0.0000000000e+00 0.0000000000e+00 0.0000000000e+00
    1.0120300789e-07 -1.4009994248e-04 -8.5653091896e-05
    -9.2976239634e-07 1.4361165221e-06 5.5168830145e-07
    1.1649271683e-04 -3.6172565398e-05 -4.7128237917e-05
    

    The format is

    node_x node_y node_z
    triangle_node0_x triangle_node0_y triangle_node0_z
    triangle_node1_x triangle_node1_y triangle_node1_z
    triangle_node2_x triangle_node2_y triangle_node2_z
    node_displacement_x node_displacement_y node_displacement_z
    triangle_node0_displacement_x triangle_node0_displacement_y triangle_node0_displacement_z
    triangle_node1_displacement_x triangle_node1_displacement_y triangle_node1_displacement_z
    triangle_node2_displacement_x triangle_node2_displacement_y triangle_node2_displacement_z
    

    I used the TICCD from the CCDWrapper repo (latest commit), and I called it like this:

    template <class T>
    bool Point_Triangle_CCD_TI(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T eta, T thickness, T& toc)
    {
        T toc_prev = toc;
        T output_tolerance;
    
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
            p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
            eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness) + thickness,
            toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
        {
            if (toc < 1.0e-6) {
                std::cout << "PT CCD tiny!" << std::endl;
                if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
                    p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
                    thickness, toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
                {
                    toc *= (1.0 - eta);
                    return true;
                }
                else {
                    return false;
                }
            }
            return true;
        }
        return false;
    }
    
    printf("TICCD:\n");
    largestAlpha = 1;
    tstart = clock();
    if (Point_Triangle_CCD_TI(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], 0.1, 0.0, largestAlpha)) {
        printf("%le\n", largestAlpha);
    }
    else {
        printf("no collision\n");
    }
    printf("%.1lems\n", double(clock() - tstart) / CLOCKS_PER_SEC * 1000);
    

    The output is

    TICCD:
    PT CCD tiny!
    0.000000e+00
    1.9e+00ms
    

    However, I implemented a simpler version of TICCD according to the paper as follows:

    template <class T>
    void Point_Triangle_Distance_Vector_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T t, T lambda, T beta,
        Eigen::Matrix<T, 3, 1>& distVec)
    {
        const Eigen::Matrix<T, 3, 1> tp = (1 - lambda - beta) * t0 + lambda * t1 + beta * t2;
        const Eigen::Matrix<T, 3, 1> dtp = (1 - lambda - beta) * dt0 + lambda * dt1 + beta * dt2;
        distVec = p + t * dp - (tp + t * dtp);
    }
    
    template <class T>
    bool Point_Triangle_CheckInterval_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        const std::array<T, 6>& interval,
        T gap)
    {
        Eigen::Matrix<T, 3, 1> distVecMax, distVecMin;
        distVecMax.setConstant(-2 * gap - 1);
        distVecMin.setConstant(2 * gap + 1);
        for (int t = 0; t < 2; ++t) {
            for (int lambda = 0; lambda < 2; ++lambda) {
                for (int beta = 0; beta < 2; ++beta) {
                    if (lambda == 1 && beta == 1) {
                        continue;
                    }
                    Eigen::Matrix<T, 3, 1> distVec;
                    Point_Triangle_Distance_Vector_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, 
                        interval[t], interval[2 + lambda], interval[4 + beta], distVec);
                    distVecMax = distVecMax.array().max(distVec.array());
                    distVecMin = distVecMin.array().min(distVec.array());
                }
            }
        }
        return (distVecMax.array() >= -gap).all() && (distVecMin.array() <= gap).all();
    }
    template <class T>
    bool Point_Triangle_TICCD(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        Eigen::Matrix<T, 3, 1> dp, 
        Eigen::Matrix<T, 3, 1> dt0, 
        Eigen::Matrix<T, 3, 1> dt1,
        Eigen::Matrix<T, 3, 1> dt2,
        T eta, T thickness, T& toc)
    {
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        T gap = eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness);
    
        T tTol = 1e-3;
    
        std::vector<std::array<T, 6>> roots;
        std::deque<std::array<T, 6>> intervals;
        intervals.push_back({0, toc, 0, 1, 0, 1});
        int iterAmt = 0;
        while (!intervals.empty()) {
            ++iterAmt;
    
            std::array<T, 6> curIV = intervals.front();
            intervals.pop_front();
    
            // check
            if (Point_Triangle_CheckInterval_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, curIV, gap)) {
                if (curIV[0] && curIV[1] - curIV[0] < tTol) {
                    // root found within tTol
                    roots.emplace_back(curIV);
                }
                else {
                    // split interval and push back
                    std::vector<T> intervalLen({curIV[1] - curIV[0], curIV[3] - curIV[2], curIV[5] - curIV[4]});
                    switch (std::max_element(intervalLen.begin(), intervalLen.end()) - intervalLen.begin()) {
                    case 0:
                        intervals.push_back({curIV[0], (curIV[1] + curIV[0]) / 2, curIV[2], curIV[3], curIV[4], curIV[5]});
                        intervals.push_back({(curIV[1] + curIV[0]) / 2, curIV[1], curIV[2], curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 1:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], (curIV[2] + curIV[3]) / 2, curIV[4], curIV[5]});
                        intervals.push_back({curIV[0], curIV[1], (curIV[2] + curIV[3]) / 2, curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 2:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], curIV[4], (curIV[4] + curIV[5]) / 2});
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], (curIV[4] + curIV[5]) / 2, curIV[5]});
                        break;
                    }
                }
            }
        }
        
        if (roots.empty()) {
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return false;
        }
        else {
            for (const auto& rI : roots) {
                if (toc > rI[0]) {
                    toc = rI[0];
                }
            }
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return true;
        }
    }
    

    and it kind of worked on this case, with output

    TICCD PT converged with 6143 iters
    4.882812e-04
    5.3e-01ms
    

    which, with a tighter convergence tolerance, might be able to reach the ground-truth solution -- no collision.

    In my implementation, I didn't include the epsilons in formula (5) in the paper. Is it the main cause here?

    opened by liminchen 2
  • Refactored and cleaned up code

    I tested and timed it on the entire dataset. I get 0 false negatives, a couple fewer false positives on the handcrafted data, and the timing is a little faster on the simulation queries and faster on the handcrafted ones.

    opened by zfergus 0