Tight-Inclusion Continuous Collision Detection

A tight inclusion function for continuous collision detection.

Overview

A conservative Continuous Collision Detection (CCD) method with support for minimum separation.

You can read more about this work in our ACM Transactions on Graphics paper:

"A Large Scale Benchmark and an Inclusion-Based Algorithm forContinuous Collision Detection"

@article{Wang:2021:Benchmark,
    title   = {A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection},
    author  = {Bolun Wang and Zachary Ferguson and Teseo Schneider and Xin Jiang and Marco Attene and Daniele Panozzo},
    year    = 2021,
    journal = {ACM Transactions on Graphics}
}

Compiling Instructions

To compile the code, first make sure CMake is installed.

To build the library on Linux or macOS:

mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
make

Then you can run a CCD example:

./Tight_Inclusion_bin 

We also provide an example of running the Sample Queries with our CCD method. You may need to install GMP before compiling the code. Then set the CMake option TIGHT_INCLUSION_WITH_TESTS to ON when configuring:

cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_TESTS=ON
make

Then you can run ./Tight_Inclusion_bin to test the handcrafted queries in the Sample Queries.

Usage

Include the header: #include <tight_inclusion/inclusion_ccd.hpp>

To check edge-edge CCD, use bool inclusion_ccd::edgeEdgeCCD_double();

To check vertex-face CCD, use bool inclusion_ccd::vertexFaceCCD_double();

💡 If a collision is detected, the CCD function will return true; otherwise, it will return false. Since our method is CONSERVATIVE, if the returned result is false, we guarantee that no collision happens. If the result is true, it is possible that there is no collision and we falsely report one, but we can guarantee that this happens only if the minimal distance between the two primitives during this time step is no larger than tolerance + ms + err. We will explain these parameters below.

For both edge-edge CCD and vertex-face CCD, the input CCD query is specified by eight vertices given as Eigen::Vector3d. Please read our code in tight_inclusion/inclusion_ccd.hpp for the correct input order of the vertices.

Besides the input vertices, there are some input and output parameters that let users tune the performance or get more CCD information. Here is a list of explanations of the parameters; a usage sketch follows the list:

input:
    err                 The numerical filters of the x, y and z coordinates. They measure the error introduced by floating-point calculation when solving the inclusion functions.
    ms                  Minimum separation distance, no less than 0. It guarantees that a collision will be reported if the distance between the two primitives is less than ms.
    tolerance           User-specified solving precision. It is the target maximal x, y, and z length of the inclusion function. We suggest setting it to 1e-6.
    t_max               The time range [0, t_max] in which we detect collisions. Since the input query describes motion over the time range [0, 1], t_max should be no larger than 1.
    max_itr             The parameter to enable early termination of the algorithm. If you set max_itr < 0, early termination will be disabled, but this may cause longer runtime. We suggest setting max_itr = 1e6.
    CCD_TYPE            The parameter to choose the collision detection algorithm. By default CCD_TYPE = 1. If you set CCD_TYPE = 0, the code switches to a naive conservative CCD algorithm, which lacks our advanced features.
    
output:
    toi                 Time of impact. If multiple collisions happen in this time step, it returns the earliest collision time. If there is no collision, the returned toi value will be std::numeric_limits<double>::infinity().
    output_tolerance    The solving precision actually achieved. If early termination is enabled, the solving precision may not reach the target precision. This parameter returns the precision actually reached when the code terminates.
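
To make the parameter list concrete, here is a minimal C++ sketch of a vertex-face query. The argument order (eight vertices, then err, ms, toi, tolerance, t_max, max_itr, output_tolerance, CCD_TYPE) and the exact types are assumptions; please verify them against tight_inclusion/inclusion_ccd.hpp.

#include <tight_inclusion/inclusion_ccd.hpp>
#include <Eigen/Core>
#include <array>
#include <iostream>

int main()
{
    // Vertex p and triangle (t0, t1, t2) at the start (time 0) and end (time 1) of the step.
    Eigen::Vector3d p_start(0, 0, 0), t0_start(1, 0, 0), t1_start(0, 1, 0), t2_start(0, 0, 1);
    Eigen::Vector3d p_end(0.5, 0.5, 0.5), t0_end = t0_start, t1_end = t1_start, t2_end = t2_start;

    std::array<double, 3> err = {{-1, -1, -1}}; // -1 lets the code compute the numerical filter itself
    double ms = 0;               // minimum separation distance
    double toi;                  // output: time of impact
    double output_tolerance;     // output: solving precision actually reached

    // Assumed parameter order; check tight_inclusion/inclusion_ccd.hpp.
    bool hit = inclusion_ccd::vertexFaceCCD_double(
        p_start, t0_start, t1_start, t2_start,
        p_end, t0_end, t1_end, t2_end,
        err, ms, toi,
        /*tolerance=*/1e-6, /*t_max=*/1, /*max_itr=*/1e6,
        output_tolerance, /*CCD_TYPE=*/1);

    if (hit)
        std::cout << "possible collision, toi = " << toi << std::endl;
    else
        std::cout << "no collision" << std::endl;
}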

Tips

💡 The input parameter err is crucial to guarantee that our algorithm is a conservative method, unaffected by floating-point rounding errors. To run a single query, you can set err = {{-1, -1, -1}} to enable a sub-function that calculates the actual numerical filters when solving CCD. If you are integrating our CCD into a simulator, you need to:

  • Include the header: #include <tight_inclusion/interval_root_finder.hpp>.
  • Call std::array<double, 3> err_vf = inclusion_ccd::get_numerical_error() with check_vf = true and std::array<double, 3> err_ee = inclusion_ccd::get_numerical_error() with check_vf = false.
  • Use the parameter err_ee each time you call bool inclusion_ccd::edgeEdgeCCD_double() and err_vf when you call bool inclusion_ccd::vertexFaceCCD_double().

The parameters of the function inclusion_ccd::get_numerical_error() are explained below; a sketch of computing the filters for a scene follows the list:

input:
    vertices            Vertices of the axis-aligned bounding box (AABB) of the simulation scene. Before you run the simulation, you need to conservatively estimate the AABB in which the meshes will be located during the whole simulation, and the vertices should be the corners of this AABB.
    check_vf            A boolean indicating whether you are checking vertex-face or edge-edge CCD.
    using_minimum_separation    A boolean flag. If you are using minimum-separation CCD (the input parameter ms > 0), set it to true.
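
As an illustration, here is a hedged sketch of computing the filters once from the scene's AABB and reusing them for every query in a simulation. The container type passed for the AABB corners (here a std::vector<Eigen::Vector3d>) is an assumption and should be checked against tight_inclusion/interval_root_finder.hpp.

#include <tight_inclusion/interval_root_finder.hpp>
#include <Eigen/Core>
#include <array>
#include <vector>

// Sketch (not necessarily the exact library API): build the 8 corners of a
// conservatively estimated scene AABB and query the numerical filters.
std::array<double, 3> scene_numerical_error(
    const Eigen::Vector3d& box_min, const Eigen::Vector3d& box_max,
    bool check_vf, bool using_minimum_separation)
{
    std::vector<Eigen::Vector3d> corners;
    for (int i = 0; i < 8; ++i) {
        corners.emplace_back(
            (i & 1) ? box_max.x() : box_min.x(),
            (i & 2) ? box_max.y() : box_min.y(),
            (i & 4) ? box_max.z() : box_min.z());
    }
    return inclusion_ccd::get_numerical_error(corners, check_vf, using_minimum_separation);
}

// Usage (ms > 0 means minimum-separation CCD):
//   std::array<double, 3> err_vf = scene_numerical_error(bmin, bmax, /*check_vf=*/true,  ms > 0);
//   std::array<double, 3> err_ee = scene_numerical_error(bmin, bmax, /*check_vf=*/false, ms > 0);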

💡 For simulators that use a non-zero minimum separation distance (ms > 0) to keep each time step intersection-free, we provide the CMake option TIGHT_INCLUSION_WITH_NO_ZERO_TOI to avoid a returned collision time toi of 0. Set TIGHT_INCLUSION_WITH_NO_ZERO_TOI to ON when compiling: cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_NO_ZERO_TOI=ON. Then, when you use the CCD functions, the code will continue refining at higher precision if the output toi is 0 under the given tolerance, so the final toi will not be 0.

For a better understanding of, or more details on, our Tight-Inclusion CCD algorithm, please refer to our paper.

Comments
  • Improved No Zero ToI

    • I changed the options for avoiding a zero time of impact by making the option a parameter to the CCD functions.
    • I also changed from a recursive algorithm to an iterative one with a maximum number of iterations.
    opened by zfergus 2
  • a case returned 0 TOI

    For the PT CCD case below, TICCD returned 0 TOI.

    3.5000000000e-01 0.0000000000e+00 3.5000000000e-01
    3.4604643638e-01 2.7847887896e-03 3.4658121160e-01
    3.5000819263e-01 -1.5763377304e-06 3.5001214242e-01
    3.4903882032e-01 -1.5866575093e-03 3.4560164356e-01
    0.0000000000e+00 0.0000000000e+00 0.0000000000e+00
    1.0120300789e-07 -1.4009994248e-04 -8.5653091896e-05
    -9.2976239634e-07 1.4361165221e-06 5.5168830145e-07
    1.1649271683e-04 -3.6172565398e-05 -4.7128237917e-05
    

    The format is

    node_x node_y node_z
    triangle_node0_x triangle_node0_y triangle_node0_z
    triangle_node1_x triangle_node1_y triangle_node1_z
    triangle_node2_x triangle_node2_y triangle_node2_z
    node_displacement_x node_displacement_y node_displacement_z
    triangle_node0_displacement_x triangle_node0_displacement_y triangle_node0_displacement_z
    triangle_node1_displacement_x triangle_node1_displacement_y triangle_node1_displacement_z
    triangle_node2_displacement_x triangle_node2_displacement_y triangle_node2_displacement_z
    

    I used the TICCD from the CCDWrapper repo (latest commit), and I called it like this:

    template <class T>
    bool Point_Triangle_CCD_TI(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T eta, T thickness, T& toc)
    {
        T toc_prev = toc;
        T output_tolerance;
    
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
            p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
            eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness) + thickness,
            toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
        {
            if (toc < 1.0e-6) {
                std::cout << "PT CCD tiny!" << std::endl;
                if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
                    p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
                    thickness, toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
                {
                    toc *= (1.0 - eta);
                    return true;
                }
                else {
                    return false;
                }
            }
            return true;
        }
        return false;
    }
    
    printf("TICCD:\n");
    largestAlpha = 1;
    tstart = clock();
    if (Point_Triangle_CCD_TI(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], 0.1, 0.0, largestAlpha)) {
        printf("%le\n", largestAlpha);
    }
    else {
        printf("no collision\n");
    }
    printf("%.1lems\n", double(clock() - tstart) / CLOCKS_PER_SEC * 1000);
    

    The output is

    TICCD:
    PT CCD tiny!
    0.000000e+00
    1.9e+00ms
    

    However, I implemented a simpler version of TICCD following the paper:

    template <class T>
    void Point_Triangle_Distance_Vector_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T t, T lambda, T beta,
        Eigen::Matrix<T, 3, 1>& distVec)
    {
        const Eigen::Matrix<T, 3, 1> tp = (1 - lambda - beta) * t0 + lambda * t1 + beta * t2;
        const Eigen::Matrix<T, 3, 1> dtp = (1 - lambda - beta) * dt0 + lambda * dt1 + beta * dt2;
        distVec = p + t * dp - (tp + t * dtp);
    }
    
    template <class T>
    bool Point_Triangle_CheckInterval_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        const std::array<T, 6>& interval,
        T gap)
    {
        Eigen::Matrix<T, 3, 1> distVecMax, distVecMin;
        distVecMax.setConstant(-2 * gap - 1);
        distVecMin.setConstant(2 * gap + 1);
        for (int t = 0; t < 2; ++t) {
            for (int lambda = 0; lambda < 2; ++lambda) {
                for (int beta = 0; beta < 2; ++beta) {
                    if (lambda == 1 && beta == 1) {
                        continue;
                    }
                    Eigen::Matrix<T, 3, 1> distVec;
                    Point_Triangle_Distance_Vector_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, 
                        interval[t], interval[2 + lambda], interval[4 + beta], distVec);
                    distVecMax = distVecMax.array().max(distVec.array());
                    distVecMin = distVecMin.array().min(distVec.array());
                }
            }
        }
        return (distVecMax.array() >= -gap).all() && (distVecMin.array() <= gap).all();
    }
    template <class T>
    bool Point_Triangle_TICCD(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        Eigen::Matrix<T, 3, 1> dp, 
        Eigen::Matrix<T, 3, 1> dt0, 
        Eigen::Matrix<T, 3, 1> dt1,
        Eigen::Matrix<T, 3, 1> dt2,
        T eta, T thickness, T& toc)
    {
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        T gap = eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness);
    
        T tTol = 1e-3;
    
        std::vector<std::array<T, 6>> roots;
        std::deque<std::array<T, 6>> intervals;
        intervals.push_back({0, toc, 0, 1, 0, 1});
        int iterAmt = 0;
        while (!intervals.empty()) {
            ++iterAmt;
    
            std::array<T, 6> curIV = intervals.front();
            intervals.pop_front();
    
            // check
            if (Point_Triangle_CheckInterval_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, curIV, gap)) {
                if (curIV[0] && curIV[1] - curIV[0] < tTol) {
                    // root found within tTol
                    roots.emplace_back(curIV);
                }
                else {
                    // split interval and push back
                    std::vector<T> intervalLen({curIV[1] - curIV[0], curIV[3] - curIV[2], curIV[5] - curIV[4]});
                    switch (std::max_element(intervalLen.begin(), intervalLen.end()) - intervalLen.begin()) {
                    case 0:
                        intervals.push_back({curIV[0], (curIV[1] + curIV[0]) / 2, curIV[2], curIV[3], curIV[4], curIV[5]});
                        intervals.push_back({(curIV[1] + curIV[0]) / 2, curIV[1], curIV[2], curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 1:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], (curIV[2] + curIV[3]) / 2, curIV[4], curIV[5]});
                        intervals.push_back({curIV[0], curIV[1], (curIV[2] + curIV[3]) / 2, curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 2:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], curIV[4], (curIV[4] + curIV[5]) / 2});
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], (curIV[4] + curIV[5]) / 2, curIV[5]});
                        break;
                    }
                }
            }
        }
        
        if (roots.empty()) {
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return false;
        }
        else {
            for (const auto& rI : roots) {
                if (toc > rI[0]) {
                    toc = rI[0];
                }
            }
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return true;
        }
    }
    

    and it kind of worked on this case, with the output

    TICCD PT converged with 6143 iters
    4.882812e-04
    5.3e-01ms
    

    which, with a tighter convergence tolerance, might be able to reach the ground-truth solution -- no collision.

    In my implementation, I didn't include the epsilons in formula (5) in the paper. Is it the main cause here?

    opened by liminchen 2
  • Refactored and cleaned up code

    I tested and timed it on the entire dataset. I get 0 FN, a couple fewer FPs on the handcrafted data, and the timing is a little faster on the simulation queries and faster on the handcrafted queries.

    opened by zfergus 0