A library for differentiable nonlinear optimization.

Related tags

Deep Learning, theseus
Overview

CircleCI | License | Python 3.7, 3.8 | pre-commit | Code style: black

Theseus

A library for differentiable nonlinear optimization built on PyTorch to support constructing various problems in robotics and vision as end-to-end differentiable architectures.

The current focus is on nonlinear least squares with support for sparsity, batching, GPU, and backward modes for unrolling, truncated, implicit, and sampling-based differentiation. This library is in beta, with a full release expected in mid-2022.

Getting Started

  • Prerequisites

    • We strongly recommend you install theseus in a venv or conda environment.
    • Theseus requires a torch installation. To install it for your particular CPU/CUDA configuration, follow the instructions on the PyTorch website.
  • Installing

    git clone https://github.com/facebookresearch/theseus.git && cd theseus
    pip install -e .
  • Running unit tests

    pytest theseus
  • See tutorials and examples to learn about the API and usage.
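  • Minimal example

    As a quick orientation, here is a short sketch adapted from the quadratic fitting example that appears in the issue thread further below; exact signatures may vary between releases (for instance, Variable.data was later renamed to Variable.tensor), so treat this as an approximation rather than a definitive snippet.

    import torch
    import theseus as th

    # Fit y ~ a * x^2 + b with a differentiable Gauss-Newton solve.
    num_points = 10
    data_x = torch.rand(1, num_points)
    data_y = 3.0 * data_x.square() + 0.5

    x = th.Variable(data_x, name="x")
    y = th.Variable(data_y, name="y")
    a = th.Vector(1, name="a")
    b = th.Vector(1, name="b")

    def quad_error_fn(optim_vars, aux_vars):
        a, b = optim_vars
        x, y = aux_vars
        return y.data - (a.data * x.data.square() + b.data)

    cost_function = th.AutoDiffCostFunction(
        (a, b), quad_error_fn, num_points, aux_vars=(x, y), name="quadratic_cost_fn"
    )
    objective = th.Objective()
    objective.add(cost_function)
    optimizer = th.GaussNewton(objective, max_iterations=15, step_size=0.5)
    theseus_optim = th.TheseusLayer(optimizer)

    # Optimization variables get initial values via a dict; x and y are auxiliary data.
    theseus_inputs = {"a": 2 * torch.ones(1, 1), "b": torch.ones(1, 1)}
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars={"x": data_x, "y": data_y}
    )
    print("optimal a:", updated_inputs["a"])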

Additional Information

  • Use GitHub issues for questions, suggestions, and bugs.
  • See CONTRIBUTING if interested in helping out.
  • Theseus is being developed with the help of many contributors, see THANKS.

License

Theseus is MIT licensed. See the LICENSE for details.

Comments
  • Override Vector operators in Point2 and Point3

    Motivation and Context

    Overrides vector operations for Point2, Point3 #113

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by jeffin07 15
  • error and errorSquaredNorm can optionally take in variable data

    🚀 Feature

    API improvement in Objective: error and errorSquaredNorm can optionally take var_data, which, if passed, calls update internally. Document that passing var_data updates the objective.

    Motivation

    Facilitates usage for cases where only error needs to be queried (w/o running optimization or even updating the variables).

    Pitch

    Ways to use this API afterward (a rough sketch follows the list):

    1. Get the error on the current internal values: call error without passing any var_data.
    2. Get the error on new values, with an update to the objective: call error and pass var_data.
    3. Get the error on new values without updating the objective: call error, pass var_data, and set the optional flag to True so the objective is not updated.
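
    A rough sketch of the proposed call patterns (method and argument names below are illustrative, not a settled API):

    err = objective.error()                                   # 1. current internal values
    err = objective.error(var_data)                           # 2. new values; objective is updated
    err = objective.error(var_data, update_objective=False)   # 3. new values; objective left untouched
    norm = objective.errorSquaredNorm(var_data)               # same options for the squared norm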
    enhancement good first issue 
    opened by luisenp 13
  • Allow constant inputs to cost functions to be passed as floats

    Motivation and Context

    To close #38

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [x] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed refactor 
    opened by jeffin07 10
  • error and errorsquaredNorm optional data

    Motivation and Context

    API improvement in Objective: error and errorSquaredNorm can optionally take var_data, which, if passed, calls update internally. Document that passing var_data updates the objective. #4

    How Has This Been Tested

    Tested using unit tests and a custom example

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [x] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by jeffin07 10
  • Updating just `aux_vars` isn't sufficient to re-solve with some data changed

    🐛 Bug

    Using the quadratic fit example, I thought it would be reasonable to update the data in just aux_vars and re-solve, but it seems like there's a dependence on the global data_x.

    Steps to Reproduce

    I included an MWE below; it outputs an incorrect solution in the middle block, where only aux_inputs is changed:

    optimal a:  tensor([[1.0076]], grad_fn=<AddBackward0>)
    == Only changing aux_vars["x"] (this should not be the same solution)
    optimal a:  tensor([[1.0076]], grad_fn=<AddBackward0>)
    == Globally updating data_x (this is the correct solution)
    optimal a:  tensor([[0.0524]], grad_fn=<AddBackward0>)
    

    Expected behavior

    I was pretty confused at first when my code wasn't working and didn't realize it was because of this. We should make updating aux_inputs sufficient to re-solve the problem, or, if this is challenging, consider 1) raising a warning/adding a check when aux_inputs doesn't match, or 2) removing the duplicated passing of aux_inputs when it doesn't do anything.

    Code

    #!/usr/bin/env python3
    
    import torch
    import theseus as th
    import theseus.optimizer.nonlinear as thnl
    
    import numpy as np
    import numdifftools as nd
    
    def generate_data(num_points=10, a=1., b=0.5, noise_factor=0.01):
        data_x = torch.rand((1, num_points))
        noise = torch.randn((1, num_points)) * noise_factor
        data_y = a * data_x.square() + b + noise
        return data_x, data_y
    
    num_points = 10
    data_x, data_y = generate_data(num_points)
    
    x = th.Variable(data_x.requires_grad_(), name="x")
    y = th.Variable(data_y.requires_grad_(), name="y")
    a = th.Vector(1, name="a")
    b = th.Vector(1, name="b")
    
    def quad_error_fn(optim_vars, aux_vars):
        a, b = optim_vars
        x, y = aux_vars
        est = a.data * x.data.square() + b.data
        err = y.data - est
        return err
    
    optim_vars = a, b
    aux_vars = x, y
    cost_function = th.AutoDiffCostFunction(
        optim_vars, quad_error_fn, num_points, aux_vars=aux_vars, name="quadratic_cost_fn"
    )
    objective = th.Objective()
    objective.add(cost_function)
    optimizer = th.GaussNewton(
        objective,
        max_iterations=15,
        step_size=0.5,
    )
    
    theseus_inputs = {
        "a": 2 * torch.ones((1, 1)).requires_grad_(),
        "b": torch.ones((1, 1)).requires_grad_(),
    }
    aux_vars = {
        "x": data_x,
        "y": data_y,
    }
    theseus_optim = th.TheseusLayer(optimizer)
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('optimal a: ', updated_inputs['a'])
    
    aux_vars = {
        "x": data_x + 10.0,
        "y": data_y,
    }
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('== Only changing aux_vars["x"] (this should not be the same solution)')
    print('optimal a: ', updated_inputs['a'])
    
    data_x.data += 10.
    aux_vars = {
        "x": data_x,
        "y": data_y,
    }
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('== Globally updating data_x (this is the correct solution)')
    print('optimal a: ', updated_inputs['a'])
    
    documentation question refactor 
    opened by bamos 7
  • Refactored SE3.log_map_impl() to avoid in place operations

    Fixes @exhaustin's torch backward errors in this script.

    The script still has other errors: the final system is not positive definite, so it cannot be solved with CholeskyDense. It is unclear whether this is related to Lie groups or something else.
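
    As a generic illustration (not the actual SE3 code), this is the kind of failure that in-place operations cause in autograd, and why the refactor avoids them:

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.exp()      # exp() saves its output for the backward pass
    # y.add_(1.0)    # mutating y in place here would make backward() raise a RuntimeError
    y = y + 1.0      # the out-of-place version keeps the autograd graph intact
    y.sum().backward()
    print(x.grad)    # gradient is exp(x), still computable because the saved output was not mutated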

    bug CLA Signed 
    opened by luisenp 6
  • Installation in Docker : error in compilation of extlib/mat_mul.cu

    Hi! Would it be possible to have a Dockerfile with the right config? I've tried many compatible versions of PyTorch and CUDA, but I always get the same error when building theseus-ai. In my last trial, I started from the NVIDIA NGC PyTorch container nvcr.io/nvidia/pytorch:21.06-py3 (Ubuntu 20.04 with CUDA 11.3 and Python 3.8) and reinstalled torch==1.10.1+cu113.

    Here's the full error:

        /home/dir/theseus/theseus/extlib/mat_mult.cu(74): error: no instance of overloaded function "atomicAdd" matches the argument list
                    argument types are: (double *, double)
        /home/dir/theseus/theseus/extlib/mat_mult.cu(239): error: no instance of overloaded function "atomicAdd" matches the argument list
                    argument types are: (double *, double)
        2 errors detected in the compilation of "/home/dir/theseus/theseus/extlib/mat_mult.cu".
        ninja: build stopped: subcommand failed.
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1717, in _run_ninja_build
            subprocess.run(
          File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
            raise CalledProcessError(retcode, process.args,
        subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
        The above exception was the direct cause of the following exception:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/home/dir/theseus/setup.py", line 60, in <module>
            setuptools.setup(
          File "/opt/conda/lib/python3.8/site-packages/setuptools/__init__.py", line 163, in setup
            return distutils.core.setup(**attrs)
          File "/opt/conda/lib/python3.8/distutils/core.py", line 148, in setup
            dist.run_commands()
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 966, in run_commands
            self.run_command(cmd)
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
            cmd_obj.run()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/develop.py", line 38, in run
            self.install_for_development()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/develop.py", line 140, in install_for_development
            self.run_command('build_ext')
          File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
            self.distribution.run_command(command)
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
            cmd_obj.run()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 87, in run
            _build_ext.run(self)
          File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
            _build_ext.build_ext.run(self)
          File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 340, in run
            self.build_extensions()
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
            build_ext.build_extensions(self)
          File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
            self.build_extension(ext)
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
            _build_ext.build_extension(self, ext)
          File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
            objects = self.compiler.compile(sources,
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 556, in unix_wrap_ninja_compile
            _write_ninja_file_and_compile_objects(
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1399, in _write_ninja_file_and_compile_objects
            _run_ninja_build(
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
            raise RuntimeError(message) from e
        RuntimeError: Error compiling objects for extension
        ----------------------------------------
    ERROR: Command errored out with exit status 1: /opt/conda/bin/python3.8 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/dir/theseus/setup.py'"'"'; __file__='"'"'/home/dir/theseus/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
    
    
    
    opened by fmagera 5
  • Homography example with functorch

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 5
  • Add robust cost function

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 5
  • Add ManifoldGaussian class for messages in belief propagation

    Motivation and Context

    It would be useful to have an optional covariance / precision matrix as part of the Manifold class, since Gaussian Belief Propagation involves sending Gaussian distributions over the Manifold variables. Currently I'm using a wrapper Gaussian class, but could it be more widely useful to have a covariance / precision matrix as an attribute of the Manifold class?
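
    As a very rough sketch (hypothetical, not the merged implementation), such a message could pair a Manifold-valued mean with a tangent-space precision matrix:

    import torch
    import theseus as th

    class ManifoldGaussian:
        def __init__(self, mean, precision: torch.Tensor):
            self.mean = mean            # any Manifold subclass, e.g. th.Vector or th.SE3
            self.precision = precision  # batched (dof x dof) precision matrices

    # Example message over a 3-dof variable with unit information.
    v = th.Vector(3, name="v")
    message = ManifoldGaussian(v, torch.eye(3).unsqueeze(0))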

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by joeaortiz 5
  • Override `Vector` operators in `Point2` and `Point3` with the correct return type

    🚀 Feature

    Something like

    from typing import cast

    class Point2(Vector):
        ...

        def __add__(self, other: Vector) -> "Point2":
            # Narrow the Vector result back to Point2 so callers don't need to cast.
            return cast(Point2, super().__add__(other))
    

    Motivation

    Eliminates unnecessary casting when using typing.

    Alternatives

    There might be some way for mypy to do the correct thing that wouldn't require overriding these methods.

    Additional context

    enhancement good first issue 
    opened by luisenp 5
  • Add differentiable forward kinematics

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • CUDA kernel for differentiable sparse matrix vector product

    The backward pass can be made more efficient on GPU if we write a custom CUDA kernel for it, but this should be reasonable enough for now.

    Originally posted by @luisenp in https://github.com/facebookresearch/theseus/pull/392
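
    For context, here is a generic sketch (not the Theseus kernel; all names are illustrative) of how a differentiable sparse matrix-vector product can be written as a custom torch.autograd.Function with an analytic backward pass; a dedicated CUDA kernel would mainly accelerate these two products on GPU.

    import torch

    class SparseMatVec(torch.autograd.Function):
        # y = A @ x for a sparse COO matrix A given by (values, indices, shape).
        @staticmethod
        def forward(ctx, values, indices, shape, x):
            A = torch.sparse_coo_tensor(indices, values, shape).coalesce()
            ctx.save_for_backward(values, indices, x)
            ctx.shape = shape
            return torch.sparse.mm(A, x.unsqueeze(1)).squeeze(1)

        @staticmethod
        def backward(ctx, grad_out):
            values, indices, x = ctx.saved_tensors
            rows, cols = indices
            # d(loss)/d(values[k]) = grad_out[rows[k]] * x[cols[k]]
            grad_values = grad_out[rows] * x[cols]
            # d(loss)/d(x) = A^T @ grad_out
            At = torch.sparse_coo_tensor(
                torch.stack([cols, rows]), values, (ctx.shape[1], ctx.shape[0])
            ).coalesce()
            grad_x = torch.sparse.mm(At, grad_out.unsqueeze(1)).squeeze(1)
            return grad_values, None, None, grad_x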

    opened by mhmukadam 0
  • Add robot model

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 0
  • Add prismatic joint

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • Add revolute joint

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • Add se3.log()

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
Releases(0.1.3)
  • 0.1.3(Nov 9, 2022)

    Major Updates

    • Adaptive damping for Levenberg-Marquardt by @luisenp in https://github.com/facebookresearch/theseus/pull/328
    • Moved all unit tests to a separate folder by @luisenp in https://github.com/facebookresearch/theseus/pull/352

    Other Changes

    • Removed manual cmake install for CPU tests. by @luisenp in https://github.com/facebookresearch/theseus/pull/338
    • Fixed vmap related bug breaking homography ex. with sparse solvers. by @luisenp in https://github.com/facebookresearch/theseus/pull/337
    • Small vectorization improvements by @luisenp in https://github.com/facebookresearch/theseus/pull/336
    • Change CI to separately handle torch >= 1.13 by @luisenp in https://github.com/facebookresearch/theseus/pull/345
    • Fixed quaternion bug at pi by @fantaosha in https://github.com/facebookresearch/theseus/pull/344
    • Expose Lie Groups checks at root level by @luisenp in https://github.com/facebookresearch/theseus/pull/335
    • Added option for making LieGroup checks silent. by @luisenp in https://github.com/facebookresearch/theseus/pull/351
    • Added a few other CUDA versions to build script. by @luisenp in https://github.com/facebookresearch/theseus/pull/349
    • Set vectorization off by default when using optimizers w/o TheseusLayer by @luisenp in https://github.com/facebookresearch/theseus/pull/350
    • Some more cleanup before 0.1.3 by @luisenp in https://github.com/facebookresearch/theseus/pull/353
    • Add tests for wheels in CI by @luisenp in https://github.com/facebookresearch/theseus/pull/354
    • #355 Add device parameter to UrdfRobotModel by @thomasweng15 in https://github.com/facebookresearch/theseus/pull/356
    • Added th.device to represent both str and torch.device. by @luisenp in https://github.com/facebookresearch/theseus/pull/357
    • Update to 0.1.3 by @luisenp in https://github.com/facebookresearch/theseus/pull/358

    New Contributors

    • @thomasweng15 made their first contribution in https://github.com/facebookresearch/theseus/pull/356

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.2...0.1.3

  • 0.1.2(Oct 20, 2022)

    Major Updates

    • Add support for BaSpaCho sparse solver by @maurimo in https://github.com/facebookresearch/theseus/pull/324
    • Set vmap as the default autograd mode for autodiff cost function. by @luisenp in https://github.com/facebookresearch/theseus/pull/313

    Other Changes

    • Changed homography example to allow benchmarking only cost computation by @luisenp in https://github.com/facebookresearch/theseus/pull/311
    • Run isort on all files by @luisenp in https://github.com/facebookresearch/theseus/pull/312
    • Added usort:skip tags. by @luisenp in https://github.com/facebookresearch/theseus/pull/314
    • Fixed checkout tag syntax in build script. by @luisenp in https://github.com/facebookresearch/theseus/pull/315
    • Removed redundant directory in homography gif save. by @luisenp in https://github.com/facebookresearch/theseus/pull/316
    • Fixing simple example by @Gralerfics in https://github.com/facebookresearch/theseus/pull/320
    • AutodiffCostFunction now expands tensors with batch size 1 before running vmap by @luisenp in https://github.com/facebookresearch/theseus/pull/327
    • Deprecated FULL backward mode (now UNROLL). by @luisenp in https://github.com/facebookresearch/theseus/pull/332
    • Using better names for CHOLMOD solver python files. by @luisenp in https://github.com/facebookresearch/theseus/pull/333

    New Contributors

    • @Gralerfics made their first contribution in https://github.com/facebookresearch/theseus/pull/320

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.1...0.1.2

  • 0.1.1(Sep 28, 2022)

    Highlights

    • Added pip install theseus-ai instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/276
    • Add functorch support for AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/268
    • Profile AutoDiffCostFunction and refactor the homography example by @fantaosha in https://github.com/facebookresearch/theseus/pull/296

    What's Changed

    • update pose graph data link by @fantaosha in https://github.com/facebookresearch/theseus/pull/256
    • [homography] Use kornia lib properly for perspective transform by @ddetone in https://github.com/facebookresearch/theseus/pull/258
    • Small Update to 00_introduction.ipynb by @NeilPandya in https://github.com/facebookresearch/theseus/pull/259
    • Changed SDF constructor to accept more convenient data types. by @luisenp in https://github.com/facebookresearch/theseus/pull/260
    • Fixed small bugs in MotionPlanner class by @luisenp in https://github.com/facebookresearch/theseus/pull/261
    • Added option to visualize SDF to trajectory visualization function by @luisenp in https://github.com/facebookresearch/theseus/pull/263
    • update readme by @mhmukadam in https://github.com/facebookresearch/theseus/pull/264
    • Added MotionPlanner.forward() method. by @luisenp in https://github.com/facebookresearch/theseus/pull/267
    • Small bug fixes and tweaks to generate_trajectory_figs. by @luisenp in https://github.com/facebookresearch/theseus/pull/271
    • Added a script for building wheels on a new docker image. by @luisenp in https://github.com/facebookresearch/theseus/pull/257
    • Bugfix: homography estimation - create data folder before downloading data by @luizgh in https://github.com/facebookresearch/theseus/pull/275
    • Added pip install theseus-ai instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/276
    • Refactored MotionPlanner so that objective can be passed separately. by @luisenp in https://github.com/facebookresearch/theseus/pull/277
    • add numel() to Manifold and Lie groups by @fantaosha in https://github.com/facebookresearch/theseus/pull/280
    • Add support for SE2 poses in Collision2D by @luisenp in https://github.com/facebookresearch/theseus/pull/278
    • Probabilistically correct SO(3) sampling by @brentyi in https://github.com/facebookresearch/theseus/pull/286
    • Refactor SO3 and SE3 to be consistent with functorch by @fantaosha in https://github.com/facebookresearch/theseus/pull/266
    • Add SE2 support in MotionPlanner by @luisenp in https://github.com/facebookresearch/theseus/pull/282
    • Fixed bug in visualization of SE2 motion plans. by @luisenp in https://github.com/facebookresearch/theseus/pull/293
    • Add functorch support for AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/268
    • Changed requirements so that main.txt only includes essential dependencies by @luisenp in https://github.com/facebookresearch/theseus/pull/294
    • Add to_quaternion, rotation, translation and convention comment by @fantaosha in https://github.com/facebookresearch/theseus/pull/295
    • Added th.as_variable() function to simplify creating new variables. by @luisenp in https://github.com/facebookresearch/theseus/pull/299
    • Added an optional end-of-step callback to NonlinearOptimizer.optimize(). by @luisenp in https://github.com/facebookresearch/theseus/pull/297
    • Add AutogradMode to AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/300
    • Profile AutoDiffCostFunction and refactor the homography example by @fantaosha in https://github.com/facebookresearch/theseus/pull/296
    • Changed unit tests so that the batch sizes to tests are defined in a central import by @luisenp in https://github.com/facebookresearch/theseus/pull/298
    • enhance the efficiency of Objectve.add() by @Christopher6488 in https://github.com/facebookresearch/theseus/pull/303
    • Added missing end newlines by @luisenp in https://github.com/facebookresearch/theseus/pull/307
    • Rename BackwardMode.FULL --> UNROLL and simplify backward mode config by @luisenp in https://github.com/facebookresearch/theseus/pull/305
    • Simplified autograd mode specification. by @luisenp in https://github.com/facebookresearch/theseus/pull/306
    • Clean up test_theseus_layer by @luisenp in https://github.com/facebookresearch/theseus/pull/308
    • update readme and bump version by @mhmukadam in https://github.com/facebookresearch/theseus/pull/309

    New Contributors

    • @NeilPandya made their first contribution in https://github.com/facebookresearch/theseus/pull/259
    • @luizgh made their first contribution in https://github.com/facebookresearch/theseus/pull/275
    • @brentyi made their first contribution in https://github.com/facebookresearch/theseus/pull/286
    • @Christopher6488 made their first contribution in https://github.com/facebookresearch/theseus/pull/303

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0...0.1.1

  • 0.1.0(Jul 20, 2022)

    What's Changed

    • Add SO3 support by @luisenp in https://github.com/facebookresearch/theseus/pull/46
    • Taoshaf.add so3 class by @fantaosha in https://github.com/facebookresearch/theseus/pull/65
    • Add SO3 rotate and unrotate by @fantaosha in https://github.com/facebookresearch/theseus/pull/57
    • add so3._hat_matrix_check() by @fantaosha in https://github.com/facebookresearch/theseus/pull/59
    • Encapsulated data loading functions in tactile pushing example into a new class by @luisenp in https://github.com/facebookresearch/theseus/pull/51
    • Added a TactilePoseEstimator class to easily create TheseusLayer for tactile pushing by @luisenp in https://github.com/facebookresearch/theseus/pull/52
    • Refactor tactile pushing model interface by @luisenp in https://github.com/facebookresearch/theseus/pull/55
    • Minor fixes 02/01/2022 by @luisenp in https://github.com/facebookresearch/theseus/pull/66
    • add adjoint, hat and vee for SE3 by @fantaosha in https://github.com/facebookresearch/theseus/pull/68
    • Add SE3.exp_map() and SE3.log_map() by @fantaosha in https://github.com/facebookresearch/theseus/pull/71
    • Add SE3.compose() by @fantaosha in https://github.com/facebookresearch/theseus/pull/72
    • add SE3.transform_from and SE3.transform_to by @fantaosha in https://github.com/facebookresearch/theseus/pull/80
    • Updated to CircleCI's next-gen images. by @luisenp in https://github.com/facebookresearch/theseus/pull/89
    • Updated README with libsuitesparse installation instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/90
    • Added kwarg to NonlinearOptimizer.optimizer() for tracking error history by @luisenp in https://github.com/facebookresearch/theseus/pull/82
    • Merge infos results for truncated backward modes by @luisenp in https://github.com/facebookresearch/theseus/pull/83
    • Fix Issue 88 by @maurimo in https://github.com/facebookresearch/theseus/pull/97
    • Fixed bug in Variable.update() that was breaking torch graph... by @luisenp in https://github.com/facebookresearch/theseus/pull/96
    • Forced Gauss-Newton step for last iterations of truncated backward. by @luisenp in https://github.com/facebookresearch/theseus/pull/81
    • Add automatic differentiation on the Lie group tangent space by @fantaosha in https://github.com/facebookresearch/theseus/pull/74
    • Add rand() to LieGroup by @fantaosha in https://github.com/facebookresearch/theseus/pull/95
    • Fix SO2 rotate and unrotate jacobian by @fantaosha in https://github.com/facebookresearch/theseus/pull/58
    • Add projection for sparse Jacobian matrices by @fantaosha in https://github.com/facebookresearch/theseus/pull/98
    • Add LieGroup Support for AutoDiffFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/99
    • Add SE2.transform from() and fix shape bugs in SE2.transform_to() by @fantaosha in https://github.com/facebookresearch/theseus/pull/103
    • Switch SE3.transform_from and SE3.transform_to by @fantaosha in https://github.com/facebookresearch/theseus/pull/104
    • Enabled back ellipsoidal damping in LM with linear solvers support checks by @luisenp in https://github.com/facebookresearch/theseus/pull/87
    • Changed name of pytest mark for CUDA extensions. by @luisenp in https://github.com/facebookresearch/theseus/pull/102
    • Fix backpropagation bugs in SO3 and SE3 log_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/109
    • Add analytical jacobians for LieGroup.exp_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/110
    • error and errorsquaredNorm optional data by @jeffin07 in https://github.com/facebookresearch/theseus/pull/105
    • Add analytical derivatives for LieGroup.log_map() by @fantaosha in https://github.com/facebookresearch/theseus/pull/114
    • Refactor SO3.to_quaternion to fix backward bugs and improve the accuracy around pi by @fantaosha in https://github.com/facebookresearch/theseus/pull/116
    • Change RobotModel.forward_kinematics() interface by @luisenp in https://github.com/facebookresearch/theseus/pull/94
    • Added error handling for missing matplotlib and omegaconf installation when using theg by @luisenp in https://github.com/facebookresearch/theseus/pull/119
    • Add jacobians argument to exp_map and log_map functions by @joeaortiz in https://github.com/facebookresearch/theseus/pull/122
    • Add pose graph optimization example by @fantaosha in https://github.com/facebookresearch/theseus/pull/118
    • Added 'secret' option to keep constant step size when using truncated… by @luisenp in https://github.com/facebookresearch/theseus/pull/130
    • Add jacobians for LieGroup.local() by @fantaosha in https://github.com/facebookresearch/theseus/pull/129
    • Added method to update SE2 from x_y_theta. by @luisenp in https://github.com/facebookresearch/theseus/pull/131
    • A complete version of Bundle Adjusment by @luisenp in https://github.com/facebookresearch/theseus/pull/117
    • Fixed jacobians for Between and VariableDifference by @fantaosha in https://github.com/facebookresearch/theseus/pull/133
    • Added batching support to tactile pushing example. by @luisenp in https://github.com/facebookresearch/theseus/pull/132
    • Changed tactile pose example optim var initialization to use start pose for all vars by @luisenp in https://github.com/facebookresearch/theseus/pull/137
    • Override Vector operators in Point2 and Point3 by @jeffin07 in https://github.com/facebookresearch/theseus/pull/124
    • Merge Between and VariableDiff with RelativePoseError and PosePriorError by @fantaosha in https://github.com/facebookresearch/theseus/pull/136
    • Fix a bug in Objective.copy() by @fantaosha in https://github.com/facebookresearch/theseus/pull/139
    • Added option to force max iterations for TactilePoseEstimator. by @luisenp in https://github.com/facebookresearch/theseus/pull/141
    • black version bump by @jeffin07 in https://github.com/facebookresearch/theseus/pull/144
    • Added code to split tactile pushing trajectories data into train/val by @luisenp in https://github.com/facebookresearch/theseus/pull/143
    • Removed RobotModel.dim() by @luisenp in https://github.com/facebookresearch/theseus/pull/156
    • [bug-fix] Fixed wrong data shape initialization for GPCostWeight.dt by @luisenp in https://github.com/facebookresearch/theseus/pull/157
    • Made AutoDiffCostFunction._tmp_optim_vars copies of original by @luisenp in https://github.com/facebookresearch/theseus/pull/155
    • Add forward kinematics using an URDF to theseus.embodied.kinematics. by @exhaustin in https://github.com/facebookresearch/theseus/pull/84
    • Fixed dtype error in se3.py that came up in unit tests by @joeaortiz in https://github.com/facebookresearch/theseus/pull/158
    • Add-ons for backward experiments on Tactile Pose Estimation by @luisenp in https://github.com/facebookresearch/theseus/pull/164
    • Change unit tests to avoid making mypy a main requirement. by @luisenp in https://github.com/facebookresearch/theseus/pull/168
    • Update readme and contrib by @luisenp in https://github.com/facebookresearch/theseus/pull/169
    • Add ManifoldGaussian class for messages in belief propagation by @joeaortiz in https://github.com/facebookresearch/theseus/pull/121
    • More efficient implementation of forward kinematics by @exhaustin in https://github.com/facebookresearch/theseus/pull/175
    • Changing setup virtualenv command. by @luisenp in https://github.com/facebookresearch/theseus/pull/178
    • Updated SDF object in collision cost functions whenever an aux var is updated by @luisenp in https://github.com/facebookresearch/theseus/pull/177
    • Fixed device bug that occurred when merging info in TRUNCATED backward modes by @luisenp in https://github.com/facebookresearch/theseus/pull/181
    • Allow constant inputs to cost functions to be passed as floats by @jeffin07 in https://github.com/facebookresearch/theseus/pull/150
    • Minor changes to core test code. by @luisenp in https://github.com/facebookresearch/theseus/pull/197
    • adding logo by @mhmukadam in https://github.com/facebookresearch/theseus/pull/200
    • Added aliases for Difference and Between by @luisenp in https://github.com/facebookresearch/theseus/pull/199
    • Fixed infinite recursion in GPMotionModel.copy() by @luisenp in https://github.com/facebookresearch/theseus/pull/201
    • Fix bug in diagonal cost weight by @luisenp in https://github.com/facebookresearch/theseus/pull/203
    • Added a check in TheseusFunction that enforces copy() also copies the variables by @luisenp in https://github.com/facebookresearch/theseus/pull/202
    • Fixed bug in Objective.error() that was updating data unncessarily. by @luisenp in https://github.com/facebookresearch/theseus/pull/204
    • DLM gradients by @rtqichen in https://github.com/facebookresearch/theseus/pull/161
    • Setup now uses torch for checking CUDA availability, and CI runs py3.9 tests by @luisenp in https://github.com/facebookresearch/theseus/pull/206
    • Update README to specify python versions and CUDA during install step + by @cpaxton in https://github.com/facebookresearch/theseus/pull/207
    • Vectorization refactor by @luisenp in https://github.com/facebookresearch/theseus/pull/205
    • Implement Robust Cost Function by @luisenp in https://github.com/facebookresearch/theseus/pull/148
    • Added option for auto resetting LUCudaSparseSolver if the batch size needs to change by @luisenp in https://github.com/facebookresearch/theseus/pull/212
    • Address comments on RobustCostFunction implementation by @luisenp in https://github.com/facebookresearch/theseus/pull/213
    • Moved the method that retracts all variables with a given delta to Objective by @luisenp in https://github.com/facebookresearch/theseus/pull/214
    • Fixed flaky unit test for Collision2D jacobians. by @luisenp in https://github.com/facebookresearch/theseus/pull/216
    • Bundle Adjustment using RobustCostFunction by @luisenp in https://github.com/facebookresearch/theseus/pull/149
    • Fixed the cross product bug for SE3.exp_map and SE3.log_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/217
    • Vectorize optimization variables retraction step by @luisenp in https://github.com/facebookresearch/theseus/pull/215
    • Moved Vectorize(objective) to the Optimizer class. by @luisenp in https://github.com/facebookresearch/theseus/pull/218
    • Added on-demand vectorization and also vectorized Objective.error(). by @luisenp in https://github.com/facebookresearch/theseus/pull/221
    • Vectorize PGO by @luisenp in https://github.com/facebookresearch/theseus/pull/211
    • Destroy cusolver context in CusolverLUSolver destructor by @luisenp in https://github.com/facebookresearch/theseus/pull/222
    • Jacobian computation using loop by @fantaosha in https://github.com/facebookresearch/theseus/pull/225
    • Vectorization handles singleton costs by @luisenp in https://github.com/facebookresearch/theseus/pull/226
    • Added close-formed jacobian for DLM perturbation cost. by @luisenp in https://github.com/facebookresearch/theseus/pull/224
    • SE2/SE3/SO3 - consolidate EPS, add dtype-conditioned EPS and add float32 unit tests by @luisenp in https://github.com/facebookresearch/theseus/pull/220
    • Add normalization to Lie group by @fantaosha in https://github.com/facebookresearch/theseus/pull/227
    • Add normalization to Lie group constructor by @fantaosha in https://github.com/facebookresearch/theseus/pull/228
    • Benchmarking PGO on main branch by @fantaosha in https://github.com/facebookresearch/theseus/pull/233
    • Renamed Variable.data as Variable.tensor by @luisenp in https://github.com/facebookresearch/theseus/pull/229
    • More data -> tensor renaming by @luisenp in https://github.com/facebookresearch/theseus/pull/230
    • Added state history tracking by @luisenp in https://github.com/facebookresearch/theseus/pull/234
    • Unified all cost functions so that cost weight is the last non-default argument by @luisenp in https://github.com/facebookresearch/theseus/pull/235
    • Ensure that CHOLMOD python interface casts to 64-bit precision by @luisenp in https://github.com/facebookresearch/theseus/pull/238
    • Add sphinx and readthedocs configuration by @luisenp in https://github.com/facebookresearch/theseus/pull/237
    • Fixed bug in state history for matrix data tensors. by @luisenp in https://github.com/facebookresearch/theseus/pull/240
    • Added isort for examples folder by @luisenp in https://github.com/facebookresearch/theseus/pull/243
    • Avoid batching SDF data, since it's shared by all trajectories. by @luisenp in https://github.com/facebookresearch/theseus/pull/246
    • Fixed device bug in DLM perturbation's jacobians by @luisenp in https://github.com/facebookresearch/theseus/pull/247
    • Benchmark PGO on the main branch by @fantaosha in https://github.com/facebookresearch/theseus/pull/244
    • Added checks to enforce 32- or 64-bit dtype by @luisenp in https://github.com/facebookresearch/theseus/pull/245
    • Update readme by @mhmukadam in https://github.com/facebookresearch/theseus/pull/251
    • Added MANIFEST.in and changed project name to theseus-ai. by @luisenp in https://github.com/facebookresearch/theseus/pull/252
    • Adding simple example by @mhmukadam in https://github.com/facebookresearch/theseus/pull/253
    • Added option to clear cuda cache when vectorization cache is cleared. by @luisenp in https://github.com/facebookresearch/theseus/pull/249
    • Added evaluation directory by @luisenp in https://github.com/facebookresearch/theseus/pull/241

    New Contributors

    • @jeffin07 made their first contribution in https://github.com/facebookresearch/theseus/pull/105
    • @joeaortiz made their first contribution in https://github.com/facebookresearch/theseus/pull/122
    • @exhaustin made their first contribution in https://github.com/facebookresearch/theseus/pull/84
    • @rtqichen made their first contribution in https://github.com/facebookresearch/theseus/pull/161
    • @cpaxton made their first contribution in https://github.com/facebookresearch/theseus/pull/207

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0-b.2...0.1.0

  • 0.1.0-b.2(Feb 1, 2022)

    Major Additions

    • Initial implicit/truncated backward modes by @bamos in https://github.com/facebookresearch/theseus/pull/29
    • Adds support for energy based learning with NLL loss (LEO) by @psodhi in https://github.com/facebookresearch/theseus/pull/30
    • cusolver based batched LU solver by @maurimo in https://github.com/facebookresearch/theseus/pull/22
    • CUDA batch matrix multiplication and ops by @maurimo in https://github.com/facebookresearch/theseus/pull/23
    • CUDA-based solver class and autograd function by @maurimo in https://github.com/facebookresearch/theseus/pull/24

    What Else Changed

    • Added clearer explanation at the end of Tutorial 0 and fixed doc typos by @luisenp in https://github.com/facebookresearch/theseus/pull/2
    • Default SE2/SO2 is zero element rather than torch empty. by @luisenp in https://github.com/facebookresearch/theseus/pull/3
    • Add plots to tutorials by @bamos in https://github.com/facebookresearch/theseus/pull/25
    • update text in Tutorial 2 per issue #27 by @vshobha in https://github.com/facebookresearch/theseus/pull/31
    • Update contrib and add gitattributes by @mhmukadam in https://github.com/facebookresearch/theseus/pull/33
    • update continuous integration by @maurimo in https://github.com/facebookresearch/theseus/pull/21
    • Changed TheseusLayer.forward() to receive optimizer_kwargs as a single dict by @luisenp in https://github.com/facebookresearch/theseus/pull/45
    • [hotfix] fix lint issues by @maurimo in https://github.com/facebookresearch/theseus/pull/54
    • Update version by @mhmukadam in https://github.com/facebookresearch/theseus/pull/63

    New Contributors

    • @luisenp made their first contribution in https://github.com/facebookresearch/theseus/pull/2
    • @bamos made their first contribution in https://github.com/facebookresearch/theseus/pull/25
    • @vshobha made their first contribution in https://github.com/facebookresearch/theseus/pull/31
    • @maurimo made their first contribution in https://github.com/facebookresearch/theseus/pull/21
    • @mhmukadam made their first contribution in https://github.com/facebookresearch/theseus/pull/33
    • @psodhi made their first contribution in https://github.com/facebookresearch/theseus/pull/30

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0-b.1...0.1.0-b.2

  • 0.1.0-b.1(Dec 3, 2021)

    Initial beta release.

    • Core data structures.
    • Vector and 2D rigid body representations.
    • Gauss-Newton and LM nonlinear optimizers.
    • LU and Cholesky dense linear solvers.
    • Cholmod sparse linear solver (CPU only).
    • Cost functions for planar motion planning and tactile estimation in planar pushing.
Owner
Meta Research