Deep Learning and Logical Reasoning from Data and Knowledge

Overview

Logic Tensor Networks (LTN)

Logic Tensor Network (LTN) is a neurosymbolic framework that supports querying, learning and reasoning with both rich data and rich abstract knowledge about the world. LTN uses a differentiable first-order logic language, called Real Logic, to incorporate data and logic.

[Figure: grounding illustration]

LTN converts Real Logic formulas (e.g. ∀x(cat(x) → ∃y(partOf(x,y)∧tail(y)))) into TensorFlow computational graphs. Such formulas can express complex queries about the data, prior knowledge to satisfy during learning, statements to prove ...

[Figure: computational graph of the formula above]

LTN can represent and effectively compute many of the most important tasks of deep learning, such as classification, regression, clustering, and link prediction. The "Getting Started" section below links to tutorials and examples of LTN code.
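As a flavor of how a formula becomes a TensorFlow computation, here is a minimal sketch. It relies only on the And_Prod operator from logictensornetworks/fuzzy_ops.py (listed under "Repository structure" below); the truth values are illustrative assumptions, not the output of a trained model:

    import tensorflow as tf
    import logictensornetworks as ltn

    # illustrative truth degrees of two atomic formulas, e.g. cat(c) and hasTail(c)
    cat_c = tf.constant(0.9)
    has_tail_c = tf.constant(0.7)

    # ground the conjunction with the product t-norm from ltn.fuzzy_ops
    And = ltn.fuzzy_ops.And_Prod()
    print(And(cat_c, has_tail_c))  # truth degree of cat(c) ∧ hasTail(c) -> 0.63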

[Paper]

@misc{badreddine2021logic,
      title={Logic Tensor Networks}, 
      author={Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and Michael Spranger},
      year={2021},
      eprint={2012.13635},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Installation

Clone the LTN repository and install it using pip install -e <local project path>.

The following are the dependencies we used for development (similar versions should run fine):

  • python 3.8
  • tensorflow >= 2.2 (for running the core system)
  • numpy >= 1.18 (for examples)
  • matplotlib >= 3.2 (for examples)

Repository structure

  • logictensornetworks/core.py -- core system for defining constants, variables, predicates, functions and formulas,
  • logictensornetworks/fuzzy_ops.py -- a collection of fuzzy logic operators defined using TensorFlow primitives,
  • logictensornetworks/utils.py -- a collection of useful functions,
  • tutorials/ -- tutorials to start with LTN,
  • examples/ -- various problems approached using LTN,
  • tests/ -- tests.

Getting Started

Tutorials

tutorials/ contains a walk-through of LTN. In order, the tutorials cover the following topics:

  1. Grounding in LTN part 1: Real Logic, constants, predicates, functions, variables,
  2. Grounding in LTN part 2: connectives and quantifiers (+ complement: choosing appropriate operators for learning),
  3. Learning in LTN: using satisfiability of LTN formulas as a training objective,
  4. Reasoning in LTN: measuring if a formula is a logical consequence of a knowledge base.

The tutorials are implemented as Jupyter notebooks.

Examples

examples/ contains a series of experiments. Their objective is to show how the language of Real Logic can be used to specify a number of tasks that involve learning from data and reasoning about logical knowledge. Examples of such tasks are: classification, regression, clustering, link prediction.

  • The binary classification example illustrates in the simplest setting how to ground a binary classifier as a predicate in LTN, and how to feed batches of data during training,
  • The multiclass classification examples (single-label, multi-label) illustrate how to ground predicates that can classify samples in several classes,
  • The MNIST digit addition example showcases the power of a neurosymbolic approach in a classification task where ground truth is only available for the final labels (the result of the addition), while LTN provides prior knowledge about the intermediate labels (the possible digits used in the addition),
  • The regression example illustrates how to ground a regressor as a function symbol in LTN,
  • The clustering example illustrates how LTN can solve a task using first-order constraints only, without any label being given through supervision,
  • The Smokes Friends Cancer example is a classical link prediction problem of Statistical Relational Learning where LTN learns embeddings for individuals based on fuzzy ground truths and first-order constraints.

The examples are presented as both Jupyter notebooks and Python scripts.

Querying with LTN
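A minimal querying sketch, built only from constructs that appear in the issues further down this page (ltn.variable, ltn.Predicate.Lambda); treat the exact signatures as assumptions and see the first two tutorials for the canonical API:

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # a fuzzy predicate "x is close to mu", grounded by a lambda
    mu = tf.constant([2., 3.])
    Close = ltn.Predicate.Lambda(lambda x: tf.exp(-tf.reduce_sum(tf.square(x - mu), axis=-1)))

    # a variable grounded by a batch of three individuals in R^2
    x = ltn.variable("x", np.array([[2.1, 3.0], [4.5, 0.8], [0.0, 0.0]], dtype=np.float32))

    print(Close(x))                  # one truth degree per individual in x
    print(tf.reduce_mean(Close(x)))  # a simple aggregate, standing in for "forall x: Close(x)"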

Learning with LTN
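A minimal learning sketch: the predicate is grounded by a small Keras model (as in the "logits model" issue below) and trained to maximize the satisfiability of a single axiom. The aggregator (tf.reduce_mean) and the training loop are simplifying assumptions, not the repository's own scripts:

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)

        def call(self, x):
            return self.dense2(self.dense1(x))

    model = ModelThatOutputsATruthDegree()
    A = ltn.Predicate(model)
    data = np.random.rand(50, 2).astype(np.float32)   # toy examples that should satisfy A
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

    # maximize the satisfiability of the axiom "forall x: A(x)"
    for epoch in range(200):
        with tf.GradientTape() as tape:
            x = ltn.variable("x", data)
            sat = tf.reduce_mean(A(x))      # mean stands in for a fuzzy universal aggregator
            loss = 1.0 - sat
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))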

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

LTN has been developed thanks to active contributions and discussions with the following people (in alphabetical order):

  • Alessandro Daniele (FBK)
  • Artur d’Avila Garcez (City)
  • Benedikt Wagner (City)
  • Emile van Krieken (VU Amsterdam)
  • Francesco Giannini (UniSiena)
  • Giuseppe Marra (UniSiena)
  • Ivan Donadello (FBK)
  • Lucas Bechberger (UniOsnabruck)
  • Luciano Serafini (FBK)
  • Marco Gori (UniSiena)
  • Michael Spranger (Sony AI)
  • Michelangelo Diligenti (UniSiena)
  • Samy Badreddine (Sony AI)

Comments
  • ValueError: mask cannot be scalar.

    When I try to define an ltn.variable, the following error is returned:

        <ipython-input-11-51fc9a0fab79>:5 axioms *
            bb12_relation = ltn.variable("P",features[labels_position=="P"])
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:600 _slice_helper
            return boolean_mask(tensor=tensor, mask=slice_spec)
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:1365 boolean_mask
            raise ValueError("mask cannot be scalar.")
    
        ValueError: mask cannot be scalar.
    

    Based on the code of multiclass-multilabel.ipynb, I declare the first variable in the axioms function, and this declaration returns the mentioned error: ltn.variable("P",features[labels_position=="P"])
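    A likely cause (an assumption, not confirmed in this thread) is labels_position being a plain Python list: the comparison labels_position == "P" then evaluates to a single boolean instead of an element-wise mask, which is exactly the scalar mask the traceback complains about. Converting it to a NumPy array restores element-wise comparison:

        import numpy as np

        labels_position = np.array(labels_position)   # element-wise "==", no longer a scalar
        bb12_relation = ltn.variable("P", features[labels_position == "P"])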

    opened by MilenaTenorio 9
  • ltnw: run knowledgebase without training should be possible

    import logging; logging.basicConfig(level=logging.INFO)
    
    import logictensornetworks_wrapper as ltnw
    import tensorflow as tf
    
    ltnw.constant("c",[2.1,3])
    ltnw.constant("d",[3.4,1.5])
    ltnw.function("f",4,2,fun_definition=lambda x,y:x-y)
    mu = tf.constant([2.,3.])
    ltnw.predicate("P",2,pred_definition=lambda x:tf.exp(-tf.reduce_sum(tf.square(x-mu))))
    
    ltnw.formula("P(c)")
    
    ltnw.initialize_knowledgebase()
    
    with tf.Session() as sess:
        print(sess.run(ltnw.ask("P(c)")))
        print(sess.run(ltnw.ask("P(d)")))
        print(sess.run(ltnw.ask("P(f(c,d))")))
    

    Throws ValueError: No variables to optimize.

    bug 
    opened by mspranger 3
  • Lambda for functions needs to be implemented using the Functional API of TF

    Here is what I did:

    import logictensornetworks as ltn
    f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])
    c1 = ltn.constant([2.1,3])
    c2 = ltn.constant([4.5,0.8])
    print(f1([c1,c2])) # multiple arguments are passed as a list
    

    And I get this:

    WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[2.1, 3. ]], dtype=float32)>, <tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[4.5, 0.8]], dtype=float32)>]
    Consider rewriting this model with the Functional API.
    tf.Tensor([-2.4  2.2], shape=(2,), dtype=float32)
    

    Here are the versions:

    tensorflow=2.4.0
    ltn = directly from this repo today (24 Jan 2021)
    
    opened by thoth291 2
  • Check of number_of_features_or_feed of ltn.variable

    opened by ivanDonadello 2
  • ltnw.term: evaluating a term after redeclaring its constants, variables or functions

    The implementation of ltnw.term is incompatible with the redeclaration of constants, variables or functions.

    ltnw.term looks at the result value previously stored in the global dictionary ltnw.TERMS rather than reconstructing the term.

    For instance, the code:

    ltnw.variable('?x',[[3.0,5.0],[2.0,6.0],[3.0,9.0]])
    print('1st call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    
    ltnw.variable('?x',[[3.0,10.0],[1.0,6.0]])
    print('2nd call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    

    outputs:

    1st call
    value of variable:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    2nd call
    value of variable:
    [[ 3. 10.]
     [ 1.  6.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    opened by sbadredd 2
  • Error in the axioms of the clustering example

    Following issues #17 and #20, commit 578d7bcaa35c797ac1c94cf322f0a6ec524beaa2 updated the axioms in the clustering example.

    It introduced a typo in the masks. In pseudo-code the rules with masks should be:

    for all x,y s.t. close_threshold > distance(x,y): x,y belong to the same cluster
    for all x,y s.t. distance(x,y) > distant_threshold: x,y belong to different cluster
    

    However, the rules have been written:

    for all x,y s.t.  distance(x,y) > close_threshold: x,y belong to the same cluster
    for all x,y s.t. distant_threshold > distance(x,y) : x,y belong to different cluster
    

    Basically, the operands have been mixed up. This explains why the latest results were not as good as the previous ones. This is easy to fix; the operands just have to be interchanged again.
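    For reference, the corrected masks might look like this in code; eucl_dist, the threshold values and the mask_fn style are borrowed from the guarded-quantifier syntax discussed further below and are illustrative, not the example's actual source:

        import tensorflow as tf

        close_threshold, distant_threshold = 0.2, 1.0          # illustrative values
        eucl_dist = lambda a, b: tf.norm(a - b, axis=-1)        # Euclidean distance helper

        # forall x,y s.t. close_threshold > distance(x,y): same cluster
        same_cluster_mask = lambda vars: close_threshold > eucl_dist(vars[0], vars[1])
        # forall x,y s.t. distance(x,y) > distant_threshold: different clusters
        diff_cluster_mask = lambda vars: eucl_dist(vars[0], vars[1]) > distant_threshold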

    bug 
    opened by sbadredd 1
  • Add runtime Type Checking when constructing expressions

    Issue #19 defined classes for Term and Formula, following the usual definitions of first-order logic.

    This can be used to type-check the arguments of various functions:

    • The inputs of predicates and functions are instances of Term,
    • The expressions in connectives and quantifier operations are instances of Formula,
    • The masks in quantifiers are instances of Formula.

    This is already indicated in the type hints. Adding runtime validation would make for a stronger API and ensure that the user uses the different LTN classes correctly.
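    A minimal sketch of what that runtime validation could look like (the helper name is hypothetical; Term and Formula are the classes from issue #19):

        def _assert_all_instances(args, expected_type):
            """Raise a readable error when an argument has the wrong LTN type."""
            for arg in args:
                if not isinstance(arg, expected_type):
                    raise TypeError(
                        f"Expected an instance of {expected_type.__name__}, "
                        f"got {type(arg).__name__} instead.")

    Predicates and functions would call _assert_all_instances(inputs, Term) on their inputs, and connectives and quantifiers would call it with Formula.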

    enhancement 
    opened by sbadredd 0
  • Parent classes for Terms and Formulas

    Going further than issue #16, we can define classes for Term and Formula.

    • Variable and Constant would be subclasses of Term
    • The output of a Function is a Term
    • Proposition is a subclass of Formula
    • The output of a Predicate is a Formula, and so is the result of connective and quantifier operations

    This can in turn be used for type checking the arguments of various functions:

    • The inputs of predicates and functions must be instances of Term
    • The inputs of connective and quantifier operations must be instances of Formula

    This could help the user with better error messages and easier debugging.
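    A sketch of the proposed hierarchy (constructors and grounding logic omitted):

        class Term: ...
        class Constant(Term): ...
        class Variable(Term): ...

        class Formula: ...
        class Proposition(Formula): ...

    Functions would then return Term instances, and predicates, connectives and quantifiers would return Formula instances.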

    enhancement 
    opened by sbadredd 0
  • Add a constructor for variables made from trainable constants

    A variable can be instantiated using two different types of objects:

    • A value (numpy, python list, ...) that will be fed in a tf.constant (the variable refers to a new object).
    • A tf.Tensor instance that will be used directly as the variable (the variable refers to the same object).

    The latter is useful when the variable denotes a sequence of trainable constants.

    c1 = ltn.constant([2.1,3], trainable=True)
    c2 = ltn.constant([4.5,0.8], trainable=True)
    
    with tf.GradientTape() as tape:
        # Notice that the assignation must be done within a tf.GradientTape.
        # Tensorflow will keep track of the gradients between c1/c2 and x.
        x = ltn.variable("x",tf.stack([c1,c2]))
        res = P2(x)
    tape.gradient(res,c1).numpy() # the tape keeps track of gradients between P2(x), x and c1
    

    The assignment must be done within a tf.GradientTape. This is explained in the tutorials, but a user could easily miss this information.

    I propose adding a constructor for variables from constants that explicitly takes the tf.GradientTape instance as an argument. This way, the requirement is harder to miss.
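    A sketch of what such a constructor could look like (the name and signature are hypothetical):

        def variable_from_constants(label, constants, tape):
            """Build a variable from trainable constants; requiring the tape makes the constraint explicit."""
            if tape is None:
                raise ValueError("A tf.GradientTape is required so gradients can flow back to the constants.")
            return ltn.variable(label, tf.stack(constants))

        # usage, inside the tape as before:
        # with tf.GradientTape() as tape:
        #     x = variable_from_constants("x", [c1, c2], tape)
        #     res = P2(x)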

    enhancement 
    opened by sbadredd 0
  • Support masks using LTN syntax instead of TensorFlow operations

    To use a guarded quantifier in an LTN sentence, the user must currently use lambda functions in the middle of traditional LTN syntax, and the mask itself is written with TensorFlow syntax, which adds to the confusion.

    For example, in the MNIST single-digit addition example, we have the following mask:

    exists(...,...,
        mask_vars=[d1,d2,labels_z],
        mask_fn=lambda vars: tf.equal(vars[0]+vars[1],vars[2])
    )
    

    If we wrote the mask in LTN syntax instead, it would read:

    exists(...,...,
        mask= Equal([Add([d1,d2]),labels_z])
    )
    

    I believe the latter is clearer and more coherent within an LTN expression.

    This implies that the user must define extra LTN symbols for Equal and Add. I believe this is worth it, for the sake of clarity. If the user doesn't want to do that, they can still re-use the lambda function inside a Mask predicate:

    Mask = ltn.Predicate.Lambda(lambda vars: tf.equal(vars[0]+vars[1],vars[2]))
    ...
    exists(...,...,
        mask=Mask([d1,d2,labels_z])
    )
    

    The mask is still written using an LTN symbol and doesn't require changing the code much compared to the original approach.

    enhancement 
    opened by sbadredd 0
  • Create classes for Variable, Constant and Proposition

    At the moment, LTN implements most expressions using tf.Tensor objects with some added dynamic attributes.

    For example, for a non-trainable LTN constant, the logic is the following (simplified):

    def constant(value):
        result = tf.constant(value)
        result.active_doms = []
        return result
    

    This makes the system easy to break, and debugging difficult. When copying or operating with the constant, the user might not realize that a new tensor is created and the active_doms attribute is lost.

    I propose to separate the logic of LTN from the logic of TensorFlow and to use distinct types. Something like:

    class Constant:
        def __init__(self, value):
            self.tensor = tf.constant(value)
            self.active_doms = []
    

    This implies that LTN predicates and functions will have to be adapted to work with constant.tensor, variable.tensor, ...

    enhancement 
    opened by sbadredd 0
  • Add an ltn.Predicate constructor that takes in a logits model

    Constructors for ltn.Predicate

    The constructor for ltn.Predicate accepts a model that outputs one truth degree in [0,1].

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]
    
        def call(self, x):
            x = self.dense1(x)
            return self.dense2(x)
    
    model = ModelThatOutputsATruthDegree()
    P1 = ltn.Predicate(model)
    P1(x) # -> call with a ltn Variable
    

    Issue

    Many models output several values simultaneously. For example, a model for the predicate P2 classifying images x into n classes type_1, ..., type_n will likely output n logits using the same hidden layers.

    Eventually, we would expect to call the corresponding predicate using the syntax P2(x,type). This requires two additional steps:

    1. Transforming the logits into values in [0,1],
    2. Indexing the class using the term type.

    Because this is a common use-case, we implemented a function ltn.utils.LogitsToPredicateModel for convenience. It is used in some of the examples (cf MNIST digit addition). The syntax is:

    logits_model(x) # how to call `logits_model`
    P2 = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model), single_label=True)
    P2([x,type]) # how to call the predicate
    

    It automatically adds a final argument for class indexing and performs a sigmoid or softmax activation depending on the parameter single_label.

    Proposal

    It would be more elegant to have the functionality of creating a predicate from a logits model as a class constructor for ltn.Predicate.

    A suggested syntax is:

    P2 = ltn.Predicate.FromLogits(logits_model, activation_function="softmax", with_class_indexing=True)
    
    • The functionality comes as a new class constructor,
    • The activation function is more explicit than the single_label parameter in ltn.utils.LogitsToPredicateModel,
    • with_class_indexing=False still allows creating predicates in the form P1(x), as shown above.

    Changes to the rest of the API

    The proposal adds a new constructor but shouldn't change any other method of ltn.Predicate or any framework method in general.

    enhancement 
    opened by sbadredd 1
  • Weighted connective operators

    Hello,

    In my project, I needed weighted fuzzy logic connectives, so I implemented a class that adds weights to the classic fuzzy operators, based on this paper: https://www.researchgate.net/publication/2610015_The_Weighting_Issue_in_Fuzzy_Logic

    I think it may be useful to other people, or could even be added to the ltn operators, so here is my code:

    from __future__ import annotations  # allows the list[float] | None hints below on Python < 3.10
    from typing import Callable

    import logictensornetworks as ltn

    class WeightedConnective:
        """Class to compute a weighted connective fuzzy operator."""
    
        def __init__(self, single_connective: Callable = ltn.fuzzy_ops.And_Prod()):
            """Initialize WeightedConnective.
    
            Parameters
            ----------
            single_connective : Callable
                Function to compute the binary operation
            """
            self.single_connective = single_connective
    
        def __call__(self, *args: float, weights: list[float] | None = None) -> float:
            """Call function of WeightedConnective.
    
            Parameters
            ----------
            *args : float
                Truth values whose operation should be computed
            weights : list[float] | None
                List of weights for the predicates, None if all predicates should be weighted
                equally, default: None
    
            Returns
            -------
            float:
                Truth value of weighted connective operation between predicates
    
            Raises
            ------
            ValueError
                If no predicate was provided
            ValueError
                If the number of predicates and the number of weights are different
            """
            n = len(args)
            if n == 0:
                raise ValueError("No predicate was found")
            if n == 1:
                return args[0]
            if weights is None:
                weights = [1. / n for _ in range(n)]
            if len(weights) != n:
                raise ValueError(
                    f"Numbers of predicates and weights should be equal : {n} predicates and "
                    f"{len(weights)} weights were found")
    
            s = sum(weights)
            if s != 0:
                weights = [elt / s for elt in weights]
    
            w = max(weights)
            res = (weights[0] / w) * args[0]
            for i, x in enumerate(args):
                if i != 0:
                    res = self.single_connective(res, (weights[i] / w) * args[i])
            return res
    
    enhancement 
    opened by maelle101 1
  • Saving LTN model

    Hello,

    I am working on a project using LTN. I train a model with several neural networks (the number varies between executions). Is there an easy way to save and then load an entire LTN model? Or should I call the TensorFlow saving functions several times and store the other information (for example, which Predicate corresponds to each NN) in a custom way?

    Thanks in advance for any answer, and thanks for this great framework.
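    For reference, a minimal sketch of the "custom way" mentioned in the question (an assumption, not an official LTN feature): save the weights of each underlying Keras model separately, together with an index of predicate names:

        import json

        def save_ltn_models(predicate_models, prefix="ltn_checkpoint"):
            """predicate_models: dict mapping predicate names to their tf.keras models (hypothetical helper)."""
            for name, model in predicate_models.items():
                model.save_weights(f"{prefix}_{name}")      # TF checkpoint format, one per predicate
            with open(f"{prefix}_index.json", "w") as f:
                json.dump(sorted(predicate_models), f)       # remember which file belongs to which predicate

        def load_ltn_models(predicate_models, prefix="ltn_checkpoint"):
            """Restore weights into models that were rebuilt with the same architectures."""
            for name, model in predicate_models.items():
                model.load_weights(f"{prefix}_{name}")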

    opened by maelle101 3
  • Imbalanced classification

    First, thank you for this great framework. My question is: what is the best way to define variables for imbalanced classification (with a lot of categories), where in each batch some of them might be empty? Thank you!

    opened by mpourvali 3
  • Allow to permanently `diag` variables

    Diagonal quantification

    Given 2 (or more) variables, ltn.diag allows expressing statements about specific pairs (or tuples) of the variables, such that the i-th tuple contains the i-th instances of the variables.

    In simplified pseudo-code, the usual quantification would compute:

    for x_i in x:
        for y_j in y:
            results.append(P(x_i,y_j))
    aggregate(results)
    

    In contrast, diagonal quantification would compute:

    for x_i, y_i in zip(x,y):
        results.append(P(x_i,y_i))
    aggregate(results)
    

    In LTN code, given two variables x1 and x2, we use diagonal quantification as follows:

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    P = ltn.Predicate(...)
    P([x1,x2]) # -> returns 10x10 values
    ltn.diag(x1,x2)
    P([x1,x2]) # -> returns only 10 "zipped" values
    ltn.undiag(x1,x2)
    P([x1,x2]) # -> returns 10x10 values
    

    See also the second tutorial.

    Issue

    At the moment, every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

    Forall(ltn.diag(x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of 10x10 values
    

    However, there are cases where the second (normal) behavior for the two variables x1 and x2 is never useful. Some variables are designed from the start to be used as paired, zipped variables. In that case, forcing the user to re-use the keyword ltn.diag at every quantification is redundant.

    Proposal

    Define a new keyword ltn.diag_lock which can be used once at the instantiation of the variables, and will force the diag behavior in every subsequent quantification. ltn.undiag will not be called after an aggregation.

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    ltn.diag_lock([x1,x2])
    P([x1,x2]) # -> returns only 10 "zipped" values
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> still returns an aggregate of only 10 "zipped values"
    

    Possibly, we can add an ltn.undiag_lock too.

    The implementation details are left to define but shouldn't change the rest of the API.
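    One possible implementation sketch (function names follow the proposal; the module-level registry is an assumption, chosen to avoid relying on attributes of the underlying tensors):

        _DIAG_LOCKED = set()

        def diag_lock(variables):
            """Pair the variables once and remember that they should stay paired."""
            ltn.diag(*variables)
            _DIAG_LOCKED.update(id(var) for var in variables)

        def _undiag_unless_locked(*variables):
            """Quantifiers would call this instead of ltn.undiag after aggregating."""
            unlocked = [var for var in variables if id(var) not in _DIAG_LOCKED]
            if unlocked:
                ltn.undiag(*unlocked)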

    enhancement 
    opened by sbadredd 0
  • Automated translation of TPTP problems to LTN axioms

    Hello,

    we're trying to automatically translate TPTP problems into axioms computable by LTN. Errors occur when applying the gradient tape in the training step because variables are initialized outside the tape scope, as described in the tutorial notebooks. Is there by any chance already an implementation (or one in the works) that translates a logic problem (written in some intermediate language) into LTN-readable axioms?

    Best, Philip

    opened by phjlip 1
Releases: v2.0