Deep Learning and Logical Reasoning from Data and Knowledge

Overview

Logic Tensor Networks (LTN)

Logic Tensor Network (LTN) is a neurosymbolic framework that supports querying, learning and reasoning with both rich data and rich abstract knowledge about the world. LTN uses a differentiable first-order logic language, called Real Logic, to incorporate data and logic.

[Figure: grounding illustration]

LTN converts Real Logic formulas (e.g. ∀x(cat(x) → ∃y(partOf(x,y)∧tail(y)))) into TensorFlow computational graphs. Such formulas can express complex queries about the data, prior knowledge to satisfy during learning, statements to prove, and so on.
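
To make this concrete, the snippet below grounds a simple rule on toy data. It is only a sketch: the predicates A and B, the data, and the hand-written implication and mean-based quantifier are illustrative stand-ins for the operators the library provides in ltn.fuzzy_ops, and exact constructor names can vary between LTN versions.

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # Toy grounding of the rule: forall x ( A(x) -> B(x) )
    x = ltn.variable("x", np.random.uniform(0., 1., size=(50, 2)).astype(np.float32))

    # Two illustrative predicates returning truth degrees in [0, 1].
    A = ltn.Predicate.Lambda(lambda args: tf.math.sigmoid(tf.reduce_sum(args[0], axis=-1)))
    B = ltn.Predicate.Lambda(lambda args: tf.math.sigmoid(-tf.reduce_sum(args[0], axis=-1)))

    # Hand-written Reichenbach implication and a mean-based "forall" aggregator,
    # standing in for the operators provided in ltn.fuzzy_ops.
    implies = lambda a, b: 1. - a + a * b
    forall = lambda truths: tf.reduce_mean(truths)

    sat = forall(implies(A([x]), B([x])))  # a single truth degree in [0, 1] for the whole rule
    print(sat)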

[Figure: computational graph illustration]

LTN can represent and effectively compute some of the most important tasks of deep learning, such as classification, regression, clustering, and link prediction. The "Getting Started" section below links to tutorials and examples of LTN code.

[Paper]

@misc{badreddine2021logic,
      title={Logic Tensor Networks}, 
      author={Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and Michael Spranger},
      year={2021},
      eprint={2012.13635},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Installation

Clone the LTN repository and install it using pip install -e <local project path>.

The following are the dependencies used for development (similar versions should run fine):

  • python 3.8
  • tensorflow >= 2.2 (for running the core system)
  • numpy >= 1.18 (for examples)
  • matplotlib >= 3.2 (for examples)

Repository structure

  • logictensornetworks/core.py -- core system for defining constants, variables, predicates, functions and formulas,
  • logictensornetworks/fuzzy_ops.py -- a collection of fuzzy logic operators defined using TensorFlow primitives,
  • logictensornetworks/utils.py -- a collection of useful functions,
  • tutorials/ -- tutorials to start with LTN,
  • examples/ -- various problems approached using LTN,
  • tests/ -- tests.

Getting Started

Tutorials

tutorials/ contains a walk-through of LTN. In order, the tutorials cover the following topics:

  1. Grounding in LTN part 1: Real Logic, constants, predicates, functions, variables,
  2. Grounding in LTN part 2: connectives and quantifiers (+ complement: choosing appropriate operators for learning),
  3. Learning in LTN: using satisfiability of LTN formulas as a training objective,
  4. Reasoning in LTN: measuring whether a formula is a logical consequence of a knowledge base.

The tutorials are implemented as Jupyter notebooks.

Examples

examples/ contains a series of experiments. Their objective is to show how the language of Real Logic can be used to specify a number of tasks that involve learning from data and reasoning about logical knowledge. Examples of such tasks are: classification, regression, clustering, link prediction.

  • The binary classification example illustrates in the simplest setting how to ground a binary classifier as a predicate in LTN, and how to feed batches of data during training,
  • The multiclass classification examples (single-label, multi-label) illustrate how to ground predicates that can classify samples in several classes,
  • The MNIST digit addition example showcases the power of a neurosymbolic approach in a classification task that only provides groundtruth for some final labels (result of the addition), where LTN is used to provide prior knowledge about intermediate labels (possible digits used in the addition),
  • The regression example illustrates how to ground a regressor as a function symbol in LTN,
  • The clustering example illustrates how LTN can solve a task using first-order constraints only, without any label being given through supervision,
  • The Smokes Friends Cancer example is a classical link prediction problem of Statistical Relational Learning where LTN learns embeddings for individuals based on fuzzy groundtruths and first-order constraints.

The examples are presented with both Jupyter notebooks and Python scripts.

Querying with LTN
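
Once constants, variables and predicates are grounded, querying amounts to evaluating the truth degree of a Real Logic expression on data. Below is a minimal sketch: the Gaussian-style predicate and the constants mirror the wrapper example quoted in the comments further down, and exact API details may differ between LTN versions.

    import tensorflow as tf
    import logictensornetworks as ltn

    mu = tf.constant([2., 3.])
    P = ltn.Predicate.Lambda(
        lambda args: tf.exp(-tf.reduce_sum(tf.square(args[0] - mu), axis=-1)))

    c = ltn.constant([2.1, 3.])
    d = ltn.constant([3.4, 1.5])

    print(P([c]))  # high truth degree: c is close to mu
    print(P([d]))  # lower truth degree: d is farther from mu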

Learning with LTN
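
Learning uses the satisfiability of a knowledge base of formulas as the training objective (see the third tutorial). The snippet below is a minimal sketch on toy data: the classifier, the data, and the mean-based aggregator standing in for the universal quantifier are illustrative assumptions, not the library's exact operators.

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # Toy data: points in the unit square, split into a "positive" and a "negative" region.
    pos = ltn.variable("pos", np.random.uniform(0., .5, size=(32, 2)).astype(np.float32))
    neg = ltn.variable("neg", np.random.uniform(.5, 1., size=(32, 2)).astype(np.float32))

    # A binary classifier grounded as a predicate (cf. the binary classification example).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="elu"),
        tf.keras.layers.Dense(1, activation="sigmoid")])
    A = ltn.Predicate(model)

    forall = lambda truths: tf.reduce_mean(truths)  # simplified universal aggregator
    fuzzy_not = lambda a: 1. - a                    # standard fuzzy negation

    optimizer = tf.keras.optimizers.Adam(0.01)
    for epoch in range(200):
        with tf.GradientTape() as tape:
            # Knowledge base: forall pos: A(pos)   and   forall neg: not A(neg).
            sat = forall(A(pos)) * forall(fuzzy_not(A(neg)))
            loss = 1. - sat                         # maximize satisfiability
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print("satisfiability after training:", sat.numpy())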

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

LTN has been developed thanks to active contributions and discussions with the following people (in alphabetical order):

  • Alessandro Daniele (FBK)
  • Artur d’Avila Garcez (City)
  • Benedikt Wagner (City)
  • Emile van Krieken (VU Amsterdam)
  • Francesco Giannini (UniSiena)
  • Giuseppe Marra (UniSiena)
  • Ivan Donadello (FBK)
  • Lucas Bechberger (UniOsnabruck)
  • Luciano Serafini (FBK)
  • Marco Gori (UniSiena)
  • Michael Spranger (Sony AI)
  • Michelangelo Diligenti (UniSiena)
  • Samy Badreddine (Sony AI)

Comments

  • ValueError: mask cannot be scalar.

    When I try to define an ltn.variable, the following error is returned:

        <ipython-input-11-51fc9a0fab79>:5 axioms *
            bb12_relation = ltn.variable("P",features[labels_position=="P"])
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:600 _slice_helper
            return boolean_mask(tensor=tensor, mask=slice_spec)
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:1365 boolean_mask
            raise ValueError("mask cannot be scalar.")
    
        ValueError: mask cannot be scalar.
    

    Based on the code of multiclass-multilabel.ipynb, I declare the first variable in the axioms function, which returns the error above: ltn.variable("P",features[labels_position=="P"])

    opened by MilenaTenorio 9
  • ltnw: run knowledgebase without training should be possible

    import logging; logging.basicConfig(level=logging.INFO)
    
    import logictensornetworks_wrapper as ltnw
    import tensorflow as tf
    
    ltnw.constant("c",[2.1,3])
    ltnw.constant("d",[3.4,1.5])
    ltnw.function("f",4,2,fun_definition=lambda x,y:x-y)
    mu = tf.constant([2.,3.])
    ltnw.predicate("P",2,pred_definition=lambda x:tf.exp(-tf.reduce_sum(tf.square(x-mu))))
    
    ltnw.formula("P(c)")
    
    ltnw.initialize_knowledgebase()
    
    with tf.Session() as sess:
        print(sess.run(ltnw.ask("P(c)")))
        print(sess.run(ltnw.ask("P(d)")))
        print(sess.run(ltnw.ask("P(f(c,d))")))
    

    Throws ValueError: No variables to optimize.

    bug 
    opened by mspranger 3
  • Lambda for functions needs to be implemented using the Functional API of TF

    Here is what I did:

    import logictensornetworks as ltn
    f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])
    c1 = ltn.constant([2.1,3])
    c2 = ltn.constant([4.5,0.8])
    print(f1([c1,c2])) # multiple arguments are passed as a list
    

    And I get this:

    WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[2.1, 3. ]], dtype=float32)>, <tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[4.5, 0.8]], dtype=float32)>]
    Consider rewriting this model with the Functional API.
    tf.Tensor([-2.4  2.2], shape=(2,), dtype=float32)
    

    Here are the versions:

    tensorflow=2.4.0
    ltn = directly from this repo today (24 Jan 2021)
    
    opened by thoth291 2
  • Check of number_of_features_or_feed of ltn.variable

    opened by ivanDonadello 2
  • ltnw.term: evaluating a term after redeclaring its constants, variables or functions

    The implementation of ltnw.term is incompatible with the redeclaration of constants, variables or functions.

    ltnw.term looks up the result value previously stored in the global dictionary ltnw.TERMS rather than reconstructing the term.

    For instance, the code:

    ltnw.variable('?x',[[3.0,5.0],[2.0,6.0],[3.0,9.0]])
    print('1st call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    
    ltnw.variable('?x',[[3.0,10.0],[1.0,6.0]])
    print('2nd call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    

    outputs:

    1st call
    value of variable:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    2nd call
    value of variable:
    [[ 3. 10.]
     [ 1.  6.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    opened by sbadredd 2
  • Error in the axioms of the clustering example

    Following issues #17 and #20, commit 578d7bcaa35c797ac1c94cf322f0a6ec524beaa2 updated the axioms in the clustering example.

    It introduced a typo in the masks. In pseudo-code, the rules with masks should be:

    for all x,y s.t. close_threshold > distance(x,y): x,y belong to the same cluster
    for all x,y s.t. distance(x,y) > distant_threshold: x,y belong to different cluster
    

    However, the rules have been written:

    for all x,y s.t.  distance(x,y) > close_threshold: x,y belong to the same cluster
    for all x,y s.t. distant_threshold > distance(x,y) : x,y belong to different cluster
    

    Basically, the operands have been mixed up. This explains why the latest results were not as good as the previous ones. This is easy to fix; the operands just have to be swapped back.

    bug 
    opened by sbadredd 1
  • Add runtime Type Checking when constructing expressions

    Issue #19 defined classes for Term and Formula, following the usual definitions of FOL.

    This can be used to type-check the arguments of various functions:

    • The inputs of predicates and functions are instances of Term,
    • The expressions in connectives and quantifier operations are instances of Formula,
    • The masks in quantifiers are instances of Formula.

    This is already indicated in the type hints. Adding a runtime validation would make the API more robust and ensure that the user correctly uses the different LTN classes.
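
    A minimal sketch of what such a validation could look like (Term here stands in for the class defined in issue #19, and the helper name is illustrative):

    class Term:  # placeholder for the Term class from issue #19
        pass

    def check_term_inputs(inputs):
        # Raise early, with a readable message, if a predicate or function
        # receives something that is not a Term.
        for x in inputs:
            if not isinstance(x, Term):
                raise TypeError(f"expected an instance of Term, got {type(x).__name__}")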

    enhancement 
    opened by sbadredd 0
  • Parent classes for Terms and Formulas

    Going further than issue #16, we can define classes for Term and Formula.

    • Variable and Constant would be subclasses of Term
    • The output of a Function is a Term
    • Proposition is a subclass of Formula
    • The output of a Predicate is a Formula, and so is the result of connective and quantifiers operations

    This can in turn be used for type checking the arguments of various functions:

    • The inputs of predicates and functions must be instances of Term
    • The inputs of connective and quantifier operations must be instances of Formula

    This could be useful for helping the user with better error messages and debugging.
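
    In code, the hierarchy sketched above could look like this (names only; the real classes would carry groundings, free variables, and so on):

    class Term: ...
    class Constant(Term): ...
    class Variable(Term): ...

    class Formula: ...
    class Proposition(Formula): ...
    # A Function returns a Term; a Predicate, a connective or a quantifier returns a Formula.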

    enhancement 
    opened by sbadredd 0
  • Add a constructor for variables made from trainable constants

    A variable can be instantiated using two different types of objects:

    • A value (numpy, python list, ...) that will be fed in a tf.constant (the variable refers to a new object).
    • A tf.Tensor instance that will be used directly as the variable (the variable refers to the same object).

    The latter is useful when the variable denotes a sequence of trainable constants.

    c1 = ltn.constant([2.1,3], trainable=True)
    c2 = ltn.constant([4.5,0.8], trainable=True)
    
    with tf.GradientTape() as tape:
        # Notice that the assignation must be done within a tf.GradientTape.
        # Tensorflow will keep track of the gradients between c1/c2 and x.
        x = ltn.variable("x",tf.stack([c1,c2]))
        res = P2(x)
    tape.gradient(res,c1).numpy() # the tape keeps track of gradients between P2(x), x and c1
    

    The assignment must be done within the scope of a tf.GradientTape. This is explained in the tutorials, but a user could easily miss this information.

    I propose to add a constructor for variables built from constants that explicitly takes the tf.GradientTape instance as an argument. In this way, it will be harder to miss.
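
    A possible shape for such a constructor; the name variable_from_constants and its signature below are only a suggestion:

    import tensorflow as tf
    import logictensornetworks as ltn

    def variable_from_constants(label, constants, tape):
        # Requiring the tape in the signature makes the contract explicit: the call
        # still has to happen inside the tape's `with` block so that the stacking
        # operation is recorded for gradient computation.
        if not isinstance(tape, tf.GradientTape):
            raise TypeError("a tf.GradientTape instance is required")
        return ltn.variable(label, tf.stack(constants))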

    enhancement 
    opened by sbadredd 0
  • Support masks using LTN syntax instead of TensorFlow operations

    To use a guarded quantifier in an LTN sentence, the user must write lambda functions in the middle of traditional LTN syntax. In addition, the mask itself is written in TensorFlow syntax, which adds to the confusion.

    For example, in the MNIST single-digit addition example, we have the following mask:

    exists(...,...,
        mask_vars=[d1,d2,labels_z],
        mask_fn=lambda vars: tf.equal(vars[0]+vars[1],vars[2])
    )
    

    If we wrote the mask in LTN syntax, it would read:

    exists(...,...,
        mask= Equal([Add([d1,d2]),labels_z])
    )
    

    I believe the latter is clearer and more coherent within an LTN expression.

    This implies that the user must define extra LTN symbols for Equal and Add. I believe this is worth it, for the sake of clarity. If the user prefers not to do that, they can still reuse the lambda function inside a Mask predicate:

    Mask = ltn.Predicate.Lambda(lambda vars: tf.equal(vars[0]+vars[1],vars[2]))
    ...
    exists(...,...,
        mask=Mask([d1,d2,labels_z])
    )
    

    The mask is still written using an LTN symbol and doesn't require changing the code much compared to the original approach.

    enhancement 
    opened by sbadredd 0
  • Create classes for Variable, Constant and Proposition

    At the moment, LTN implements most expressions using tf.Tensor objects with some added dynamic attributes.

    For example, for a non-trainable LTN constant, the logic is the following (simplified):

    def constant(value):
        result = tf.constant(value)
        result.active_doms = []
        return result
    

    This makes the system easy to break and difficult to debug. When copying or operating on the constant, the user might not realize that a new tensor is created and that the active_doms attribute is lost.

    I propose to separate the logic of LTN from the logic of TensorFlow and to use distinct types. Something like:

    class Constant:
        def __init__(self, value):
            self.tensor = tf.constant(value)
            self.active_doms = []
    

    This implies that LTN predicates and functions will have to be adapted to work with constant.tensor, variable.tensor, ...

    enhancement 
    opened by sbadredd 0
  • Add a ltn.Predicate constructor that takes in a logits model

    Constructors for ltn.Predicate

    The constructor for ltn.Predicate accepts a model that outputs one truth degree in [0,1].

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]
    
        def call(self, x):
            x = self.dense1(x)
            return self.dense2(x)
    
    model = ModelThatOutputsATruthDegree()
    P1 = ltn.Predicate(model)
    P1(x) # -> call with a ltn Variable
    

    Issue

    Many models output several values simultaneously. For example, a model for the predicate P2 classifying images x into n classes type_1, ..., type_n will likely output n logits using the same hidden layers.

    Eventually, we would expect to call the corresponding predicate using the syntax P2(x,type). This requires two additional steps:

    1. Transforming the logits into values in [0,1],
    2. Indexing the class using the term type.

    Because this is a common use-case, we implemented a function ltn.utils.LogitsToPredicateModel for convenience. It is used in some of the examples (cf MNIST digit addition). The syntax is:

    logits_model(x) # how to call `logits_model`
    P2 = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model), single_label=True)
    P2([x,type]) # how to call the predicate
    

    It automatically adds a final argument for class indexing and performs a sigmoid or softmax activation depending on the parameter single_label.

    Proposition

    It would be more elegant to have the functionality of creating a predicate from a logits model as a class constructor for ltn.Predicate.

    A suggested syntax is:

    P2 = ltn.Predicate.FromLogits(logits_model, activation_function="softmax", with_class_indexing=True)
    
    • The functionality comes as a new class constructor,
    • The activation function is more explicit than the single_label parameter in ltn.utils.LogitsToPredicateModel,
    • with_class_indexing=False still allows creating predicates of the form P1(x), as shown above.
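
    A rough sketch of one possible implementation, reusing the existing utility (the helper name predicate_from_logits and the sigmoid fallback branch are illustrative, not part of the current API):

    import tensorflow as tf
    import logictensornetworks as ltn

    def predicate_from_logits(logits_model, activation_function="softmax",
                              with_class_indexing=True):
        """Illustrative stand-in for the proposed ltn.Predicate.FromLogits."""
        if with_class_indexing:
            model = ltn.utils.LogitsToPredicateModel(
                logits_model, single_label=(activation_function == "softmax"))
        else:
            # No class indexing: squash the single logit into a truth degree in [0, 1].
            model = tf.keras.Sequential([logits_model, tf.keras.layers.Activation("sigmoid")])
        return ltn.Predicate(model)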

    Changes to the rest of the API

    The proposition adds a new constructor but shouldn't change any other method of ltn.Predicate or any framework method in general.

    enhancement 
    opened by sbadredd 1
  • Weighted connective operators

    Hello,

    In my project, I needed to use weighted connective fuzzy logic operators. So, I implemented a class that makes it possible to add weights to classic fuzzy operators, based on this paper: https://www.researchgate.net/publication/2610015_The_Weighting_Issue_in_Fuzzy_Logic

    I think it may be useful for other people, or could even be added to the ltn operators, so here is my code:

    from __future__ import annotations  # allows the `list[float] | None` hints on Python < 3.10
    from typing import Callable

    import logictensornetworks as ltn

    class WeightedConnective:
        """Class to compute a weighted connective fuzzy operator."""
    
        def __init__(self, single_connective: Callable = ltn.fuzzy_ops.And_Prod()):
            """Initialize WeightedConnective.
    
            Parameters
            ----------
            single_connective : Callable
                Function to compute the binary operation
            """
            self.single_connective = single_connective
    
        def __call__(self, *args: float, weights: list[float] | None = None) -> float:
            """Call function of WeightedConnective.
    
            Parameters
            ----------
            *args : float
                Truth values whose operation should be computed
            weights : list[float] | None
                List of weights for the predicates, None if all predicates should be weighted
                equally, default: None
    
            Returns
            -------
            float:
                Truth value of weighted connective operation between predicates
    
            Raises
            ------
            ValueError
                If no predicate was provided
            ValueError
                If the number of predicates and the number of weights are different
            """
            n = len(args)
            if n == 0:
                raise ValueError("No predicate was found")
            if n == 1:
                return args[0]
            if weights is None:
                weights = [1. / n for _ in range(n)]
            if len(weights) != n:
                raise ValueError(
                    f"Numbers of predicates and weights should be equal : {n} predicates and "
                    f"{len(weights)} weights were found")
    
            s = sum(weights)
            if s != 0:
                weights = [elt / s for elt in weights]
    
            w = max(weights)
            res = (weights[0] / w) * args[0]
            for i, x in enumerate(args):
                if i != 0:
                    res = self.single_connective(res, (weights[i] / w) * args[i])
            return res
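
    For example, weighting the first truth value three times as much as the second (values chosen arbitrarily; this assumes And_Prod accepts plain floats, otherwise wrap them in tf.constant):

    and_w = WeightedConnective(ltn.fuzzy_ops.And_Prod())
    print(and_w(0.9, 0.4, weights=[3., 1.]))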
    
    enhancement 
    opened by maelle101 1
  • Saving LTN model

    Hello,

    I am working on a project using LTN. I train a model with several neural networks (the number varies between executions). Is there an easy way to save and then load an entire LTN model? Or should I use the TensorFlow saving functions several times and store the other information (for example, which Predicate corresponds to each NN) in a custom way?

    Thanks in advance for any answer, and thanks for this great framework.

    opened by maelle101 3
  • Imbalanced classification

    First, thank you for this great framework. My question is: what is the best way to define variables for imbalanced classification (with a lot of categories), given that in each batch some of them might be empty? Thank you!

    opened by mpourvali 3
  • Allow to permanently `diag` variables

    Diagonal quantification

    Given 2 (or more) variables, ltn.diag makes it possible to express statements about specific pairs (or tuples) of the variables, such that the i-th tuple contains the i-th instances of the variables.

    In simplified pseudo-code, the usual quantification would compute:

    for x_i in x:
        for y_j in y:
            results.append(P(x_i,y_j))
    aggregate(results)
    

    In contrast, diagonal quantification would compute:

    for x_i, y_i in zip(x,y):
        results.append(P(x_i,y_i))
    aggregate(results)
    

    In LTN code, given two variables x1 and x2, we use diagonal quantification as follows:

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    P = ltn.Predicate(...)
    P([x1,x2]) # -> returns 10x10 values
    ltn.diag(x1,x2)
    P([x1,x2]) # -> returns only 10 "zipped" values
    ltn.undiag(x1,x2)
    P([x1,x2]) # -> returns 10x10 values
    

    See also the second tutorial.

    Issue

    At the moment, every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

    Forall(ltn.diag(x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of 10x10 values
    

    However, there are cases where the second (normal) behavior for the two variables x1 and x2 is never useful. Some variables are designed from the start to be used as paired, zipped variables. In that case, forcing the user to re-use the keyword ltn.diag at every quantification is redundant.

    Proposition

    Define a new keyword ltn.diag_lock which can be used once at the instantiation of the variables, and will force the diag behavior in every subsequent quantification. ltn.undiag will not be called after an aggregation.

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    ltn.diag_lock([x1,x2])
    P([x1,x2]) # -> returns only 10 "zipped" values
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> still returns an aggregate of only 10 "zipped values"
    

    Possibly, we can add an ltn.undiag_lock too.

    The implementation details are left to define but shouldn't change the rest of the API.

    enhancement 
    opened by sbadredd 0
  • automated translation of tptp problems to ltn axioms

    Hello,

    We are trying to automatically translate TPTP problems into axioms computable by LTN. Errors occur when applying the gradient tape in the training step, because variables are initialized outside of the tape scope, as described in the tutorial notebooks. Is there by any chance already an implementation (or one in the works) to translate a logic problem (written in some intermediate language) into LTN-readable axioms?

    Best, Philip

    opened by phjlip 1
Releases: v2.0