A lightweight python AUTOmatic-arRAY library.

Overview

autoray

A lightweight python AUTOmatic-arRAY library. Write numeric code that works for numpy, pytorch, tensorflow, jax, cupy, dask, mars, sparse and more.


As an example consider this function that orthogonalizes a matrix using the modified Gram-Schmidt algorithm:

from autoray import do

def modified_gram_schmidt(X):
    # n.b. performance-wise this particular function is *not*
    # a good candidate for a pure python implementation

    Q = []
    for j in range(0, X.shape[0]):

        q = X[j, :]
        for i in range(0, j):
            rij = do('tensordot', do('conj', Q[i]), q, 1)
            q = q - rij * Q[i]

        rjj = do('linalg.norm', q, 2)
        Q.append(q / rjj)

    return do('stack', Q, axis=0)

This is now compatible with all of the above-mentioned libraries! Abstracting out the array interface also allows the following functionality:

  • swap custom versions of functions for specific backends
  • trace through computations lazily without actually running them
  • automatically share intermediates and fold constants in computations
  • compile functions with a unified interface for different backends

... all implemented in a lightweight manner with an emphasis on minimizing overhead. Of course complete compatibility is not going to be possible for all functions, operations and libraries, but autoray hopefully makes the job much easier. Of the above, tensorflow has quite a different interface and pytorch is probably the most different. Whilst, for example, not every function will work out-of-the-box for these two, autoray is also designed with the easy addition of new functions in mind (for example, adding new translations is often a one-liner).


Basic Usage

How does it work?

autoray works using essentially a single dispatch mechanism on the first argument of do, or on the like keyword argument if specified, fetching functions from whichever module defined the supplied array. Additionally, it caches a few custom translations and lookups so as to handle libraries like tensorflow that don't exactly replicate the numpy api (for example sum gets translated to tensorflow.reduce_sum). Due to the caching, each do call only adds 1 or 2 dict look-ups as overhead - much less than using functools.singledispatch, for example.
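
For example, a quick illustration of the dispatch using the numpy backend (the same 'sum' call on a tensorflow tensor would be routed to tensorflow.reduce_sum, as noted above):

from autoray import do, infer_backend
import numpy as np

x = np.ones((2, 3))

infer_backend(x)
# 'numpy'

do('sum', x)  # dispatches to numpy.sum
# 6.0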

Essentially you call your numpy-style array functions in one of four ways:

1. Automatic backend:

do('sqrt', x)

Here the backend is inferred from x. Usually dispatch happens on the first argument, but several functions (such as stack and einsum) know to override this and look elsewhere.

2. Backend 'like' another array:

do('random.normal', size=(2, 3, 4), like=x)

Here the backend is inferred from another array and can thus be implicitly propagated, even when functions take no array arguments.

3. Explicit backend:

do('einsum', eq, x, y, like='customlib')

Here one simply supplies the desired function backend explicitly.

4. Context manager

from autoray import backend_like

with backend_like('autoray.lazy'):
    xy = do('tensordot', x, y, 1)
    z = do('trace', xy)

Here you set a default backend for a whole block of code. This default overrides method 1 above, but methods 2 and 3 still take precedence.

If you don't like the explicit do syntax, then you can import the fake numpy object as a drop-in replacement instead:

from autoray import numpy as np

x = np.random.uniform(size=(2, 3, 4), like='tensorflow')
np.tensordot(x, x, [(2, 1), (2, 1)])
# (a tensorflow Tensor of shape (2, 2))

np.eye(3, like=x)  # many functions obviously can't dispatch without the `like` keyword
# (a 3x3 tensorflow Tensor - the identity matrix)

Customizing functions

If you want to directly provide a missing or alternative implementation of some function for a particular backend you can swap one in with autoray.register_function:

import autoray as ar

def my_custom_torch_svd(x):
    import torch

    print('Hello SVD!')
    u, s, v = torch.svd(x)

    return u, s, v.T

ar.register_function('torch', 'linalg.svd', my_custom_torch_svd)

x = ar.do('random.uniform', size=(3, 4), like='torch')

ar.do('linalg.svd', x)
# Hello SVD!
# (tensor([[-0.5832,  0.6188, -0.5262],
#          [-0.5787, -0.7711, -0.2655],
#          [-0.5701,  0.1497,  0.8078]]),
#  tensor([2.0336, 0.8518, 0.4572]),
#  tensor([[-0.4568, -0.3166, -0.6835, -0.4732],
#          [-0.5477,  0.2825, -0.2756,  0.7377],
#          [ 0.2468, -0.8423, -0.0993,  0.4687]]))

If you want to make use of the existing function you can supply wrap=True in which case the custom function supplied should act like a decorator:

def my_custom_sum_wrapper(old_fn):

    def new_fn(*args, **kwargs):
        print('Hello sum!')
        return old_fn(*args, **kwargs)

    return new_fn

ar.register_function('torch', 'sum', my_custom_sum_wrapper, wrap=True)

ar.do('sum', x)
# Hello sum!
# tensor(5.4099)

Though be careful: if you call register_function again, it will now wrap the new (already customized) function!
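
For instance, registering the same wrapper a second time would wrap the already-customized sum, so the message now prints twice (reusing the x and wrapper defined above):

ar.register_function('torch', 'sum', my_custom_sum_wrapper, wrap=True)

ar.do('sum', x)
# Hello sum!
# Hello sum!
# tensor(5.4099)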

Lazy Computation

Abstracting out the array interface also affords an opportunity to run any computations utilizing autoray.do completely lazily. autoray provides the lazy submodule and LazyArray class for this purpose:

from autoray import lazy

# input array - can be anything autoray.do supports
x = do('random.normal', size=(5, 5), like='torch')

# convert it to a lazy 'computational node'
lx = lazy.array(x)

# supply this to our function
ly = modified_gram_schmidt(lx)
ly
# (a LazyArray of shape (5, 5) representing the final 'stack' call)

None of the functions have been called yet - only the shapes and dtypes have been propagated through. ly represents the final stack call, and tracks which other LazyArray instances it needs to materialize before it can compute itself. At this point one can perform various bits of introspection:

# --> the largest array encountered
ly.history_max_size()
# 25

# number of unique computational nodes
len(tuple(ly))
# 57

# --> traverse the computational graph and collect statistics
from collections import Counter
Counter(node.fn_name for node in ly)
# Counter({'stack': 1,
#          'truediv': 5,
#          'norm': 5,
#          'sub': 10,
#          'mul': 10,
#          'getitem': 5,
#          'None': 1,
#          'tensordot': 10,
#          'conjugate': 10})

# --> plot the full computation graph
ly.plot()

Preview the memory footprint (in terms of number of array elements) throughout the computation:

ly.plot_history_size_footprint()

Finally, if we want to compute the actual value we call:

ly.compute()
# tensor([[-0.4225,  0.1371, -0.2307,  0.5892,  0.6343],
#         [ 0.4079, -0.5103,  0.5924,  0.4261,  0.2016],
#         [ 0.2569, -0.5173, -0.4875, -0.4238,  0.4992],
#         [-0.2778, -0.5870, -0.3928,  0.3645, -0.5396],
#         [ 0.7155,  0.3297, -0.4515,  0.3986, -0.1291]])

Note that once a node is computed, it only stores the actual result and clears all references to other LazyArray instances.

Sharing intermediates

If the computation involves repeated subcomputations, then you can perform it in a shared_intermediates context:

with lazy.shared_intermediates():
    ly = modified_gram_schmidt(lx)

# --> a few nodes can be reused here (c.f. 57 previously)
len(tuple(ly))
# 51

This caches the computational nodes as they are created, based on a hash of their input arguments (note this uses id for array-like things, i.e. assumes they are immutable). Unlike eagerly caching function calls in real time, which might consume large amounts of memory, when the computation actually runs (i.e. ly.compute() is called) data is only kept as long as it's needed.
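
For example, inside the context two identical calls should hit the cache and return the very same node (a small illustration, reusing the lx defined above):

with lazy.shared_intermediates():
    a = do('tensordot', lx, lx, 1)
    b = do('tensordot', lx, lx, 1)

a is b  # the second call is fetched from the cache
# True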

Why not use e.g. dask?

There are many reasons to use dask, but it incurs a pretty large overhead for big computational graphs with comparatively small operations. Calling and computing the modified_gram_schmidt function for a 100x100 matrix (20,102 computational nodes) with dask.array takes ~25sec whereas with lazy.array it takes ~0.25sec:

import dask.array as da

%%time
dx = da.array(x)
dy = modified_gram_schmidt(dx)
y = dy.compute()
# CPU times: user 25.6 s, sys: 137 ms, total: 25.8 s
# Wall time: 25.5 s

%%time
lx = lazy.array(x)
ly = modified_gram_schmidt(lx)
y = ly.compute()
# CPU times: user 256 ms, sys: 0 ns, total: 256 ms
# Wall time: 255 ms

This is enabled by autoray's very minimal implementation.

Compilation

Various libraries provide tools for tracing numeric functions and turning the resulting computation into a more efficient, compiled function. Notably jax (jax.jit), tensorflow (tf.function) and torch (torch.jit.trace).

autoray is obviously very well suited to these since it just dispatches functions to whichever library is doing the tracing - functions written using autoray should be immediately compatible with all of them.

The autojit wrapper

Moreover, autoray also provides a unified interface for compiling functions so that the compilation backend can be easily switched or automatically identified:

from autoray import autojit

mgs = autojit(modified_gram_schmidt)

Currently autojit supports functions with the signature fn(*args, **kwargs) -> array, where both args and kwargs can be any nested combination of tuple, list and dict objects containing arrays. We can compare different compiled versions of this simply by changing the backend option:

x = do("random.normal", size=(50, 50), like='numpy')

# first the uncompiled version
%%timeit
modified_gram_schmidt(x)
# 23.5 ms ± 241 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

# 'python' mode unravels computation into source then uses compile+exec
%%timeit
mgs(x)  # backend='python'
# 17.8 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%%timeit
mgs(x, backend='torch')
# 11.9 ms ± 80.5 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
mgs(x, backend='tensorflow')
# 1.87 ms ± 441 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

# need to config jax to run on same footing
from jax.config import config
config.update("jax_enable_x64", True)
config.update('jax_platform_name', 'cpu')

%%timeit
mgs(x, backend='jax')
# 226 µs ± 14.8 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
do('linalg.qr', x, like='numpy')[0]  # approximately the 'C' version
# 156 µs ± 32.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Here you can see (with this very for-loop heavy function) that there are significant gains to be made for all the compilation options. Whilst jax, for example, achieves fantastic performance, it should be noted that the compilation step itself takes a lot of time and scales badly (super-linearly) with the number of computational nodes.

Details

Special Functions

The main function is do, but the following special functions (i.e. not found in numpy) are also implemented and may be useful:

  • autoray.infer_backend - check what library is being inferred for a given array
  • autoray.to_backend_dtype - convert a string specified dtype like 'float32' to torch.float32 for example
  • autoray.get_dtype_name - convert a backend dtype back into the equivalent string specifier like 'complex64'
  • autoray.astype - backend agnostic dtype conversion of arrays
  • autoray.to_numpy - convert any array to a numpy.ndarray

Here are all of those in action:

import autoray as ar

backend = 'torch'
dtype = ar.to_backend_dtype('float64', like=backend)
dtype
# torch.float64

x = ar.do('random.normal', size=(4,), dtype=dtype, like=backend)
x
# tensor([ 0.0461,  0.3028,  0.1790, -0.1494], dtype=torch.float64)

ar.infer_backend(x)
# 'torch'

ar.get_dtype_name(x)
# 'float64'

x32 = ar.astype(x, 'float32')
ar.to_numpy(x32)
# array([ 0.04605161,  0.30280888,  0.17903718, -0.14936243], dtype=float32)

Deviations from numpy

autoray doesn't have an API as such, since it is essentially just a fancy single dispatch mechanism. On the other hand, where translations are in place, they generally use the numpy API. So autoray.do('stack', arrays=pytorch_tensors, axis=0) gets automatically translated into torch.stack(tensors=pytorch_tensors, dim=0) and so forth.

Currently the one place this isn't true is autoray.do('linalg.svd', x), where instead full_matrices=False is used as the default, since this generally makes more sense and many libraries don't even implement the other case. autoray also dispatches 'linalg.expm' for numpy arrays to scipy, and may well do so for other scipy-only functions at some point.
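
For example, with the reduced factorization as the default, the shapes come out as follows (a quick check using the numpy backend):

x = ar.do('random.normal', size=(3, 5), like='numpy')
u, s, vh = ar.do('linalg.svd', x)

u.shape, s.shape, vh.shape
# ((3, 3), (3,), (3, 5))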

Installation

You can install autoray via conda-forge as well as with pip. Alternatively, simply copy the monolithic autoray.py into your project internally (if dependencies aren't your thing) to provide do.

Alternatives

  • The __array_function__ protocol has been suggested and now implemented in numpy. Hopefully this will eventually negate the need for autoray. On the other hand, third party libraries themselves need to implement the interface, which has not been done, for example, in tensorflow yet.
  • The uarray project aims to develop a generic array interface but comes with the warning "This is experimental and very early research code. Don't use this.".

Contributing

Pull requests such as extra translations are very welcome!

Comments
  • custom dispatchers


    I implemented custom dispatchers as discussed in #3.


    I wrote some tests to confirm it works as expected. One potential caveat: functions like np.stack can also eat a one-dimensional array, in which case they act like the identity function. In those cases the backend is inferred from the first element of this one-dimensional array by join_array_dispatcher. This works correctly for all supported backends except sparse, where the backend is inferred as numpy. This is however not the intended usage of stack anyway, so I don't think this is a problem.


    For join_array_dispatcher we could optionally check if args[0][0] even exists in the first place, and if not just return args[0]. This is only relevant if you want to supply it empty sequences or 0-dimensional arrays (which numpy doesn't allow, for example). If so, one can check this with hasattr(args[0], '__getitem__'). This is a bit nicer than a try: / except: block, especially since we have to catch both IndexErrors and TypeErrors.
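
    For concreteness, a rough sketch of such a guard, here using the try/except variant mentioned (hypothetical name; the real dispatcher lives inside autoray):

    def guarded_join_array_dispatcher(*args, **kwargs):
        # dispatch on the first element of the sequence argument, falling back
        # to the sequence itself for empty sequences / 0-dimensional arrays
        try:
            return args[0][0]
        except (IndexError, TypeError):
            return args[0]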


    It seems my auto formatter black was wreaking havoc on your code. The biggest thing it did was replace single quotes with double quotes everywhere. If this bothers you, I will do a commit with only my actual changes. Otherwise, do you use a particular auto formatter for this project?


    I also took the liberty of adding translations for np.take. I'm a bit confused about the syntax of make_translator. For the torch implementation of take (i.e. index_select) both the names and the order of the arguments are different, and I'm not 100% sure I did it correctly (at least it seems to work).

    opened by RikVoorhaar 7
  • [WIP] implement LazyArray + autocompile


    This adds two nice and fairly natural features to autoray, any feedback on interface welcome!

    Lazy Computation (LazyArray)

    from autoray import lazy
    ... 
    lx = lazy.array(x)
    lf = fn(lx)
    lf.compute()
    

    If you write a function / algorithm with do calls, then you can trace through the entire thing lazily and:

    • [x] e.g. check max array size encountered, number of calls to specific functions
    • [x] plot the computational graph
    • [x] identify and cache shared intermediates only (with lazy.shared_intermediates(): ...)
    • [x] perform all constant parts of a computational graph

    This is all implemented in a very lightweight manner making it about 100x faster than e.g. dask (which of course offers many other features) on some examples, and suitable for computational graphs with >100,000s nodes.


    TODO:

    • [ ] implement a few remaining functions
    • [ ] eventually might be possible to estimate FLOPs etc
    • [ ] eventually might be useful to execute the computational graph with e.g. ray

    Auto compilation / unified JIT interface (@autocompile)

    @autocompile
    def fn(x, y):
        ...
    
    fn(x, y, backend='torch')
    

    The aim here is to have a single decorator for marking an autoray function to be JIT compiled, which makes it very easy to switch backends ('jax', 'tensorflow', 'torch') or just dispatch to the correct one automatically. cc @RikVoorhaar and #5.

    • [x] currently the signature must be fn(*arrays) -> array
    • [x] there is a 'python' backend that 'simply' uses the LazyArray computational graph to compile and exec an unravelled version of the function, with shared intermediates, folded constants + surrounding logic stripped away

    TODO:

    • [ ] CompilePython probably needs to dispatch based on the input array shapes (overhead is very minimal)
    enhancement 
    opened by jcmgray 5
  • with_dtype wrapper


    fixes #7

    This seems to work. Only thing I'm not satisfied with is the comparison

    A = fn(*args, **kwargs)
    if (dtype is not None) and (dtype != standard_type):
        ...
    

    For example if standard_dtype='float64' and dtype=np.float64 this will still trigger, but it shouldn't. I have three ways around this:

    • Store standard_type not as a string but as a backend-specific dtype object. Then compare standard_type to A.dtype instead.
    • Compare standard_type to get_dtype_name(A).
    • Make a cached function for dtype comparisons across backends
    opened by RikVoorhaar 5
  • split and where translations


    As mentioned in #5

    • Translating np.diff didn't seem worthwhile. There is tf.experimental.numpy.diff, but it's only in the newest version of tensorflow, and probably in a later version it will become just tf.diff.
    • Torch and tensorflow wrappers for split can probably be unified into one function, but this would mean wrapping a translation around the wrapper I made for torch since syntax is slightly different. Both typically take a list as input if using sections to split the array, so I don't think my wrapper brings that much extra overhead.
    • For torch I didn't use tensor_split since it's not yet in the stable release. Maybe one can do different things in autoray depending on torch version number, but that doesn't seem very elegant.
    • I couldn't run any of the CuPy tests on this machine since it doesn't have a GPU.
    opened by RikVoorhaar 3
  • correctly infer backend for functions like concatenate


    When using concatenate, the result is always converted to numpy arrays. E.g.

    A = np.random.normal(size=(10,10),like='tensorflow')
    B = np.random.normal(size=(10,10),like='tensorflow')
    concat = ar.do('concatenate',(A,B),axis=0)
    type(concat)
    >> numpy.ndarray
    

    This can be mitigated by instead doing

    ar.do('concatenate',(A,B),axis=0,like=ar.infer_backend(A))
    

    but this is a bit unwieldy. The problem is that the argument (A,B) is a tuple, which belongs to backend builtins, which in turn always gets inferred as numpy by infer_backend.

    This problem applies to any function whose first argument is a list/tuple of arrays. I know at least that this applies to concatenate, einsum and stack. For einsum I just opted to call opt_einsum directly, which does correctly infer the backend in this case, but that is beside the point.

    I can see several possible approaches:

    1. Make an exception in the way backend is inferred for these specific functions. When using ar.register_function the user should also be able to indicate the function is of this type.
    2. In _infer_class_backend_cached make a specific check for builtins: we check if the item is iterable, if so we check the backend of the first element. If it is again builtins, then leave it as is, but if it is something else then return that backend instead.
    3. Do nothing, but explicitly mention this behavior in the README.

    I'm partial to the second option, as I don't expect it to have too many side-effects. If you want I can do a PR.
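
    For concreteness, a rough sketch of what option 2 could look like (a hypothetical helper, not _infer_class_backend_cached itself):

    import autoray as ar

    def infer_sequence_backend(item):
        # if the item is a plain tuple/list ('builtins'), peek at its first
        # element and use that backend instead, otherwise keep the result as is
        backend = ar.infer_backend(item)
        if backend == 'builtins' and len(item):
            inner = ar.infer_backend(item[0])
            if inner != 'builtins':
                return inner
        return backend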

    opened by RikVoorhaar 3
  • autoray transpose attribute-like function fails for torch tensors


    Problem

    The api for the numpy.ndarray transpose attribute allows it to permute an arbitrary number of indices into an arbitrary order. However, the torch.Tensor transpose attribute assumes a matrix and therefore only accepts two indices. This means something like the following will fail:

    import numpy
    import torch
    from autoray import do, transpose
    
    Ttorch = torch.zeros([2,3,4,5])
    Tnp = numpy.zeros([2,3,4,5])
    
    print(Tnp.transpose([2,1,3,0]).shape)   # gives (4,3,5,2), as expected
    print(transpose(Tnp, [2,1,3,0]).shape)  # also gives (4,3,5,2)
    print(Ttorch.transpose([2,1,3,0]).size()) # this fails with a TypeError
    print(transpose(Ttorch, [2,1,3,0]).size())  # which means this also fails
    

    Solution

    The correct torch.Tensor attribute is permute, which has the same exact behavior as numpy.ndarray.transpose. This means that something like the following will do what we want:

    import numpy
    import torch
    from autoray import do, transpose
    
    Ttorch = torch.zeros([2,3,4,5])
    Tnp = numpy.zeros([2,3,4,5])
    
    print(Tnp.transpose([2,1,3,0]).shape)   # gives (4,3,5,2), as expected
    print(transpose(Tnp, [2,1,3,0]).shape)  # also gives (4,3,5,2)
    print(Ttorch.permute(2,1,3,0).size())  # also gives (4,3,5,2)
    

    Proposed code change

    I'm not sure that there is a way to incorporate this behavior in a clean, non-invasive manner. As far as I understand, the _module_aliases and _func_aliases dictionaries are not applicable since permute is only an attribute of torch.Tensor (i.e. there is no torch.permute(torch.Tensor, *args)). This therefore seems to necessitate direct modification of the autoray.transpose function (line 308). The following patch works, but it's not very clean:

    current code:

    def transpose(x, *args):
        try:
            return x.transpose(*args)
        except AttributeError:
            return do('transpose', x, *args)
    

    patched code:

    def transpose(x, *args):
        backend = infer_backend(x)
        if backend == 'torch':
            return x.permute(*args)
        else:
            try:
                return x.transpose(*args)
            except AttributeError:
                return do('transpose', x, *args)
    

    The inherent challenge is that we need to alias x.transpose() to x.permute() when x is a torch.Tensor. If there is a better way than what I have suggested, let me know!

    (p.s.) I found this problem via an error I obtained in quimb. I was trying to fuse multiple bonds of a quimb Tensor when using pyTorch as the backend, and this problem arose.

    opened by mattorourke17 3
  • Supporting sparse


    @jcmgray Here is my attempt at implementing sparse support in autoray. Let me know if something needs to be changed significantly -- happy to help make this work.

    opened by emprice 2
  • Random numbers and dtypes


    Something I ran into is that different backends prefer either single or double precision. I personally need double precision, or at least prefer to consistently use one precision. This is also much more fair for benchmarking. The main problem is when forming arrays, for example to generate a (2,2) random normal array with double precision we should do:

    import jax
    jax.config.update('jax_enable_x64', True)
    for backend in ['numpy', 'tensorflow', 'torch', 'jax', 'dask', 'mars', 'sparse']:
        if backend in ('tensorflow', 'torch'):
            A = ar.do("random.normal", size=(2,2), like=backend, dtype=ar.to_backend_dtype('float64', backend))
        else:
            A = ar.do("random.normal", size=(2,2), like=backend)
    

    We can't just always supply the dtype argument, since numpy, dask and sparse throw an error when fed dtype. We could also generate an array of whatever dtype and then convert the result to double precision, but this doesn't really address the problem. This doesn't just hold for random.normal, but for essentially any kind of array-creating function, like zeros or eye, although there supplying dtype does work (for jax we still need to set jax_enable_x64). I can also see from your gen_rand method in test_autoray.py that you encountered similar problems.

    Suggested solutions

    • Make a wrapper for numpy, dask, sparse (and cupy?) that ignores the dtype keyword, and then converts the result to the correct dtype after the fact (if dtype is 'float32') - see the sketch just after this list. For jax we should maybe throw a warning if trying to generate double precision random numbers without setting 'jax_enable_x64' to True. In fact, for example for 'zeros', jax already throws a warning in this situation.
    • Make a autoray.random.Generator object, like the numpy.random.Generator, but then backend aware. This may perform slightly better, and I think numpy is urging people to start using this over calling e.g. numpy.random.normal directly (although it doesn't seem to be catching on).
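
    A rough sketch of the first suggestion (hypothetical helper, made-up name):

    import autoray as ar

    def random_normal_any_dtype(size, dtype=None, like='numpy'):
        # generate with the backend's default precision, ignoring dtype for
        # backends whose random functions don't accept it, then cast afterwards
        A = ar.do('random.normal', size=size, like=like)
        if dtype is not None and ar.get_dtype_name(A) != dtype:
            A = ar.astype(A, dtype)
        return A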

    It might also be worthwhile to add translations for some more standard distributions like binomial or poisson, although I mostly use normal and uniform myself.

    opened by RikVoorhaar 1
  • Interface and 'do'


    Hi, this is an interesting, yet difficult library! While it seems intriguing, I do wonder about the interface though: why is there always a 'do'? Why does the library not just wrap the API directly? Writing do("command") seems quite cumbersome compared to command.

    Is that API a future plan?

    opened by jonas-eschle 6
  • Translations and ideas for extensions


    In addition to the take translation I added in my previous PR, there are some more that might be good to add. At least, I am using these myself. I can make a PR.

    • split. The syntax is different for numpy and tensorflow/torch. The former wants the number of splits or an array of locations of splits, whereas tensorflow/torch either want the number of splits or an array of split sizes. We can go from one format to the other using np.diff (see the sketch just after this list).
    • diff. This is implemented in tensorflow as tf.experimental.numpy.diff, and not implemented at all for torch. This also means I don't know what the cleanest way is to implement split mentioned above. Maybe just using np.diff and then convert to array of right backend if necessary?
    • linalg.norm seems to work with tensorflow, but for torch we need to do _SUBMODULE_ALIASES["torch", "linalg.norm"] = "torch". I didn't check these things for any other libraries.
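
    For example, a rough sketch of the np.diff conversion mentioned in the split point above (hypothetical helper name):

    import numpy as np

    def split_indices_to_sizes(indices, axis_size):
        # numpy-style split locations -> tensorflow/torch-style section sizes,
        # e.g. indices [2, 5] on an axis of length 10 -> [2, 3, 5]
        return list(np.diff([0, *indices, axis_size]))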

    Maybe a bit of an overly ambitious idea, but have you ever thought about baking in support for JIT? Right now it seems that for TensorFlow everything works with eager execution, and I'm not sure you can compile the computation graphs resulting from a series of ar.do calls. PyTorch also supports JIT to some extent with TorchScript. Numpy doesn't have JIT, but there is Numba. Cupy has an interface with Numba that does seem to allow JIT. JAX has support for JIT.

    Another thing is gradients. Several of these libraries have automatic gradients, and having an autoray interface for doing computations with automatic gradients would be fantastic as well (although probably also ambitious).

    If you think these things are doable at all, I wouldn't mind spending some time to try to figure out how this could work.


    Less ambitiously, you did mention in #3 that something along the lines of

    with set_backend(like):
        ...
    

    would be pretty nice. I can try to do this. This probably comes down to checking for a global flag in ar.do after the line

    if like is None:
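
    For illustration, a minimal sketch of what such a context manager could look like (hypothetical code, just the global-flag idea; the backend_like context manager shown earlier in this README is the real, eventual interface):

    import contextlib

    _DEFAULT_BACKEND = None

    @contextlib.contextmanager
    def set_backend(like):
        # temporarily set a global default backend for do() to fall back on
        global _DEFAULT_BACKEND
        old, _DEFAULT_BACKEND = _DEFAULT_BACKEND, like
        try:
            yield
        finally:
            _DEFAULT_BACKEND = old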
    
    opened by RikVoorhaar 4