clustimage is a Python package for unsupervised clustering of images.

Overview

clustimage


The aim of clustimage is to detect natural groups or clusters of images.

Image recognition is a computer vision task for identifying and verifying objects or persons in a photograph. We can separate image recognition into two broad tasks: supervised and unsupervised. In the supervised task, an image has to be classified into a fixed number of learned categories; most packages rely on (deep) neural networks and try to solve the problem of predicting "what is on the image". In the unsupervised task, no training data is required: the input data itself is interpreted to find natural groups or clusters. However, it can be quite a challenge to carefully group similar images in an unsupervised manner, or simply to identify the unique images.

The aim of clustimage is to detect natural groups or clusters of images. It works using a multi-step process of carefully pre-processing the images, extracting the features, and evaluating the optimal number of clusters across the feature space. The optimal number of clusters can be determined using well-known methods such as silhouette and dbindex, and derivatives thereof, in combination with clustering methods such as agglomerative, kmeans, dbscan and hdbscan. With clustimage we aim to determine the most robust clustering by efficiently searching across the parameter space and evaluating the clusters. Besides clustering of images, the clustimage model can also be used to find the most similar images for a new unseen sample.
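For illustration, the feature-extraction method is chosen at initialization, and the clustering method, evaluation strategy and cluster range can be passed when fitting. A minimal sketch (the parameter values are illustrative; the same parameters appear in the examples further below):

from clustimage import Clustimage

# Initialize with PCA features (another option is, e.g., 'hog').
cl = Clustimage(method='pca')
# Load the example digit data and fit with an explicit clustering method,
# evaluation metric and cluster range (values are illustrative).
X = cl.import_example(data='mnist')
results = cl.fit_transform(X,
                           cluster='agglomerative',
                           evaluate='silhouette',
                           metric='euclidean',
                           linkage='ward',
                           min_clust=3,
                           max_clust=25)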

A schematic overview is as follows:

clustimage addresses the following challenges:

* 1. Robustly groups similar images.
* 2. Returns the unique images.
* 3. Finds highly similar images for a given input image.

clustimage is fun because:

* It does not require a learning process.
* It can group any set of images.
* It can return only the unique() images.
* It can find highly similar images given an input image.
* It provides many plots to improve understanding of the feature space and sample-to-sample relationships.
* It is built on core statistics, such as PCA, HOG and many more, and therefore does not carry a heavy dependency burden.
* It works out of the box.

Installation

  • Install clustimage from PyPI (recommended). clustimage is compatible with Python 3.6+ and runs on Linux, MacOS X and Windows.
  • A new environment can be created as follows:
conda create -n env_clustimage python=3.8
conda activate env_clustimage
  • Install from PyPI:
pip install -U clustimage

Import the clustimage package

from clustimage import Clustimage

Example 1: Digit images.

In this example we will be using a flattened grayscale image array loaded from sklearn. The array is NxM, where N is the number of samples and M is the flattened raw RGB/grayscale image.
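If you want to build such an NxM array from your own images, something along the following lines would work (a sketch; the directory path and the 8x8 target size are illustrative, and OpenCV is used for reading and resizing):

import glob
import cv2
import numpy as np

# Read grayscale images, resize them to a fixed dimension and flatten each one
# into a row of the NxM matrix (path and size are illustrative).
files = glob.glob('path/to/my_images/*.png')
imgs = [cv2.resize(cv2.imread(f, cv2.IMREAD_GRAYSCALE), (8, 8)) for f in files]
X_own = np.array([img.flatten() for img in imgs])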

# Load library
import matplotlib.pyplot as plt
from clustimage import Clustimage
# init
cl = Clustimage()
# Load example digit data
X = cl.import_example(data='mnist')

print(X)
# Each row is an image that can be plotted after reshaping:
plt.imshow(X[0,:].reshape(8,8), cmap='binary')
# array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
#        [ 0.,  0.,  0., ..., 10.,  0.,  0.],
#        [ 0.,  0.,  0., ..., 16.,  9.,  0.],
#        ...,
#        [ 0.,  0.,  0., ...,  9.,  0.,  0.],
#        [ 0.,  0.,  0., ...,  4.,  0.,  0.],
#        [ 0.,  0.,  6., ...,  6.,  0.,  0.]])
# 
# Preprocessing and feature extraction
results = cl.fit_transform(X)

# Let's examine the results.
print(results.keys())
# ['feat', 'xycoord', 'pathnames', 'filenames', 'labels']
# 
# feat      : Extracted features
# xycoord   : Coordinates of samples in the embedded space.
# filenames : Name of the files
# pathnames : Absolute location of the files
# labels    : Cluster labels in the same order as the input

# Get the unique images
unique_samples = cl.unique()
# 
print(unique_samples.keys())
# ['labels', 'idx', 'xycoord_center', 'pathnames']
# 
# Collect the unique images from the input
X[unique_samples['idx'],:]
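# As a check, one of the unique images can be displayed by reshaping its row
# back to 8x8, just like above (index 0 is used purely as an illustration):
plt.imshow(X[unique_samples['idx'][0], :].reshape(8, 8), cmap='binary')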
# Plot the unique images.
cl.plot_unique()

Scatter samples based on the embedded space.
# The scatter plot is colored by cluster label. The cluster labels should match the underlying digit labels.
# Cluster 1 contains digit 4
# Cluster 5 contains digit 2
# etc
# 
# No images in scatterplot
cl.scatter(zoom=None)

# Include images in the scatterplot
cl.scatter(zoom=4)
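The scatter function returns the Matplotlib figure and axes (as also noted in the scatter-related comment further below), so the plot can be saved to disk; a small sketch with an illustrative filename:

# Save the scatter plot to disk (the filename is illustrative).
fig, ax = cl.scatter(zoom=4)
fig.savefig('scatter_digits.png', dpi=150)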

Plot the clustered images

# Plot all images per cluster
cl.plot(cmap='binary')

# Plot the images in a specific cluster
cl.plot(cmap='binary', labels=[1,5])

Dendrogram

# The dendrogram is based on the high-dimensional feature space.
cl.dendrogram()

Make various other plots

# Plot the explained variance
cl.pca.plot()
# Make scatter plot of PC1 vs PC2
cl.pca.scatter(legend=False, label=False)
# Plot the evaluation of the number of clusters
cl.clusteval.plot()
# Make silhouette plot
cl.clusteval.scatter(cl.results['xycoord'])
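Because results['labels'] holds a cluster label for every input sample, in the same order as the input, the cluster sizes can be inspected directly with numpy:

import numpy as np

# Count how many samples ended up in each cluster.
uilabels, counts = np.unique(cl.results['labels'], return_counts=True)
print(dict(zip(uilabels, counts)))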

Example 2: Flower images.

In this example we will be using flower images that are stored on disk and referenced by their path locations.

# Load library
from clustimage import Clustimage
# init
cl = Clustimage(method='pca')
# load example with flowers
pathnames = cl.import_example(data='flowers')
# The pathnames are stored in a list
print(pathnames[0:2])
# ['C:\\temp\\flower_images\\0001.png', 'C:\\temp\\flower_images\\0002.png']

# Preprocessing, feature extraction and clustering.
results = cl.fit_transform(pathnames)
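To run the same pipeline on your own images, a list of file paths can be collected with glob, for example (the directory below is illustrative):

import glob

# Collect the paths of your own images (the location is illustrative).
my_pathnames = glob.glob('C:/temp/my_flower_images/*.png')
# results = cl.fit_transform(my_pathnames)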

# Let's first evaluate the number of detected clusters.
# This looks pretty good because there is a clear peak at 5 clusters, well separated from the scores for the cluster counts that follow.
cl.clusteval.plot()
cl.clusteval.scatter(cl.results['xycoord'])

Scatter

cl.scatter(dotsize=50, zoom=None)
cl.scatter(dotsize=50, zoom=0.5)
cl.scatter(dotsize=50, zoom=0.5, img_mean=False)

Plot the clustered images

# Plot unique images
cl.plot_unique()
cl.plot_unique(img_mean=False)

# Plot all images per cluster
cl.plot()

# Plot the images in a specific cluster
cl.plot(labels=3)

# Plot dendrogram
cl.dendrogram()
# Plot clustered images
cl.plot()

Make a prediction for an unseen input image.

# Find images that are significantly similar to the unseen input image.
results_find = cl.find(pathnames[0:2], alpha=0.05)
cl.plot_find()

# Map the unseen images in existing feature-space.
cl.scatter()
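The fitted model can also be stored and loaded again so that the clustering does not have to be recomputed; a sketch, assuming that save() and load() accept a .pkl file path (the filename is illustrative):

# Store the fitted model to disk and load it back later (filename is illustrative).
cl.save('clustimage_flowers.pkl', overwrite=True)

cl_loaded = Clustimage()
cl_loaded.load('clustimage_flowers.pkl')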

Example 3: Cluster faces in images.

from clustimage import Clustimage
# Initialize with grayscale and extract HOG features.
cl = Clustimage(method='hog', grayscale=True)
# Load example with faces
pathnames = cl.import_example(data='faces')
# First we need to detect and extract the faces from the images
face_results = cl.detect_faces(pathnames)
# The detected faces are extracted and stored in face_results. We can now simply provide the pathnames of the faces that are stored in pathnames_face.
results = cl.fit_transform(face_results['pathnames_face'])
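The detect_faces step returns a dictionary in which 'pathnames_face' points to the extracted face images that were passed to fit_transform above; a quick way to inspect it:

# Inspect the output of detect_faces(); 'pathnames_face' holds the paths of the extracted faces.
print(face_results.keys())
print(face_results['pathnames_face'][0:3])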

# Plot the evaluation of the number of clusters. As you can see, the maximum number of clusters evaluated is 24, which may be too small.
cl.clusteval.plot()
# Let's increase the maximum number of clusters and rerun only the clustering step. Note that fit_transform() does not need to be run again; the clustering can be redone on its own.
cl.cluster(max_clust=35)
# And plot again. As you can see, the score keeps increasing, which means no local maximum may have been found anymore.
# Looking at the graph, we see a local maximum at 12 clusters. Let's go for that.
cl.cluster(min_clust=12, max_clust=13)

# Let's plot the 12 unique clusters that contain the faces
cl.plot_unique()

# Scatter
cl.scatter(zoom=None)
cl.scatter(zoom=0.2)

# Make plot
cl.plot(show_hog=True, labels=[1,7])

# Plot faces
cl.plot_faces()
# Dendrogram depicts the clustering of the faces
cl.dendrogram()

Maintainers

  • Erdogan Taskesen, github: erdogant
  • https://github.com/erdogant/clustimage
  • Please cite in your publications if this is useful for your research (see citation).
  • All kinds of contributions are welcome!
  • If you wish to buy me a Coffee for this work, it is very appreciated :) See LICENSE for details.


Comments
  • Trainability


    Hello,

    I just want to know: after running the model (with different parameters) on similar datasets, will the model learn from one run to another, like a classical NN where the weights are updated at each iteration?

    If, for example, I run the code with different parameters but only save the last pkl file, am I only saving the weights of the last run or am I saving the whole thing?

    I hope I explained myself well.

    opened by MalekBezzina 6
  • Trying to retrieve original file path names in the results


    Is there a way to get the original pathnames of the images used, after fit_transform?

    I am uploading images onto google colab, and reading them in by their filepaths as "/content/name_of_image", and then I wish to be able to recover this "/content/name_of_image" post running clustering.

    I tried to extract pathnames per label using the following code, but seemed to be getting the filepaths for images created in a temporary directory as follows:

    CODE:
    Iloc = cl.results['labels']==0
    cl.results['pathnames'][Iloc]

    OUTPUT:
    array(['/tmp/clustimage/8732cb41-c72d-4266-b164-ff453d68428a.png',
           '/tmp/clustimage/440fecd8-8a9c-49a0-b100-ccfb66107425.png',
           '/tmp/clustimage/3c9c38d8-4da9-4e4f-9130-d3836182b8c6.png',
           '/tmp/clustimage/85cc4848-1faf-44ea-ae4c-9d9d88bd6323.png',
           '/tmp/clustimage/6127e4fb-1c25-4ba9-8d68-56ef482e3db4.png',
           '/tmp/clustimage/abcf85e0-af1a-48f1-8861-122122b64e32.png',
           '/tmp/clustimage/275bbde0-394d-4ba4-b4d0-1c67da323c8b.png',
           '/tmp/clustimage/30b62285-2628-45c0-86b2-fea305cb8db3.png',
           '/tmp/clustimage/c47a6867-3c8f-480c-a7bd-b3e7ec4ba334.png',
           '/tmp/clustimage/da5c17fc-de2a-4375-b03c-066a0904428a.png'],
          dtype='<U56')

    I wish to get the output as the original filenames that were in the pathnames list.

    opened by Sid01123 4
  • can not find or create dirpath under linux


    Hello, and thank you for clustimage! Running a little script on a linux box generates the following error:

    $ python3.8 clustimg.py
    [clustimage] >ERROR> [None] does not exists or can not be created.
    Traceback (most recent call last):
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 2554, in _set_tempdir
        dirpath = os.path.join(tempfile.tempdir, 'clustimage')
      File "/usr/lib/python3.8/posixpath.py", line 76, in join
        a = os.fspath(a)
    TypeError: expected str, bytes or os.PathLike object, not NoneType

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "clustimg.py", line 9, in <module>
        cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.5})
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 208, in __init__
        self.params['dirpath'] = _set_tempdir(dirpath)
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 2566, in _set_tempdir
        raise Exception(logger.error('[%s] does not exists or can not be created.', dirpath))
    Exception: None

    Script clustimg.py has the following content:

    import sys
    import os
    import glob
    from clustimage import Clustimage

    cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.5})
    in_files = input("""Give the absolute path to a directory with your files: \n""")
    some_files = glob.glob(in_files)
    print(some_files)
    results = cl.fit_transform(some_files, cluster='agglomerative', evaluate='silhouette', metric='euclidean', linkage='ward', min_clust=3, max_clust=8, cluster_space='high')

    cl.clusteval.plot()
    cl.clusteval.scatter(cl.results['xycoord'])

    What I have tried: I can run clustimage if I change line 2554 in /home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py like this:

    dirpath = os.path.join(tempfile.tempdir, 'clustimage') --> dirpath = os.path.join('/home/cpsoz', 'clustimage')

    where /home/cpsoz is my user directory. 'tempfile' should default to the user directory if 'dirpath' is None, but it does not. As 'dirpath' is None per default, a workaround could be to prompt the user to enter his own path as 'dirpath'.

    Note: replacing

    cl = Clustimage(method='pca',dirpath=None,embedding='tsne',grayscale=False,dim=(128,128),params_pca={'n_components':0.5})

    with

    cl = Clustimage(method='pca',dirpath='/home/cpsoz',embedding='tsne',grayscale=False,dim=(128,128),params_pca={'n_components':0.5})

    is producing the same error as mentioned. Best wishes cp

    opened by cp1972 3
  • Load the model


    Hello, I have run and saved the model in a .pkl file and I have been trying to load it again. It says that it is loaded, but I keep getting this error and I don't know how to fix it.


    One more thing: if I succeed in loading the model, is it just for displaying the results? Can't I run it again with another dataset, hoping that it would recognize the shapes it had already clustered before?

    Thank you for your help!

    opened by MalekBezzina 3
  • Find function errors


    Hello,

    The find function has stopped working. If I use "pca" I get one type of error (screenshot attached).

    If I use "hog" I get another type of error (screenshot attached).

    It was working before and I didn't change anything! Could you please help me resolve the problem?

    opened by MalekBezzina 2
  • Inconsistency in scatter function


    Hello again, I think we have a little inconsistency in the scatter function, leading to inconsistent behavior -- in my case, two inconsistent behaviors: 1. some scatter plots show up and suddenly disappear, 2. no scatter plot shows up. Here is a minimal working example to reproduce these inconsistencies:

    import sys
    import os
    import glob
    import numpy as np
    from clustimage import Clustimage
    import matplotlib.pyplot as plt

    cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.95})
    in_files = input("""Give the absolute path to a directory with your files: \n""")
    some_files = glob.glob(in_files)
    results = cl.fit_transform(some_files, cluster='agglomerative', evaluate='silhouette', metric='euclidean', linkage='ward', min_clust=3, max_clust=6, cluster_space='high')

    cl.clusteval.plot()
    cl.clusteval.scatter(cl.results['xycoord'])
    cl.pca.plot()
    cl.plot_unique(img_mean=False)
    cl.plot(cmap='binary')
    cl.scatter(zoom=1, img_mean=False)
    cl.scatter(zoom=None, dotsize=200, figsize=(25, 15), args_scatter={'fontsize':24, 'gradient':'#FFFFFF', 'cmap':'Set2', 'legend':True})

    What I have found: your scatter function returns fig, ax, a tuple from plt.subplots; plotting fig makes the plot show up and then disappear all of a sudden. As a result, you cannot save some scatter plots.

    Workaround:

    cl.scatter(zoom=None, dotsize=200, figsize=(25, 15), args_scatter={'fontsize':24, 'gradient':'#FFFFFF', 'cmap':'Set2', 'legend':True})
    plt.show()

    With plt.show(), the scatter plots wait for you to close the GUI.

    Best wishes cp

    opened by cp1972 2
  • Error while running clustimage module when embedding='tsne'


    An error occurs while running this module, clustimage. The error refers to the 'embedding' setting when "embedding='tsne'". If I run the code while "embedding='none'", it works fine. The concern is that embedding is very practical for visual purposes and should be used. Any ideas why this error occurs and how to resolve it?

    Setting:

    cl = Clustimage(method='pca',
                    embedding='tsne',
                    grayscale=False,
                    dim=(128,128),
                    params_pca={'n_components':0.95},
                    store_to_disk=True,
                    verbose=50)
    

    Error:

    File "/Users/name/opt/anaconda3/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py", line 372, in _gradient_descent
        update = momentum * update - learning_rate * grad
    
    UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
    

    Embedding reference: https://erdogant.github.io/clustimage/pages/html/clustimage.clustimage.html?highlight=embedding#clustimage.clustimage.Clustimage.embedding

    opened by spicker22 2
  • Question on CNN


    Is it possible to use a pre-trained neural network (for example, one based on ImageNet) for the image features in clustimage? And do you support UMAP for dimensionality reduction?

    opened by karelin 1
  • mini-batch k-means


    Hello, I have been using your model for a while now and I was thinking about how to use it to create a clustering model that learns with every iteration. One of the simple ideas that occurred to me is to use "mini-batch k-means": instead of using the whole dataset, we use a small batch of data at a time.

    Do you think you can update the k-means code to use mini-batch k-means instead?

    opened by MalekBezzina 0
  • Found a bug in "imresize"

    Found a bug in "imresize"

    When giving fit_transform() a path as a string, it is not able to load the images due to a try-except control in import_data. This is because, if the image already has the same shape as the variable dim, cv2.resize raises an error... By changing the function imresize it works!

    def imresize(img, dim=(128, 128)):
        """Resize image."""
        if dim is not None and img.shape != dim:
            img = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
        return img

    opened by ricaiu 1
  • Small question about recommended usage


    Hello, first of all thank you for your work, your libraries are amazing!

    I didn't know how to contact you properly and an issue is probably the wrong way to do it, so feel free to close it without answering if you feel like it.

    Anyway, I just wanted to ask you a question. I need to group similar images: the images can be faces, screenshots, drawings, memes, etc., so very different kinds, and some images have small variations (light crop, lighting...) or big ones (bigger crop, text added, etc.), and I'm trying to find a way to group them. Until now I was using your other library (undouble), which was working fine, but sometimes the grouping functionality excluded images that were really close (when computing the ahashes manually these images all had the same ahash, but they were not grouped by undouble.group, which is odd).

    So I started trying to use clustimage and I'm a bit overwhelmed; there seem to be so many functionalities, ways of computing features, distances, ways of evaluating the clusters, etc.

    I've read your Medium article on clustimage, which helps a bit, and I know you're saying one should choose the parameters according to the research question, but I'm no data scientist and I'm a bit lost. My plan right now would be to make a script that iterates over all the possible parameters of clustimage and computes a score based on the image grouping that I've made manually. But I think there must be a smarter way to proceed.

    So in other words, my question is: do you recommend any particular set of methods and parameters to group variations of images which can be of very different types?

    Thank you in advance and have a good day!

    opened by Tyrannas 2
Releases (1.5.11)
  • 1.5.11(Dec 3, 2022)

  • 1.5.10(Dec 2, 2022)

  • 1.5.9(Jul 9, 2022)

  • 1.5.8(Jul 9, 2022)

  • 1.5.7(Jul 9, 2022)

  • 1.5.6(Jul 9, 2022)

  • 1.5.5(Jul 1, 2022)

  • 1.5.4(Jul 1, 2022)

  • 1.5.3(Jun 29, 2022)

  • 1.5.2(Jun 9, 2022)

    • Added functionality to read a pandas DataFrame as input matrix and use the index names as filenames.
    from clustimage import Clustimage
    import pandas as pd
    import numpy as np
    
    # Initialize
    cl = Clustimage()
    
    # Import data
    Xraw = cl.import_example(data='mnist')
    
    print(Xraw)
    # array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
    #        [ 0.,  0.,  0., ..., 10.,  0.,  0.],
    #        [ 0.,  0.,  0., ..., 16.,  9.,  0.],
    #        ...,
    #        [ 0.,  0.,  1., ...,  6.,  0.,  0.],
    #        [ 0.,  0.,  2., ..., 12.,  0.,  0.],
    #        [ 0.,  0., 10., ..., 12.,  1.,  0.]])
    
    filenames = list(map(lambda x: str(x) + '.png', np.arange(0, Xraw.shape[0])))
    Xraw = pd.DataFrame(Xraw, index=filenames)
    
    print(Xraw)
    #            0    1     2     3     4     5   ...   58    59    60    61   62   63
    # 0.png     0.0  0.0   5.0  13.0   9.0   1.0  ...  6.0  13.0  10.0   0.0  0.0  0.0
    # 1.png     0.0  0.0   0.0  12.0  13.0   5.0  ...  0.0  11.0  16.0  10.0  0.0  0.0
    # 2.png     0.0  0.0   0.0   4.0  15.0  12.0  ...  0.0   3.0  11.0  16.0  9.0  0.0
    # 3.png     0.0  0.0   7.0  15.0  13.0   1.0  ...  7.0  13.0  13.0   9.0  0.0  0.0
    # 4.png     0.0  0.0   0.0   1.0  11.0   0.0  ...  0.0   2.0  16.0   4.0  0.0  0.0
    #       ...  ...   ...   ...   ...   ...  ...  ...   ...   ...   ...  ...  ...
    # 1792.png  0.0  0.0   4.0  10.0  13.0   6.0  ...  2.0  14.0  15.0   9.0  0.0  0.0
    # 1793.png  0.0  0.0   6.0  16.0  13.0  11.0  ...  6.0  16.0  14.0   6.0  0.0  0.0
    # 1794.png  0.0  0.0   1.0  11.0  15.0   1.0  ...  2.0   9.0  13.0   6.0  0.0  0.0
    # 1795.png  0.0  0.0   2.0  10.0   7.0   0.0  ...  5.0  12.0  16.0  12.0  0.0  0.0
    # 1796.png  0.0  0.0  10.0  14.0   8.0   1.0  ...  8.0  12.0  14.0  12.0  1.0  0.0
    
    # Or all in one run
    results = cl.fit_transform(Xraw)
    
    print(results['filenames'])
    # array(['0.png', '1.png', '2.png', ..., '1794.png', '1795.png', '1796.png'],
    
    
  • 1.5.1(Jun 9, 2022)

  • 1.5.0(Jun 3, 2022)

  • 1.4.10(May 30, 2022)

  • 1.4.9(May 30, 2022)

  • 1.4.8(May 30, 2022)

  • 1.4.7(May 30, 2022)

  • 1.4.6(May 9, 2022)

  • 1.4.5(May 7, 2022)

  • 1.4.4(May 7, 2022)

  • 1.4.3(Mar 25, 2022)

  • 1.4.2(Mar 14, 2022)

  • 1.4.1(Feb 24, 2022)

  • 1.4.0(Jan 24, 2022)

    • In case of clustering on hashes, the image hash is now used directly in the clustering instead of pre-computing the adjacency matrix.
    • Removal of complicated functionality regarding hashes.
  • 1.3.15(Jan 16, 2022)

  • 1.3.14(Jan 11, 2022)

  • 1.3.13(Jan 9, 2022)

  • 1.3.12(Dec 28, 2021)

  • 1.3.11(Dec 26, 2021)

  • 1.3.10(Dec 22, 2021)

  • 1.3.9(Dec 22, 2021)

Owner
Erdogan Taskesen