clustimage is a Python package for unsupervised clustering of images.

Overview

clustimage


The aim of clustimage is to detect natural groups or clusters of images.

Image recognition is a computer vision task for identifying and verifying objects or persons in a photograph. The task can be separated into two broad categories: supervised and unsupervised. In the supervised task, an image must be classified into a fixed number of learned categories; most packages rely on (deep) neural networks and try to solve the problem of predicting "what is in the image". In the unsupervised task, no training data is required: the input data itself is interpreted to find natural groups or clusters. However, it can be quite a challenge to carefully group similar images in an unsupervised manner, or to simply identify the unique images.

The aim of clustimage is to detect natural groups or clusters of images. It works using a multi-step process of carefully pre-processing the images, extracting the features, and evaluating the optimal number of clusters across the feature space. The optimal number of clusters can be determined using well-known methods such as silhouette, dbindex, and derivatives, in combination with clustering methods such as agglomerative, kmeans, dbscan and hdbscan. With clustimage we aim to determine the most robust clustering by efficiently searching across the parameter space and evaluating the clusters. Besides clustering of images, the clustimage model can also be used to find the most similar images for a new unseen sample.
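
For instance, the clustering method and evaluation metric can be set explicitly when calling fit_transform(). A minimal sketch; the parameter values below are illustrative choices (taken from the usage examples further down), not necessarily the defaults:

from clustimage import Clustimage

# Initialize with PCA features and a tSNE embedding for the 2D map
cl = Clustimage(method='pca', embedding='tsne', dim=(128, 128))

# Load the example flower images (or pass your own list of file paths)
pathnames = cl.import_example(data='flowers')

# Search for the optimal clustering using agglomerative clustering,
# evaluated with the silhouette score across 3 to 8 clusters.
results = cl.fit_transform(pathnames,
                           cluster='agglomerative',
                           evaluate='silhouette',
                           metric='euclidean',
                           linkage='ward',
                           min_clust=3,
                           max_clust=8)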

A schematic overview is shown below:

clustimage overcomes the following challenges:

* 1. Robustly groups similar images.
* 2. Returns the unique images.
* 3. Finds highly similar images for a given input image.

clustimage is fun because:

* It does not require a learning process.
* It can group any set of images.
* It can return only the unique() images (see the quick-start sketch after this list).
* It can find highly similar images given an input image.
* It provides many plots to improve understanding of the feature space and sample-sample relationships.
* It is built on core statistics, such as PCA, HOG and many more, and therefore does not carry a heavy dependency stack.
* It works out of the box.
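
A minimal quick-start sketch that combines these steps; it uses the example data shipped with the package, so replace the paths with your own images:

from clustimage import Clustimage

# Initialize with default settings
cl = Clustimage()

# Example flower images shipped with the package; replace with your own file paths
pathnames = cl.import_example(data='flowers')

# Cluster the images
results = cl.fit_transform(pathnames)

# Collect and plot the unique images per cluster
unique_samples = cl.unique()
cl.plot_unique()

# Find images that are similar to (new) input images
results_find = cl.find(pathnames[0:2], alpha=0.05)
cl.plot_find()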

Installation

  • Install clustimage from PyPI (recommended). clustimage is compatible with Python 3.6+ and runs on Linux, macOS and Windows.
  • A new environment can be created as follows:
conda create -n env_clustimage python=3.8
conda activate env_clustimage
  • Install from PyPI:
pip install -U clustimage

Import the clustimage package

from clustimage import Clustimage

Example 1: Digit images.

In this example we will be using a flattened grayscale image array loaded from sklearn. The array is NxM, where N is the number of samples and M is the flattened raw rgb/gray image.
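
If you want to provide your own images in this matrix form, each (equally sized, grayscale) image can be flattened into a row vector. A minimal sketch, assuming Pillow is available for reading the files (the filenames are hypothetical):

import numpy as np
from PIL import Image  # assumption: Pillow is installed for reading the images

# Hypothetical list of equally-sized grayscale images
files = ['img1.png', 'img2.png', 'img3.png']
imgs = [np.asarray(Image.open(f).convert('L')) for f in files]

# Stack and flatten into the NxM matrix expected by fit_transform()
X = np.vstack([img.ravel() for img in imgs])
print(X.shape)  # (N samples, M pixels per image)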

# Load library
import matplotlib.pyplot as plt
from clustimage import Clustimage
# init
cl = Clustimage()
# Load example digit data
X = cl.import_example(data='mnist')

print(X)
# Each row is an image that can be plotted after reshaping:
plt.imshow(X[0,:].reshape(8,8), cmap='binary')
# array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
#        [ 0.,  0.,  0., ..., 10.,  0.,  0.],
#        [ 0.,  0.,  0., ..., 16.,  9.,  0.],
#        ...,
#        [ 0.,  0.,  0., ...,  9.,  0.,  0.],
#        [ 0.,  0.,  0., ...,  4.,  0.,  0.],
#        [ 0.,  0.,  6., ...,  6.,  0.,  0.]])
# 
# Preprocessing and feature extraction
results = cl.fit_transform(X)

# Lets examine the results.
print(results.keys())
# ['feat', 'xycoord', 'pathnames', 'filenames', 'labels']
# 
# feat      : Extracted features
# xycoord   : Coordinates of samples in the embedded space.
# filenames : Name of the files
# pathnames : Absolute location of the files
# labels    : Cluster labels in the same order as the input

# Get the unique images
unique_samples = cl.unique()
# 
print(unique_samples.keys())
# ['labels', 'idx', 'xycoord_center', 'pathnames']
# 
# Collect the unique images from the input
X[unique_samples['idx'],:]
# Plot the unique images.
cl.plot_unique()
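
The cluster labels are returned in the same order as the input samples, so the members of any cluster can be collected with a simple boolean mask. A short sketch for cluster 0 (X is the input matrix from above):

# Select all samples that belong to cluster 0
Iloc = cl.results['labels']==0
X_cluster0 = X[Iloc, :]
# For path-based input, the corresponding files are in cl.results['pathnames'][Iloc]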

Scatter samples based on the embedded space.
# The scatterplot is coloured by the cluster labels. The cluster labels should match the unique labels.
# Cluster 1 contains digit 4
# Cluster 5 contains digit 2
# etc
# 
# No images in scatterplot
cl.scatter(zoom=None)

# Include images scatterplot
cl.scatter(zoom=4)

Plot the clustered images

# Plot all images per cluster
cl.plot(cmap='binary')

# Plot the images in a specific cluster
cl.plot(cmap='binary', labels=[1,5])

Dendrogram

# The dendrogram is based on the high-dimensional feature space.
cl.dendrogram()

Make various other plots

# Plot the explained variance
cl.pca.plot()
# Make scatter plot of PC1 vs PC2
cl.pca.scatter(legend=False, label=False)
# Plot the evaluation of the number of clusters
cl.clusteval.plot()
# Make silhouette plot
cl.clusteval.scatter(cl.results['xycoord'])
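
The fitted model can also be stored to disk and restored later for further analysis. A sketch under the assumption that clustimage's save() and load() methods accept the path of a .pkl file (as referred to in the issue reports further below):

# Assumption: save()/load() take the path of a .pkl file.
cl.save('clustimage_model.pkl')

# Later: restore the fitted model and reuse the stored results
cl = Clustimage()
cl.load('clustimage_model.pkl')
cl.plot_unique()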

Example 2: Flower images.

In this example we will be using flower images whose file paths are stored on disk.

# Load library
from clustimage import Clustimage
# init
cl = Clustimage(method='pca')
# load example with flowers
pathnames = cl.import_example(data='flowers')
# The pathnames are stored in a list
print(pathnames[0:2])
# ['C:\\temp\\flower_images\\0001.png', 'C:\\temp\\flower_images\\0002.png']
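
Your own images can be clustered in the same way by first collecting their file paths, for example with glob (the directory below is hypothetical):

# Collect your own image paths instead of the example data
import glob
pathnames = glob.glob('/path/to/my/images/*.png')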

# Preprocessing, feature extraction and clustering.
results = cl.fit_transform(pathnames)

# Let's first evaluate the number of detected clusters.
# This looks pretty good because there is a clear peak at 5 clusters compared to the other numbers of clusters.
cl.clusteval.plot()
cl.clusteval.scatter(cl.results['xycoord'])

Scatter

cl.scatter(dotsize=50, zoom=None)
cl.scatter(dotsize=50, zoom=0.5)
cl.scatter(dotsize=50, zoom=0.5, img_mean=False)

Plot the clustered images

# Plot unique images
cl.plot_unique()
cl.plot_unique(img_mean=False)

# Plot all images per cluster
cl.plot()

# Plot the images in a specific cluster
cl.plot(labels=3)

# Plot dendrogram
cl.dendrogram()
# Plot clustered images
cl.plot()

Make predictions for unseen input images.

# Find images that are significantly similar to the unseen input images.
results_find = cl.find(pathnames[0:2], alpha=0.05)
cl.plot_find()

# Map the unseen images in existing feature-space.
cl.scatter()

Example 3: Cluster the faces on images.

from clustimage import Clustimage
# Initialize with grayscale and extract HOG features.
cl = Clustimage(method='hog', grayscale=True)
# Load example with faces
pathnames = cl.import_example(data='faces')
# First we need to detect and extract the faces from the images
face_results = cl.detect_faces(pathnames)
# The detected faces are extracted and stored in face_results. We can now simply provide the pathnames of the faces that are stored in pathnames_face.
results = cl.fit_transform(face_results['pathnames_face'])

# Plot the evaluation of the number of clusters. As you can see, the maximum number of clusters evaluated is 24, which may be too small.
cl.clusteval.plot()
# Let's increase the maximum number of clusters and re-run only the clustering step. Note that you do not need to run fit_transform() again; the clustering can be performed on its own.
cl.cluster(max_clust=35)
# And plot again. As you can see, the score keeps increasing, which means that no clear local maximum is found anymore.
# When looking at the graph, we see a local maximum at 12 clusters. Let's go for that.
cl.cluster(min_clust=12, max_clust=13)

# Let's plot the 12 unique clusters that contain the faces
cl.plot_unique()

# Scatter
cl.scatter(zoom=None)
cl.scatter(zoom=0.2)

# Make plot
cl.plot(show_hog=True, labels=[1,7])

# Plot faces
cl.plot_faces()
# Dendrogram depicts the clustering of the faces
cl.dendrogram()

Maintainers

  • Erdogan Taskesen, github: erdogant
  • https://github.com/erdogant/clustimage
  • Please cite in your publications if this is useful for your research (see citation).
  • All kinds of contributions are welcome!
  • If you wish to buy me a Coffee for this work, it is very appreciated :) See LICENSE for details.

Other interesting stuff

Comments
  • Trainability

    Hello,

    I just want to know: after running the model (with different parameters) on similar datasets, will the model learn from one run to another, like a classical NN where the weights are updated at each iteration?

    If, for example, I run the code with different parameters but only save the last pkl file, am I only saving the weights of the last run or am I saving the whole thing?

    I hope I explained myself well.

    opened by MalekBezzina 6
  • Trying to retrieve original file path names in the results

    Is there a way to get the original pathnames of the images after fit_transform?

    I am uploading images to Google Colab and reading them in by their filepaths as "/content/name_of_image", and I wish to be able to recover this "/content/name_of_image" after running the clustering.

    I tried to extract pathnames per label using the following code, but I seem to be getting the filepaths of images created in a temporary directory, as follows:

    CODE:
    Iloc = cl.results['labels']==0
    cl.results['pathnames'][Iloc]

    OUTPUT array(['/tmp/clustimage/8732cb41-c72d-4266-b164-ff453d68428a.png', '/tmp/clustimage/440fecd8-8a9c-49a0-b100-ccfb66107425.png', '/tmp/clustimage/3c9c38d8-4da9-4e4f-9130-d3836182b8c6.png', '/tmp/clustimage/85cc4848-1faf-44ea-ae4c-9d9d88bd6323.png', '/tmp/clustimage/6127e4fb-1c25-4ba9-8d68-56ef482e3db4.png', '/tmp/clustimage/abcf85e0-af1a-48f1-8861-122122b64e32.png', '/tmp/clustimage/275bbde0-394d-4ba4-b4d0-1c67da323c8b.png', '/tmp/clustimage/30b62285-2628-45c0-86b2-fea305cb8db3.png', '/tmp/clustimage/c47a6867-3c8f-480c-a7bd-b3e7ec4ba334.png', '/tmp/clustimage/da5c17fc-de2a-4375-b03c-066a0904428a.png'], dtype='<U56')

    I wish to get the output as the original filenames that were in the pathnames list.

    opened by Sid01123 4
  • can not find or create dirpath under linux

    Hello, and thank you for clustimage! Running a little script on a linux box generates the following error:

    $ python3.8 clustimg.py
    [clustimage] >ERROR> [None] does not exists or can not be created.
    Traceback (most recent call last):
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 2554, in _set_tempdir
        dirpath = os.path.join(tempfile.tempdir, 'clustimage')
      File "/usr/lib/python3.8/posixpath.py", line 76, in join
        a = os.fspath(a)
    TypeError: expected str, bytes or os.PathLike object, not NoneType

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "clustimg.py", line 9, in <module>
        cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.5})
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 208, in __init__
        self.params['dirpath'] = _set_tempdir(dirpath)
      File "/home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py", line 2566, in _set_tempdir
        raise Exception(logger.error('[%s] does not exists or can not be created.', dirpath))
    Exception: None

    Script clustimg.py has the following content:

    import sys
    import os
    import glob
    from clustimage import Clustimage

    cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.5})
    in_files = input("""Give the absolute path to a directory with your files: \n""")
    some_files = glob.glob(in_files)
    print(some_files)
    results = cl.fit_transform(some_files, cluster='agglomerative', evaluate='silhouette', metric='euclidean', linkage='ward', min_clust=3, max_clust=8, cluster_space='high')

    cl.clusteval.plot()
    cl.clusteval.scatter(cl.results['xycoord'])

    What I have tried: I can run clustimage if I change line 2554 in /home/cpsoz/.local/lib/python3.8/site-packages/clustimage/clustimage.py like this:

    dirpath = os.path.join(tempfile.tempdir, 'clustimage') --> dirpath = os.path.join('/home/cpsoz', 'clustimage')

    where /home/cpsoz is my user directory. 'tempfile' should default to the user directory if 'dirpath' is None, but it does not. As 'dirpath' is None per default, a workaround could be to prompt the user to enter his own path as 'dirpath'.

    Note: replacing

    cl = Clustimage(method='pca',dirpath=None,embedding='tsne',grayscale=False,dim=(128,128),params_pca={'n_components':0.5})

    with

    cl = Clustimage(method='pca',dirpath='/home/cpsoz',embedding='tsne',grayscale=False,dim=(128,128),params_pca={'n_components':0.5})

    is producing the same error as mentioned. Best wishes cp

    opened by cp1972 3
  • Load the model

    Hello, I have run and saved the model in a .pkl file and I have been trying to load it again. It says that it is loaded, but I keep getting an error and I don't know how to fix it.

    One more thing: if I succeed in loading the model, is it just for displaying the results? Can't I run it again with another dataset, hoping that it would recognize the shapes that it had already clustered before?

    Thank you for your help!

    opened by MalekBezzina 3
  • Find function errors

    Hello,

    The find function has stopped working. If I use "pca" I get this error (screenshot not shown).

    If i use "hog" i get another type of error. image

    It was working before and I didn't change anything! Could you please help me resolve the problem?

    opened by MalekBezzina 2
  • Inconsistency in scatter function

    Hello again, I think we have a little inconsistency in the scatter function, leading to inconsistent behavior -- in my case, two inconsistent behaviors: 1. some scatter plots show up and suddenly disappear, 2. no scatter plot shows up. Here is a minimal working example to reproduce these inconsistencies:

    import sys
    import os
    import glob
    import numpy as np
    from clustimage import Clustimage
    import matplotlib.pyplot as plt

    cl = Clustimage(method='pca', dirpath=None, embedding='tsne', grayscale=False, dim=(128,128), params_pca={'n_components':0.95})
    in_files = input("""Give the absolute path to a directory with your files: \n""")
    some_files = glob.glob(in_files)
    results = cl.fit_transform(some_files, cluster='agglomerative', evaluate='silhouette', metric='euclidean', linkage='ward', min_clust=3, max_clust=6, cluster_space='high')

    cl.clusteval.plot()
    cl.clusteval.scatter(cl.results['xycoord'])
    cl.pca.plot()
    cl.plot_unique(img_mean=False)
    cl.plot(cmap='binary')
    cl.scatter(zoom=1, img_mean=False)
    cl.scatter(zoom=None, dotsize=200, figsize=(25, 15), args_scatter={'fontsize':24, 'gradient':'#FFFFFF', 'cmap':'Set2', 'legend':True})

    What I have done: your scatter function returns fig, ax, which is a tuple from plt.subplots; plotting fig makes the plot show up and disappear all of a sudden. As a result, you cannot save some scatter plots.

    Workaround:

    cl.scatter(zoom=None, dotsize=200, figsize=(25, 15), args_scatter={'fontsize':24, 'gradient':'#FFFFFF', 'cmap':'Set2', 'legend':True})
    plt.show()

    With plt.show(), the scatter plots wait for you to close the gui.

    Best wishes cp

    opened by cp1972 2
  • Error while running clustimage module when embedding='tsne'

    An error occurs while running this module, clustimage. The error refers to the 'embedding' setting when "embedding='tsne'". If I run the code while "embedding='none'", it works fine. The concern is that embedding is very practical for visual purposes and should be used. Any ideas why this error occurs and how to resolve it?

    setting

    cl = Clustimage(method='pca',
                    embedding='tsne',
                    grayscale=False,
                    dim=(128,128),
                    params_pca={'n_components':0.95},
                    store_to_disk=True,
                    verbose=50)
    

    error

    File "/Users/name/opt/anaconda3/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py", line 372, in _gradient_descent
        update = momentum * update - learning_rate * grad
    
    UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
    

    embedding references https://erdogant.github.io/clustimage/pages/html/clustimage.clustimage.html?highlight=embedding#clustimage.clustimage.Clustimage.embedding

    opened by spicker22 2
  • Question on CNN

    Is it possible to use pre-trained neural network (for example, based on ImageNet) for image features in clustimage? And do you support using U-MAP for dimension reduction?

    opened by karelin 1
  • mini-batch k-means

    Hello, I have been using your model for a while now and I was thinking of how to use it to create a clustering model that learns with every iteration. One of the simple ideas that occurred to me is to use "mini-batch k-means": instead of using the whole dataset, we use a small batch of data at a time.

    Do you think you can update the k-means code to use mini-batch k-means instead?

    opened by MalekBezzina 0
  • Found a bug in "imresize"

    When giving fit_transform() a path as a string, it is not able to load the images due to a try-except block in import_data. This is because, if the image already has the same shape as the variable dim, cv2.resize raises an error... By changing the function imresize it works!

    def imresize(img, dim=(128, 128)):
        """Resize image."""
        if dim is not None and img.shape != dim:
            img = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
        return img

    opened by ricaiu 1
  • Small question about recommended usage

    Hello, first of all thank you for your work, your libraries are amazing!

    I didn't know how to contact you properly and an issue is probably the wrong way to do it, so feel free to close it without answering if you feel like it.

    Anyway, I just wanted to ask you a question. I need to perform grouping of similar images: images can be faces, screenshots, drawings, memes, etc., so very different kinds. Some images have small variations (light crop, lighting ...) or big variations (bigger crop, text added, etc.) and I'm trying to find a way to group them. Until now I was using your other library (undouble), which was working fine, but sometimes the grouping functionality excluded images that were really close (when computing the ahashes manually these images all had the same ahash, but they were not grouped by undouble.group, which is odd).

    So I started trying to use clustimage and I'm a bit overwhelmed; there seem to be so many functionalities, ways of computing features, distances, evaluating the clusters, etc.

    I've read your Medium article on clustimage, which helps a bit, and I know you're saying one should choose the parameters according to the research question, but I'm no data scientist and I'm a bit lost. My take right now would be to make a script that iterates over all the possible parameters of clustimage and computes a score based on the image grouping that I've made manually. But I think there must be a smarter way to proceed.

    So in other words, my question is: do you recommend any particular set of methods and parameters to group variations of images which can be of very different types?

    Thank you in advance and have a good day !

    opened by Tyrannas 2
Releases (1.5.11)
  • 1.5.11(Dec 3, 2022)

  • 1.5.10(Dec 2, 2022)

  • 1.5.9(Jul 9, 2022)

  • 1.5.8(Jul 9, 2022)

  • 1.5.7(Jul 9, 2022)

  • 1.5.6(Jul 9, 2022)

  • 1.5.5(Jul 1, 2022)

  • 1.5.4(Jul 1, 2022)

  • 1.5.3(Jun 29, 2022)

  • 1.5.2(Jun 9, 2022)

    • Added functionality to read a pandas dataframe as input matrix and use the index names as filenames.
    from clustimage import Clustimage
    import pandas as pd
    import numpy as np
    
    # Initialize
    cl = Clustimage()
    
    # Import data
    Xraw = cl.import_example(data='mnist')
    
    print(Xraw)
    # array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
    #        [ 0.,  0.,  0., ..., 10.,  0.,  0.],
    #        [ 0.,  0.,  0., ..., 16.,  9.,  0.],
    #        ...,
    #        [ 0.,  0.,  1., ...,  6.,  0.,  0.],
    #        [ 0.,  0.,  2., ..., 12.,  0.,  0.],
    #        [ 0.,  0., 10., ..., 12.,  1.,  0.]])
    
    filenames = list(map(lambda x: str(x) + '.png', np.arange(0, Xraw.shape[0])))
    Xraw = pd.DataFrame(Xraw, index=filenames)
    
    print(Xraw)
    #            0    1     2     3     4     5   ...   58    59    60    61   62   63
    # 0.png     0.0  0.0   5.0  13.0   9.0   1.0  ...  6.0  13.0  10.0   0.0  0.0  0.0
    # 1.png     0.0  0.0   0.0  12.0  13.0   5.0  ...  0.0  11.0  16.0  10.0  0.0  0.0
    # 2.png     0.0  0.0   0.0   4.0  15.0  12.0  ...  0.0   3.0  11.0  16.0  9.0  0.0
    # 3.png     0.0  0.0   7.0  15.0  13.0   1.0  ...  7.0  13.0  13.0   9.0  0.0  0.0
    # 4.png     0.0  0.0   0.0   1.0  11.0   0.0  ...  0.0   2.0  16.0   4.0  0.0  0.0
    #       ...  ...   ...   ...   ...   ...  ...  ...   ...   ...   ...  ...  ...
    # 1792.png  0.0  0.0   4.0  10.0  13.0   6.0  ...  2.0  14.0  15.0   9.0  0.0  0.0
    # 1793.png  0.0  0.0   6.0  16.0  13.0  11.0  ...  6.0  16.0  14.0   6.0  0.0  0.0
    # 1794.png  0.0  0.0   1.0  11.0  15.0   1.0  ...  2.0   9.0  13.0   6.0  0.0  0.0
    # 1795.png  0.0  0.0   2.0  10.0   7.0   0.0  ...  5.0  12.0  16.0  12.0  0.0  0.0
    # 1796.png  0.0  0.0  10.0  14.0   8.0   1.0  ...  8.0  12.0  14.0  12.0  1.0  0.0
    
    # Or all in one run
    results = cl.fit_transform(Xraw)
    
    print(results['filenames'])
    # array(['0.png', '1.png', '2.png', ..., '1794.png', '1795.png', '1796.png'],
    
    
  • 1.5.1(Jun 9, 2022)

  • 1.5.0(Jun 3, 2022)

  • 1.4.10(May 30, 2022)

  • 1.4.9(May 30, 2022)

  • 1.4.8(May 30, 2022)

  • 1.4.7(May 30, 2022)

  • 1.4.6(May 9, 2022)

  • 1.4.5(May 7, 2022)

  • 1.4.4(May 7, 2022)

  • 1.4.3(Mar 25, 2022)

  • 1.4.2(Mar 14, 2022)

  • 1.4.1(Feb 24, 2022)

  • 1.4.0(Jan 24, 2022)

    • In case of clustering on hash, the image-hash is now directly used in the clustering instead of pre-computing the adjacency matrix.
    • Removal of complicated functionalities regarding hashes
  • 1.3.15(Jan 16, 2022)

  • 1.3.14(Jan 11, 2022)

  • 1.3.13(Jan 9, 2022)

  • 1.3.12(Dec 28, 2021)

  • 1.3.11(Dec 26, 2021)

  • 1.3.10(Dec 22, 2021)

  • 1.3.9(Dec 22, 2021)
