MinHash, LSH, LSH Forest, Weighted MinHash, HyperLogLog, HyperLogLog++, LSH Ensemble

Overview

datasketch: Big Data Looks Small

datasketch gives you probabilistic data structures that can process and search very large amounts of data super fast, with little loss of accuracy.

This package contains the following data sketches:

Data Sketch      | Usage
MinHash          | estimate Jaccard similarity and cardinality
Weighted MinHash | estimate weighted Jaccard similarity
HyperLogLog      | estimate cardinality
HyperLogLog++    | estimate cardinality
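
For instance, estimating the Jaccard similarity of two small sets with MinHash looks roughly like this (a minimal sketch; the example data is made up):

from datasketch import MinHash

data1 = ["minhash", "is", "a", "probabilistic", "data", "structure"]
data2 = ["minhash", "is", "a", "probability", "data", "structure"]

m1, m2 = MinHash(num_perm=128), MinHash(num_perm=128)
for d in data1:
    m1.update(d.encode("utf-8"))
for d in data2:
    m2.update(d.encode("utf-8"))

# Estimated Jaccard similarity between the two sets.
print(m1.jaccard(m2))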

The following indexes for data sketches are provided to support sub-linear query time:

Index                | For Data Sketch           | Supported Query Type
MinHash LSH          | MinHash, Weighted MinHash | Jaccard Threshold
MinHash LSH Forest   | MinHash, Weighted MinHash | Jaccard Top-K
MinHash LSH Ensemble | MinHash                   | Containment Threshold
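
As a rough illustration of a Jaccard threshold query with the in-memory MinHash LSH (a minimal sketch; keys and tokens are illustrative):

from datasketch import MinHash, MinHashLSH

# Index tuned for queries with Jaccard threshold >= 0.5.
lsh = MinHashLSH(threshold=0.5, num_perm=128)

m1 = MinHash(num_perm=128)
for token in ["cat", "dog", "fish"]:
    m1.update(token.encode("utf-8"))
lsh.insert("doc1", m1)

m2 = MinHash(num_perm=128)
for token in ["cat", "dog", "bird"]:
    m2.update(token.encode("utf-8"))

# Returns keys of candidates with estimated Jaccard >= 0.5.
print(lsh.query(m2))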

datasketch must be used with Python 2.7 or above and NumPy 1.11 or above. SciPy is optional, but with it the LSH initialization can be much faster.

Note that MinHash LSH and MinHash LSH Ensemble also support Redis and Cassandra storage layer (see MinHash LSH at Scale).

Install

To install datasketch using pip:

pip install datasketch

This will also install NumPy as a dependency.

To install with Redis dependency:

pip install datasketch[redis]

To install with Cassandra dependency:

pip install datasketch[cassandra]

To install with SciPy for faster MinHashLSH initialization:

pip install datasketch[scipy]

Comments
  • Redis backend for datasketch LSH

    Summary

    We have been using datasketch extensively in data-driven applications (primarily the MinHashLSH class). The existing implementation requires all data to be stored in memory, which is a severe limitation for very large datasets. We add support for a Redis backend. Default behavior remains in-memory storage.

    Changes

    • Redis backend for MinHashLSH
    • Insertion session for bulk insertion (reduces network calls when inserting with a Redis backend)
    • Functionality to view LSH bucket sizes (useful for diagnosis/debugging)
    • Accompanying docs and tests

    Examples

    For examples, see the additions to the documentation.
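
    As a rough sketch of what the Redis-backed usage looks like (host/port are placeholders; see the documentation for the authoritative examples):

    from datasketch import MinHash, MinHashLSH

    lsh = MinHashLSH(
        threshold=0.5, num_perm=128,
        storage_config={
            "type": "redis",
            "redis": {"host": "localhost", "port": 6379},
        },
    )

    minhashes = {}  # e.g. {"doc1": <MinHash>, "doc2": <MinHash>, ...}

    # Bulk insertion via a session reduces network round-trips to Redis.
    with lsh.insertion_session() as session:
        for key, minhash in minhashes.items():
            session.insert(key, minhash)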

    opened by ae-foster 25
  • Add Cassandra storage layer

    This is a storage layer that uses Cassandra as a backend. It is optimized to take into account Cassandra internals and its intrinsic limitations (see, for example, how all the keys are retrieved and how the list abstraction is implemented). I have also added a new batch of tests to both TestMinHashLSH and TestWeightedMinHashLSH. Since a running Cassandra instance is required (Cassandra is not mocked), they can be enabled by flipping the DO_TEST_CASSANDRA variable.

    opened by ostefano 18
  • Implement #123 arbitrary Mongo URL & collection name

    I have also updated the documentation a bit.

    The purpose of the function AsyncIOMotorClient.get_default_database is to allow the Mongo URL to define a database. If the URL contains a database, that database is used; otherwise, the calculated database name is used. As for get_collection, I changed it for consistency only.

    I wanted to update the unit tests, but they were skipped… and when I tried to un-skip them everything failed spectacularly. I'm not sure what to do about that.

    Cheers!

    opened by ekevoo 14
  • Easy computation of all duplicates?

    Thanks for this library!

    One question: is there an easy way to get all pairs from an LSH? In other words, instead of querying for a single MinHash, I need all pairs of records inside the same bucket.

    I tried to dive into the code, but I didn't understand very well how hashranges/hashtables work. What I need is similar to candidate_duplicates of this other implementation: https://mattilyra.github.io/2017/05/23/document-deduplication-with-lsh.html
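
    For reference, one brute-force workaround (not a built-in API; it assumes you kept your own dict of the inserted MinHashes, here called minhashes) is to query each key back and collect the co-bucketed candidates:

    candidate_pairs = set()
    for key, minhash in minhashes.items():
        for other in lsh.query(minhash):
            if other != key:
                # Sort so each pair is recorded only once.
                candidate_pairs.add(tuple(sorted((key, other))))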

    opened by fjsj 14
  • Performance improvements

    I've made some performance improvements in the MinHash class and added a helper method for bulk computation.

    1. Precast _mersenne_prime and _max_hash; they don't need to be cast multiple times.
    2. I've modified the update method to accept a list of bytes so hash values can be computed in batches. This avoids for loops in Python space and packs all those operations into a single set of NumPy calls. The only caveat is that the matrix operations use more memory, so you should chunk the input when dealing with huge sets.
    3. The bulk helper removes the unnecessary initialization overhead of the permutations and initial hash values. They are computed once and then passed on to new MinHash objects as they are produced.

    These changes give some pretty nice performance improvements: up to 3X for updates, and anywhere from 5-25X for bulk computation. See my benchmark gist here.
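
    A rough sketch of the two additions (names match the later 1.5.2 release notes; exact signatures may differ):

    from datasketch import MinHash

    tokens = [w.encode("utf-8") for w in ["a", "b", "c", "d"]]

    # Batch update: hash all values in one vectorized NumPy pass.
    m = MinHash(num_perm=128)
    m.update_batch(tokens)

    # Bulk helper: build many MinHashes while reusing the permutations.
    sketches = MinHash.bulk([tokens, tokens], num_perm=128)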

    opened by Sinusoidal36 10
  • AsyncMinhashLSH doesn't use index key with mongodb backend, and hence is very slow.

    We'd struggled with AsyncMinhashLSH's query performance, eventually finding out it wasn't using the default collection index key ('_id'), and was instead querying by its 'key' field (see: here).

    This can be fixed either by inserting into the collections with the primary key '_id', or by creating indexes on the 'key' field for all collections in the database.

    For anyone looking for the quickfix solution, simply run:

    db.lsh_...._bucket_#.createIndex( { key: -1 } )

    on each of the bucket and key collections in the database. We found performance went from 11 seconds per query to under 1 ms per query pretty much instantly.

    I believe this should be added to the documentation somewhere, or fixed in the code (although it'd break existing databases). I'd volunteer to do either!
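
    For anyone applying the quick fix from Python rather than the mongo shell, a rough pymongo equivalent (the connection string, database name, and collection-name filter are placeholders):

    from pymongo import DESCENDING, MongoClient

    db = MongoClient("mongodb://localhost:27017")["lsh_db"]
    for name in db.list_collection_names():
        # Create an index on the 'key' field of each bucket/key collection.
        if "_bucket_" in name or name.endswith("_keys"):
            db[name].create_index([("key", DESCENDING)])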

    opened by oisincar 8
  • add ability to set storage basename

    If you are connecting to a Redis database and you lose the reference to the connection, you lose the reference to all data in your hash, and you also lose the ability to remove it. By having the ability to set the base name, you can always reference the same keys, whether you create a new connection or use the old one.
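
    A rough sketch of what setting the base name looks like (the basename value and Redis settings are illustrative; see the storage documentation for details):

    from datasketch import MinHashLSH

    lsh = MinHashLSH(
        threshold=0.5, num_perm=128,
        storage_config={
            "type": "redis",
            "basename": b"my_lsh_index",  # reuse this name to reconnect later
            "redis": {"host": "localhost", "port": 6379},
        },
    )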

    opened by KevinMann13 8
  • how to restore minhashes from MinHashLSH with redis backend?

    I want to find duplicates, so I need to pop a hash and then compare it against the remaining hashes.

    Or should I make a copy of the original data in Redis? Or just not use the Redis storage, wait until the dataset is fully loaded, and then use MinHashLSH?

    opened by bung87 8
  • Persistent storage on MongoDB

    Hi guys,

    Question: I'd like to store a MinHashLSH on a MongoDB instance. I've seen you have Redis & Cassandra as storage types in the code; is there an easy way for me to store it on MongoDB as well?

    Greets Tb

    opened by blah-crusader 7
  • Combining/Storing LSH with different thresholds

    Hi,

    First of all, thanks for this amazing package. I have two LSH indexes, one optimized with a threshold of 0.65 and the other with a threshold of 0.55. I found the results from both of these indexes are relevant to me, so I finally take the union of the results of both to get my duplicate content.

    To store both LSH indexes, I'm using the Redis caching mechanism, but later I have to load both indexes, run them individually, and take the union of the results.

    Is there any way I could optimize this so I can do the same with only one LSH, or some method to create a special type of index in Redis so that I don't have to query the LSH indexes individually? This would help me save memory and computation time.

    opened by thakur-nandan 7
  • Question: Setting Expiry Time for Entries in Redis LSH?

    Hi,

    First of all thanks for this great library :)

    I wanted to know if there is any way to set an expiry time for entries being added to an LSH connected to Redis, or any way to set a default expiry time for all keys?

    Should I instead set a maxmemory policy for my Redis with an allkeys-lru or allkeys-random eviction policy? Is this recommended or will it cause issues with the LSH?

    Regards, Spandan

    opened by SpandanThakur 7
  • Support numpy>=1.20.0

    numpy.int was only ever an alias for the built-in int, and it is deprecated as of version 1.20.0.

    See: https://numpy.org/doc/stable/release/1.20.0-notes.html#using-the-aliases-of-builtin-types-like-np-int-is-deprecated
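
    In practice the migration is just swapping the alias for the built-in or an explicit dtype, e.g.:

    import numpy as np

    # Before (deprecated since NumPy 1.20): np.array([1, 2, 3], dtype=np.int)
    arr = np.array([1, 2, 3], dtype=np.int64)  # or dtype=int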

    opened by joehalliwell 0
• Given a MinHash instance or MinHash LSH, how can we find the most dissimilar instance in an existing MinHash LSH?

    The API documentation explains how to find approximate neighbours with Jaccard similarity above a threshold. On the other hand, how can we find the most dissimilar instance? Further, given a MinHash LSH initialized from thousands of plain texts (set A), and given another few thousand plain texts (set B), how can we discover the texts in set B that are most dissimilar to set A, or the texts in set B whose Jaccard similarity to set A is below a threshold? Is there a fast solution for this problem? Thanks!

    opened by loulianzhang 2
• Getting `pymongo.errors.ExecutionTimeout` for using more than 1 instance of AsyncMinHashLSH

    I have a use case where I have to store MinHashes of (n) different categories of files and then query them. For example, I have documents of category A, and I want to store all of them in one DB and then query them at a later point.

    I am initializing this in a microservice (FastAPI) as follows:

    @app.on_event("startup")
    async def startup_event():
        LSHService.lsh_js, LSHService.lsh_css, LSHService.lsh_html, = await asyncio.gather(
            *[
                AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-a".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
                AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-b".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
                AsyncMinHashLSH(
                    storage_config={
                        **_storage,
                        "basename": "category-b".encode(),
                    },
                    threshold=0.9,
                    num_perm=256,
                ),
            ]
        )
    

    If I initialize it like this, I get a pymongo.errors.ExecutionTimeout error.

    However, if I just maintain one object, then I don't get any error.

    What can I do to mitigate this? Thanks in advance.

    opened by RonaldRegan69 3
• Question: Why is getting a MinHash for a single sample possible?

    Say we have three documents A, B, and C. Each document might contain different words.

    According to the documentation of datasketch.MinHash, we can get a MinHash for A with:

    minhash = MinHash(num_perm=128)
    minhash.update(A.encode('utf-8'))
    vector = minhash.digest()

    But don't we need to create a vocabulary consisting of all words from A, B, and C before getting the vector?

    opened by bdeng3 2

Releases (v1.5.8)
  • v1.5.8(Aug 21, 2022)

    What's Changed

    • Add GitHub URL for PyPi by @andriyor in https://github.com/ekzhu/datasketch/pull/179
    • Support asyncio redis by @long2ice in https://github.com/ekzhu/datasketch/pull/185
    • Fix name construction for all values of b by @SenadI in https://github.com/ekzhu/datasketch/pull/190

    New Contributors

    • @andriyor made their first contribution in https://github.com/ekzhu/datasketch/pull/179
    • @long2ice made their first contribution in https://github.com/ekzhu/datasketch/pull/185
    • @SenadI made their first contribution in https://github.com/ekzhu/datasketch/pull/190

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.7...v1.5.8

  • v1.5.7(Feb 4, 2022)

    What's Changed

    • Unable to create multiple lsh indices each one in its own keyspace - issue #171 by @ronassa in https://github.com/ekzhu/datasketch/pull/172

    New Contributors

    • @ronassa made their first contribution in https://github.com/ekzhu/datasketch/pull/172

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.6...v1.5.7

  • v1.5.6(Dec 27, 2021)

  • v1.5.5(Dec 16, 2021)

    What's Changed

    • Adding minhash_many to WeightedMinHashGenerator. by @jroose-jv in https://github.com/ekzhu/datasketch/pull/165
    • Add query buffer by @hguhlich in https://github.com/ekzhu/datasketch/pull/167

    New Contributors

    • @jroose-jv made their first contribution in https://github.com/ekzhu/datasketch/pull/165
    • @hguhlich made their first contribution in https://github.com/ekzhu/datasketch/pull/167

    Full Changelog: https://github.com/ekzhu/datasketch/compare/v1.5.4...v1.5.5

  • v1.5.4(Dec 4, 2021)

    What's Changed

    • Fixes #146; MinhashLSH creates mongo index key. by @oisincar in https://github.com/ekzhu/datasketch/pull/148
    • Add redis_buffer configuration. by @QthCN in https://github.com/ekzhu/datasketch/pull/152
    • minhash: Get rid of deprecation warning by @xkubov in https://github.com/ekzhu/datasketch/pull/156

    New Contributors

    • @oisincar made their first contribution in https://github.com/ekzhu/datasketch/pull/148
    • @QthCN made their first contribution in https://github.com/ekzhu/datasketch/pull/152
    • @xkubov made their first contribution in https://github.com/ekzhu/datasketch/pull/156

    Full Changelog: https://github.com/ekzhu/datasketch/compare/1.5.2...v1.5.4

  • 1.5.2(Dec 15, 2020)

    • Performance improvement for MinHash's update method.
    • Make MinHash updates 4.5X faster by using the update_batch method for bulk updates on MinHash. See the API doc: http://ekzhu.com/datasketch/documentation.html#datasketch.MinHash.update_batch
    • Further performance gain from bulk generation of MinHash using MinHash.bulk or MinHash.generator. See API doc and pull request.
    • Optional compression for the MinHash LSH index by hashing the bucket key produced by MinHashLSH._H. See pull request. This reduces the memory/storage space used by the index.

    Thank you @Sinusoidal36!

  • v1.5.0(Nov 26, 2019)

    • Minor bug fixes
    • Cassandra storage layer, thanks @ostefano! Now you can specify the Cassandra config just like the Redis one.
    from datasketch import MinHashLSH
    
    lsh = MinHashLSH(
        threshold=0.5, num_perm=128, storage_config={
            'type': 'cassandra',
            'cassandra': {
                'seeds': ['127.0.0.1'],
                'keyspace': 'lsh_test',
                'replication': {
                    'class': 'SimpleStrategy',
                    'replication_factor': '1',
                },
                'drop_keyspace': False,
                'drop_tables': False,
            }
        }
    )
    
  • v1.4.0(Jan 6, 2019)

    Now supports the hashfunc parameter for MinHash and HyperLogLog. The old parameter hashobj is removed.

    # Let's use MurmurHash3.
    import mmh3
    from datasketch import MinHash
    
    # We need to define a new hash function that outputs an integer that
    # can be encoded in 32 bits.
    def _hash_func(d):
        return mmh3.hash32(d)
    
    # Use this function in MinHash constructor.
    m = MinHash(hashfunc=_hash_func)
    
  • v.1.3.0(Dec 27, 2018)

  • v1.2.10(Nov 2, 2018)

    • Added batch removal functionality for Async MinHashLSH
    • Because Redis does not support async operation, Redis support was removed from Async MinHashLSH

    For details see Pull #70. Thanks @aastafiev for the contribution.

  • v1.2.9(Oct 22, 2018)

  • v1.2.7(Jul 27, 2018)

    • Added Asynchronous MinHash LSH module. Thanks @aastafiev!
    • Added the ability to set the base name in the storage config. The base name is used as the prefix for generating keys in the underlying storage (e.g., Redis). This change allows a client to "reconnect" to an existing LSH index in the storage through its base name.
  • v1.2.6(Jul 21, 2018)

  • v1.2.5(Nov 21, 2017)

  • v1.2.4(Nov 8, 2017)

  • v1.2.3(Sep 8, 2017)

  • v1.2.0(Mar 31, 2017)

  • v1.1.3(Mar 26, 2017)

    MinHash now uses NumPy's random number generator instead of Python's built-in random module. This makes MinHash generate consistent hash values across different Python versions.

    The side effect is that MinHash objects created before version 1.1.3 won't work correctly (i.e., jaccard, merge, and union) with those created after.

  • v1.1.2(Mar 15, 2017)

    • LeanMinHash is a subclass of MinHash. It uses less memory and allows faster (de)serialization. See the documentation for details.
    • Removed the serialize, deserialize, and bytesize methods from MinHash. These are supported in LeanMinHash instead.
    • MinHash objects serialized before this version will not deserialize properly. To migrate see here.
    • Documentation now has its own website!
  • v1.0.0(Feb 12, 2017)

    After nearly 2 years of working on this project on and off, the API is now stable and the features of the MinHash-related sketches are complete.

    I will continue to add more data sketches and indexes.

  • v0.2.6(Jan 7, 2017)

    • MinHash LSH Forest implementation and benchmark using synthetic data
    • Improve existing MinHash LSH benchmark using synthetic data for more tunable data distributions
    • Improve MinHash and LSH performance
  • v0.2.4(Jul 10, 2016)

  • v0.2.3(Apr 12, 2016)

  • v0.2.1(Apr 12, 2016)
