FLoC SimHash

A fast Python implementation of the SimHash algorithm.

Overview

This Python package provides hashing algorithms for computing cohort ids of users based on their browsing history. As such, it may be used to compute cohort ids of users following Google's Federated Learning of Cohorts (FLoC) proposal.

The FLoC proposal is an important part of The Privacy Sandbox, which is Google's replacement for third-party cookies. FLoC will enable interest-based advertising, thus preserving an important source of monetization for today's web.

The main idea, as outlined in the FLoC whitepaper, is to replace user cookie ids, which enable user targeting across multiple sites, with cohort ids. A cohort would consist of a set of users sharing similar browsing behaviour. By targeting a given cohort, advertisers can ensure that relevant ads are shown, while user privacy is preserved by a "hiding in the pack" mechanism.

The FLoC whitepaper mentions several mechanisms for mapping users to cohorts, with varying amounts of centralized information. The algorithms currently being implemented in Google Chrome as a proof of concept are based on SimHash, a type of locality-sensitive hashing originally introduced for detecting near-duplicate documents.

Installation

The floc-simhash package is available on PyPI. Install it using pip as follows.

pip install floc-simhash

The package requires python>=3.7 and will install scikit-learn as a dependency.

Usage

The package provides two main classes.

  • SimHash, which applies the SimHash algorithm to the md5 hashes of the tokens in a given document.

  • SimHashTransformer, which applies the SimHash algorithm to a document vectorization as part of a scikit-learn pipeline.

Finally, there is a third class available:

  • SortingSimHash, which performs the SortingLSH algorithm by first applying SimHash and then clipping the resulting hashes to a given precision (illustrated below).
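
As a rough illustration of the clipping step (a hypothetical sketch, assuming clipping keeps the most significant bits of the full hash), reducing a 128-bit SimHash to a precision of 16 bits groups together all users whose hashes share those leading bits:

# Hypothetical illustration of clipping a 128-bit SimHash to 16 bits
# of precision by keeping only its most significant bits.
full_hash = 0xCF48B038108E698418650807001800C5  # a 128-bit SimHash
precision = 16
clipped = full_hash >> (128 - precision)
print(f"{clipped:04x}")  # prints: cf48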

Individual document-based SimHash

The SimHash class provides a way to calculate the SimHash of any given document, without using any information coming from other documents.

In this case, the document hash is computed by looking at md5 hashes of individual tokens. We use:

  • The implementation of the md5 hashing algorithm available in the hashlib module in the Python standard library.

  • Bitwise arithmetic for fast computations of the document hash from the individual hashed tokens (sketched below).
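
For intuition, here is a minimal, self-contained sketch of this construction (not the package's exact code; details such as bit order and tie-breaking may differ): each token's md5 hash votes on every bit position, and the sign of each vote total determines the corresponding bit of the document hash.

from hashlib import md5

def simhash_md5(tokens, n_bits=128):
    # Each token's md5 hash casts a +1/-1 vote at every bit position.
    counts = [0] * n_bits
    for token in tokens:
        token_hash = int(md5(token.encode("utf-8")).hexdigest(), 16)
        for bit in range(n_bits):
            counts[bit] += 1 if (token_hash >> bit) & 1 else -1
    # Set each bit of the document hash whose vote total is positive.
    fingerprint = sum(1 << bit for bit in range(n_bits) if counts[bit] > 0)
    return f"{fingerprint:0{n_bits // 4}x}"

print(simhash_md5("Lorem ipsum dolor sit amet".split()))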

The program below, for example, will print the following hexadecimal string: cf48b038108e698418650807001800c5.

from floc_simhash import SimHash

document = "Lorem ipsum dolor sit amet consectetur adipiscing elit"
hashed_document = SimHash(n_bits=128).hash(document)

print(hashed_document)

As an example closer to the computation of cohort ids, the following program computes the cohort id of a user by applying SimHash to the document formed by the pipe-separated list of domains in the user's browsing history.

from floc_simhash import SimHash

document = "google.com|hybridtheory.com|youtube.com|reddit.com"
hasher = SimHash(n_bits=128, tokenizer=lambda x: x.split("|"))
hashed_document = hasher.hash(document)

print(hashed_document)

The code above will print the hexadecimal string: 14dd1064800880b40025764cd0014715.

Providing your own tokenizer

The SimHash constructor splits the given document on white space by default. However, any callable that parses a string into a list of strings can be passed as the tokenizer parameter. We have provided an example above where we pass tokenizer=lambda x: x.split("|").

A more complex tokenization could use, for example, the word tokenizer in NLTK; this would be a good choice when computing hashes of natural-language text documents.
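
For instance, assuming NLTK is installed (along with its punkt tokenizer data), one could write:

from nltk.tokenize import word_tokenize

from floc_simhash import SimHash

hasher = SimHash(n_bits=128, tokenizer=word_tokenize)
hashed_document = hasher.hash("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")

print(hashed_document)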

Using the SimHashTransformer in scikit-learn pipelines

The approach to SimHash outlined in the FLoC Whitepaper consists of choosing random unit vectors and working on already vectorized data.

The choice of a random unit vector is equivalent to choosing a random hyperplane in feature space. Choosing p random hyperplanes partitions the feature space into 2^p regions. Then, a p-bit SimHash of a vector encodes the region to which it belongs.
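
A minimal sketch of this idea (illustrative only, not the transformer's actual code): the p-bit hash of a vector records on which side of each of the p hyperplanes it lies.

import numpy as np

p, dim = 8, 5
rng = np.random.default_rng(seed=0)
hyperplanes = rng.normal(size=(p, dim))  # p random directions in feature space

x = rng.normal(size=dim)  # a vectorized document
bits = (hyperplanes @ x >= 0).astype(int)  # one bit per hyperplane: which side x lies on
print("".join(map(str, bits)))  # an 8-bit hash such as "10110010"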

It is reasonable to expect similar documents to have the same hash, provided the vectorization respects the given notion of similarity.

Two vectorizations are discussed in the aforementioned whitepaper: one-hot and tf-idf. Both are available in scikit-learn.

The SimHashTransformer supplies a transformer (implementing the fit and transform methods) that can be used directly on the output of either of these two vectorizers in order to obtain hashes.

For example, given a one-dimensional array X of strings, each of them the "|"-separated concatenation of the domains visited by a given user, the following code will store in y the cohort id of each user, using one-hot encoding and a 32-bit SimHash.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

from floc_simhash import SimHashTransformer


X = [
    "google.com|hybridtheory.com|youtube.com|reddit.com",
    "google.com|youtube.com|reddit.com",
    "github.com",
    "google.com|github.com",
]

one_hot_simhash = Pipeline(
    [
        ("vect", CountVectorizer(tokenizer=lambda x: x.split("|"), binary=True)),
        ("simhash", SimHashTransformer(n_bits=32)),
    ]
)

y = one_hot_simhash.fit_transform(X)

After running this code, the value of y will look similar to the following (the shape will match, but the actual hash values depend on the random vectors chosen during fit):

['0xd98c7e93' '0xd10b79b3' '0x1085154d' '0x59cd150d']
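
A tf-idf variant only requires swapping the vectorizer; for example (reusing X from above):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

from floc_simhash import SimHashTransformer

tfidf_simhash = Pipeline(
    [
        ("vect", TfidfVectorizer(tokenizer=lambda x: x.split("|"))),
        ("simhash", SimHashTransformer(n_bits=32)),
    ]
)

y = tfidf_simhash.fit_transform(X)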

Caveats

  • The implementation works on the sparse matrices output by CountVectorizer and TfidfTransformer, in order to manage memory efficiently.

  • At the moment, the choice of precision in the numpy arrays results in overflow errors for p >= 64. While we are waiting for implementation details of the FLoC POCs, the first indications hint at choices around p = 50.

Development

This project uses poetry for managing dependencies.

In order to clone the repository and run the unit tests, execute the following steps in an environment with python>=3.7.

git clone https://github.com/hybridtheory/floc-simhash.git
cd floc-simhash
poetry install
pytest

The unit tests are property-based, using the hypothesis library. This allows for algorithm verification against hundreds or thousands of randomly generated inputs.
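
As a hypothetical example of this style of test (not taken from the actual suite), one property we might check is that every hash parses as an integer below 2**n_bits:

from hypothesis import given, strategies as st

from floc_simhash import SimHash

@given(st.lists(st.text(alphabet="abcdefgh", min_size=1), min_size=1))
def test_hash_fits_in_n_bits(tokens):
    # Property: any non-empty document hashes to a value below 2**128.
    hashed = SimHash(n_bits=128).hash(" ".join(tokens))
    assert 0 <= int(hashed, 16) < 2 ** 128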

Since running many examples may lengthen the test suite runtime, we also use pytest-xdist in order to parallelize the tests. For example, the following call will run up to 1000 examples for each test with parallelism 4.

pytest -n 4 --hypothesis-profile=ci