Recommendation algorithms for large graphs


pygrank

Fast recommendation algorithms for large graphs based on link analysis.

License: Apache Software License
Author: Emmanouil (Manios) Krasanakis
Dependencies: networkx, numpy, scipy, sklearn, wget (required); tensorflow, torch (optional)


Roadmap for 0.2.X

The following roadmap overviews short-term development goals and will be updated appropriately.

✔️ Reach a stable architecture with comprehensive development management (achieved as of 0.2.3; no longer backwards compatible with 0.1.17, the most important change being the renaming of to_scipy to preprocessor)
✔️ Graph neural network support with dropout, renormalization and tensorflow backend (achieved as of 0.2.3)
✔️ Pytorch backend (achieved as of 0.2.4)
PyTorch GNNs
100% code coverage
100% documentation completeness
Automatic download for all related publication datasets
Updated reference docs and automated citation discovery for algorithms
Enable Arnoldi and Lanczos optimizations in non-numpy backends
Transparent handling of float and double precisions (as of 0.2.4 everything is in float32, but this will change)

🛠️ Installation

pygrank is meant to work with Python 3.9 or later. It can be installed with pip:

pip install pygrank

To use the machine learning backends tensorflow and pytorch (e.g. to integrate the package in machine learning projects), manually change the automatically created configuration file whose path is displayed in the error console. If you want others to run your pygrank-dependent code with a specific backend, add the following recipe at your code's entry point to override other configurations:

import pygrank as pg
pg.load_backend("pytorch")

Quickstart

As a quick start, let us construct a networkx graph named graph and a set of seed nodes named seeds.

>>> import networkx as nx
>>> graph = nx.Graph()
>>> graph.add_edge("A", "B")
>>> graph.add_edge("B", "C")
>>> graph.add_edge("C", "D")
>>> graph.add_edge("D", "E")
>>> graph.add_edge("A", "C")
>>> graph.add_edge("C", "E")
>>> graph.add_edge("B", "E")
>>> seeds = {"A", "B"}

We now run a personalized PageRank graph filter to score the structural relatedness of graph nodes to those in the given set. We start by importing the library:

>>> import pygrank as pg

For instructional purposes, we select the PageRank filter. This and more filters can be found in the module pygrank.algorithms.filters, but for ease of use they can also be accessed from the top-level import. We also set the default values of some parameters: the graph diffusion rate alpha required by the algorithm, a numerical tolerance tol at the convergence point, and the "auto" normalization strategy for preprocessing the graph adjacency matrix, which selects symmetric normalization for undirected graphs (as in this example) and column-based normalization for directed ones.

>>> ranker = pg.PageRank(alpha=0.85, tol=1.E-6, normalization="auto")
>>> ranks = ranker(graph, {v: 1 for v in seeds})

Node ranking output is always organized into graph signals, which can be used like dictionaries. For example, we can print the scores of some nodes:

>>> print(ranks["B"], ranks["D"], ranks["E"])
0.25865456609095644 0.12484722044728883 0.17079023174039495
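
Graph signals also expose their raw numeric values through a backend-specific array in their np attribute (the same attribute used in the comments further below). For example, a minimal sketch that rescales scores in place so they sum to 1, assuming the default numpy backend:

>>> ranks.np /= ranks.np.sum()  # in-place L1 normalization of the signal's values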

We can also alter node score outputs so that node order is produced, where higher node scores are assigned lower (better) order. This is achieved by wrapping a postprocessor around the algorithm. There are various postprocessors, including ones that make scores fairness-aware. Postprocessors can be found in pygrank.algorithms.postprocess, but as a shortcut they can also be accessed from the top-level package import.

>>> ordinals = pg.Ordinals(ranker).rank(graph, {v: 1 for v in seeds})
>>> print(ordinals["B"], ordinals["D"], ordinals["E"])
1 5 4

How much time did it take for the base ranker to converge? (Depends on backend and device characteristics.)

>>> print(ranker.convergence)
19 iterations (0.001831000000009908 sec)

Since only the node order is important, we can use a different way to specify convergence:

>>> convergence = pg.RankOrderConvergenceManager(pagerank_alpha=0.85, confidence=0.98) 
>>> early_stop_ranker = pg.PageRank(alpha=0.85, convergence=convergence)
>>> ordinals = pg.Ordinals(early_stop_ranker).rank(graph, {v: 1 for v in seeds})
>>> print(early_stop_ranker.convergence)
2 iterations (0.0006313000000091051 sec)
>>> print(ordinals["B"], ordinals["D"], ordinals["E"])
1 5 4

Close to the previous results at a fraction of the time! Note that convergence time measurements do not take into account the time needed to preprocess graphs.

🧠 Overview

Analyzing graph edges (links) between nodes can help rank/score graph nodes based on their structural proximity to structural or attribute-based communities of nodes. With the introduction of graph signal processing and decoupled graph neural networks, the importance of node ranking has drastically increased, as its ability to perform inductive learning by quickly spreading node information through edges has been theoretically and experimentally corroborated. For example, it can be used to make predictions based on a few known node attributes or on base predictions output by low-quality feature-based machine learning models.
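
For instance, a graph filter can spread a handful of known community memberships to every node; a minimal sketch reusing the quickstart's graph, where only two memberships are assumed known:

>>> known = {"A": 1, "B": 1}
>>> scores = pg.PageRank(alpha=0.85)(graph, known)  # unlabeled nodes get structural proximity scores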

pygrank is a collection of node ranking algorithms and practices that support real-world conditions, such as large graphs and heterogeneous preprocessing and postprocessing requirements. Thus, it provides ready-to-use tools that simplify deployment of theoretical advancements and testing of new algorithms.

Some of the library's advantages are:

  1. Compatibility with networkx, tensorflow and pytorch.
  2. Data-centric interfaces that do not require transformations to identifiers.
  3. Large graph support with sparse representations and fast algorithms.
  4. Seamless pipelines, from graph preprocessing up to benchmarking and evaluation.
  5. Modular combination of components (see the sketch below).
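
For example, the components demonstrated in the quickstart can be freely chained; a minimal sketch, reusing the quickstart's graph and seeds, that normalizes PageRank scores before converting them to ordinals:

>>> algorithm = pg.Ordinals(pg.Normalize(pg.PageRank(alpha=0.85)))
>>> ordinals = algorithm(graph, {v: 1 for v in seeds})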

🔗 Material

Tutorials & Documentation

Quick links
Measures
Graph Filters
Postprocessors
Tuners
Downloadable Datasets

🔥 Features

  • Graph filters
  • Community detection
  • Graph normalization
  • Convergence criteria
  • Postprocessing (e.g. fairness awareness)
  • Evaluation measures
  • Benchmarks (see the sketch below)
  • Autotuning
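
For instance, the benchmarking utilities that appear in the issue reports below can compare algorithms on downloadable datasets; a minimal sketch based on the calls shown there:

>>> import pygrank as pg
>>> algorithms = {"ppr": pg.PageRank(alpha=0.85), "hk": pg.HeatKernel()}
>>> datasets = pg.load_datasets_all_communities(["bigraph"])  # downloads the dataset if needed
>>> pg.benchmark_print(pg.benchmark(algorithms, datasets, metric="time"))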

👍 Contributing

Feel free to contribute in any way, for example through the issue tracker or by participating in discussions. To bring modifications to the code base, please check out the contribution guidelines and make sure to follow the pull request checklist they describe.

📓 Citation

If pygrank has been useful in your research and you would like to cite it in a scientific publication, please refer to the following paper:

TBD

To publish research that uses provided methods, please cite the appropriate publications.

Comments
  • Performant use of sparse matrices?

    I'm trying to use pygrank with larger graphs: 100k-1m nodes, hundreds of millions of edges. My graphs are in sparse matrix format. So far I've just converted to networkx and used those:

    import networkx as nx
    import pygrank as pg

    # A: scipy sparse adjacency matrix; seeds: iterable of node ids;
    # algorithm: a pygrank node ranking algorithm.
    g = nx.from_scipy_sparse_array(A, create_using=nx.DiGraph)
    signal = pg.to_signal(g, {i: 1.0 for i in seeds})

    # normalize the signal so its values sum to 1
    signal.np /= signal.np.sum()

    result = algorithm(signal).np
    

    Is there a more performant option available?

    opened by deklanw 5
  • Tune on non-seeds?

    Is it possible to run the tuners with non-seed nodes? For example if I have a seed_set and a target_set can I run the tuner diffusions with the signal from the former but optimize for metrics defined with respect to the latter? In this case I have a desired ranking of the nodes in the target_set.
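
    A minimal manual sketch of one way to set this up, using only calls demonstrated elsewhere on this page (not a built-in tuner interface; it treats target_set membership as binary rather than as a full desired ranking, and assumes pg.AUC accepts an exclude argument as in the library's evaluation measures):

    import pygrank as pg

    training = pg.to_signal(graph, {v: 1 for v in seed_set})
    validation = pg.to_signal(graph, {v: 1 for v in target_set})

    best_auc, best_alpha = 0, None
    for alpha in [0.5, 0.7, 0.85, 0.9, 0.99]:
        scores = pg.PageRank(alpha=alpha)(training)
        auc = pg.AUC(validation, exclude=training)(scores)  # evaluate on target_set only
        if auc > best_auc:
            best_auc, best_alpha = auc, alpha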

    opened by deklanw 2
  • Seamless verbosity

    Automatically display verbose progress (e.g. for dataset downloading) that disappears once tasks are complete (e.g. to resume normal benchmarking prints).

    feature 
    opened by maniospas 2
  • Automatically switching between backend interpretations

    Currently, graph signal .np attributes need to be manually converted between backends. Switch to a @property getter interface that automatically performs needed conversions to remove the burden of checking for backend compliance from developers.
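
    An illustrative sketch of the proposed pattern (names here are hypothetical, not the library's actual internals):

    class GraphSignal:
        @property
        def np(self):
            # convert stored values to the active backend on each access
            return active_backend().to_array(self._values)  # hypothetical helper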

    feature 
    opened by maniospas 1
  • Numeric graph signal operations

    Currently, there needs to be a clear distinction between graph signal objects and extraction of their .np fields. Reframe code so that, when signal operations are employed, their respective .np fields are used in their place. This can help write comprehensible high-level code.
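
    An illustrative sketch of what this could enable (hypothetical, not current behavior):

    class GraphSignal:
        def __mul__(self, other):
            # delegate arithmetic to the underlying .np fields
            return to_signal(self.graph, self.np * other.np)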

    feature 
    opened by maniospas 1
  • Torch GNN support

    Current helper methods for GNNs are centered on tensorflow and keras. Create backend operations to abstract them so that they can also be implemented through torch. This requires adding a new test to make sure everything works.

    Related tests: tests.test_gnn.test_appnp

    feature 
    opened by maniospas 1
  • Citation discovery for postprocessors

    Recommend how to cite graph filters and tuners through their cite() method.

    Related tests: test_autorefs.test_autorefs, test_autorefs.test_postprocessor_citations

    feature 
    opened by maniospas 1
  • Citation discovery for graph filters

    Recommend how to cite graph filters and tuners through their cite() method.

    Related tests: test_autorefs.test_autorefs, test_autorefs.test_filter_citations

    feature 
    opened by maniospas 1
  • Automatic citation discovery

    Since node ranking algorithms can comprise multiple components, some of which are implicitly determined (e.g. through default instantiation or specific argument parameters), create methods that can summarize used components and provide citations.

    Usefulness: This can help streamline citation practices.

    Related tests: None

    feature 
    opened by maniospas 1
  • optimization_dict not improving performance

    The optimization_dict argument of the ClosedFormGraphFilter class does not seem to produce an improvement in running time. This could indicate either a bug or bottlenecks in other parts of the pipeline, e.g. in graph signal instantiation.

    Version: run with version 0.2.3, adjusted to run experiments 50 times when measuring time

    Demonstration:

    >>> import pygrank as pg
    >>> optimization_dict = dict()
    >>> pg.benchmark_print(pg.benchmark({"HK": pg.HeatKernel(optimization_dict=optimization_dict)}, pg.load_datasets_all_communities(["bigraph"]), metric="time"))
                   	 HK 
    bigraph0       	 3.06
    bigraph1       	 3.36
    >>> pg.benchmark_print(pg.benchmark({"HK": pg.HeatKernel()}, pg.load_datasets_all_communities(["bigraph"]), metric="time"))
                   	 HK 
    bigraph0       	 2.98
    bigraph1       	 2.96
    

    Related tests: None

    bug fixed 
    opened by maniospas 1
  • Implementing PageRank as a GenericGraphFilter does not seem to work

    In the following code, the two algorithms should be close to equivalent, yet there is a significant Mabs error between them.

    >>> import pygrank as pg
    >>> graph = next(pg.load_datasets_graph(["graph9"]))
    >>> ranks1 = pg.Normalize(pg.PageRank(0.85, tol=1.E-12, max_iters=1000)).rank(graph, {"A": 1})
    >>> ranks2 = pg.Normalize(pg.GenericGraphFilter([0.85**i for i in range(20)], tol=1.E-12)).rank(graph, {"A": 1})
    >>> print(pg.Mabs(ranks1)(ranks2))
    0.025585056372903574
    

    Related tests: tests.test_filters.test_custom_runs

    bug invalid 
    opened by maniospas 1
  • Potential issue in the GNN demonstrator example with tensorflow backend

    During the review process of the library's paper, a reviewer pointed out that the following error occurs in their local system with TensorFlow 3.9.2 and Python 3.10.6.

    TypeError: Sequential.call() got multiple values for argument 'training'
    

    This occurs when running the code of the APPNP example. The issue lies with the example itself and not with any library functionality; it will not motivate a hotfix.

    Investigate whether this issue is unique to the version of TensorFlow or whether the latter has yet again updated something that will break the example in all future versions. At the very least, this error should not occur in github actions.

    If this is not the case, investigate whether this issue is platform-dependent.

    bug 
    opened by maniospas 0
  • Possible sparse_dot_mkl integration?

    I saw that you have your own sparse matrix library called matvec which parallelizes sparse-dense multiplication. There is an existing Python library called sparse_dot which does the same but with scipy csr/csc matrices: https://github.com/flatironinstitute/sparse_dot.

    I benchmarked the two with a matrix of size

    <6357204x6357204 sparse matrix of type '<class 'numpy.float32'>'
    	with 3614017927 stored elements in Compressed Sparse Column format>
    

    With the 32-core/64-thread server CPU I'm testing on, the times for 10 matrix-vector multiplications on the right and left are:

    matvec
    right-vec  25.15
    left-vec  19.47
    
    sparse_dot csc
    right-vec  40.17
    left-vec  14.91
    
    sparse_dot csr
    right-vec  10.38
    left-vec  28.53
    

    The times look competitive. I'm not sure if matvec has some other advantages I'm not considering here, but the fact that sparse_dot works with the existing scipy types would be a huge benefit (for my use case, at least). It does require installing the MKL library, and giant matrices like the above require the environment variable MKL_INTERFACE_LAYER=ILP64.

    opened by deklanw 4
  • Convergence management tracking

    Implement a high-level way of summarizing convergence analysis, for example to help measure running time and iterations when algorithms are wrapped by postprocessors (including iterative schemes). A list of all convergence manager run outcomes could be obtained, perhaps via some combination of dependent algorithm discovery and keeping convergence manager history on restart.
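
    An illustrative sketch of the history-keeping half of this idea (hypothetical, not the library's actual internals):

    class ConvergenceManager:
        def start(self, restart_history=False):
            # keep outcomes of previous runs instead of discarding them
            if restart_history or not hasattr(self, "history"):
                self.history = []

        def finish(self, iterations, elapsed_seconds):
            self.history.append((iterations, elapsed_seconds))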

    Related tests: tests.test_filters.test_convergence_string_conversion

    feature 
    opened by maniospas 0
  • Check FairWalk correctness

    Fairwalk does not achieve the same level of fairness (as high a pRule) as other fairness-aware heuristics during tests. This could arise from erroneous implementation. If the implementation is found to be correct, separate its tests from other heuristics to account for the lower expected improvement.

    Related tests: tests.test_fairness.test_fair_heuristics

    invalid 
    opened by maniospas 0
Releases: 0.2.10

Owner: Multimedia Knowledge and Social Analytics Lab (MKLab), part of the Information Technologies Institute.