Code for the paper: "On the Bottleneck of Graph Neural Networks and Its Practical Implications"

Overview

This is the official implementation of the paper: On the Bottleneck of Graph Neural Networks and its Practical Implications (ICLR'2021).

By Uri Alon and Eran Yahav. See also the [video], [poster] and [slides].

This repository is divided into three sub-projects:

  1. The subdirectory tf-gnn-samples is a clone of https://github.com/microsoft/tf-gnn-samples by Brockschmidt (ICML'2020). This project can be used to reproduce the QM9 and VarMisuse experiments described in the paper. This sub-project depends on TensorFlow 1.13. The instructions for our clone are the same as for the original code, except that our experiments (the QM9 and VarMisuse datasets) can be reproduced by running the script tf-gnn-samples/run_qm9_benchs_fa.py or tf-gnn-samples/run_varmisuse_benchs_fa.py instead of the original scripts. For additional dependencies and instructions, see their original README: https://github.com/microsoft/tf-gnn-samples/blob/master/README.md. The main modification that we performed is using a Fully-Adjacent layer as the last GNN layer, as we describe in the paper.
  2. The subdirectory gnn-comparison is a clone of https://github.com/diningphil/gnn-comparison by Errica et al. (ICLR'2020). This project can be used to reproduce the biological experiments (Section 4.3, the ENZYMES and NCI1 datasets). This sub-project depends on PyTorch 1.4 and PyTorch Geometric. For additional dependencies and instructions, see their original README: https://github.com/diningphil/gnn-comparison/blob/master/README.md. The instructions for our clone are the same, except that we added an additional flag, last_layer_fa, to every config_*.yml file; it is set to True by default and reproduces our experiments. The main modification that we performed is using a Fully-Adjacent layer as the last GNN layer (sketched after this list).
  3. The main directory (in which this file resides) can be used to reproduce the experiments of Section 4.1 in the paper, for the "Tree-NeighborsMatch" problem. The rest of this README file includes the instructions for this main directory.
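
To make the change concrete, below is a minimal sketch of the "fully-adjacent last layer" idea, assuming a plain PyTorch Geometric GCN stack on a single (unbatched) graph; the class and function names here are illustrative and are not the exact code used in the sub-projects above.

# Minimal sketch (illustrative names, not the sub-projects' exact code):
# the last GNN layer passes messages over a complete graph on the nodes
# instead of the original input edges.
import torch
from torch_geometric.nn import GCNConv

def fully_adjacent_edge_index(num_nodes: int) -> torch.Tensor:
    # All ordered pairs (i, j) with i != j, for a single (unbatched) graph.
    idx = torch.arange(num_nodes)
    row = idx.repeat_interleave(num_nodes)
    col = idx.repeat(num_nodes)
    mask = row != col
    return torch.stack([row[mask], col[mask]], dim=0)

class GNNWithLastLayerFA(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1)
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(num_layers - 1)]
        )
        self.last_conv = GCNConv(hidden_dim, out_dim)  # the fully-adjacent layer

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))           # regular sparse layers
        fa_edges = fully_adjacent_edge_index(x.size(0)).to(x.device)
        return self.last_conv(x, fa_edges)                # last layer: all pairs

For batched graphs, the fully-adjacent edges should connect only nodes that belong to the same graph in the batch, not nodes across different graphs.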

This project was designed to be useful in experimenting with new GNN architectures and new solutions for the over-squashing problem.

Feel free to open an issue with any questions.

The Tree-NeighborsMatch problem

Requirements

Dependencies

This project is based on PyTorch 1.4.0 and the PyTorch Geometric library.

pip install -r requirements.txt

The requirements.txt file lists the additional requirements. However, PyTorch Geometric might require manual installation; we thus recommend using the requirements.txt file only after installing PyTorch Geometric.

Verify that importing the dependencies goes without errors:

python -c 'import torch; import torch_geometric'

Hardware

Training on large trees (depth=8) might require ~60GB of RAM and about 10GB of GPU memory. GPU memory usage can be reduced by using a smaller batch size together with the --accum_grad flag (see the sketch below the example).

For example, instead of running:

python main.py --batch_size 1024 --type GGNN

The following uses gradient accumulation and requires less GPU memory:

python main.py --batch_size 512 --accum_grad 2 --type GGNN
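
Conceptually, gradient accumulation splits each effective batch into smaller chunks and sums their gradients over several backward passes before a single optimizer step, so the effective batch size stays the same while peak GPU memory drops. The following sketch only illustrates this idea; the model/batch interface is assumed, and the repository's actual training loop may differ.

# Illustrative sketch of gradient accumulation (assumed interfaces).
def train_epoch(model, loader, optimizer, criterion, accum_grad=2):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        loss = criterion(model(inputs), targets) / accum_grad  # scale per-chunk loss
        loss.backward()                       # gradients accumulate across chunks
        if (step + 1) % accum_grad == 0:
            optimizer.step()                  # one update per accum_grad chunks
            optimizer.zero_grad()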

Reproducing Experiments

To run a single experiment from the paper, run:

python main.py --help

to see the available flags. For example, to train a GGNN with depth=4, run:

python main.py --task DICTIONARY --eval_every 1000 --depth 4 --num_layers 5 --batch_size 1024 --type GGNN

To train a GNN across all depths, run one of the following:

python run-gcn-2-8.py
python run-gat-2-8.py
python run-ggnn-2-8.py
python run-gin-2-8.py

Results

The results of running the above scripts (Section 4.1 in the paper) are:

r:     2     3     4     5     6     7     8
GGNN   1.0   1.0   1.0   0.60  0.38  0.21  0.16
GAT    1.0   1.0   1.0   0.41  0.21  0.15  0.11
GIN    1.0   1.0   0.77  0.29  0.20
GCN    1.0   1.0   0.70  0.19  0.14  0.09  0.08

Experiment with other GNN types

To experiment with other GNN types:

  • Add the new GNN type to the GNN_TYPE enum here, for example: MY_NEW_TYPE = auto()
  • Add another elif self is GNN_TYPE.MY_NEW_TYPE: to instantiate the new GNN type object here (see the sketch after this list)
  • Use the new type as a flag for the main.py file:
python main.py --type MY_NEW_TYPE ...
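
As a hypothetical illustration only (the enum and factory-method names below may not match this repository's source exactly), registering a new type could look roughly like this, using GraphSAGE as the example new layer:

# Hypothetical sketch of registering a new GNN type; names are illustrative.
from enum import Enum, auto
from torch_geometric.nn import GCNConv, SAGEConv

class GNN_TYPE(Enum):
    GCN = auto()
    GAT = auto()
    GGNN = auto()
    GIN = auto()
    MY_NEW_TYPE = auto()  # step 1: add the new type to the enum

    def get_layer(self, in_dim, out_dim):
        # step 2: instantiate the matching PyTorch Geometric layer
        if self is GNN_TYPE.GCN:
            return GCNConv(in_dim, out_dim)
        elif self is GNN_TYPE.MY_NEW_TYPE:
            return SAGEConv(in_dim, out_dim)  # e.g., GraphSAGE as the new type
        raise NotImplementedError(f'branch not shown in this sketch: {self.name}')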

Citation

If you want to cite this work, please use this bibtex entry:

@inproceedings{
    alon2021on,
    title={On the Bottleneck of Graph Neural Networks and its Practical Implications},
    author={Uri Alon and Eran Yahav},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=i80OPhOCVH2}
}