
Overview

Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study

License: MIT

Code for [Preprint] Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study

Tianlong Chen*, Kaixiong Zhou*, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang

Introduction

This is the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs. We categorize existing approaches, investigate their hyperparameter sensitivity, and unify the basic configurations. Comprehensive evaluations are then conducted on tens of representative graph datasets, including the recent large-scale Open Graph Benchmark (OGB), with diverse deep GNN backbones. Based on synergistic studies, we discover a transferable combo of superior training tricks that leads us to attain new state-of-the-art results for deep GCNs across multiple representative graph datasets.

Requirements

Installation with Conda

conda create -n deep_gcn_benchmark
conda activate deep_gcn_benchmark
pip install -r requirements.txt

Our Installation Notes for PyTorch Geometric.

Environment configs that we tried and that succeeded: Mac/Linux + CUDA driver 11.2 + Torch built with CUDA 11.1 + torch_geometric/torch-sparse/etc. built with CUDA 11.1.

Environment configs that we tried but that did not work: Linux + CUDA driver 11.1/11.0/10.2 + any version of Torch.

In the successful case, we adopted the installation commands below; pip automatically downloaded prebuilt wheels, and the installation completed within seconds.

In the failing cases, the installation was very slow (on the order of ten minutes for torch-sparse/torch-scatter), presumably because the packages were compiled from source. The installation itself produced no errors, but importing torch_geometric in Python code later reported errors of various types.
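
A quick way to sanity-check an installation before running the benchmark is to import the packages and print their versions; if the wheels were installed correctly, both commands should succeed without errors:

python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import torch_scatter, torch_sparse, torch_geometric; print(torch_geometric.__version__)"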

Installation commands that we adopted on Linux + CUDA driver 11.2 that did work:

pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html
pip install torch-geometric

Project Structure

.
├── Dataloader.py
├── main.py
├── trainer.py
├── models
│   ├── *.py
├── options
│   ├── base_options.py
│   └── configs
│       ├── *.yml
├── tricks
│   ├── tricks
│   │   ├── dropouts.py
│   │   ├── norms.py
│   │   ├── others.py
│   │   └── skipConnections.py
│   └── tricks_comb.py
└── utils.py

How to Use the Benchmark

Train Deep GCN models as your baselines

To train a deep GCN model <model> on dataset <dataset> as your baseline, run:

python main.py --compare_model=1 --cuda_num=0 --type_model=<model> --dataset=<dataset>
# <model>   in  [APPNP, DAGNN, GAT, GCN, GCNII, GPRGNN, JKNet, SGC]
# <dataset> in  [Cora, Citeseer, Pubmed, ogbn-arxiv, CoauthorCS, CoauthorPhysics, AmazonComputers, AmazonPhoto, TEXAS, WISCONSIN, CORNELL, ACTOR]

We comprehensively explored the optimal hyperparameters for all implemented models and train them under these well-studied hyperparameter settings. For model-specific hyperparameter configs, please refer to options/configs/*.yml
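
For illustration, a dataset-specific entry in options/configs/<model>.yml follows the style described in the contribution guide below; the values here are placeholders, not our tuned settings:

Cora:
  num_layers: 32
  dim_hidden: 64
  dropout: 0.5
  lr: 0.01
  weight_decay: 0.0005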

Explore different trick combinations

To explore different trick combinations, we provide a tricks_comb model, which integrates different types of tricks as follows:

dropouts:        DropEdge, DropNode, FastGCN, LADIES
norms:           BatchNorm, PairNorm, NodeNorm, MeanNorm, GroupNorm, CombNorm
skipConnections: Residual, Initial, Jumping, Dense
others:          IdentityMapping

To train a tricks_comb model with specific tricks, run:

python main.py --compare_model=0 --cuda_num=0 --type_trick=<trick_1>+<trick_2>+...+<trick_n> --dataset=<dataset>

You can assign any number of tricks to type_trick. For instance, to train a tricks_comb model with Initial, EdgeDrop, BatchNorm, and IdentityMapping on Cora, run:

python main.py --compare_model=0 --cuda_num=0 --type_trick=Initial+EdgeDrop+BatchNorm+IdentityMapping --dataset=Cora

We provide two backbones, --type_model=GCN and --type_model=SGC, for trick combinations. Specifically, when --type_model=SGC and --type_trick=IdentityMapping co-occur, IdentityMapping takes higher priority.
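
For example, to combine the SGC backbone with the Initial connection and IdentityMapping on Cora, the command (using the same flags as above) would be:

python main.py --compare_model=0 --cuda_num=0 --type_model=SGC --type_trick=Initial+IdentityMapping --dataset=Cora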

How to Contribute

You are welcome to make any type of contribution. Here we provide brief guidance on adding your own deep GCN models and tricks.

Add your own model

Follow these simple steps to add your own deep GCN model <DeepGCN>:

  1. Create a Python file named <DeepGCN>.py.
  2. Implement your own model as a torch.nn.Module, where the class name is recommended to be consistent with your filename <DeepGCN>.
  3. Make sure the commonly-used hyperparameters are consistent with ours (listed as follows). To create any new hyperparameter, add it in options/base_options.py.
 --dim_hidden        # hidden dimension
 --num_layers        # number of GCN layers
 --dropout           # dropout rate for GCN layers
 --lr                # learning rate
 --weight_decay      # rate of l2 regularization
  4. Register your model in models/__init__.py by adding the following code:
from <DeepGCN> import <DeepGCN>
__all__.append('<DeepGCN>')
  5. We recommend using YAML to store your dataset-specific hyperparameter configurations. Create a YAML file <DeepGCN>.yml in options/configs and add the hyperparameters in the following style:
<dataset_1>:
  <hyperparameter_1>: value_1
  <hyperparameter_2>: value_2

Now your own model <DeepGCN> should be successfully added to our benchmark framework. To test the performance of <DeepGCN> on <dataset>, run:

python main.py --compare_model=1 --type_model=<DeepGCN> --dataset=<dataset>
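
For reference, here is a minimal hypothetical sketch of what models/<DeepGCN>.py could look like under the hyperparameters above. The constructor signature, and the attribute names num_feats and num_classes, are assumptions for illustration and may differ from this repo's actual interface:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class DeepGCN(torch.nn.Module):
    def __init__(self, args):
        super().__init__()
        # commonly-used hyperparameters shared with the benchmark
        self.dropout = args.dropout
        self.convs = torch.nn.ModuleList()
        # args.num_feats and args.num_classes are hypothetical attribute names
        self.convs.append(GCNConv(args.num_feats, args.dim_hidden))
        for _ in range(args.num_layers - 2):
            self.convs.append(GCNConv(args.dim_hidden, args.dim_hidden))
        self.convs.append(GCNConv(args.dim_hidden, args.num_classes))

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
            x = F.dropout(x, p=self.dropout, training=self.training)
        return self.convs[-1](x, edge_index)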

Add your own trick

As all implemented tricks are tightly coupled in tricks_comb.py, we do not recommend integrating your own trick into tricks_comb, to avoid unexpected errors. However, you can use the interfaces we provide in tricks/tricks/ to combine your own trick with ours.
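
As an illustration only (the exact interface expected by tricks_comb.py may differ), a custom normalization trick could be written as a standalone torch.nn.Module in the same spirit as the norms in tricks/tricks/norms.py:

import torch

class CenterNorm(torch.nn.Module):
    # Hypothetical trick for illustration, not part of this repository:
    # centers node features across all nodes, in the spirit of MeanNorm.
    def forward(self, x):
        return x - x.mean(dim=0, keepdim=True)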

Main Results and Leaderboard

  • Superior performance of our best combo with 32-layer deep GCNs

Model ranking on Cora (test accuracy, %):

  Ours    85.48
  GCNII   85.29
  APPNP   83.68
  DAGNN   83.39
  GPRGNN  83.13
  JKNet   73.23
  SGC     68.45

Model ranking on Citeseer (test accuracy, %):

  Ours    73.35
  GCNII   73.24
  DAGNN   72.59
  APPNP   72.13
  GPRGNN  71.01
  SGC     61.92
  JKNet   50.68

Model ranking on PubMed (test accuracy, %):

  Ours    80.76
  DAGNN   80.58
  APPNP   80.24
  GCNII   79.91
  GPRGNN  78.46
  SGC     66.61
  JKNet   63.77

Model ranking on OGBN-ArXiv (test accuracy, %):

  Ours    72.70
  GCNII   72.60
  DAGNN   71.46
  GPRGNN  70.18
  APPNP   66.94
  JKNet   66.31
  SGC     34.22

  • Transferability of our best combo with 32-layer deep GCNs

Average ranking over (CS, Physics, Computers, Photo, Texas, Wisconsin, Cornell, Actor):

  Ours    1.500
  SGC     6.250
  DAGNN   4.375
  GCNII   3.875
  JKNet   4.875
  APPNP   4.000
  GPRGNN  3.125
  • Takeaways of the best combo

Citation

If you find this repo helpful, please cite:

TBD