This repository is an implementation of the paper: Improving the Training of Graph Neural Networks with Consistency Regularization.

Overview

CRGNN

Paper: Improving the Training of Graph Neural Networks with Consistency Regularization (arXiv:2112.04319). The training scripts below cover two consistency-regularization variants, SCR and MCR (the latter based on a mean teacher).

Environments

Implementation environment: NVIDIA GeForce RTX 3090 (24 GB GPU)

Requirements

pytorch >= 1.8.1

ogb == 1.3.2

numpy == 1.21.2

cogdl (latest version)
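
For example, with pip (install the PyTorch build that matches your CUDA version; the pins follow the list above):

pip install "torch>=1.8.1" ogb==1.3.2 numpy==1.21.2 cogdl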

Training

GAMLP+RLU+SCR

For ogbn-products:

Params: 3335831
python pre_processing.py --num_hops 5 --dataset ogbn-products
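
pre_processing.py precomputes the propagated node features that the model consumes, so propagation is paid for once rather than at every epoch. A minimal sketch of this SIGN/SGC-style hop-wise precomputation (function and variable names here are illustrative, not the repo's API):

import torch

def precompute_hop_features(adj_norm, features, num_hops=5):
    # adj_norm: normalized adjacency as a torch sparse tensor
    # features: dense node-feature matrix X
    # returns [X, AX, A^2 X, ..., A^K X], saved once and reused across stages
    feats = [features]
    for _ in range(num_hops):
        feats.append(torch.sparse.mm(adj_norm, feats[-1]))
    return feats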

python main.py --use-rlu --method R_GAMLP_RLU --stages 400 300 300 300 300 300 --train-num-epochs 0 0 0 0 0 0 --threshold 0.85 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 50000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --consis --tem 0.5 --lam 0.1 --hidden 512 --ema
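
Here --consis enables the consistency loss on unlabeled nodes, --tem is the sharpening temperature, and --lam weights the loss. A rough sketch of sharpened consistency regularization of this kind (a sketch of the general technique; the exact distance and augmentations in the repo may differ):

import torch
import torch.nn.functional as F

def consistency_loss(logits_list, tem=0.5, lam=0.1):
    # logits_list: predictions for the same unlabeled nodes under
    # different random perturbations (e.g. two forward passes with dropout)
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    avg = torch.stack(probs).mean(dim=0)
    # sharpen the averaged prediction with temperature `tem`
    sharp = avg ** (1.0 / tem)
    sharp = sharp / sharp.sum(dim=1, keepdim=True)
    sharp = sharp.detach()  # no gradient flows through the target
    # pull every view toward the sharpened target
    loss = sum(F.mse_loss(p, sharp) for p in probs) / len(probs)
    return lam * loss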

GAMLP+MCR

For ogbn-products:

Params: 3335831
python pre_processing.py --num_hops 5 --dataset ogbn-products

python main.py --use-rlu --method R_GAMLP_RLU --stages 800 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 100000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.999 --lr 0.001 --adap --gap 10 --warm_up 150 --top 0.9 --down 0.8 --kl --kl_lam 0.2 --hidden 512
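
--mean_teacher with --ema_decay 0.999 keeps a teacher copy of the model whose weights are an exponential moving average of the student's; the teacher's predictions serve as targets, and --kl with --kl_lam weights a KL term against them. A minimal sketch of the EMA update (assuming student and teacher share the same architecture):

import torch

@torch.no_grad()
def update_ema_teacher(student, teacher, ema_decay=0.999):
    # teacher <- ema_decay * teacher + (1 - ema_decay) * student,
    # applied parameter-wise after each student optimizer step
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(ema_decay).add_(s, alpha=1.0 - ema_decay)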

GIANT-XRT+GAMLP+MCR

Please follow the instructions in GIANT to get the GIANT-XRT node features.

For ogbn-products:

Params: 2144151
python pre_processing.py --num_hops 5 --dataset ogbn-products --giant_path " "

python main.py --use-rlu --method R_GAMLP_RLU --stages 800 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.5 --label-drop 0 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --act leaky_relu --batch_size 100000 --patience 300 --n-layers-1 4 --n-layers-2 4 --bns --gama 0.1 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 10 --warm_up 150 --kl --kl_lam 0.2 --hidden 256 --down 0.7 --top 0.9 --giant

SAGN+MCR

For ogbn-products:

Params: 2179678
python pre_processing.py --num_hops 3 --dataset ogbn-products

python main.py --method SAGN --stages 1000 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.4 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --batch_size 100000 --patience 300 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 20 --warm_up 150 --top 0.85 --down 0.75 --kl --kl_lam 0.01 --hidden 512 --zero-inits --dropout 0.5 --num-heads 1  --label-drop 0.5  --mlp-layer 2 --num_hops 3 --label_num_hops 14

GIANT-XRT+SAGN+MCR

Please follow the instructions in GIANT to get the GIANT-XRT node features.

For ogbn-products:

Params: 1154654
python pre_processing.py --num_hops 3 --dataset ogbn-products --giant_path " "

python main.py --method SAGN --stages 1000 --train-num-epochs 0 --input-drop 0.2 --att-drop 0.4 --pre-process --residual --dataset ogbn-products --num-runs 10 --eval 10 --batch_size 50000 --patience 300 --tem 0.5 --lam 0.5 --ema --mean_teacher --ema_decay 0.99 --lr 0.001 --adap --gap 20 --warm_up 100 --top 0.85 --down 0.75 --kl --kl_lam 0.02 --hidden 256 --zero-inits --dropout 0.5 --num-heads 1  --label-drop 0.5  --mlp-layer 1 --num_hops 3 --label_num_hops 9 --giant

Use Optuna to search for C&S hyperparameters

We searched the C&S hyperparameters with Optuna on the validation set.

python post_processing.py --file_name --search
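
--search runs the Optuna study. A sketch of what such a search looks like (run_cs below is a hypothetical stand-in for the repo's Correct & Smooth routine, which should return validation accuracy for the sampled alphas):

import optuna

def objective(trial):
    correction_alpha = trial.suggest_float("correction_alpha", 0.0, 1.0)
    smoothing_alpha = trial.suggest_float("smoothing_alpha", 0.0, 1.0)
    # run_cs: hypothetical helper that applies C&S with the sampled
    # alphas and returns accuracy on the validation set
    return run_cs(correction_alpha, smoothing_alpha)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)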

GAMLP+RLU+SCR+C&S

python post_processing.py --file_name --correction_alpha 0.4780826957236622 --smoothing_alpha 0.40049734940262954

GIANT-XRT+SAGN+MCR+C&S

python post_processing.py --file_name --correction_alpha 0.42299283241438157 --smoothing_alpha 0.4294212449832242

Node Classification Results:

Performance on ogbn-products (10 runs):

Methods                   Validation accuracy   Test accuracy
SAGN+MCR                  0.9325±0.0004         0.8441±0.0005
GAMLP+MCR                 0.9319±0.0003         0.8462±0.0003
GAMLP+RLU+SCR             0.9292±0.0005         0.8505±0.0009
GAMLP+RLU+SCR+C&S         0.9304±0.0005         0.8520±0.0008
GIANT-XRT+GAMLP+MCR       0.9402±0.0004         0.8591±0.0008
GIANT-XRT+SAGN+MCR        0.9389±0.0002         0.8651±0.0009
GIANT-XRT+SAGN+MCR+C&S    0.9387±0.0002         0.8673±0.0008

Citation

Our paper:

@misc{zhang2021improving,
      title={Improving the Training of Graph Neural Networks with Consistency Regularization}, 
      author={Chenhui Zhang and Yufei He and Yukuo Cen and Zhenyu Hou and Jie Tang},
      year={2021},
      eprint={2112.04319},
      archivePrefix={arXiv},
      primaryClass={cs.SI}
}

GIANT paper:

@article{chien2021node,
  title={Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction},
  author={Eli Chien and Wei-Cheng Chang and Cho-Jui Hsieh and Hsiang-Fu Yu and Jiong Zhang and Olgica Milenkovic and Inderjit S Dhillon},
  journal={arXiv preprint arXiv:2111.00064},
  year={2021}
}

GAMLP paper:

@article{zhang2021graph,
  title={Graph attention multi-layer perceptron},
  author={Zhang, Wentao and Yin, Ziqi and Sheng, Zeang and Ouyang, Wen and Li, Xiaosen and Tao, Yangyu and Yang, Zhi and Cui, Bin},
  journal={arXiv preprint arXiv:2108.10097},
  year={2021}
}

SAGN paper:

@article{sun2021scalable,
  title={Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training},
  author={Sun, Chuxiong and Wu, Guoshi},
  journal={arXiv preprint arXiv:2104.09376},
  year={2021}
}

C&S paper:

@inproceedings{huang2021combining,
  title={Combining Label Propagation and Simple Models out-performs Graph Neural Networks},
  author={Qian Huang and Horace He and Abhay Singh and Ser-Nam Lim and Austin Benson},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=8E1-f3VhX1o}
}