Official implementation of Self-supervised Graph Attention Networks (SuperGAT), ICLR 2021.

Overview

SuperGAT

Official implementation of Self-supervised Graph Attention Networks (SuperGAT). This model is presented in How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision, International Conference on Learning Representations (ICLR), 2021.

Notice

The documented SuperGATConv layer, along with an example, has been merged into PyTorch Geometric's main branch.

This repository is based on torch==1.4.0+cu100 and torch-geometric==1.4.3, which are somewhat outdated at this point (Feb 2021). If you are using a recent PyTorch/CUDA/PyG stack, we recommend using PyG's built-in implementation (see the sketch below). If you want to run the code in this repository, please follow #installation.
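The following is a minimal usage sketch of PyG's built-in SuperGATConv on Cora, assuming a recent torch/torch-geometric stack (the layer ships with PyG 1.7+); the hyperparameters and the attention-loss weight are illustrative, not this repository's tuned settings.

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SuperGATConv

dataset = Planetoid(root="data", name="Cora")
data = dataset[0]

class SuperGATNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # attention_type="MX" corresponds to the SuperGAT-MX variant.
        self.conv1 = SuperGATConv(dataset.num_features, 8, heads=8, dropout=0.6,
                                  attention_type="MX", edge_sample_ratio=0.8,
                                  is_undirected=True)
        self.conv2 = SuperGATConv(8 * 8, dataset.num_classes, heads=8, concat=False,
                                  dropout=0.6, attention_type="MX",
                                  edge_sample_ratio=0.8, is_undirected=True)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        # Self-supervised edge (link-prediction) loss collected from both layers.
        att_loss = self.conv1.get_attention_loss() + self.conv2.get_attention_loss()
        return F.log_softmax(x, dim=-1), att_loss

model = SuperGATNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out, att_loss = model(data.x, data.edge_index)
    # Node-classification loss plus the weighted self-supervised attention loss.
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask]) + 4.0 * att_loss
    loss.backward()
    optimizer.step()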

Installation

# In SuperGAT/
bash install.sh ${CUDA, default is cu100}
  • If you have any trouble installing PyTorch Geometric, please install PyG's dependencies manually (a sketch of the manual steps follows this list).
  • The code is tested with Python 3.7.6 and the nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 image.
  • PyG's FAQ might be helpful.
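If the script fails, the manual installation for this torch/CUDA combination would look roughly like the commands below (an assumption based on PyG's install guide for torch 1.4.0; adjust cu100 and the wheel index to your setup):

pip install torch==1.4.0
pip install torch-scatter==latest+cu100 -f https://pytorch-geometric.com/whl/torch-1.4.0.html
pip install torch-sparse==latest+cu100 -f https://pytorch-geometric.com/whl/torch-1.4.0.html
pip install torch-cluster==latest+cu100 -f https://pytorch-geometric.com/whl/torch-1.4.0.html
pip install torch-spline-conv==latest+cu100 -f https://pytorch-geometric.com/whl/torch-1.4.0.html
pip install torch-geometric==1.4.3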

Basics

  • The main train/test code is in SuperGAT/main.py.
  • If you want to see the SuperGAT layer in PyTorch Geometric MessagePassing grammar, refer to SuperGAT/layer.py.
  • If you want to see hyperparameter settings, refer to SuperGAT/args.yaml and SuperGAT/arguments.py.

Run

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES
 
...

## RESULTS SUMMARY ##
best_test_perf: 0.853 +- 0.003
best_test_perf_at_best_val: 0.851 +- 0.004
best_val_perf: 0.825 +- 0.003
test_perf_at_best_val: 0.849 +- 0.004
## RESULTS DETAILS ##
best_test_perf: [0.851, 0.853, 0.857, 0.852, 0.858, 0.852, 0.847]
best_test_perf_at_best_val: [0.851, 0.849, 0.855, 0.852, 0.858, 0.848, 0.844]
best_val_perf: [0.82, 0.824, 0.83, 0.826, 0.828, 0.824, 0.822]
test_perf_at_best_val: [0.851, 0.844, 0.853, 0.849, 0.857, 0.848, 0.844]
Time for runs (s): 173.85422565042973

The default setting is 7 runs with different random seeds. If you want to change this number, change num_total_runs in the main block of SuperGAT/main.py.

For ogbn-arxiv, use SuperGAT/main_ogb.py.
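For example, assuming main_ogb.py accepts the same command-line arguments as main.py (an assumption; check its argument parser), a SuperGATMX run on ogbn-arxiv would combine the dataset class and custom key listed in the tables below:

python3 SuperGAT/main_ogb.py \
    --dataset-class PygNodePropPredDataset \
    --dataset-name ogbn-arxiv \
    --custom-key EV13NS-ES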

GPU Setting

There are three arguments for GPU settings (--num-gpus-total, --num-gpus-to-use, --gpu-deny-list). The default values are from the author's machine, so we recommend modifying them in SuperGAT/args.yaml or via the command line.

  • --num-gpus-total (default 4): The total number of GPUs in your machine.
  • --num-gpus-to-use (default 1): The number of GPUs you want to use.
  • --gpu-deny-list (default: [1, 2, 3]): The IDs of the GPUs you do not want to use.

If you have four GPUs and want to use the first (cuda:0),

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES \
    --num-gpus-total 4 \
    --gpu-deny-list 1 2 3

Model (--model-name)

Type         Model name
GCN          GCN
GraphSAGE    SAGE
GAT          GAT
SuperGATGO   GAT
SuperGATDP   GAT
SuperGATSD   GAT
SuperGATMX   GAT

Dataset (--dataset-class, --dataset-name)

Dataset class             Dataset name
Planetoid                 Cora
Planetoid                 CiteSeer
Planetoid                 PubMed
PPI                       PPI
WikiCS                    WikiCS
WebKB4Univ                WebKB4Univ
MyAmazon                  Photo
MyAmazon                  Computers
PygNodePropPredDataset    ogbn-arxiv
MyCoauthor                CS
MyCoauthor                Physics
MyCitationFull            Cora_ML
MyCitationFull            CoraFull
MyCitationFull            DBLP
Crocodile                 Crocodile
Chameleon                 Chameleon
Flickr                    Flickr

Custom Key (--custom-key)

Type         Custom key (General)   Custom key (for PubMed)   Custom key (for ogbn-arxiv)
SuperGATGO   EV1O8-ES               EV1-500-ES                -
SuperGATDP   EV2O8-ES               EV2-500-ES                -
SuperGATSD   EV3O8-ES               EV3-500-ES                EV3-ES
SuperGATMX   EV13NSO8-ES            EV13NSO8-500-ES           EV13NS-ES
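
For example, pairing a row of the dataset table with its matching custom key, SuperGATMX on PubMed would be run as:

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name PubMed \
    --custom-key EV13NSO8-500-ES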

Other Hyperparameters

See SuperGAT/args.yaml or run $ python3 SuperGAT/main.py --help.
