A graph adversarial learning toolbox based on PyTorch and DGL.

Overview

GraphWar: Arms Race in Graph Adversarial Learning

NOTE: GraphWar is still in the early stages and the API will likely continue to change.

πŸš€ Installation

Please make sure you have installed PyTorch and the Deep Graph Library (DGL).

# Coming soon
pip install -U graphwar

or

# Recommended
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where -e means "editable" mode so you don't have to reinstall every time you make changes.

Get Started

Assume that you have a dgl.DGLGraph instance g that describes your dataset.
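
If you just want to try the API, a toy graph is enough to stand in for a real dataset. A minimal sketch (the sizes, features, and labels below are made up; real data would come from dgl.data or your own pipeline):

import dgl
import torch

# A hypothetical 5-node graph with a handful of edges
src = torch.tensor([0, 1, 2, 3, 0])
dst = torch.tensor([1, 2, 3, 4, 4])
g = dgl.graph((src, dst), num_nodes=5)
g = dgl.add_reverse_edges(g)                  # treat the graph as undirected
g.ndata['feat'] = torch.randn(5, 16)          # random node features
g.ndata['label'] = torch.randint(0, 2, (5,))  # random binary labels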

A simple targeted attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges 
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()

A simple untargeted attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(num_budgets=0.05) # perturb 5% of the edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()

Implementations

In detail, the following methods are currently implemented:

Attack

Targeted Attack

Methods Venue
RandomAttack A simple random method that chooses edges to flip randomly.
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16
Nettack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Neural Networks for Graph Data, KDD'18
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
GFAttack Heng Chang et al. πŸ“ A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20
IGAttack Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
SGAttack Jintang Li et al. πŸ“ Adversarial Attack on Large Scale Graph, TKDE'21
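
For intuition, here is a hedged sketch of the DICE heuristic listed above ("disconnect internally, connect externally"): spend half the budget removing edges between same-label nodes and the rest adding edges between different-label nodes. The helper below is illustrative only, not GraphWar's implementation:

import random

def dice_flips(edges, labels, budget, max_tries=10_000):
    # edges: set of (u, v) tuples; labels: list of node labels
    # disconnect internally: remove same-label edges with half the budget
    internal = [e for e in edges if labels[e[0]] == labels[e[1]]]
    random.shuffle(internal)
    flips = internal[:budget // 2]
    # connect externally: add cross-label edges with the remaining budget
    nodes = range(len(labels))
    for _ in range(max_tries):
        if len(flips) >= budget:
            break
        u, v = random.sample(nodes, 2)
        if labels[u] != labels[v] and (u, v) not in edges:
            flips.append((u, v))
    return flips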

Untargeted Attack

Methods Venue
RandomAttack A simple random method that chooses edges to flip randomly
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
Metattack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19
PGD, MinmaxAttack Kaidi Xu et al. πŸ“ Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19
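
The PGD and Minmax attacks above view structure perturbation as constrained optimization: relax the 0/1 edge flips to continuous variables, ascend the training loss by gradient, and project back onto the flip budget. Below is a hedged PyTorch sketch of that loop (all names are illustrative; the paper projects via bisection, and GraphWar's actual API differs):

import torch

def pgd_topology_attack(loss_fn, adj, budget, steps=200, lr=0.5):
    # s relaxes the 0/1 edge-flip indicators to [0, 1]
    s = torch.zeros_like(adj, requires_grad=True)
    for _ in range(steps):
        perturbed = adj + (1 - 2 * adj) * s   # flipping toggles each entry of A
        loss = loss_fn(perturbed)             # e.g. loss of a surrogate GCN
        grad, = torch.autograd.grad(loss, s)
        with torch.no_grad():
            s += lr * grad                    # ascend the attack objective
            s.clamp_(0, 1)                    # stay inside the relaxation
            if s.sum() > budget:              # crude budget projection
                s *= budget / s.sum()
    return torch.bernoulli(s.detach())        # sample discrete flips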

Defense

Model-Level

Methods Venue
MedianGCN Liang Chen et al. πŸ“ Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21
RobustGCN Dingyuan Zhu et al. πŸ“ Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19
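
The intuition behind MedianGCN is that replacing the mean-style neighborhood aggregation of a GCN with an elementwise median makes the layer robust to a minority of adversarial neighbors. A rough, dense sketch of that aggregation (illustrative, not the library's layer):

import torch

def median_aggregate(h, neighborhoods):
    # h: (N, d) node features; neighborhoods: list of LongTensors of neighbor ids
    # the elementwise median ignores a minority of outlying (e.g. injected) neighbors
    return torch.stack([h[idx].median(dim=0).values for idx in neighborhoods])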

Data-Level

Methods Venue
JaccardPurification Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
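
JaccardPurification builds on the observation in the paper above that adversarial edges tend to connect nodes with dissimilar features: edges whose endpoints have low Jaccard similarity are dropped before training. A hedged sketch for binary features (names are illustrative, not GraphWar's API):

import torch

def jaccard_purify(edge_index, feat, threshold=0.01):
    # edge_index: (2, E) LongTensor; feat: (N, d) binary node features
    src, dst = edge_index
    x, y = feat[src].bool(), feat[dst].bool()
    inter = (x & y).sum(dim=1).float()
    union = (x | y).sum(dim=1).float().clamp(min=1)  # avoid division by zero
    keep = (inter / union) >= threshold              # keep sufficiently similar pairs
    return edge_index[:, keep]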

More details of the literature and the official code can be found in Awesome Graph Adversarial Learning.

Comments
  • Benchmark Results of Attack Performance

    Hi, thanks for sharing the awesome repo with us! I recently ran the attack examples pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracy under both the evasion and the poisoning attack barely decreases.

    I'm pretty confused by these results. For CV models, a PGD attack easily drives the accuracy down to nearly random guessing, but the results of GreatX seem inconsistent with that. Is it because the number of perturbed edges is too small?

    Here are the results of pgd_attack.py

    Processing...
    Done!
    Training...
    100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
    Evaluating...
    1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.59718  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.842555 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    PGD training...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200/200 [00:02<00:00, 69.74it/s]
    Bernoulli sampling...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 804.86it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.603293 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.842052 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
    Evaluating...
    1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.76604  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.826962 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    

    Here are the results of random_attack.py

    Training...
    100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
    Evaluating...
    1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.564449 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.832495 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 253it [00:00, 4588.44it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.584646 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.826459 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
    Evaluating...
    1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.695349 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.81338  β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    
    opened by ziqi-zhang 2
  • SG Attack example cannot run as expected on cuda

    Hello, I got an error when running SG Attack's example code on a CUDA device:

    Traceback (most recent call last):
      File "src/test.py", line 50, in <module>
        attacker.attack(target)
      File "/greatx/attack/targeted/sg_attack.py", line 212, in attack
        subgraph = self.get_subgraph(target, target_label, best_wrong_label)
      File "/greatx/attack/targeted/sg_attack.py", line 124, in get_subgraph
        self.label == best_wrong_label)[0].cpu().numpy()
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
    

    I found that self.label is on the CUDA device, but best_wrong_label is on the CPU. https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L123-L124

    If I remove the .cpu() on line 94, everything works and no error is reported.

    https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L94-L96

    I found there is a commit that adds .cpu() at the end of line 94, so I don't know whether it's a bug or intentional 🀨
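
    For reference, the usual fix for this kind of mismatch is to move one tensor onto the other's device before comparing, along these lines (a sketch, not the maintainers' actual patch):

    # illustrative: keep best_wrong_label on the same device as self.label
    best_wrong_label = best_wrong_label.to(self.label.device)
    subset = torch.where(self.label == best_wrong_label)[0].cpu().numpy()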

    opened by beiyanpiki 1
  • problem with metattack

    Thanks for this wonderful repo. However, when I run the Metattack example, the result is not promising. Here is my result when attacking Cora with Metattack:

    Training...
    100/100 [====================] - Total: 520.68ms - 5ms/step- loss: 0.0713 - acc: 0.996 - val_loss: 0.574 - val_acc: 0.847
    Evaluating...
    1/1 [====================] - Total: 2.01ms - 2ms/step- loss: 0.522 - acc: 0.847
    Before attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.521524 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.846579 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 253/253 [01:00<00:00, 4.17it/s]
    Evaluating...
    1/1 [====================] - Total: 2.08ms - 2ms/step- loss: 0.528 - acc: 0.844
    After evasion attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.528431 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.844064 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    32/100 [=====>..............] - ETA: 0s- loss: 0.212 - acc: 0.956 - val_loss: 0.634 - val_acc: 0.807
    100/100 [====================] - Total: 407.58ms - 4ms/step- loss: 0.0601 - acc: 0.996 - val_loss: 0.704 - val_acc: 0.787
    Evaluating...
    1/1 [====================] - Total: 1.66ms - 1ms/step- loss: 0.711 - acc: 0.819
    After poisoning attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.710625 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.818913 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

    opened by shanzhiq 1
  • Fix a venue of FeaturePropagation

    Rossi's FeaturePropagation paper was submitted to ICLR'21 but was rejected, so how about updating the venue to arXiv? The paper has not yet been accepted at another conference.

    bug documentation 
    opened by jeongwhanchoi 1
  • Add ratio for `attacker.data()`, `attacker.edge_flips()`, and `attacker.feat_flips()`

    This PR allows specifying a ratio argument for attacker.edge_flips() and attacker.feat_flips(), which determines how many of the generated perturbations are used for further evaluation/visualization. Correspondingly, attacker.data() accepts edge_ratio and feat_ratio for these two methods when constructing the perturbed graph.

    • Example
    attacker = ...
    attacker.reset()
    attacker.attack(...)
    
    # Case 1: only 50% of the generated edge perturbations are used
    trainer.evaluate(attacker.data(edge_ratio=0.5), mask=...)
    
    # Case 2: only 50% of the generated feature perturbations are used
    trainer.evaluate(attacker.data(feat_ratio=0.5), mask=...)
    
    # NOTE: both arguments can be used simultaneously
    
    enhancement 
    opened by EdisonLeeeee 0
Releases (0.1.0)
  • 0.1.0 (Jun 9, 2022)

    GraphWar 0.1.0 πŸŽ‰

    The first major release, built upon PyTorch and PyTorch Geometric (PyG).

    About GraphWar

    GraphWar is a graph adversarial learning toolbox based on PyTorch and PyTorch Geometric (PyG). It implements a wide range of adversarial attacks and defense methods for graph data. To facilitate benchmark evaluation on graphs, we also provide implementations of popular Graph Neural Networks (GNNs).

    Usages

    For more details, please refer to the documentation and examples.

    How fast can you train and evaluate your own GNN?

    Take GCN as an example:

    from graphwar.nn.models import GCN
    from graphwar.training import Trainer
    from torch_geometric.datasets import Planetoid
    dataset = Planetoid(root='.', name='Cora') # Any PyG dataset is available!
    data = dataset[0]
    model = GCN(dataset.num_features, dataset.num_classes)
    trainer = Trainer(model, device='cuda:0')
    trainer.fit({'data': data, 'mask': data.train_mask})
    trainer.evaluate({'data': data, 'mask': data.test_mask})
    

    A simple targeted manipulation attack

    from graphwar.attack.targeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges 
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
    
    

    A simple untargeted (non-targeted) manipulation attack

    from graphwar.attack.untargeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(num_budgets=0.05) # perturb 5% of the edges
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
    

    We will continue to develop this project and add more state-of-the-art implementations from the literature on graph adversarial attacks and defenses.

    Source code(tar.gz)
    Source code(zip)
    graphwar-0.1.0-py3-none-any.whl(155.84 KB)
Owner
Jintang Li
Ph.D. student @ Sun Yat-sen University (SYSU), China.