A graph adversarial learning toolbox based on PyTorch and DGL.

Overview

GraphWar: Arms Race in Graph Adversarial Learning

NOTE: GraphWar is still in the early stages and the API will likely continue to change.

πŸš€ Installation

Please make sure you have installed PyTorch and the Deep Graph Library (DGL).

# Coming soon
pip install -U graphwar

or

# Recommended
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where -e means "editable" mode, so you don't have to reinstall the package every time you make changes.

Get Started

Assume that you have a dgl.DGLGraph instance g that describes your dataset.

A simple targeted attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(1, num_budgets=3) # attacking target node `1` with a budget of `3` edge flips
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
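
Here edge_flips is presumably the set of (u, v) pairs the attacker chose to flip. For illustration only (attacker.g() already returns the perturbed graph), applying such flips to a dgl.DGLGraph by hand might look like this, using only standard DGL calls:

import dgl
import torch

def apply_edge_flips(g: dgl.DGLGraph, edge_flips) -> dgl.DGLGraph:
    """Flip each (u, v) pair: remove the edge if it exists, add it otherwise.

    Illustrative sketch only; note that DGL graphs are directed, so an
    undirected flip would also apply to the reverse pair (v, u)."""
    g = g.clone()
    for u, v in edge_flips:
        if g.has_edges_between(u, v):
            g.remove_edges(g.edge_ids(u, v))
        else:
            g.add_edges(torch.tensor([u]), torch.tensor([v]))
    return g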

A simple untargeted attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(num_budgets=0.05) # attacking the graph by perturbing 5% of its edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
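
Note the assumed budget convention here: an integer num_budgets is an absolute number of edge flips, while a float is a fraction of the existing edges, presumably resolved along these lines:

# Assumed convention: a float budget is a fraction of existing edges,
# an integer budget is an absolute number of edge flips.
num_budgets = 0.05
if isinstance(num_budgets, float):
    budget = int(num_budgets * g.num_edges())  # e.g. 5% of the edges
else:
    budget = num_budgets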

Implementations

In detail, the following methods are currently implemented:

Attack

Targeted Attack

Methods Venue
RandomAttack A simple baseline that flips randomly chosen edges.
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16
Nettack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Neural Networks for Graph Data, KDD'18
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
         Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
         Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
GFAttack Heng Chang et al. πŸ“ A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20
IGAttack Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
SGAttack Jintang Li et al. πŸ“ Adversarial Attack on Large Scale Graph, TKDE'21

Untargeted Attack

Methods Venue
RandomAttack A simple baseline that flips randomly chosen edges.
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
         Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
         Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
Metattack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19
PGD, MinmaxAttack Kaidi Xu et al. πŸ“ Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19
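
For intuition about these heuristics, take DICEAttack above ("Delete Internally, Connect Externally"): it removes edges between same-label nodes and inserts edges between different-label nodes. A self-contained sketch of that idea in plain PyTorch (a hypothetical helper, not GraphWar's implementation):

import torch

def dice_flips(edge_index, labels, num_budgets, num_nodes):
    """DICE: delete random intra-class edges, add random inter-class edges."""
    n_remove = num_budgets // 2
    src, dst = edge_index
    # delete internally: pick random edges whose endpoints share a label
    internal = (labels[src] == labels[dst]).nonzero(as_tuple=True)[0]
    removed = internal[torch.randperm(internal.numel())[:n_remove]]
    # connect externally: pick random node pairs with different labels
    added = []
    while len(added) < num_budgets - n_remove:
        u, v = torch.randint(num_nodes, (2,)).tolist()
        if labels[u] != labels[v]:
            added.append((u, v))
    return removed, added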

Defense

Model-Level

Methods Venue
MedianGCN Liang Chen et al. πŸ“ Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21
RobustGCN Dingyuan Zhu et al. πŸ“ Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19
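
MedianGCN above, for instance, replaces GCN's mean-style neighborhood aggregation with an elementwise median, which a small number of adversarial edges cannot shift arbitrarily. A naive, loop-based sketch of that aggregation in plain PyTorch (illustrative, not the library's implementation):

import torch

def median_aggregate(x, edge_index):
    """Aggregate neighbor features by their elementwise median rather than
    a mean, so a few injected neighbors barely move the result.
    Naive loop over nodes for clarity."""
    src, dst = edge_index
    out = x.clone()  # nodes without neighbors keep their own features
    for v in range(x.size(0)):
        neighbors = src[dst == v]
        if neighbors.numel() > 0:
            out[v] = x[neighbors].median(dim=0).values
    return out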

Data-Level

Methods Venue
JaccardPurification Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
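
JaccardPurification builds on an observation from the cited IJCAI'19 paper: adversarial edges tend to connect nodes with very dissimilar features. A minimal sketch of such a pre-processing step, assuming binary node features (a hypothetical helper, not the library's exact API):

import torch

def jaccard_purify(edge_index, x, threshold=0.0):
    """Drop edges whose endpoints have a Jaccard similarity of binary
    features at or below the threshold."""
    src, dst = edge_index
    intersection = (x[src] * x[dst]).sum(dim=1)
    union = ((x[src] + x[dst]) > 0).float().sum(dim=1)
    jaccard = intersection / union.clamp(min=1)
    keep = jaccard > threshold
    return edge_index[:, keep]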

More details on the literature, along with links to the official code, can be found in Awesome Graph Adversarial Learning.

Comments
  • Benchmark Results of Attack Performance


    Hi, thanks for sharing the awesome repo with us! I recently ran the attack example scripts pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracies under both the evasion and the poisoning attack barely decrease.

    I'm pretty confused by the attack results. For CV models, a PGD attack easily decreases the accuracy to nearly random guessing, but the results of GreatX do not seem consistent with that. Is it because the number of perturbed edges is too small?

    Here are the results of pgd_attack.py

    Processing...
    Done!
    Training...
    100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
    Evaluating...
    1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.59718  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.842555 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    PGD training...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200/200 [00:02<00:00, 69.74it/s]
    Bernoulli sampling...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 804.86it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.603293 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.842052 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
    Evaluating...
    1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.76604  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.826962 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    

    Here are the results of random_attack.py

    Training...
    100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
    Evaluating...
    1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.564449 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.832495 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 253it [00:00, 4588.44it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.584646 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.826459 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
    Evaluating...
    1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.695349 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.81338  β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    
    opened by ziqi-zhang 2
  • SG Attack example cannot run as expected on cuda


    Hello, I got an error when running SG Attack's example code on a CUDA device:

    Traceback (most recent call last):
      File "src/test.py", line 50, in <module>
        attacker.attack(target)
      File "/greatx/attack/targeted/sg_attack.py", line 212, in attack
        subgraph = self.get_subgraph(target, target_label, best_wrong_label)
      File "/greatx/attack/targeted/sg_attack.py", line 124, in get_subgraph
        self.label == best_wrong_label)[0].cpu().numpy()
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
    

    I found that self.label is on the CUDA device, but best_wrong_label is on the CPU. https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L123-L124

    If I remove the .cpu() at the end of line 94, everything runs fine and no error is reported:

    https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L94-L96

    I found that there is a commit that adds .cpu() at the end of line 94, so I'm not sure whether this is a bug or intentional 🀨
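
    For reference, a device-agnostic variant of those two lines (an illustrative sketch, not the project's actual patch; the variable name subgraph_nodes is hypothetical) could read:

    # sketch: align devices before comparing, instead of forcing CPU
    best_wrong_label = best_wrong_label.to(self.label.device)
    subgraph_nodes = torch.where(self.label == best_wrong_label)[0].cpu().numpy()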

    opened by beiyanpiki 1
  • problem with metattack


    Thanks for this wonderful repo. However, when I run the metattack example, the result is not promising. Here is my result when attacking Cora with Metattack:

    Training...
    100/100 [====================] - Total: 520.68ms - 5ms/step- loss: 0.0713 - acc: 0.996 - val_loss: 0.574 - val_acc: 0.847
    Evaluating...
    1/1 [====================] - Total: 2.01ms - 2ms/step- loss: 0.522 - acc: 0.847
    Before attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.521524 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.846579 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 253/253 [01:00<00:00, 4.17it/s]
    Evaluating...
    1/1 [====================] - Total: 2.08ms - 2ms/step- loss: 0.528 - acc: 0.844
    After evasion attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.528431 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.844064 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    32/100 [=====>..............] - ETA: 0s- loss: 0.212 - acc: 0.956 - val_loss: 0.634 - val_acc: 0.807
    100/100 [====================] - Total: 407.58ms - 4ms/step- loss: 0.0601 - acc: 0.996 - val_loss: 0.704 - val_acc: 0.787
    Evaluating...
    1/1 [====================] - Total: 1.66ms - 1ms/step- loss: 0.711 - acc: 0.819
    After poisoning attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.710625 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
    β”‚ acc     β”‚  0.818913 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

    opened by shanzhiq 1
  • Fix a venue of FeaturePropagation


    Rossi's FeaturePropagation paper was submitted to ICLR'21 but was rejected, and it has not yet been accepted at another venue. How about updating the venue to arXiv?

    bug documentation 
    opened by jeongwhanchoi 1
  • Add ratio for `attacker.data()`, `attacker.edge_flips()`, and `attacker.feat_flips()`


    This PR allows specifying a ratio argument for attacker.edge_flips() and attacker.feat_flips(), which determines how many of the generated perturbations are used for further evaluation/visualization. Correspondingly, attacker.data() accepts edge_ratio and feat_ratio for these two methods when constructing the perturbed graph.

    • Example
    attacker = ...
    attacker.reset()
    attacker.attack(...)
    
    # Case 1: only 50% of the generated edge perturbations are used
    trainer.evaluate(attacker.data(edge_ratio=0.5), mask=...)
    
    # Case 2: only 50% of the generated feature perturbations are used
    trainer.evaluate(attacker.data(feat_ratio=0.5), mask=...)
    
    # NOTE: both arguments can be used simultaneously
    
    enhancement 
    opened by EdisonLeeeee 0
Releases (0.1.0)
  β€’ 0.1.0 (Jun 9, 2022)

    GraphWar 0.1.0 πŸŽ‰

    The first major release, built upon PyTorch and PyTorch Geometric (PyG).

    About GraphWar

    GraphWar is a graph adversarial learning toolbox based on PyTorch and PyTorch Geometric (PyG). It implements a wide range of adversarial attacks and defense methods for graph data. To facilitate benchmark evaluation on graphs, we also provide implementations of popular Graph Neural Networks (GNNs).

    Usages

    For more details, please refer to the documentation and examples.

    How fast can you train and evaluate your own GNN?

    Take GCN as an example:

    from graphwar.nn.models import GCN
    from graphwar.training import Trainer
    from torch_geometric.datasets import Planetoid
    dataset = Planetoid(root='.', name='Cora') # Any PyG dataset is available!
    data = dataset[0]
    model = GCN(dataset.num_features, dataset.num_classes)
    trainer = Trainer(model, device='cuda:0')
    trainer.fit({'data': data, 'mask': data.train_mask})
    trainer.evaluate({'data': data, 'mask': data.test_mask})
    

    A simple targeted manipulation attack

    from graphwar.attack.targeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(1, num_budgets=3) # attacking target node `1` with a budget of `3` edge flips
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
    
    

    A simple untargeted (non-targeted) manipulation attack

    from graphwar.attack.untargeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(num_budgets=0.05) # attacking the graph by perturbing 5% of its edges
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
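
    Combining the pieces above, a plausible end-to-end poisoning evaluation (using only the Trainer and attack APIs shown here; data and dataset come from the GCN example above) might look like:

    from graphwar.attack.untargeted import RandomAttack
    from graphwar.nn.models import GCN
    from graphwar.training import Trainer

    attacker = RandomAttack(data)
    attacker.attack(num_budgets=0.05)  # perturb 5% of the edges
    attacked_data = attacker.data()

    # retrain from scratch on the perturbed graph and compare test accuracy
    model = GCN(dataset.num_features, dataset.num_classes)
    trainer = Trainer(model, device='cuda:0')
    trainer.fit({'data': attacked_data, 'mask': data.train_mask})
    trainer.evaluate({'data': attacked_data, 'mask': data.test_mask})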
    

    We will continue to develop this project and introduce more state-of-the-art implementations of papers in the field of graph adversarial attacks and defenses.

    Source code(tar.gz)
    Source code(zip)
    graphwar-0.1.0-py3-none-any.whl(155.84 KB)
Owner
Jintang Li
Ph.D. student @ Sun Yat-sen University (SYSU), China.