Existing Literature on Machine Unlearning

Overview

This page collects papers on machine unlearning, i.e., methods for removing the influence of specific training data from an already-trained model. Papers are organized by year; within each year, peer-reviewed publications are listed before arXiv preprints.

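The common baseline throughout this literature is exact unlearning by retraining from scratch on the retained data; the approximate methods below try to match that result at lower cost. A minimal, hypothetical NumPy sketch of the baseline (illustrative only, not taken from any listed paper):

```python
import numpy as np

# Toy linear regression: the "model" is the least-squares solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

w = np.linalg.lstsq(X, y, rcond=None)[0]

# Exact unlearning of point 42: drop it and refit from scratch.
# This retrained model is the gold standard that approximate
# unlearning methods are evaluated against.
X_ret = np.delete(X, 42, axis=0)
y_ret = np.delete(y, 42)
w_ret = np.linalg.lstsq(X_ret, y_ret, rcond=None)[0]

# Removing one of 100 points barely moves the weights here,
# which is why cheaper approximate updates can be viable.
print(np.allclose(w, w_ret, atol=0.1))
```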
Machine Unlearning Papers

2021

Brophy and Lowd. Machine Unlearning for Random Forests. In ICML 2021.

Bourtoule et al. Machine Unlearning. In IEEE Symposium on Security and Privacy 2021.

Gupta et al. Adaptive Machine Unlearning. In NeurIPS 2021.

Huang et al. Unlearnable Examples: Making Personal Data Unexploitable. In ICLR 2021.

Neel et al. Descent-to-Delete: Gradient-Based Methods for Machine Unlearning. In ALT 2021.

Schelter et al. HedgeCut: Maintaining Randomised Trees for Low-Latency Machine Unlearning. In SIGMOD 2021.

Sekhari et al. Remember What You Want to Forget: Algorithms for Machine Unlearning. In NeurIPS 2021.

arXiv

Chen et al. Graph Unlearning. In arXiv 2021.

Chen et al. Machine unlearning via GAN. In arXiv 2021.

Fu et al. Bayesian Inference Forgetting. In arXiv 2021.

He et al. DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. In arXiv 2021.

Khan and Swaroop. Knowledge-Adaptation Priors. In arXiv 2021.

Marchant et al. Hard to Forget: Poisoning Attacks on Certified Machine Unlearning. In arXiv 2021.

Parne et al. Machine Unlearning: Learning, Polluting, and Unlearning for Spam Email. In arXiv 2021.

Tarun et al. Fast Yet Effective Machine Unlearning. In arXiv 2021.

Ullah et al. Machine Unlearning via Algorithmic Stability. In arXiv 2021.

Wang et al. Federated Unlearning via Class-Discriminative Pruning. In arXiv 2021.

Warnecke et al. Machine Unlearning for Features and Labels. In arXiv 2021.

Zeng et al. Learning to Refit for Convex Learning Problems. In arXiv 2021.

2020

Guo et al. Certified Data Removal from Machine Learning Models. In ICML 2020.

Golatkar et al. Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks. In CVPR 2020.

Wu et al. DeltaGrad: Rapid Retraining of Machine Learning Models. In ICML 2020.

arXiv

Aldaghri et al. Coded Machine Unlearning. In arXiv 2020.

Baumhauer et al. Machine Unlearning: Linear Filtration for Logit-based Classifiers. In arXiv 2020.

Garg et al. Formalizing Data Deletion in the Context of the Right to be Forgotten. In arXiv 2020.

Chen et al. When Machine Unlearning Jeopardizes Privacy. In arXiv 2020.

Felps et al. Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale. In arXiv 2020.

Golatkar et al. Mixed-Privacy Forgetting in Deep Networks. In arXiv 2020.

Golatkar et al. Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations. In arXiv 2020.

Izzo et al. Approximate Data Deletion from Machine Learning Models: Algorithms and Evaluations. In arXiv 2020.

Liu et al. Learn to Forget: User-Level Memorization Elimination in Federated Learning. In arXiv 2020.

Sommer et al. Towards Probabilistic Verification of Machine Unlearning. In arXiv 2020.

Yu et al. Membership Inference with Privately Augmented Data Endorses the Benign while Suppresses the Adversary. In arXiv 2020.

2019

Chen et al. A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine. In Cluster Computing 2019.

Ginart et al. Making AI Forget You: Data Deletion in Machine Learning. In NeurIPS 2019.

Schelter. “Amnesia” – Towards Machine Learning Models That Can Forget User Data Very Fast. In AIDB 2019.

Shintre et al. Making Machine Learning Forget. In APF 2019.

Du et al. Lifelong Anomaly Detection Through Unlearning. In CCS 2019.

Wang et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. In IEEE Symposium on Security and Privacy 2019.

arXiv

Tople et al. Analyzing Privacy Loss in Updates of Natural Language Models. In arXiv 2019.

2018

Cao et al. Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning. In ASIACCS 2018.

European Union. GDPR, 2018.

State of California. California Consumer Privacy Act, 2018.

Veale et al. Algorithms that Remember: Model Inversion Attacks and Data Protection Law. In Philosophical Transactions of the Royal Society A 2018.

Villaronga et al. Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten. In Computer Law & Security Review 2018.

2017

Kwak et al. Let Machines Unlearn--Machine Unlearning and the Right to be Forgotten. In SIGSEC 2017.

Shokri et al. Membership Inference Attacks Against Machine Learning Models. In IEEE Symposium on Security and Privacy 2017.

Before 2017

Cao and Yang. Towards Making Systems Forget with Machine Unlearning. In IEEE Symposium on Security and Privacy 2015.

Tsai et al. Incremental and decremental training for linear classification. In KDD 2014.

Karasuyama and Takeuchi. Multiple Incremental Decremental Learning of Support Vector Machines. In NeurIPS 2009.

Duan et al. Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines. In OSB 2007.

Romero et al. Incremental and Decremental Learning for Linear Support Vector Machines. In ICANN 2007.

Tveit et al. Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients. In DaWaK 2003.

Tveit and Hetland. Multicategory Incremental Proximal Support Vector Classifiers. In KES 2003.

Cauwenberghs and Poggio. Incremental and Decremental Support Vector Machine Learning. In NeurIPS 2001.

Canada. PIPEDA, 2000.

Owner
Jonathan Brophy
PhD student at UO.